-1- wrote: ↑Sat Mar 30, 2019 1:18 pm
Scott Mayers wrote: ↑Sat Mar 30, 2019 5:36 am
-1- wrote: ↑Sat Mar 30, 2019 1:58 am
What's a Bot?
Short for "robot", they are programs that automatically scan sites to mine data. They are designed to be fast and automatic, and, deliberately, to take actual people out of the loop when personal information is collected. So on something like Facebook, bots scan your personal messages for words or phrases that become data the company can sell as anonymous statistics, or use to 'guess' at your likes and interests so it can target you with advertising.
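The keyword-scanning idea described above can be sketched in a few lines. This is a hypothetical illustration only: the keyword table, function names, and sample messages are all invented, and real ad-targeting systems are far more sophisticated.

```python
# Hypothetical sketch: scan messages for known keywords and tally
# them into an anonymous "interest profile" for ad targeting, as the
# post describes. The keyword table below is purely illustrative.
from collections import Counter
import re

INTEREST_KEYWORDS = {
    "camping": "outdoors", "tent": "outdoors",
    "guitar": "music", "concert": "music",
    "laptop": "electronics", "phone": "electronics",
}

def profile_interests(messages):
    """Tally interest categories from keywords found in messages."""
    profile = Counter()
    for text in messages:
        for word in re.findall(r"[a-z']+", text.lower()):
            category = INTEREST_KEYWORDS.get(word)
            if category:
                profile[category] += 1
    return profile

msgs = ["Looking for a new tent before the camping trip",
        "Anyone selling a used guitar?"]
print(profile_interests(msgs))  # Counter({'outdoors': 2, 'music': 1})
```

Note that no human ever reads the messages here; only the aggregate counts leave the function, which is the "removing actual people from the loop" point made above.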
Thank you, Scott, for a remarkably clear and comprehensive answer. I never knew what Bots were.
So a Bot doesn't only scan and collect data; according to the foregoing posts, it can also post as a human, and, if given the required permissions, it can alter, edit, or censor other humans' posts.
Fascinating.
If given the power by the site, bots can be used to check content and act on it (like censoring), but outside bots, like those from Google, cannot do so without permission, of course. Google uses its bots to update search results. So if you look up your own user name here, "-1-", for instance, the Google bot that visits this site will have tallied how often people look at your content. If your content is more popular than other results matching "-1-", your topics will be listed at the top when people search. [At least this used to be the case, until search-optimizer companies figured out how to 'cheat' the system, which is why we don't always find what we want on Google. I type in my own name and it tries to correct me by searching for "Scott Meyers" instead, because an author of that name is popular for his computer-language texts. This could be because his publisher pays for optimizing services that link both spellings to his name, since people might easily misspell his actual name and not find his books.]
I checked your label, "-1-", without the quotes, and it finds nothing. So this indicates they are not using a bot that links your name personally to your content. But if you wrote a thread that got used a lot, its title might be something the bots index, and your discussions might then be placed in results when someone searches for the title of a popular discussion here, and/or one that others have used Google to find. This is one way you can see they don't tie the person's name to the collected data. Of course this is a public forum, so the data in our threads is not 'private'. If it were your email, though, they might keep it private even from searches, while still using it to feed common associations to advertising bots.
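The popularity tally described above can be sketched roughly as follows. This is not Google's actual algorithm (which weighs many more signals); it is a minimal toy index, with invented class and URL names, showing only the idea that results clicked more often for a term get ranked higher for that term.

```python
# Minimal toy sketch (NOT Google's real ranking) of the popularity
# tally described above: count how often users who search a term
# click each result, then order results by that count.
from collections import defaultdict

class TinyIndex:
    def __init__(self):
        # term -> {page_url: click_count}
        self.clicks = defaultdict(lambda: defaultdict(int))

    def record_click(self, term, url):
        self.clicks[term][url] += 1

    def search(self, term):
        """Return URLs for a term, most-clicked first."""
        pages = self.clicks.get(term, {})
        return sorted(pages, key=pages.get, reverse=True)

idx = TinyIndex()
idx.record_click("-1-", "forum.example/topic/42")
idx.record_click("-1-", "forum.example/topic/42")
idx.record_click("-1-", "other.example/page")
print(idx.search("-1-"))
# ['forum.example/topic/42', 'other.example/page']
```

The 'cheating' mentioned above amounts to artificially inflating these counts (or equivalent signals) so a page ranks higher than its genuine popularity warrants.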
Technically these are supposed to be safe, but we still have to question this when even governments are now demanding literal direct access to more information than we welcome them to have. And if they are able to get this, it implies the bots can also be used to reveal your private information to anyone. So the concerns are real. Even when we have the best intentions and clever logic in creating these programs for non-intrusive purposes, they can also harm us once others learn how to use them for bad purposes.
Since 9/11, Homeland Security in the American government has been permitted to, say, use bots to search people's private communications for words that terrorists may use often. The bots are supposed to keep the information out of the eyes of actual people UNLESS they trigger an alert. But then the government requires the communication companies to KEEP records tying people's names to the bots' searches, just in case it needs to identify them. While 'good' to some degree, because people run Homeland Security, they too can act for nefarious reasons unrelated to actual security, serving particular political interests instead. This is where we DO have to question the limits of these bots.
My concern about the FACT that the sites are 'public' is this: if governments are able to hold particular people's words accountable, then the sites themselves should not have the power to edit content in any way, OR the content should not be 'trusted' by outside governments as the fault of the site's 'guests'. At best, they could hold the site at fault. So the burden on a site that wants to protect the integrity of its guests is to preserve people's words without edit or censorship, OR lose credibility as a 'public' forum.
The debate is mixed because many 'public' forums, like the government forum sites that taxpayers pay for, moderate their forums on the excuse of preventing hate speech BY people. They might take advantage of bots to do this so that the moderation isn't personal but just a flag. But even if bots are used this way to avoid human intervention, how can the public trust how the bots are actually being used as moderators, without 'proof' that someone did or did not say something that triggered the censor? We cannot hold accountable those who DO use hate speech, for instance, if we cannot publicly witness the content of the abuse, because the people held responsible may have been falsely quoted as saying something they didn't say, by those who hold the power to moderate, even if the bots themselves are understood to be unbiased.
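One way to address the accountability worry above is for the moderation bot itself to preserve an audit record of exactly what it flagged, so the original words can later be verified. This is a hedged sketch under my own assumptions, not any real site's system; the word list, function names, and log format are all invented.

```python
# Hypothetical sketch: a moderation bot that flags posts containing
# listed words, but keeps an audit record (hash plus original text)
# so readers could later verify what was actually said. The flagged
# word list is a placeholder, not a real moderation policy.
import hashlib

FLAGGED_WORDS = {"slur1", "slur2"}  # placeholder terms

audit_log = []

def moderate(post_id, text):
    """Flag a post if it contains a listed word; log the original."""
    hit = any(w in text.lower().split() for w in FLAGGED_WORDS)
    audit_log.append({
        "post_id": post_id,
        "sha256": hashlib.sha256(text.encode()).hexdigest(),
        "original": text,
        "flagged": hit,
    })
    return hit
```

The hash lets a third party confirm that a quoted post matches what the bot actually saw, which speaks directly to the false-quotation concern raised above; of course, this only helps if the log itself is kept out of the moderators' editing reach.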