
TechPresident: Bad News Bots

(This originally appeared as “Bad News Bots” on TechPresident. The article is no longer online and was recovered from the Web Archive.)

It’s no secret that governments and political actors now make use of social robots, or bots: automated scripts that produce content and mimic real users. Faux social media accounts now spread pro-government messages, beef up website follower numbers, and manufacture artificial trends. Bot-generated propaganda and misdirection have become a worldwide political strategy.

Robotic lobbying tactics have been deployed in many countries: Russia, Mexico, China, Australia, the United Kingdom, the United States, Azerbaijan, Iran, Bahrain, South Korea, Turkey, Saudi Arabia, and Morocco. Indeed, experts estimate that bot traffic now makes up over 60 percent of all traffic online, up nearly twenty percent from just two years ago.

The ways politicos use social bots in efforts to sway public opinion are ever-evolving and ever more sophisticated. Governmental actors have recently used bots to drown out oppositional and democratic voices during political crises, elections, and conflicts.

Political groups use Twitter bombing to jam up the news feeds of activists and opponents. This practice also prevents useful civic conversations both on and offline, which is a major issue in countries where citizens use social media as their main form of organizational communication.

The time may come when citizens will be able to distinguish between human and auto-generated content. Or, as bots become more and more ubiquitous, particularly during election campaigns, the public may become inured to them. But for now, with most Internet users barely able to understand how HTTP cookies work, the ability of bots to influence public opinion during politically sensitive moments is a real concern.

Bots will be used in regimes of all types in the years ahead.

Bots threaten our networks in two ways. Firstly, they slow down the information infrastructure itself. With the power to replicate themselves, they can quickly clutter a hashtagged conversation, slow down a server, and eat up bandwidth. The second and more pernicious problem is that bots represent a form of political subterfuge since they can pass as real people in our own social networks.

Badly designed bots produce the typos and spelling mistakes that reveal authorship by someone who does not quite speak the native language. But well-designed bots blend into a conversation well. To avoid detection, they may even employ slang words, street idiom, and socially accepted spelling mistakes. By blending in, they become a form of negative campaigning.

Indeed, bot strategies are similar to push polling, an insidious form of negative campaigning that disguises persuasion as opinion polling in an effort to affect elections. In many ways, though, bots are more detrimental to society. The American Association for Public Opinion Research has a well-crafted statement about why push polls are bad for political discourse, and many of its complaints about negative campaigning apply to bots as well.

Bots work by undermining the confidence people have in information sources, since it can be difficult to differentiate between feedback from a legitimate friend and auto-generated content. Just as with push polls, there are ways to identify a bot, some of which can even be checked mechanically, as the sketch after this list shows:

● One or only a few posts are made, all about a single issue.
● The posts are all strongly negative, effusively positive, or obviously irrelevant.
● There are rarely links or photos tying real people or organizations to the posting.
● No answers, or evasive answers, are given in response to questions about the post or source.
● The exact wording of the post comes from several accounts, all of which appear to have thousands of followers.
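The last sign in this list, identical wording echoed across many accounts, is the easiest to check mechanically. Below is a minimal Python sketch, assuming posts have already been collected as (account, text) pairs; the function name and the five-account threshold are illustrative choices, not a published standard.

```python
from collections import defaultdict

def find_coordinated_texts(posts, min_accounts=5):
    """posts: iterable of (account_handle, message_text) pairs.

    Returns texts posted verbatim by at least `min_accounts` distinct
    accounts. The threshold is an illustrative assumption.
    """
    accounts_by_text = defaultdict(set)
    for handle, text in posts:
        # Normalize lightly so trivial case differences don't hide a match.
        accounts_by_text[text.strip().lower()].add(handle)
    return {text: handles
            for text, handles in accounts_by_text.items()
            if len(handles) >= min_accounts}

# Hypothetical example: three accounts parroting the same line.
sample = [
    ("@user1", "Candidate X is a disaster for this country!"),
    ("@user2", "Candidate X is a disaster for this country!"),
    ("@user3", "Candidate X is a disaster for this country!"),
    ("@user4", "Anyone watching the debate tonight?"),
]
print(find_coordinated_texts(sample, min_accounts=3))
```

A match is only a signal, not proof, which is why the caveat below matters.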

The fact that a post has negative information or is an ad hominem attack does not mean it was generated by a bot. Politicians, lobbyists, and civic groups regularly engage in rabble-rousing over social media. They don’t always stay “on message,” even when it means losing credibility in a discussion.

Nonetheless, bots have influence precisely because they generate a voice, and one that appears to be interactive. Many users clearly identify themselves in some way online, though they may not always identify their political sympathies in obvious ways. Most people interact with others on a variety of issues. The interaction usually involves questions, jokes, and retorts, not simply parroting the same message over and over again. Pacing is also revealing: unlike a bot, a legitimate user cannot generate a message every few seconds for ten minutes.
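That pacing observation translates directly into a simple heuristic. Here is a minimal sketch in Python, assuming you have a sorted list of one account’s posting timestamps; the twenty-posts-per-ten-minutes cutoff is an illustrative assumption, not a published threshold.

```python
from datetime import datetime, timedelta

def looks_machine_paced(timestamps, window=timedelta(minutes=10), max_posts=20):
    """timestamps: sorted list of datetimes for one account's posts.

    Returns True if any ten-minute window contains `max_posts` or more
    posts, a pace few human users sustain.
    """
    for i in range(len(timestamps)):
        # Count posts falling inside the window that starts at post i.
        j = i
        while j < len(timestamps) and timestamps[j] - timestamps[i] <= window:
            j += 1
        if j - i >= max_posts:
            return True
    return False

# Example: 30 posts spaced five seconds apart trips the check.
start = datetime(2014, 1, 1, 12, 0, 0)
burst = [start + timedelta(seconds=5 * k) for k in range(30)]
print(looks_machine_paced(burst))  # True
```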

Botnets generating content over a popular social network abuse public trust. They gain user attention under false pretenses by taking advantage of the goodwill people have toward the Internet’s vibrant social life.

When disguised as people, bots propagate negative messages that may seem to come from friends and family in a user’s crypto-clan. Bots distort issues or push negative images of a political candidate in order to influence public opinion. They go beyond the ethical boundaries of polling by bombarding voters with distorted or even false statements in an effort to create negative attitudes. By definition, political actors perform advocacy and canvassing of some kind, but bots grossly misrepresent such advocacy.

So, how can people combat political bots? How can activists avoid having their digitally mediated coordination efforts interrupted by inane bot chatter? Here are five tactics for doing exactly that:

1) WATCH FOR BOTS
The ability to identify which accounts on sites like Twitter are bots is crucial, both for effective online organizing and for reporting misuse to the offending sites. Although advanced bots can navigate around identifiers, few bots are able to avoid them all. Watch out for accounts with no profile picture, garbled or misspelled handles, low numbers of Tweets, and more accounts followed than followers. If an account directly replies to your Tweet within a second of your posting it, it is likely automatically programmed. Tweeting via the API, rather than from the web, mobile, or TweetDeck clients, is standard bot modus operandi.
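Several of these signals can be checked programmatically once you have an account’s public profile data. The sketch below assumes a simple dictionary of profile fields; the field names, the “source” values, and the thresholds are illustrative assumptions about what your data collection yields, not Twitter’s actual schema. No single flag proves an account is a bot, but several together warrant a closer look.

```python
# Posting clients that suggest automation, vs. "web", "mobile", "tweetdeck".
# (Illustrative values, not an official Twitter list.)
SUSPICIOUS_SOURCES = {"api"}

def bot_warning_signs(account):
    """Return a list of human-readable red flags for one account dict."""
    flags = []
    if not account.get("profile_image"):
        flags.append("no profile picture")
    if account.get("tweet_count", 0) < 10:
        flags.append("very few Tweets")
    if account.get("following", 0) > account.get("followers", 0):
        flags.append("follows more accounts than follow it back")
    if account.get("source", "").lower() in SUSPICIOUS_SOURCES:
        flags.append("posts via the raw API rather than a normal client")
    return flags

# Hypothetical account: every check fires, so it is worth reporting.
example = {"profile_image": None, "tweet_count": 3,
           "following": 900, "followers": 12, "source": "api"}
print(bot_warning_signs(example))
```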

2) KNOW HOW TO TURN THEM IN
Activists using social networking to communicate with fellow citizens need to know how to report attacks. All major social media sites have teams waiting to respond to such misuse. Informing these sites of misuse and propaganda helps them improve their misuse-management tactics, and keeps feeds more open and democratic.

3) BE DYNAMIC WITH HASHTAGS
Don’t get stuck using a single hashtag to organize demonstrations and communicate with followers. Using one hashtag makes it that much easier for bots to co-opt the tag. Develop a game plan with fellow activists and have a list of multiple searchable hashtags associated with your cause. Send out multiple messages using multiple hashtags to other activists. In this way, activists who are targeted with Twitter-bombs or bot disruptions can respond fluidly, and have alternatives in place for action.

4) WATCH FOR FOLLOWER PADDING
If the accounts of politicians or other political actors unexpectedly gain thousands of followers overnight, it’s likely that many of these new followers are bots. Fake followers are incredibly cheap and easy to purchase. Politicians around the world have admitted to using them in order to seem more popular to voters. Get the word out about this phenomenon. High follower numbers do not equal popularity.

5) CODE YOUR OWN BOTS?
The more activists learn about coding and the workings of their digital tools, the better they become at using those tools. Bots can be used on social media for many positive and useful tasks, and it would be wise for democracy advocates to become more efficient and savvy with online organizing and publicity efforts. Bots are one way to streamline social media conversations and activity. Python is a straightforward, relatively simple computer language to learn for these purposes. Democracy can only benefit from educators, coders, peacemakers, and policy-makers connecting to create and execute more dynamic plans for promoting free speech, equality, and justice online. Even well-intentioned bots add to social media clutter, though, so civic groups should weigh the advantages and disadvantages before deploying them.
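To make this concrete, here is a minimal sketch of a civic-minded bot in Python. It assumes the third-party tweepy library and a set of Twitter developer credentials; the placeholder keys, the example messages, and the example.org links are all hypothetical. Twitter’s API versions and terms change over time, so treat this as the general shape of such a bot rather than guaranteed-current code.

```python
import time
import tweepy

# Placeholder credentials: replace with your own developer keys.
CONSUMER_KEY = "your-consumer-key"
CONSUMER_SECRET = "your-consumer-secret"
ACCESS_TOKEN = "your-access-token"
ACCESS_TOKEN_SECRET = "your-access-token-secret"

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

# Hypothetical civic task: periodically share voter-information links.
ANNOUNCEMENTS = [
    "Polls are open today until 8pm. Find your polling place: https://example.org/vote",
    "Remember to bring ID if your state requires it: https://example.org/id-rules",
]

for message in ANNOUNCEMENTS:
    api.update_status(message)
    time.sleep(60 * 60)  # post hourly; pace honestly, unlike the bots above
```

A bot like this should identify itself as automated in its profile; transparency is what separates civic bots from the propaganda bots described above.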

Sam Woolley is a Ph.D. student in the University of Washington’s Department of Communication. Phil Howard is a professor at UW’s Department of Communication. They, along with several colleagues, recently won an Early-concept Grant for Exploratory Research (EAGER) from the National Science Foundation (NSF) to track and understand political bots.
