Amid election fraud concerns: Could robots decide a US election?

A Trump bot strategy could build a false perception of hundreds of thousands of Americans alleging electoral fraud.

A supporter of President Donald Trump holds a sign stating "STOP THE STEAL" and a pin stating "Poll Watcher" across the street from where Pennsylvania general election ballots are being counted, after Democratic presidential nominee Joe Biden overtook Trump in the state's vote count. (photo credit: REUTERS)
Before the dust had even settled, President Trump cried foul. In a closely contested election, where states are decided by just thousands of votes, the legitimacy of the result is already being cynically called into question. In previous elections, the burden of proof for plaintiffs was high: major media outlets, judges and election officials had to be convinced of the existence of fraud. Now, thanks to social media and the bots that swarm it, Trump can take his claims directly to people who know nothing about the election process.
A bot is a digital golem, lashed together from a few lines of code. Bots cloak themselves in verisimilitude: profile pictures of real humans and a long history of posts can hide the fact that they exist only to amplify the message of their master. They re-tweet, like, share and post whatever they are told to, mindlessly generating the illusion of popular discourse. Their exact number is unknowable, with some estimates running as high as half of all accounts, and in the hands of operators with sinister motives they have begun to influence our world.
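To see just how few lines of code such a golem requires, consider a minimal sketch in Python using the Tweepy library and Twitter's official developer API. The credentials, message and tweet IDs below are hypothetical placeholders, not drawn from any real operation; this is an illustration of the mechanics, not a working influence tool.

```python
# amplifier_bot.py -- minimal sketch of an amplification bot (hypothetical placeholders).
# It does nothing more than post, re-tweet and like whatever it is told to.
import tweepy

# Placeholder developer-account credentials (assumption: Twitter API v2 access via Tweepy).
client = tweepy.Client(
    consumer_key="YOUR_CONSUMER_KEY",
    consumer_secret="YOUR_CONSUMER_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# The "master" supplies a message to repeat and a list of tweets to amplify.
MESSAGE = "Example message the operator wants repeated."
TARGET_TWEET_IDS = ["1234567890123456789"]  # hypothetical tweet IDs

client.create_tweet(text=MESSAGE)  # post
for tweet_id in TARGET_TWEET_IDS:
    client.retweet(tweet_id)       # re-tweet
    client.like(tweet_id)          # like
```

In practice, platform rate limits, account verification and bot-detection systems make running such accounts at scale far harder than this fragment suggests.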
Bots are no laughing matter, as evidenced by the “Gerasimov Doctrine.” Sounding like a James Bond plot device, the doctrine is named after its author, Chief of the Russian General Staff Valery Gerasimov. The doctrine states that social media can and should be used to sow discord in foreign countries, paralyze political processes and enhance societal tensions. Such was the case during the annexation of Crimea.
During the 2014 Crimean crisis, Russia used thousands of bots to spread false news reports, among them allegations that Ukrainian soldiers had raped Crimean women and poisoned water wells. These brutal yet fictitious attacks turned Russia’s incursion into a humanitarian mission. This success fueled Russia’s ambitions. During the Brexit debate, according to the Times, Russian bots tweeted more than 150,000 pro-Brexit tweets in the build-up to the vote. This time, the goal was to build an army of millions of eager Brexiteers.
A Trump bot strategy could build a false perception of hundreds of thousands of Americans alleging electoral fraud. As the cry is repeated by a million mindless golems, it will appear that a wave of Americans shares Trump's delusion. Enough real humans might participate to make the chorus impossible to distinguish from a mass movement.
The threat is not that this will lead to insurrection, but to obfuscation. By conjuring a smokescreen of illegitimacy, Trump will have created cover for dozens of frivolous lawsuits designed to jam the electoral process in key states. Even if the Republican challenges fail, they will have cast doubt on the legitimacy of the Biden presidency, an albatross around his neck and a slur for Republicans to rally around for the next four years.
The march of the bots, however, has its limits. Social media companies are cracking down, driven in part by advertisers who refuse to pay to advertise to bots. Executives are tired of being hauled before panels in Washington to explain their actions. Most tellingly, even in countries with sophisticated bot networks, such as Russia and China, governments have found bots to be no substitute for old-fashioned means of suppressing dissidents. Meanwhile, truly mass movements such as Black Lives Matter have proved more robust and influential than comparable right-wing groups. Oddly, it appears that the more reality intrudes, the weaker bots become.
This weakness stems in part from the fact that most individuals still keep a healthy “news diet.” Most Twitter users do not live in the Twittersphere. Instead, we gather information from different sources, debate with our families and chat with our colleagues. Democracy relies on a broad base of volunteers: poll workers, election observers, even the postal workers who deliver the ballots. We know these people. Through them our faith in democracy is made more resilient. They help shape our sense of reality, limiting the power of bots. There are limits to branding, too: if bots could shape public opinion so successfully, Coca-Cola and Disney would be greater threats than Russian programmers.
It is also important to remember that bots are not super-weapons. Governments, news organizations and social media companies are intimately familiar with them. The British Foreign Office boasts a big data unit created to counteract bots. The Israeli Foreign Ministry has established a code-writing unit to monitor Twitter. If anything, it is likely that the coming years will see the extensive powers and resources of government turned toward limiting the effect of bots, making them less of a threat than ever before. In the long run, these digital golems will find that influence is an art too subtle to be left to machines.
Louis Soone studies the Internet and future technology. Dr. Ilan Manor is a digital diplomacy scholar at Tel Aviv University.