Facebook: No coordinated efforts to skew Israeli elections identified

More than 35,000 Facebook employees are currently tasked with ensuring the safety and security of users on the platform, including overseeing the strategy of protecting election integrity.

Fosco Riani, Public Policy Associate Manager in Facebook's EMEA elections team. (photo credit: TOMER FOLTYN)
Facebook has not identified any coordinated efforts to unfairly influence the result of the upcoming Israeli election via its platform, a senior representative from the social-media giant said on Monday.
Prior to the Knesset election last April, Facebook removed almost 1,300 pages, groups and accounts connected to Iran for engaging in “coordinated inauthentic behavior” targeted against Israel. The efforts were led by two separate Iranian networks.
According to Fosco Riani, associate manager of public policy in Facebook’s EMEA elections team, the platform “did not identify any such behavior in the second round of elections and has not found any evidence in this one.”
More than 35,000 Facebook employees are currently tasked with ensuring the safety and security of users on the platform, including overseeing the strategy of protecting election integrity, he said. The company has tripled its security-oriented workforce since 2017, responding to intense scrutiny that followed the 2016 US presidential election.
“If we think of all the social activity happening on the platform, then the vast majority is innocent and positive use,” Riani told reporters at Facebook’s Tel Aviv office. “Only a tiny portion constitutes violations of our policies, and the majority will consist of spam and fraud.”
“A very small portion is what we call an ‘information operation,’ a coordinated inauthentic behavior to attain a specific goal,” he said. “In the context of elections, it could be with the objective of skewing the public debate towards a specific result. Despite the fact that it is a small part of the fraudulent activity, we are committed to making it as hard as possible for actors to abuse the platform because of the impact it can have.”
Facebook removed approximately 50 networks engaging in such behavior in 2019. In each case, it shared its findings with the public and with the Digital Forensic Research Lab of the Atlantic Council, a Washington-based think tank. Aiming to prevent the spread of misinformation, Facebook shut down about 1.7 billion fake accounts in the third quarter of 2019.
“Preventing voter suppression or incorrect voter information is important to us,” Riani said. “We have clear standards on the misrepresentation of modalities of vote. Whenever we find this kind of content, and we proactively search the platform, we remove it from the platform completely. A common behavior we see is claiming that a certain region or district will vote on another day than the rest of the country. That content may lead people in the region to not show up to the ballot.”
During the 2018 Israeli municipal elections, Facebook removed false content alleging that one mayoral candidate in Kiryat Motzkin had withdrawn from the race, he said.
Last March, Facebook launched a series of new political advertising transparency tools to help prevent foreign interference in Israel’s general election and make electoral advertising on Facebook more transparent. The tools disallowed electoral ads purchased from overseas, and local advertisers were required to complete a two-factor authentication process to purchase political ads.
The social network also created a publicly searchable library that retains electoral ads for up to seven years, including details on how much was spent on individual advertisements and demographic information about whom each advertisement reached. Since August 2019, Israeli organizations have spent nearly NIS 21.5 million ($6.22m.) on almost 38,400 ads concerning social issues, elections or politics.
“Fighting misinformation is one of the most important things we do at Facebook,” said Jessica Zucker, the company’s product policy manager. “It is important to note that the manipulation of information is not a new phenomenon, but the scale that it can spread on social media is new.”
She presented Facebook’s policies to combat misinformation, which include removing content judged to represent electoral interference, reducing the spread of viral misinformation and presenting users with additional information to help them better understand the origin of news-feed content.
Despite these measures, Zucker said, the social network refuses to fact-check or censor political advertisements, even if they are potentially misleading.
“We think that censoring politicians, or restricting what people can see, will limit what their elected officials are saying and leave people less informed,” Zucker said. “It also means that elected officials are less accountable for what they say.”
“That doesn’t mean that politicians can say anything they want,” she said. “If a politician violates our community standards, we’ll take that down. If a politician tries to share content that has previously been debunked by our fact-checkers, their content will be treated in the same way as anyone else.”