New steps needed against AI disinformation

The use of deepfakes has already succeeded in causing damage internationally.

Israeli Minister of National Security Itamar Ben-Gvir at the scene where five people were shot dead in the Arab town of Yafa an-Naseriyye, northern Israel, June 8, 2023. (photo credit: FLASH90/FADI AMUN)

Imagine the following scenario: A video published online shows National Security Minister Itamar Ben-Gvir making the following statement: “Senior officers in the Israel Police have informed the minister that the WhatsApp and Facebook groups of the anti-reform movement contain large quantities of inciting messages that could endanger the lives of elected coalition officials. As a result, the minister has decided to shut down the social media groups of opposition civil groups for 21 days, to ‘calm the spirits and maintain public safety.’”

Public opinion erupts; opposition Knesset members quickly describe Netanyahu as a dictator; spontaneous demonstrations in Tel Aviv rapidly turn into violent clashes between police and demonstrators. Then the video turns out to be fake, created by Hamas using AI to undermine internal stability in Israel.

Although this scenario is imaginary, rapid technological developments in AI, on the one hand, and rising public tension surrounding the legal reform, on the other, are bringing closer the day when such an event could occur in Israel, which currently lacks sufficient tools to counter so serious a threat.

In recent years, and especially since the launch of ChatGPT last November, there has been wide public discussion of the far-reaching effects of AI on our lives. While AI may drive improvements in fields such as health, education, and agriculture, it may also allow malicious actors to harm people, businesses, and states in various ways.

Much of that harm can result from the ability to rapidly produce and spread high-quality false information. Using generative AI tools, including large language models (LLMs) such as ChatGPT for text and models that generate images and video, it is possible to create content that presents fictitious events and facts as though they were real. Such content is described as “deepfake” (a portmanteau of “deep learning” and “fake”).

A keyboard is seen reflected on a computer screen displaying the website of ChatGPT, an AI chatbot from OpenAI, in this illustration picture taken Feb. 8, 2023. (credit: REUTERS/FLORENCE LO/ILLUSTRATION/FILE PHOTO)

Experts view AI-based disinformation as more worrisome than ordinary “fake news.” Studies show that fake AI-generated images, texts, and speech samples are often perceived as convincing and that humans have difficulty debunking them, which makes detecting and removing such information from social media far more challenging. In addition, the means of creating high-quality false content are becoming cheaper, allowing small groups (and even individuals) to carry out effective, wide-ranging influence operations with a tool that was previously the exclusive domain of state actors.

In the Israeli context, these developments may expand the disinformation landscape. Until now, Iran and Russia have been the main sources of political disinformation spread in Israel; AI may give terrorist groups, which have already recognized the propaganda value of the intra-Israeli debate, the ability to amplify their subversive activity.

Deepfakes causing international damage

The use of deepfakes has already succeeded in causing damage internationally. Last May, an AI-generated image showing heavy smoke billowing from the Pentagon was published and spread on social media, temporarily wiping billions of dollars off the US stock market. Bloomberg estimated that this was probably the first instance of AI-generated content moving the market.

Admittedly, countering disinformation is not a new challenge, and it has already received extensive attention from the Israeli government, of which the Health Ministry’s handling of COVID-19 disinformation is a prominent example. However, a review of actions taken worldwide shows that the unique characteristics of AI disinformation require additional measures and steps that have yet to be implemented in Israel.


First, digital literacy education programs can be launched in which students are trained to distinguish between false and true information, including AI-based content; MIT researchers have already initiated such programs among college and middle school students in the US. Second, technological research projects can be promoted to develop solutions for detecting AI-based disinformation.


Recently, the European Digital Media Observatory (EDMO) – a research project sponsored by the European Commission – announced that it intends to cooperate with four European research projects focusing on developing tools for detecting AI-generated content. Finally, regulation defining the permissible conditions for the use of AI in social media publications can be drafted.

Many countries have enacted similar laws, but the recent initiative of the Federal Election Commission (FEC) – the body that enforces campaign finance law in US federal elections – is worthy of note. Ahead of the 2024 US election, and amid official warnings about the devastating potential of AI-based disinformation, the FEC launched a public consultation in early August on shaping guidelines for the use of AI in campaign advertisements.

A public discussion of this kind in Israel could balance the need to defend against the dangers of disinformation with the desire to ensure freedom of expression.

The writer is a researcher at Yuval Ne’eman Workshop for Science, Technology and Security, Tel Aviv University.