How bots and fake users threaten internet integrity and business security - opinion

Bots now make up 47% of internet traffic, impacting national security and businesses by distorting data and spreading misinformation.

TikTok: The avatar is on the way (photo credit: Dr. Itay Gal)

When you read a product review on Amazon, browse through the comments section of an article on CNN, or get annoyed at a provocative tweet, can you be sure the individual behind the screen is a living, breathing person?

Absolutely not.

A recent report by Imperva revealed that bots make up 47% of all internet traffic, with “bad bots” comprising 30%. These staggering statistics threaten the integrity upon which the open web has been built.

Yet even when a user is human, there’s a good chance that their account is operating under a fake identity, meaning “fake users” are currently as prevalent online as authentic ones.

We are no strangers to the existential risk of bot campaigns here in Israel. Following October 7, large-scale misinformation campaigns, orchestrated by bots and fake accounts, manipulated public opinion and policymakers.

Monitoring online activity during the war, The New York Times found that “in a single day after the conflict began, roughly 1 in 4 accounts on Facebook, Instagram, TikTok, and X, formerly Twitter, posting about the conflict appeared to be fake... In 24 hours after the blast at Al-Ahli Arab hospital, more than 1 in 3 accounts posting about it on X were fake.”

Danny Akerman (credit: Key1 Capital)

With 82 countries holding elections in 2024, the risk posed by bots and fake users is reaching crisis levels. Just last week, OpenAI deactivated accounts belonging to an Iranian group that was using ChatGPT to generate content aimed at influencing the US elections.

Election influence and the widespread impact of bots

As Rwanda prepared for its July elections, researchers at Clemson University uncovered 460 accounts disseminating AI-generated messages on X in support of incumbent president Paul Kagame. And in the past six months alone, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) has identified influence campaigns targeting Georgian protesters and spreading confusion about an Egyptian economist’s death, both powered by inauthentic X accounts.

Bots and fake users have detrimental consequences for national security, but online businesses are also paying a heavy price.

Imagine a business where 30-40% of all digital traffic is generated by bots or fake users. This scenario creates a cascade of problems, including skewed data that leads to misguided decision-making, impaired understanding of customer funnels and website analytics, sales teams pursuing false leads, and developers focusing on products with illusory demand.


The implications are staggering. A study by CHEQ.ai, a Key1 portfolio company and go-to-market security platform, revealed that in 2022 alone, over $35 billion in ad spend was wasted and more than $140 billion in potential revenue was lost.

Ultimately, fake users and bots undermine the very foundations on which modern-day business is built, creating distrust in the data, results, and in some cases, among teams.

The introduction of generative AI into the mix has only added fuel to the fake web’s fire. The technology “democratizes” the creation of bots and fake identities, lowering the barriers to attack, increasing their sophistication, and meaningfully expanding their reach.

The scope of this growing problem cannot be overstated. But what, if anything, can be done to minimize the tremendous economic, geopolitical, and social damage?

It’s time for a global response to take back control and rebuild our trust in the internet.

Education is crucial in combating the fake online epidemic. By raising awareness of the tactics of bots and fake accounts, we can empower society to recognize and mitigate their impact. Understanding the telltale signs of inauthentic users – such as incomplete profiles, generic information, repetitive phrases, abnormally high activity levels, shallow content, and limited engagement – is a vital first step. However, as bots become increasingly sophisticated, this challenge will only grow more complex, underscoring the need for ongoing education and vigilance.
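To make the telltale signs above concrete, here is a toy scoring sketch. All field names and thresholds are illustrative assumptions, not the criteria any real platform uses; production detection relies on far richer signals and machine-learned models.

```python
# Toy illustration only: scores an account against the telltale signs
# of inauthentic users. Thresholds are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    profile_complete: bool
    posts_per_day: float
    distinct_phrases_ratio: float    # unique phrases / total posts
    replies_received_per_post: float

def suspicion_score(a: Account) -> int:
    score = 0
    if not a.profile_complete:
        score += 1                   # incomplete or generic profile
    if a.posts_per_day > 100:
        score += 1                   # abnormally high activity level
    if a.distinct_phrases_ratio < 0.3:
        score += 1                   # repetitive phrases, shallow content
    if a.replies_received_per_post < 0.1:
        score += 1                   # limited engagement from others
    return score                     # higher score = more bot-like

bot_like = Account(False, 400.0, 0.1, 0.0)
human_like = Account(True, 5.0, 0.9, 1.2)
print(suspicion_score(bot_like))    # prints 4
print(suspicion_score(human_like))  # prints 0
```

Even a crude checklist like this shows why no single signal suffices: each test is easy to evade on its own, which is why combining many weak indicators, and updating them continuously, matters.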

In addition, public policies and regulations must take effect to restore trust in digital environments. For example, governments can and should require large social networks to implement best-of-breed bot-mitigation tools to help police fake accounts.

Striking the right balance between the freedom of these networks, the integrity of the information posted, and the potential harm caused is not an easy task to accomplish. Yet establishing these boundaries is a necessity to preserve the longevity of these networks.

On the business front, various tools have been developed to mitigate and block invalid traffic. These range from basic bot-mitigation solutions that prevent distributed denial-of-service (DDoS) attacks to specialized software protecting APIs from bot-driven data-theft attempts.

More advanced bot-mitigation solutions employ sophisticated algorithms that perform real-time tests to ensure traffic integrity. These tests analyze account behavior, interaction levels, hardware characteristics, and automation tools. They also detect nonhuman behavior, such as abnormally fast typing, and scrutinize email and domain history.
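One such behavioral test, detecting abnormally fast typing, can be sketched in a few lines. This is a simplified assumption of how such a check might work, not the method of any particular vendor; the 30 ms threshold is purely illustrative.

```python
# Hypothetical sketch of one real-time behavioral test: flagging
# "abnormally fast typing" from inter-keystroke intervals.
def looks_automated(keystroke_times_ms: list[float],
                    min_human_interval_ms: float = 30.0) -> bool:
    """Return True if the median gap between keystrokes is implausibly fast."""
    intervals = [b - a for a, b in
                 zip(keystroke_times_ms, keystroke_times_ms[1:])]
    if not intervals:
        return False                 # too little data to judge
    # Humans rarely sustain sub-30 ms gaps; scripts often "type"
    # uniformly fast or paste whole strings at once.
    median = sorted(intervals)[len(intervals) // 2]
    return median < min_human_interval_ms

# A script emitting a character every 5 ms vs. a human at ~120 ms:
print(looks_automated([i * 5.0 for i in range(20)]))    # prints True
print(looks_automated([i * 120.0 for i in range(20)]))  # prints False
```

In practice such timing checks are only one signal among many; sophisticated bots deliberately randomize delays, so vendors combine them with hardware fingerprints, interaction patterns, and email and domain history, as described above.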

While AI has contributed to the bot problem, it’s also proving to be a powerful tool in combating it. AI’s enhanced pattern recognition capabilities allow for more accurate and rapid distinction between legitimate and illegitimate bots. Companies like CHEQ.ai are leveraging AI to help marketers ensure their ads reach human users and are placed in safe, bot-free environments, effectively countering the growing threat of bots in digital advertising.

From national security to business integrity, the consequences of the “fake internet” are as broad as they are dire. Yet there are several effective methods to mitigate the problem, methods that deserve a renewed public and private focus. By raising awareness, enhancing regulation, and instituting active protection, we can all contribute to a more accurate and far safer internet environment.

The writer is cofounder and partner at Key1 Capital.