Bot Traffic Surpasses Humans Online—Driven by AI and Criminal Innovation

AI is helping bot operators scale up their operations, cut costs, and evade detection more effectively.

Bots now generate more than half of all internet traffic (51%). Of that, 37% comes from bad bots causing harm, and just 14% from good bots. Criminals are using AI to create more bots, and the trend is expected to accelerate.

A chart from Imperva's report on bad bots shows internet traffic patterns over the last decade.

The rise is concentrated in bad bots, especially simple ones that attack in large volumes. AI lets even inexperienced operators generate bots and deploy them: they start with simple attacks while learning, then graduate to more advanced ones as their skills improve. The implication is that future attacks will come from dangerous advanced bots produced as quickly as simple ones are today, so the threat from bad bots is likely to grow.

Imperva sorts bots into ‘simple’ and ‘advanced’. Simple bots are easy to spot and trace: they look identical and are not hard to defend against, according to Tim Chang of Thales, which acquired Imperva in 2023. Advanced bots, by contrast, mutate constantly and are harder to catch, so they cause more damage.
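A hypothetical illustration of why simple bots are easy to defend against: they tend to reuse identical, well-known fingerprints such as default user-agent strings. The signature list below is invented for illustration, not taken from Imperva's report:

```python
# Minimal sketch of the kind of static filtering that stops "simple" bots.
# Hypothetical signature list: default user agents of common HTTP tooling.
BAD_UA_SIGNATURES = ("python-requests", "curl/", "scrapy")

def is_simple_bot(user_agent: str) -> bool:
    """Flag a request whose user agent matches a known static fingerprint."""
    ua = user_agent.lower()
    return any(sig in ua for sig in BAD_UA_SIGNATURES)

print(is_simple_bot("python-requests/2.31.0"))                 # → True
print(is_simple_bot("Mozilla/5.0 (Windows NT 10.0; Win64)"))   # → False
```

Advanced bots defeat exactly this kind of check by rotating realistic browser fingerprints, which is why they require behavioral detection instead.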

For now, simple attacks are growing fastest, says Chang. As AI matures and attackers get better at using it, evasion will improve and attacks will become more advanced.

Imperva’s annual report examines current bot trends. Two key findings stand out: API-targeting bot attacks are rising (44% of advanced bots target APIs), and account takeover (ATO) attacks are up 40% from the previous year.

The main types of API bot attacks are data scraping (31%), payment fraud (26%), account takeover (12%), and scalping, i.e., automated bulk purchasing (11%). These attacks exploit weaknesses in APIs, such as misconfigurations and weak security controls.
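One basic control that blunts the high-volume API abuse described above is rate limiting. As a generic illustration (not anything from Imperva's report; the class name and parameters are invented), here is a minimal token-bucket sketch:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: each request spends a token; tokens
    refill at a steady rate, so sustained bot bursts get throttled."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(10)]  # instantaneous burst of 10
print(results.count(True))  # → 5: the burst drains the bucket, the rest wait
```

Rate limiting alone will not stop advanced bots that pace their requests, but it removes the cheapest, highest-volume attacks from the picture.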

The most common AI-assisted bots are ByteSpider Bot (54% of AI-driven attacks), AppleBot (26%), Claude Bot (13%), and ChatGPT User Bot (6%). ByteSpider Bot is often mistaken for the legitimate ByteSpider web crawler operated by ByteDance, which gathers data for training AI models.

The morality and legality of such data gathering under GDPR and the AI Act remain unclear, and ByteDance is far from alone in collecting data to train AI. Blocking web crawlers is technically easy, but companies avoid doing so to keep beneficial bots working. The report notes that criminals sometimes disguise bad bots as good crawlers to slip past security controls that allowlist known crawlers.
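Defenders can counter this kind of crawler impersonation with reverse-DNS verification, the method Google documents for verifying Googlebot: resolve the connecting IP to a hostname, check the hostname against the operator's published domains, then confirm with a forward lookup. A minimal sketch, assuming a suffix table the site operator maintains (the one below is illustrative and incomplete):

```python
import socket

# Hostname suffixes published by crawler operators (illustrative subset;
# Google documents googlebot.com/google.com for Googlebot).
TRUSTED_SUFFIXES = {
    "Googlebot": (".googlebot.com", ".google.com"),
    "bingbot": (".search.msn.com",),
}

def verify_crawler(claimed_name: str, ip: str) -> bool:
    """True only if the IP's reverse DNS matches a trusted suffix AND a
    forward lookup of that hostname resolves back to the same IP."""
    suffixes = TRUSTED_SUFFIXES.get(claimed_name)
    if not suffixes:
        return False
    try:
        host, _, _ = socket.gethostbyaddr(ip)        # reverse lookup
    except OSError:
        return False
    if not host.endswith(suffixes):
        return False
    try:
        _, _, addrs = socket.gethostbyname_ex(host)  # forward confirmation
    except OSError:
        return False
    return ip in addrs

print(verify_crawler("FakeBot", "203.0.113.5"))  # → False: unknown crawler
```

A bad bot can forge its user-agent string, but it cannot forge the crawler operator's DNS records, which is what makes this check stronger than allowlisting by user agent alone.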

AI is changing how attackers operate and improving their results. It lets people with no prior expertise generate code and scale up their attacks, and it refines evasion techniques. Bot operators also use AI to analyze attack performance and adjust their tactics, Chang explained.

The role of AI in bots will only grow; expect more attacks and more advanced techniques. In 2024, Imperva blocked about 13 trillion bot requests and identified around 2 million AI-enabled attacks per day, ranging from simple to complex. As AI advances, Chang warns, detecting these newer advanced bots will become even tougher.