SecurityBrief Ireland - Technology news for CISOs & cybersecurity decision-makers
Bots make up 53% of web traffic, Thales report says

Thu, 30th Apr 2026
Shannon Williams, News Editor

Thales' 2026 Bad Bot Report found that bots accounted for 53% of web traffic in 2025, with 40% of that traffic classified as malicious.

The findings point to a further shift in internet activity away from human users and towards automated systems, with artificial intelligence making malicious traffic harder to distinguish from legitimate machine interactions.

Human activity fell to 47% of web traffic in 2025, down from 49% a year earlier. Bot traffic rose from 51% to 53% over the same period, suggesting automated activity is becoming a permanent feature of online services rather than a pattern limited to bursts of credential stuffing or scraping.

The most significant change was not only the rise in volume but also the emergence of AI agents as a distinct source of traffic alongside conventional good and bad bots. These systems interact directly with applications and APIs to retrieve information and carry out tasks, complicating efforts to assess intent.

AI-driven bot attacks increased 12.5-fold year on year, adding to pressure on security teams already dealing with automated traffic that may use valid credentials and normal-looking requests.

API focus

A growing share of attacks now targets APIs, which have become central to delivering digital services. The report found that 27% of bot attacks targeted APIs, allowing attackers to bypass user interfaces and communicate directly with backend systems.

That route can give bots direct access to business logic, data flows and transaction processes. Because such requests may be technically valid, the activity can be harder to detect with older methods that focus on blocking clearly suspicious traffic.

Identity systems are also under greater pressure as attackers seek access through account takeover and other forms of automated abuse. In sectors where digital accounts carry immediate financial value, the threat is especially acute.

Sector exposure

Financial services was the most targeted sector, accounting for 24% of all bot attacks and 46% of account takeover incidents. The figures suggest attackers are concentrating on businesses where automated intrusions can be quickly turned into financial gain.

The report places that trend within a broader shift in online security, where the key challenge is no longer simply whether traffic is generated by a human or a machine. Instead, organisations are being pushed to judge whether automated behaviour is authorised and whether it matches the intended use of a service.

"AI is transforming automation from something organizations try to block into something they must also manage," said Tim Chang, Global Vice President and General Manager, Application Security at Thales.

"The challenge is no longer identifying bots. It's understanding what the bot, agent, or automation is doing, whether it aligns with business intent, and how it interacts with critical systems," Chang said.

That shift matters for operators of consumer platforms, financial systems and online business services that rely on APIs to handle transactions and data exchange. As more digital systems depend on machine-to-machine communication, distinguishing between approved automation and malicious activity becomes harder without closer monitoring of behaviour.

Many organisations still have only a partial view of AI-driven traffic moving across their systems, leaving some activity unverified or difficult to separate from legitimate use and creating blind spots in risk management.

Governance model

Traditional approaches centred on identifying and blocking bots are becoming less effective in an environment where automation is widespread and not always hostile. The report argues that organisations should move towards governance models that combine visibility, policy enforcement and behavioural analysis.

That approach would include setting rules on which AI agents may interact with systems, placing controls around APIs and identity layers, and adapting defences as automated methods change. The argument reflects a broader trend in cyber security towards assessing intent and behaviour rather than relying on simple labels.

Thales based the report on full-year 2025 bot activity analysed by its Threat Research and Security Analyst Services teams. The study examined the effect of AI-driven automation on application security, API exposure and digital infrastructure.

The company said the findings show the internet is becoming fundamentally machine-driven, with automated systems increasingly shaping traffic patterns and interacting with online services in real time.