SecurityBrief Ireland - Technology news for CISOs & cybersecurity decision-makers

AI inference becomes core operational workload in firms

Thu, 7th May 2026
Sofiah Nichole Salivio, News Editor

F5 has published its 2026 State of Application Strategy report, which suggests AI inference is now a core operational workload for most surveyed organisations.

Organisations run an average of seven AI models in production, and 78% operate their own inference infrastructure. The findings point to a shift away from AI as a trial project and towards routine use in business systems.

Inference has overtaken model building and training as the main AI activity for 77% of respondents. That shift is pushing AI into the same operational category as other production applications, with greater focus on governance, availability and security.

Only 8% of organisations rely solely on public AI services. The rest use a mix of models and environments, increasing the need for routing, fallback systems and policy controls to manage cost, accuracy and uptime.
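The routing-and-fallback pattern implied by that mixed-environment picture can be sketched in a few lines of Python. The backend names and failure behaviour below are invented for illustration and do not come from the F5 report:

```python
# A minimal sketch of model routing with fallback across inference backends.
# Backend names and behaviour are illustrative only.
class ModelRouter:
    """Try inference backends in priority order; fall back on failure."""

    def __init__(self, backends):
        # backends: ordered list of (name, callable) pairs, preferred first
        self.backends = backends

    def infer(self, prompt):
        errors = {}
        for name, call in self.backends:
            try:
                return name, call(prompt)
            except Exception as exc:  # production code would catch narrower errors
                errors[name] = str(exc)
        raise RuntimeError(f"all backends failed: {errors}")


def hosted_model(prompt):
    # stand-in for a public AI service that is currently unavailable
    raise TimeoutError("hosted service timed out")


def local_model(prompt):
    # stand-in for self-operated inference infrastructure
    return prompt.upper()


router = ModelRouter([("hosted", hosted_model), ("local", local_model)])
backend, answer = router.infer("hello")
```

In a real deployment the routing decision would also weigh cost, accuracy and latency per backend, which is the policy layer the survey respondents describe.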

Hybrid pressure

The survey also points to the spread of hybrid multicloud setups. It found that 93% of organisations now work across multiple clouds, while 86% run applications across on-premises, public cloud and colocation environments.

That infrastructure mix is adding complexity as companies try to deliver and secure AI workloads in several places at once. More than half (52%) said they are orchestrating multiple AI models, turning inference into a distributed systems issue rather than a single-platform task.

Security concerns are also rising as AI systems become more embedded in operations. Among respondents, 88% had faced AI-related security challenges, while 98% are preparing for agentic AI systems that require identities, permissions and controls similar to those used for human users.

The report said 77% expect identity and access problems linked to AI agents as automation expands. Nearly two-thirds already allow AI to adjust policies and configurations on its own, suggesting some organisations are giving automated systems a more direct role in IT operations.
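Treating an AI agent like a user identity, as respondents describe, amounts to attaching scoped permissions to each agent and checking them before any automated action. The agent and scope names below are hypothetical:

```python
# Hypothetical sketch: AI agents as identities with scoped permissions,
# analogous to human user accounts. Agent and scope names are invented.
AGENT_SCOPES = {
    "traffic-tuner-agent": {"read:metrics", "write:rate-limits"},
    "reporting-agent": {"read:metrics"},
}


def authorize(agent_id: str, action: str) -> bool:
    """Allow an agent to act only if its identity carries the required scope."""
    return action in AGENT_SCOPES.get(agent_id, set())


# One agent may adjust rate limits; the other may only read metrics.
allowed = authorize("traffic-tuner-agent", "write:rate-limits")
denied = authorize("reporting-agent", "write:rate-limits")
```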

Control layers

Control points are also moving higher up the stack. In the survey, 29% of organisations identified prompt layers as the main delivery mechanism for AI workloads, while 23% said token layers were their main priority for both delivery and security.
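A token-layer control of the kind those respondents prioritise can be as simple as a per-caller token budget enforced in front of inference endpoints. This sketch assumes invented caller names and limits:

```python
# Illustrative sketch of a token-layer control: a per-caller token budget
# checked before requests reach a model. Limits and caller names are invented.
class TokenBudget:
    def __init__(self, limit_per_caller: int):
        self.limit = limit_per_caller
        self.used: dict[str, int] = {}

    def allow(self, caller: str, tokens: int) -> bool:
        """Admit a request only if the caller's cumulative token spend fits."""
        spent = self.used.get(caller, 0)
        if spent + tokens > self.limit:
            return False
        self.used[caller] = spent + tokens
        return True


budget = TokenBudget(limit_per_caller=100)
first = budget.allow("team-a", 60)    # within budget
second = budget.allow("team-a", 50)   # would push team-a past 100
third = budget.allow("team-b", 100)   # separate caller, separate budget
```

Enforcing limits at the token layer rather than per request is what lets such a control cap cost directly, since inference spend scales with tokens, not calls.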

That suggests companies are focusing less on raw infrastructure alone and more on the interfaces and controls that govern how AI systems are used. It also reflects the challenge of managing access, cost and risk when models are distributed across several environments.

Kunal Anand, Chief Product Officer at F5, said the report showed a clear operational shift in how businesses are using AI.

"AI has moved from experimentation to operations. The question now is not whether companies will use AI, but whether they can run it reliably, securely, and at scale," Anand said.

He said the growth of inference workloads was changing the technical and governance demands around AI.

"This year's data shows a clear shift: AI inference is becoming core to the business, which means AI delivery is now a traffic management challenge, and AI security is now a governance and control challenge. The companies that understand this shift early will be the ones that move faster and more safely," Anand said.

Operational challenge

The findings are based on responses from hundreds of enterprise IT and security leaders around the world, according to F5. The company presents the results as evidence that AI is now tied more closely to day-to-day infrastructure decisions than to standalone innovation projects.

As organisations add more models and spread workloads across clouds and on-premises systems, the operational burden is widening. Distributed inference, hybrid infrastructure and automated decision-making are creating a more complex environment for security teams and technology leaders to manage.

For businesses, the issue is no longer simply whether to adopt AI tools. The central question is how to control cost, reliability, identity and policy enforcement when AI systems are treated as part of the production estate.

The figures suggest many organisations are already deep into that transition, with AI no longer confined to pilot schemes but woven into the infrastructure that supports everyday applications and services.