AI Agent Security Gaps Widen: Report
Organisations are deploying AI agents without adequate controls to govern them, according to new research from Rubrik Zero Labs.
The report, The State of the Agent: Understanding Adoption, Risk, and Mitigation, is based on a survey of more than 1,600 IT and security leaders. It finds enterprises are introducing autonomous decision-making systems they cannot fully observe or reverse.
The research found 86 per cent of respondents expect AI agents to outpace their organisation's security guardrails within the next year.
Only 23 per cent of respondents report full visibility into the agents operating in their environments, a figure the report's authors note is likely an overestimate.
Non-human identities tied to agents are proliferating faster than enterprises can track or govern. The report describes these as a "shadow workforce" - identities operating with persistent access and limited oversight.
The researchers say these identities create new pathways for misuse, compromise, and lateral movement across systems.
"AI adoption is outpacing our ability to control it," said Kavitha Mariappan, Chief Transformation Officer at Rubrik. "Enterprises are struggling because they've deployed systems they can't fully observe, govern, or restore."
The operational case for AI agents is also under pressure. More than 80 per cent of respondents say the manual oversight agents require outweighs the efficiency they deliver. Eighty-eight per cent say they lack the ability to roll back agent actions without system disruption.
Nearly nine in ten respondents expressed concern about meeting recovery objectives as agent-driven threats increase. The report found that recovery and prevention are emerging as primary points of failure.
Nearly half of respondents expect agentic systems to drive the majority of cyberattacks within the next year. The report says autonomous attack systems compress timelines, scale attacks, and blur the line between insider risk and external compromise.
The report combines survey data with technical analysis of attack vectors across what it describes as the tool, cognitive, and identity layers of AI systems. It argues that security strategy must now include maintaining control over systems that operate without human input.
