AI Agents Pose New Governance Challenges

Australia's artificial intelligence landscape is entering uncharted territory as autonomous AI agents begin operating with unprecedented independence, creating a complex web of governance challenges that existing legal frameworks struggle to address, according to new analysis from Herbert Smith Freehills.
The international law firm's latest research reveals that AI agents - sophisticated systems capable of autonomously executing tasks with minimal human intervention - represent a fundamental shift from traditional AI tools, moving beyond simple prompt-response interactions to goal-oriented, adaptive operations that can evolve in real time.
The analysis notes that while the term "Agents" is often used interchangeably with "Agentic AI", Agents are in fact only a subset of the broader category.
"It may be helpful to think that Agentic AI is the 'bigger picture' of which Agents form part. The governance frameworks for managing risks in Agents may be different to that for Agentic AI," the firm explains.
The Black Box Deepens
While AI transparency has long been hampered by the "black box problem," where algorithmic decision-making processes remain opaque, AI agents intensify this challenge exponentially. Unlike conventional AI systems, agents can execute countless micro-actions across multiple systems simultaneously, often beyond the visibility or control of deploying organizations.
"These micro-actions might not necessarily be within the control of, or visible to, the organisation deploying the Agent," the Herbert Smith Freehills analysis warns, highlighting how current transparency measures like disclosure notices and watermarking fall short of addressing agent complexity.
The firm's experts note that while techniques such as chain-of-thought reasoning and system logs can provide some visibility, they may not offer complete insights into agent behaviour, particularly for non-deterministic systems whose outcomes cannot be predicted with certainty.
Dynamic Risk in Real-Time
Perhaps most concerning is how AI agents can shift between risk levels during operation. The law firm illustrates this with a customer service scenario in which an agent initially designed to handle basic inquiries could autonomously evolve to access external databases, transfer funds, or collect personal information - dramatically escalating its risk profile without any explicit programming change.
"They can fluctuate between risk levels throughout a workflow," the analysis explains, describing agents that can "dynamically evolve their workflows over time, learning from interactions to improve their responses and actions, possibly in unexpected ways."
This dynamic nature demands new governance approaches that move beyond traditional one-time risk assessments to continuous monitoring systems capable of real-time policy enforcement and adaptive guardrails.
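To make the idea of continuous, per-action risk enforcement concrete, the following is a minimal sketch rather than anything drawn from the Herbert Smith Freehills analysis. The action names, risk tiers, and the enforce_policy function are all hypothetical; a real deployment would derive them from its own risk taxonomy and would typically trigger alerts and human review rather than simply blocking an action.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = 1
    ELEVATED = 2
    HIGH = 3

# Hypothetical mapping from proposed action types to risk tiers.
ACTION_RISK = {
    "answer_faq": RiskTier.LOW,
    "query_external_database": RiskTier.ELEVATED,
    "collect_personal_information": RiskTier.HIGH,
    "transfer_funds": RiskTier.HIGH,
}

def enforce_policy(action: str, approved_tier: RiskTier) -> bool:
    """Re-classify every proposed action at runtime and block anything
    above the tier the agent was approved for."""
    tier = ACTION_RISK.get(action, RiskTier.HIGH)  # unknown actions default to HIGH
    if tier.value > approved_tier.value:
        print(f"Blocked: {action} is {tier.name}, agent approved only for {approved_tier.name}")
        return False
    return True

# A customer-service agent approved for LOW-risk work tries to escalate itself.
for proposed in ["answer_faq", "query_external_database", "transfer_funds"]:
    if enforce_policy(proposed, RiskTier.LOW):
        print(f"Allowed: {proposed}")
```

The point of the sketch is that classification happens on every action, not once at deployment, which is what distinguishes continuous monitoring from a traditional one-time risk assessment.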
Technical Safeguards Take Centre Stage
The shift toward autonomous agents is forcing organizations to rely more heavily on technical safeguards rather than human oversight mechanisms. Traditional risk mitigation strategies focused on human behaviour - such as input controls and human-in-the-loop processes - become less effective when agents operate independently.
Emerging solutions include AI compliance platforms offering real-time monitoring dashboards, standardized agent protocols for third-party system integration, and "multi-model generative AI arbiters" in which panels of diverse AI models review and validate agent actions before execution.
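As a rough illustration of the arbiter-panel idea, the sketch below uses plain Python functions as stand-ins for calls to separate reviewer models; the reviewer names, the action fields, and the unanimous-approval rule are all assumptions for the example, and a production arbiter might instead use majority voting or weighted scoring.

```python
from typing import Callable

# Each reviewer stands in for a distinct model asked to judge one aspect of safety.
def reviewer_scope(action: dict) -> bool:
    return action["type"] in {"answer_faq", "lookup_order_status"}

def reviewer_data(action: dict) -> bool:
    return not action.get("touches_personal_data", False)

def reviewer_spend(action: dict) -> bool:
    return action.get("amount", 0) == 0

def arbiter_panel(action: dict, reviewers: list[Callable[[dict], bool]]) -> bool:
    """Approve an action only if every reviewer on the panel signs off."""
    return all(reviewer(action) for reviewer in reviewers)

panel = [reviewer_scope, reviewer_data, reviewer_spend]
proposed = {"type": "transfer_funds", "amount": 250, "touches_personal_data": True}
print("execute" if arbiter_panel(proposed, panel) else "escalate to human review")
```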
Legal Teams Under Pressure
Legal professionals face particular challenges as liability frameworks for autonomous agents remain largely undefined. The Herbert Smith Freehills team emphasizes that proactive collaboration between legal, business, risk, and technology teams is now essential rather than optional.
"Legal risk management is particularly important especially as the law regarding liability for Agents is currently unsettled," the analysis states, predicting increased regulatory scrutiny as these technologies become more prevalent.
Legal teams must now establish new documentation requirements, develop specialized incident response plans, and ensure comprehensive audit trails for agent decisions - all while negotiating vendor contracts that adequately address autonomous system deployments.
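One way to picture what an audit trail for agent decisions might involve is a structured, append-only log of every action an agent takes. The sketch below is illustrative only: the file name, agent identifier, and hash-chaining scheme are assumptions, not anything prescribed by the analysis, but they show how each record can reference the previous one so that gaps or tampering become detectable.

```python
import datetime
import hashlib
import json

def append_audit_record(log_path: str, agent_id: str, action: str,
                        inputs: dict, outcome: str, prev_hash: str) -> str:
    """Append a structured, hash-chained record of an agent decision."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "outcome": outcome,
        "prev_hash": prev_hash,
    }
    record_hash = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps({**record, "hash": record_hash}) + "\n")
    return record_hash

# Hypothetical usage: chain two decisions from a customer-service agent.
prev = "genesis"
prev = append_audit_record("agent_audit.jsonl", "cs-agent-01",
                           "query_external_database",
                           {"customer_id": "REDACTED"}, "returned 3 rows", prev)
prev = append_audit_record("agent_audit.jsonl", "cs-agent-01",
                           "answer_faq", {"topic": "refund policy"}, "response sent", prev)
```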
As Australia and other jurisdictions grapple with regulating this emerging technology landscape, the Herbert Smith Freehills analysis suggests that traditional AI governance frameworks may prove insufficient for the agentic era.
The firm advocates for dynamic risk classification systems, clear authority limitations, and robust technical safeguards to manage the unique challenges posed by autonomous agents.