AI Safety Takes Backseat in National Plan

Australia's National AI Plan has drawn sharp criticism from legal and academic experts who warn the government's decision to rely on existing legislation leaves organisations exposed to emerging risks in high-stakes automation and automated decision-making systems. The plan abandons previously proposed mandatory guardrails for high-risk AI systems, instead establishing a light-touch regulatory framework built on technology-neutral laws covering privacy, consumer protection and workplace safety.
This approach has divided experts, with critics arguing existing frameworks are inadequate for managing AI-specific governance challenges while supporters praise the focus on innovation and economic opportunity.
"The absence of new AI-specific legislation means Australia still needs clearer guardrails to manage high-risk AI," said Associate Professor Sophia Duan from La Trobe University. "Trustworthy AI requires more than voluntary guidance."
Dr Rebecca Johnson, an AI ethicist at the University of Sydney, said relying on technology-neutral laws fails to address the unique autonomy of modern AI agents.
“It’s like trying to regulate drones with road rules: some parts apply, but most of the risks fly straight past,” Johnson said. “AI agents don’t just generate text; they carry out tasks… That is a fundamentally different safety landscape.”
However, Professor Niloufer Selvadurai from Macquarie Law School praised the strategy for avoiding the complications of creating new laws for evolving tech.
“Given the complex and diverse applications of AI, I think this nuanced approach, premised on a regulatory gap-analysis, is to be welcomed,” Selvadurai said.
The plan focuses on three objectives: capturing economic opportunities through data centre investment, spreading benefits via workforce training, and keeping Australians safe through enhanced oversight. However, the government will rely on existing legal frameworks rather than introduce standalone AI legislation, a reversal from earlier consultation documents that proposed mandatory controls.
For organisations managing compliance processes and digital transformation, the decision creates uncertainty around governance requirements for AI systems used in automated decision-making, document processing and data analytics. The plan provides no specific guidance on how existing privacy, discrimination and consumer protection laws will apply to AI deployments in records management, information governance or business intelligence applications.
Dr Malcolm Thatcher, a digital risk governance specialist, said the lack of an AI-specific regulatory framework leaves organisations exposed.
"The Australian Government has dropped the ball on providing an AI-specific regulatory framework for the safe use of AI," he wrote. "This leaves organisations who don't fully understand AI and all its complexities including potential interaction with other laws, exposed to this increasing AI risk."
Legal experts from Bird & Bird noted the plan signals heightened regulatory scrutiny without creating new legal obligations. "For organisations operating in or into Australia, this Plan sets the direction of travel for investment, regulation, workforce policy and government procurement over the rest of this decade," the firm stated.
"Expect more public investment and procurement activity, alongside heightened expectations for responsible governance and transparency."
The plan establishes an AI Safety Institute with A$30 million in funding to monitor AI harms and advise on regulatory interventions. However, the institute will operate in an advisory capacity without statutory powers, relying on existing regulators to enforce technology-neutral legislation.
King & Wood Mallesons characterised the approach as reducing immediate compliance burden while requiring organisations to establish responsible AI governance frameworks.
"Although this does reduce the compliance burden on companies, they will still need to establish responsible AI Governance," the firm advised.
The regulatory gap analysis extends to critical areas for information managers and compliance professionals. Privacy law reforms remain incomplete, with the second tranche of amendments to the Privacy Act still without clear timeframes. Copyright protections for AI training data lack resolution, following the government's October 2025 decision not to introduce a broad text-and-data-mining exception.
For organisations deploying AI in information management, records processing or compliance workflows, the plan emphasises transparency and documentation requirements. The National AI Centre's "AI6" governance practices - released in October 2025 - provide baseline expectations for risk assessment, human oversight and accountability structures.
Minter Ellison noted the plan signals rising expectations even without new legislation.
"Expectations for governance and organisational readiness are rising, even without new laws," the firm stated. "While heavy regulation is paused, organisations will face higher expectations for transparency, testing, oversight and workforce capability."
The Australian Public Service AI Plan, released on 25 November 2025, establishes baseline requirements for government agencies including mandatory Chief AI Officers, AI literacy training and transparent reporting on AI deployments. This creates reference standards likely to influence private sector expectations around governance maturity and documentation practices.
