NZ Experts Demand Urgent AI Regulation
More than 20 artificial intelligence experts have called on New Zealand's government to introduce binding AI regulation, warning that current laws are inadequate to address escalating harms including deepfakes, algorithmic bias and automated abuse.
The open letter, published 1 September, urges Prime Minister Christopher Luxon and Opposition Leader Chris Hipkins to establish a national AI oversight body. The experts cite low public trust and documented cases of AI-generated non-consensual intimate images, discriminatory decision-making systems and potential manipulation of Māori narratives.
"AI harm is already occurring. I am asked nearly daily about the latest harms caused by an AI system," said Dr Andrew Lensen, Senior Lecturer at Victoria University of Wellington and co-author of the letter.
The experts propose a risk-based regulatory framework similar to the European Union's AI Act, which scales oversight intensity based on potential harm. High-risk applications affecting vulnerable populations or fundamental rights would face stricter requirements than low-risk tools like spell checkers.
Dr Cassandra Mudgway from the University of Canterbury highlighted specific inadequacies in existing legislation. The Harmful Digital Communications Act 2015 "was not designed with generative AI in mind" and struggles to address automated abuse across multiple accounts or synthetic sexual images.
Criminal offences under the Crimes Act 1961 also fail to cover deepfake abuse, as they require victims to be in places where they have a reasonable expectation of privacy, a concept that doesn't apply to synthetic content.
"Currently, it is unclear whether any New Zealand law can adequately protect women and children from such harms," Mudgway said.
Regulatory Uncertainty Stifling Innovation
The experts argue New Zealand's regulatory vacuum is hindering rather than helping innovation. Chris McGavin, Director of LensenMcGavin AI and co-author, said the lack of clear rules "has likely stifled innovation more than promoted it."
He warned that regulatory uncertainty could see AI deployed inappropriately "without proper governance or oversight," potentially causing widespread harm.
Dr Kevin Shedlock from Victoria University emphasised particular risks to Māori communities, including potential AI-generated false narratives about Te Tiriti o Waitangi and deepfakes of Māori leaders.
"AI systems are often trained on data encoded with the biases of their creators," Shedlock said, noting Western developers may lack understanding of Māori decision-making processes.
He warned of discriminatory systems potentially affecting justice, health and education sectors without proper oversight.
Professor Ali Knott noted New Zealand lacks an AI Safety Institute, meaning "we are absent from some important international discussions" on AI governance.
Dr Joshua Yuvaraj from the University of Auckland identified three regulatory approaches: AI-permissive (Australia), AI-restrictive (EU) and AI-neutral (wait-and-see). He warned against uninformed policymaking, saying regulations could either "stymy innovation or fail to protect New Zealanders."
Dr Michael Daubs from the University of Otago, who worked on digital policy at the Department of Internal Affairs until January 2025, stressed the need for "binding, statutory, risk-based regulation" that addresses accountability and options for redress.
The full open letter is available at regulateai.nz.