Articles

New guides for businesses published by the Office of the Australian Information Commissioner (OAIC) seek to articulate how Australian privacy law applies to artificial intelligence (AI) and set out the regulator’s expectations.

On average, only 48% of digital initiatives enterprise-wide meet or exceed their business outcome targets, according to Gartner, Inc.’s annual global survey of CIOs and technology executives and of more than 1,100 executive leaders outside of IT (CxOs). A small cohort of CIOs and CxOs, known as the “Digital Vanguard,” has the highest achievement rate: 71% of their digital initiatives meet or exceed outcome targets.

Generative artificial intelligence (GenAI) is everywhere, and so are widespread, cross-industry discussions about how humans fit into the equation. In the world of intelligent document processing (IDP), given the promise of faster, more scalable and more accurate processing, it’s no surprise that GenAI technology is at the forefront of everyone’s minds.

The Digital Transformation Agency (DTA) has released new guidance to enhance the assessment and delivery of digital projects across the Australian Government. Developed in collaboration with the University of Sydney's John Grill Institute for Project Leadership, the guidance focuses on improving the accuracy of Delivery Confidence Assessments (DCAs) for digital initiatives.

Since 2019, the Australian Department of Industry, Science and Resources has been striving to make the nation a leader in “safe and responsible” artificial intelligence (AI). Key to this effort is a voluntary framework based on eight AI ethics principles, including “human-centred values”, “fairness” and “transparency and explainability”. Every subsequent piece of national guidance on AI has built on these eight principles, urging businesses, government and schools to put them into practice. But these voluntary principles are not binding on the organisations that develop and deploy AI systems.
