• Posted on 8 Aug 2025

There is a sting in the tail of the Productivity Commission's Interim Report on Harnessing data and digital technology.

Few people have mistaken the economist Stephen King of the Productivity Commission for a siren, and yet he is singing a seductive song about artificial intelligence.

Yesterday, Dr King released the Productivity Commission’s Interim Report on Harnessing data and digital technology. The report draws us in on the productivity and broader economic benefits that AI promises – perhaps as much as an additional $116 billion in GDP, though other experts such as Nobel Prize-winning economist Daron Acemoglu have much less optimistic predictions.

But there’s a sting in this economist’s fish-like tail: these riches will be ours only if we’re willing to give up some important protections. Artists and writers may lose control of the works they sweated over, without fair compensation. Citizens risk a further weakening of already-outdated privacy protections.

This Interim Report isn’t exactly anti-regulation. The Commission rightly recognises that ‘sensible’ regulation can build community trust and business confidence.

It’s just that the Interim Report is more explicit about the legal protections it doesn’t like than about those it does. For example, the Commission acknowledges some privacy law reform may be helpful, but it emphasises that privacy laws can undermine innovation, and it opposes giving Australians the right to require companies to remove personal information about them. In a similar vein, it highlights that consent-based privacy protections don’t work very well, and proposes a more streamlined approach that is easier for business. It opposes the Government’s own draft mandatory guardrails for being too prescriptive.

There’s a risk that Australia’s public debate on AI becomes like a famous episode of the TV show Yes Minister, in which a group of senior public servants fall over themselves to agree with the principle that more women should be appointed to senior roles. Then, as they go around the table, each senior man sadly reports a problem in applying the principle to his own organisation.

“We couldn’t post women ambassadors to countries that are less advanced on women’s rights,” one says mournfully. “Prisons, police… quite probably women wouldn’t want these jobs anyway,” declares another. By the end, all of them have agreed on the principle, and none has applied it.

Sam Altman, CEO of OpenAI.

In the AI context, the principle seems clear and unanimous – and it needs to be applied. Artificial intelligence offers enormous economic and broader benefits, and it also carries significant risk of harm. Just two years ago, Sam Altman – CEO of OpenAI, the company responsible for ChatGPT – declared in the US Congress that “regulation of AI is essential”. Altman continues to warn about AI’s existential risks, and more day-to-day risks such as deepfakes.

If the world’s greatest AI hype man accepts this tech brings both opportunities and risks, then surely a balanced approach to regulation is the only sensible way forward. The Treasurer Jim Chalmers has argued – rightly – for Australia to adopt a “middle course” on AI. He explained, “we cannot let AI rip, nor can we pretend it’s not happening.”

And yet there’s a growing view that Australia should be passive: to avoid law or policy reform that protects our community, lest this somehow inhibits AI innovation. This approach might work if companies and governments always deploy AI well, and if our existing law and policy are already fit for the age of AI. Yet neither of these statements is true.

We’re right to be excited about the many benefits AI can bring in areas as diverse as health care and financial services, but AI remains experimental technology. Some estimates have the AI failure rate as high as 80 per cent, and like every previous industrial revolution this one has the potential to cause enormous social pain, as well as gain, as employment is disrupted. This doesn’t mean we should fear AI, but it does mean we should apply a cost-benefit analysis to new AI policy, and rigorously test optimistic predictions about AI adoption.


There are several areas – including privacy, copyright and automated decision making – where demonstrable harms are happening right now, and there’s widespread agreement that targeted reform is overdue.

However, the Interim Report could encourage Australian policy makers to rush forward with measures that help some businesses at the forefront of AI development and diffusion, while taking a much slower, wait-and-see approach on protections for content creators and the broader community. It would demand a much higher standard of proof from people already facing AI-related harms than from companies predicting economic benefits.

Surely, a balanced, pragmatic position is more sensible. This would give proportionate attention both to AI opportunities and harms, thereby reflecting the Treasurer’s preferred ‘middle course’.

The Productivity Commission has rightly expressed a preference for technology-neutral laws. This means that companies generally won’t face added legal barriers when they use AI, but nor does their use of AI permit them to ignore the law.

A number of Australia’s technology-neutral laws are crying out for reform. For example, a suite of well-tested reform proposals to modernise Australia’s privacy law, including by better equipping Australians for the era of AI, have been sitting on the Government’s to do list for years. Similarly, the Robodebt Royal Commission reported over two years ago on the need for reform on automated decision making.

Regulators also need powers and resources to explain how companies can use AI safely and responsibly and, where they don’t, to enforce the law. The time for action is now.

This article first appeared in The Australian. 


Written by Professor Edward Santow

Co-Director of the UTS Human Technology Institute.

Australia’s Human Rights Commissioner from 2016-2021.