Australia’s AI regulation discussion paper: Late to the party again …

Regulating AI has been on the agenda globally for some time, amid growing concern over the potential individual and broader societal harms that might result from an unmanaged rollout of the technology. Think deepfakes, misinformation and disinformation, large-scale use of facial recognition, credit scoring, Skynet … But Australia is taking a more leisurely approach to working out what to do in this complex area, with a Discussion Paper released in June this year by Industry and Science Minister Ed Husic.

The Discussion Paper outlines the opportunities and challenges of AI technologies, providing illustrative case studies of the potential issues in a range of different contexts (autonomous vehicles, crime reduction and employment, for example). It then considers different regulatory models, surveying the current domestic and international landscape on AI regulation.

Based on this review, it proposes a risk management approach.


Eight core principles

Following a review of the different AI frameworks currently used or proposed in Australia and overseas, the paper puts forward eight core principles to serve as an ethical framework guiding organisations in the use or development of AI systems.

Those eight principles are:

  1. Generates net-benefits. The AI system must generate benefits for people that are greater than the costs
  2. Do no harm. Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.
  3. Regulatory and legal compliance. The AI system must comply with all relevant international obligations and Australian local, state/territory and federal government regulations and laws.
  4. Privacy protection. Any system, including AI systems, must ensure people’s private data is protected and kept confidential plus prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm to a person.
  5. Fairness. The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.
  6. Transparency and explainability. People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.
  7. Contestability. When an algorithm significantly impacts a person, there must be an efficient process to allow that person to challenge the use or output of the algorithm.
  8. Accountability. People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm.

Each of these principles should be seen as goals that define whether an AI system is operating ethically and should be considered throughout the design and use of an AI system.


Risk management approach

As mentioned, the paper suggests a risk management approach to AI: the higher the risk level, the more onerous the risk management requirements that apply. The paper asserts that this approach best caters to context-specific risks, allowing for less onerous obligations where appropriate while still permitting AI to be used in high-risk settings when justified.

This risk-based approach draws heavily on other risk-based AI models, such as the proposed EU AI Act (which is inching closer to finalisation), the NIST AI Risk Management Framework and the Canadian Directive on Automated Decision-Making.

This is intentional. Although the introduction states that ‘Australia has strong foundations to be a leader in responsible AI’, the paper is clearly directed to aligning Australia’s approach to those already proposed in other jurisdictions (more like being a follower really …).

The Discussion Paper specifically recognises the importance of harmonising our governance frameworks with those used globally and by our major trading partners.

“As a relatively small, open economy, international harmonisation of Australia’s governance framework will be important as it ultimately affects Australia’s ability to take advantage of AI-enabled systems supplied on a global scale and foster the growth of AI in Australia.”[1]

It is unfortunate that the same rationale was not applied in the Privacy Act review, when considering the exemptions from the Act that ultimately make it unlikely that the Australian privacy regime will be considered ‘adequate’ by the EU, one of our major trading partners.


To regulate or not?

The question at the heart of this paper is: what is the correct governance mechanism to ensure AI is developed and used safely and responsibly in Australia?

Potential mechanisms relevant to AI include regulations, standards, tools, frameworks, principles and business practices.

One of the issues for AI regulation posed in the paper is how to carve out an appropriate space (assuming it is needed) between the existing laws that could be considered as applying to AI. These existing laws include:

  • Privacy law;
  • Consumer protection law;
  • Online safety law;
  • Discrimination law;
  • Copyright law; and
  • The common law of contract and tort.

There is concern that duplication of, and conflict between, laws should be avoided. For this reason, the paper states that its focus is ‘to identify potential gaps in the existing domestic governance landscape and whether additional AI governance mechanisms are required to support the safe and responsible development and adoption of AI.’[2]

We discussed in our previous blog post the ‘paper-tiger’ problem with the existing government AI principles.

Let’s hope they find a regulatory space, and don’t fall back on the ‘self-regulatory’ model (principles and frameworks) and reliance on the application of existing laws that has failed Australia in the cyber security space.

Responses due

Although far from innovative, the AI discussion paper indicates that the current government has some appetite to tackle complex technology issues. It moves Australia slightly closer to a regulatory environment that recognises the importance of fairness and the ethical use of the data of all Australians, while at the same time trying to provide greater certainty for businesses that are looking to invest in or embrace AI-enabled innovations.

Let’s hope that Australia can develop responsible AI practices to increase the trust and confidence of a community reeling from major data breaches that have affected the entire nation.

A good start … but quite a way to go before it’s likely that we end up with any sort of AI governance mechanism with real teeth.

The discussion paper was issued as part of a broader consultation process. Responses close on 26 July 2023, so you need to get in quick to have your say.

Further references:

[1] Discussion Paper, 26.

[2] Discussion Paper, 13.

Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.