Australia’s AI Ethical Framework: Another paper tiger?

Like many countries, Australia has an AI ethics framework. But how influential will it be without any enforcement mechanism?

AI has huge potential for good: it can deliver social, economic, and environmental benefits. However, there are also risks and ethical concerns regarding privacy, transparency, data security, accountability, and equity. Like many other nations, the Australian government recognises the immense benefits that can come from the development and use of AI. In fact, the Australian Government has said it is committed to making Australia a global leader in responsible and inclusive AI, and has released an AI Action Plan. But for Australia to embrace AI and realise its immense potential, we need to be able to trust that its use is safe, secure and reliable, that it aligns with our expectations, and that it is ultimately a ‘good’ thing.

How do we navigate these difficult waters?

The Department of Industry, Innovation and Science (DIIS) has released an AI Ethics Framework to guide businesses and governments in responsibly designing, developing and implementing AI. Progressing the AI Ethics Framework and its eight AI Ethics Principles is part of the Australian Government’s commitment to making Australia a global leader in responsible and inclusive AI, and is one of four key focus areas under the AI Action Plan.[1]

But how successful will that framework be, without any meaningful enforcement mechanism or incentives for adoption (other than that it is the right thing to do)?


What is AI Ethics?[2]

Ethics is a set of moral principles that help us discern between right and wrong.[3]

Ethics frameworks provide guidance on how to approach the ethical issues that emerge from the use of AI.

The development of ethical guidelines for AI is not new. Australia’s AI Ethics Framework borrows from the work of a wide group of government and industry bodies. A short review of the development of AI frameworks is provided in this blog post. An inventory of global AI ethics frameworks is here.

According to DIIS, organisations that apply the principles and commit to ethical AI practices can expect to:

  • build public trust in their product or organisation
  • drive consumer loyalty in AI-enabled services
  • positively influence outcomes from AI
  • ensure all Australians benefit from this transformative technology.[4]

Background

The framework grew out of a discussion paper released in early 2019 to help inform the government’s approach to AI ethics. That discussion paper was developed by CSIRO’s Data61 and designed to encourage conversations about AI ethics in Australia.

AI Ethics Framework Principles at a glance

The eight principles that are part of Australia’s AI Ethics Framework are:

  • Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
  • Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  • Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

Applying the Artificial Intelligence (AI) Ethics Framework

Perhaps to answer the criticism that the principles are too vague, guidance is provided on how to apply them.[5] The first point is that the principles should be applied throughout the AI lifecycle, which includes:

  • design, data and modelling (such as planning, data collection and model building)
  • development and validation (such as training and testing)
  • deployment
  • monitoring and refinement (including fixing any problems that occur).

Two threshold questions are also recommended for consideration, to help decide whether you should apply the ethical principles:

  • Will the AI system you are developing or implementing be used to make decisions or in other ways have a significant impact (positive or negative) on people (including marginalised groups), the environment or society?
  • Are you unsure about how the AI system may impact your organisation or your customers/clients?

If you answer ‘yes’ to either question, applying the principles could help you plan for better outcomes.
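Purely as an illustration (the framework itself prescribes no tooling, and every name below is an assumption), the threshold test could be encoded as a simple triage helper:

```python
# Illustrative only: the AI Ethics Framework prescribes no tooling,
# and all names in this sketch are hypothetical.

def should_apply_principles(significant_impact: bool,
                            impact_uncertain: bool) -> bool:
    """Apply DIIS's threshold guidance.

    significant_impact: will the system make decisions or otherwise
        significantly impact people (including marginalised groups),
        the environment or society?
    impact_uncertain: are you unsure how the system may impact your
        organisation or your customers/clients?
    """
    # A 'yes' to either threshold question is enough to trigger
    # the eight AI Ethics Principles.
    return significant_impact or impact_uncertain


# Example: a credit-scoring model that affects loan applicants.
print(should_apply_principles(significant_impact=True,
                              impact_uncertain=False))  # True
```

A real assessment would of course involve judgement rather than two booleans, but the point of the threshold guidance is exactly this kind of early, explicit triage.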

Businesses can support their ethical AI efforts in many ways (a simple governance sketch follows this list):

  • Set appropriate standards and expectations of responsible behaviour when staff deploy AI. For example, via a responsible AI policy and supporting guidance.
  • Include AI applications in risk assessment processes and data governance arrangements.
  • Ask AI vendors questions about the AI they have developed.
  • Form multi-disciplinary teams to develop and deploy AI systems. They can consider and identify impacts from diverse perspectives.
  • Establish processes to ensure there is clear human accountability for AI-enabled decisions and appropriate senior approvals to manage ethical risks. For example, a cross-functional body to approve an AI system’s ethical robustness.
  • Increase ethical AI awareness-raising activities and training for staff.
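For instance, the risk assessment and accountability points above might translate into a lightweight AI system register with a named human owner for each system. The sketch below is one possible shape, not anything the framework mandates; every field and value is hypothetical:

```python
# Illustrative sketch only: one way to record AI systems in a
# governance register. All field and record names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AISystemRecord:
    name: str
    purpose: str
    lifecycle_stage: str                 # "design", "development",
                                         # "deployment" or "monitoring"
    accountable_owner: str               # the identifiable human owner
    vendor: Optional[str] = None         # for procured AI systems
    risk_rating: str = "unassessed"      # e.g. "low", "medium", "high"
    principles_reviewed: list[str] = field(default_factory=list)


# Example entry: a procured resume-screening tool.
register = [
    AISystemRecord(
        name="resume-screening-tool",
        purpose="shortlist job applications",
        lifecycle_stage="deployment",
        accountable_owner="Head of People and Culture",
        vendor="ExampleVendor Pty Ltd",
        risk_rating="high",  # significant impact on individuals
        principles_reviewed=["Fairness", "Transparency and explainability"],
    ),
]
print(register[0].accountable_owner)
```

Even a minimal register like this makes it easier to ask vendors informed questions and to show who is accountable for each AI-enabled decision.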

However, will businesses really follow these steps without any regulatory oversight or consequences for failing to do so (other than loss of trust if things go wrong)?

A principles-based framework on its own is unlikely to ensure the safe and ethical development and use of AI for the benefit of all Australians.

Australia’s AI Ethical Framework: Next steps

The principles are voluntary. They are intended to be aspirational and to complement, not substitute, existing AI regulations and practices. They are designed to prompt organisations to consider the impact of using AI-enabled systems.

It has been suggested that, without any real enforcement or oversight, the framework may be no more than a PR exercise. The principles have been called “glittering generalities” that render the guidelines unclear, imprecise and ripe for politicisation.[6]

In its discussion paper,[7] CSIRO identified a range of tools that can be used to assess risk and ensure compliance and oversight. The most appropriate tools can be selected for each circumstance.

A toolkit for ethical AI
  1. Impact assessments: Auditable assessments of the potential direct and indirect impacts of AI, addressing potential negative impacts on individuals, communities and groups, along with mitigation procedures.
  2. Internal or external review: The use of specialised professionals or groups to review AI systems and/or their use, to ensure they adhere to ethical principles and Australian policies and legislation.
  3. Risk assessments: The use of risk assessments to classify the level of risk associated with the development and/or use of AI (a minimal sketch follows this list).
  4. Best practice guidelines: The development of accessible, cross-industry best practice principles to guide developers and AI users on gold-standard practices.
  5. Industry standards: The provision of educational guides, training programs and potentially certification to help implement ethical standards in AI use and development.
  6. Collaboration: Programs that promote and incentivise collaboration between industry and academia in the development of ‘ethical by design’ AI, along with demographic diversity in AI development.
  7. Mechanisms for monitoring and improvement: Regular monitoring of AI for accuracy, fairness and suitability for the task at hand, including consideration of whether the original goals of the algorithm are still relevant.
  8. Recourse mechanisms: Avenues for appeal when an automated decision or the use of an algorithm negatively affects a member of the public.
  9. Consultation: The use of public or specialist consultation to give key stakeholders the opportunity to discuss the ethical issues raised by an AI system.
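To make toolkit item 3 concrete, here is a minimal, assumption-laden sketch of risk tiering. The factors, weights and thresholds are invented for illustration; any real scheme would need to be developed and validated for the organisation’s context:

```python
# Illustrative sketch only: a crude risk-tiering helper in the spirit of
# toolkit item 3 (risk assessments). All factor names, weights and
# thresholds are invented for illustration.

RISK_FACTORS = {
    "affects_legal_rights": 3,       # decisions with legal or similar effect
    "uses_sensitive_data": 2,        # health, biometric or financial data
    "affects_marginalised_groups": 2,
    "fully_automated": 1,            # no human in the loop
    "public_facing": 1,
}


def risk_tier(factors: set[str]) -> str:
    """Map the factors present in an AI use case to a coarse risk tier."""
    score = sum(RISK_FACTORS.get(f, 0) for f in factors)
    if score >= 5:
        return "high"    # e.g. full impact assessment plus external review
    if score >= 2:
        return "medium"  # e.g. internal review plus a monitoring plan
    return "low"         # e.g. best practice guidelines may suffice


# Example: an automated, public-facing facial-recognition entry system.
print(risk_tier({"uses_sensitive_data", "fully_automated", "public_facing"}))
# Prints "medium" under these illustrative weights (2 + 1 + 1 = 4).
```

The design point worth noting is that the tiers map directly onto the toolkit: higher tiers trigger heavier tools such as impact assessments and external review, while lower tiers may only call for best practice guidelines.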


In contrast to the Australian position, the EU has proposed AI regulation[8] that incorporates the following:

  • Impact assessments
  • Internal and external reviews
  • A risk-based approach
  • An independent oversight body.

For more on the European Commission’s proposed AI Regulation see our blog post.

The Australian Human Rights Commission (AHRC) recently released a report on how to ensure that human rights are protected and supported as part of AI initiatives.[9] In its review, the AHRC recommended a number of important measures, including:

  • The appointment of an AI Safety Commissioner
  • A ban on ‘black box’ AI technologies
  • Transparency requirements around the use of AI
  • Mandatory impact assessments
  • A moratorium on the use of facial recognition technologies
  • The general availability of external merits review before an independent tribunal for AI-informed administrative decisions.

For more, refer to our previous blog post.

Australia’s AI Ethical Framework: Conclusion

Ethics both informs and is informed by laws and community values. The principles in Australia’s new AI Ethics Framework are not new; they are entirely consistent with the ethical frameworks that came before them, and they could be used to support Australian values and expectations.

The framework is a good first step and could provide the basis for more specific codes, laws or regulations to support the adoption and use of AI in Australia. There is a path forward that allows for flexible solutions, fosters innovation and commits firmly to aligning the development of AI with human values. However, it is not clear whether that path will be followed.

So far there have been no further announcements about the implementation of ethical AI or regulation to support the framework.

Let’s hope the Australian government adopts some elements of the toolkit for ethical AI to support this important framework. Otherwise, Australia’s AI Ethical Framework will become yet another ‘paper tiger’: an AI ethics framework without any teeth.[10]

Resources:

Australia’s AI Ethics Principles | Department of Industry, Science, Energy and Resources

Australia’s Artificial Intelligence Action Plan | Department of Industry, Science, Energy and Resources

Artificial Intelligence – Australia’s Ethics Framework (industry.gov.au)


[1] Australia’s AI Ethics Framework – Department of Industry – Citizen Space

[2] AI Ethics | IBM

[3] Ibid. Also see, for example, Ethics Explainers – The Ethics Centre.

[4] Australia’s AI Ethics Framework – Department of Industry – Citizen Space

[5] Applying the AI Ethics Principles | Department of Industry, Science, Energy and Resources

[6] InnovationAus.com

[7] Artificial Intelligence – Australia’s Ethics Framework (industry.gov.au)

[8] Europe fit for the Digital Age: Artificial Intelligence (europa.eu)

[9] Home | Human Rights and Technology

[10] In the realm of paper tigers – exploring the failings of AI ethics guidelines – AlgorithmWatch.  Of the 160 AI Ethical Frameworks reviewed, only 10 had any enforcement mechanism.

Privacy, security and training. Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.