
Like many countries, Australia has an AI ethics framework. But how influential will it be without any enforcement mechanism?
AI has huge potential for good: it can deliver social, economic, and environmental benefits. However, there are also risks and ethical concerns regarding privacy, transparency, data security, accountability, and equity. Like many other nations, the Australian government recognises the immense benefits that can come from the development and use of AI. In fact, the Australian Government has said it is committed to making Australia a global leader in responsible and inclusive AI and has released an AI Action Plan. But for Australia to embrace AI and realise its immense potential, we need to be able to trust that its use is safe, secure and reliable, that it is aligned with our expectations, and that it is, ultimately, a ‘good’ thing.
How do we navigate these difficult waters?
The Department of Industry, Innovation and Science (DIIS) has released an AI Ethics Framework to guide businesses and governments to responsibly design, develop and implement AI. Progressing the AI Ethics Framework and its 8 AI Ethics Principles is part of the Australian Government’s commitment to make Australia a global leader in responsible and inclusive AI, and is one of four key focus areas under the AI Action Plan.[1]
But how successful will that framework be, without any meaningful enforcement mechanism or incentives for adoption (other than that it is the right thing to do)?

Ethics is a set of moral principles that helps us discern between right and wrong.[3]
Ethics frameworks provide guidance on how to approach the ethical issues that emerge from the use of AI.
The development of ethical guidelines for use in AI is not new. Australia’s AI Ethics Framework borrows from the work of a wide range of government and industry bodies. A short review of the development of AI frameworks is provided in this blog post. An inventory of global AI ethics frameworks is here.
According to DIIS, the outcomes to be expected from applying the principles and committing to ethical AI practices include that organisations can:

- build public trust in their product or organisation;
- drive consumer loyalty in their AI-enabled services;
- positively influence outcomes from AI; and
- ensure all Australians benefit from this transformative technology.
The framework comes out of a discussion paper released in early 2019, to help inform the government’s approach to AI ethics in Australia. That discussion paper was developed by CSIRO’s Data61 and designed to encourage conversations about AI ethics in Australia.
The eight principles that make up Australia’s AI Ethics Framework are:

1. Human, societal and environmental wellbeing
2. Human-centred values
3. Fairness
4. Privacy protection and security
5. Reliability and safety
6. Transparency and explainability
7. Contestability
8. Accountability
To perhaps answer the criticism that the principles are too vague, guidance is provided on how to apply them.[5] The first point is that the principles should be applied throughout the AI lifecycle, from design and data collection through development and deployment to ongoing monitoring.
There are also a couple of threshold questions recommended for consideration, to help decide whether these ethical principles apply to you. If you answer ‘yes’ to either of them, then applying the principles could help you plan for better outcomes.
Businesses can support their ethical AI efforts in many ways. However, will businesses really take such steps without any regulatory oversight or consequences for failing to do so (other than loss of trust if things go wrong)?
A principle based framework on its own is unlikely to ensure the safe and ethical development and use of AI for the benefit of all Australians.

The principles are voluntary. They are intended to be aspirational and to complement, not substitute for, existing AI regulations and practices. They are designed to prompt organisations to consider the impact of using AI-enabled systems.
It has been suggested that the lack of any real enforcement or oversight means the framework may be no more than a PR exercise. The principles have been called “glittering generalities” that render the guidelines unclear, imprecise and ripe for politicisation.[6]
In its discussion paper,[7] CSIRO identified a range of tools which can be used to assess risk and ensure compliance and oversight. The most appropriate tools can be selected for each individual circumstance.
A toolkit for ethical AI:

1. Impact assessments: auditable assessments of the potential direct and indirect impacts of AI, addressing the potential negative impacts on individuals, communities and groups, along with mitigation procedures.
2. Internal or external review: the use of specialised professionals or groups to review AI systems and/or their use, to ensure they adhere to ethical principles and Australian policies and legislation.
3. Risk assessments: the use of risk assessments to classify the level of risk associated with the development and/or use of AI.
4. Best practice guidelines: the development of accessible, cross-industry best practice principles to guide developers and AI users on gold-standard practices.
5. Industry standards: the provision of educational guides, training programs and potentially certification to help implement ethical standards in AI use and development.
6. Collaboration: programs that promote and incentivise collaboration between industry and academia in the development of ‘ethical by design’ AI, along with demographic diversity in AI development.
7. Mechanisms for monitoring and improvement: regular monitoring of AI for accuracy, fairness and suitability for the task at hand, including consideration of whether the original goals of the algorithm are still relevant.
8. Recourse mechanisms: avenues for appeal when an automated decision or the use of an algorithm negatively affects a member of the public.
9. Consultation: the use of public or specialist consultation to give key stakeholders the opportunity to discuss the ethical issues of an AI system.
In contrast to the Australian position, the EU has proposed AI regulation[8] that incorporates binding, risk-based obligations and enforcement mechanisms, including significant penalties for non-compliance.
For more on the European Commission’s proposed AI Regulation see our blog post.
The Australian Human Rights Commission recently released a report on how to ensure that human rights are protected and supported as part of AI initiatives.[9] In its review, the AHRC recommended a number of important tools.
For more, refer to our previous blog post.

Ethics both inform and are informed by laws and community values. The principles included in Australia’s new AI Ethics Framework are not new; they are entirely consistent with the ethical frameworks that came before them, and could be used to support Australian values and expectations.
The framework is a good first step and could provide the basis for the formulation of more specific codes, laws or regulation, to support the adoption and use of AI in Australia. There is a path forward which allows for flexible solutions, the fostering of innovation and a firm dedication to aligning the development of AI with human values. However, it’s not clear whether that path will be followed.
So far there have been no further announcements about the implementation of ethical AI or regulation to support the framework.
Let’s hope the Australian government adopts some elements of the toolkit for ethical AI to support this important framework. Otherwise, Australia’s AI Ethics Framework will become one more ‘paper tiger’: yet another AI ethics framework without any teeth.[10]
Resources:
Australia’s AI Ethics Principles | Department of Industry, Science, Energy and Resources
Artificial Intelligence – Australia’s Ethics Framework (industry.gov.au)
[1] Australia’s AI Ethics Framework – Department of Industry – Citizen Space
[3] Ibid. Also see, for example, Ethics Explainers – The Ethics Centre.
[4] Australia’s AI Ethics Framework – Department of Industry – Citizen Space
[5] Applying the AI Ethics Principles | Department of Industry, Science, Energy and Resources
[7] Artificial Intelligence – Australia’s Ethics Framework (industry.gov.au)
[8] Europe fit for the Digital Age: Artificial Intelligence (europa.eu)
[9] Home | Human Rights and Technology
[10] In the realm of paper tigers – exploring the failings of AI ethics guidelines – AlgorithmWatch. Of the 160 AI Ethical Frameworks reviewed, only 10 had any enforcement mechanism.