Ensuring AI supports Human Rights? Recommendations from the Australian Human Rights Commission

Concerns around bias, discrimination and unfairness dog the use of Artificial Intelligence, not without justification given the Robo-debt debacle in Australia.

The Australian Human Rights Commission (AHRC) has considered the impacts of Artificial Intelligence (AI), particularly as it relates to government decision making, from a human rights perspective. In its new 240-page report on Human Rights and Technology released in June 2021, the AHRC provides a roadmap to ensure that public and private sectors safeguard human rights when designing, developing and using new technologies.

The report is the result of three years of consultation with the tech industry, governments, civil society and communities across Australia.

The AHRC has made 38 recommendations for government to consider, to ensure human rights are upheld in Australia’s laws, policies, funding and education in relation to new technologies, including artificial intelligence.  All are moderate suggestions supported by extensive research, with some innovative and perhaps ground-breaking proposals.

General findings

The report’s central thesis is that the success of AI depends on trust between government, business, and the community, and that the key to building that trust is to ensure that human rights are at the core of national AI policies.

The report sets the following benchmark for the national AI policy currently being developed by the Department of Prime Minister and Cabinet:

“Good national or regional strategies on AI and other new technologies tend to have some common elements: they incorporate international human rights standards, including practical steps to ensure protection of rights with accountability and grievance mechanisms; they promote human rights training and education for designers and developers of new technology; they include measures for oversight, policy development and monitoring; they have whole-of-government approaches to ensure consistency across agencies and implementation; they focus on present and future impacts of technologies on society.”

The report also declined to recommend algorithm-specific regulation, concluding instead that the focus should be on:

  • outcomes – both the process by which the decision was made and the decision itself, and
  • to the extent AI is used in substitution for or in combination with humans, how this impacts fairness.

AHRC Recommendation: New AI Safety Commissioner

“The best way to rebuild public trust in the use of AI by government and corporations is by ensuring transparency, accountability and independent oversight, and a new AI safety commissioner could play a valuable role in this process.”

The report recommends establishing an AI Safety Commissioner as an independent statutory office, focused on promoting safety and protecting human rights in the development and use of AI in Australia.  The AI Safety Commissioner should:

  • work with other regulators to build their technical capacity regarding the development and use of AI in areas for which those regulators have responsibility,
  • monitor and investigate developments and trends in the use of AI, especially in areas of particular human rights risk,
  • provide independent expertise relating to AI and human rights for Australian policy makers, and
  • issue guidance to government and the private sector on how to comply with laws and ethical requirements in the use of AI.

The new Commissioner would also have some teeth.  Along the lines of the UK’s independent Information Commissioner’s Office, it would have powers to investigate or audit the development and use of AI (including algorithms), in some circumstances, to identify and mitigate human rights impacts.

The appointment of a new AI Safety Commissioner would promote accountability and provide a practical enforcement mechanism to ensure that ethical AI is practised. At present, the main incentive for companies to adhere to ethical guidelines is the damage an unethical AI system can do to the bottom line.  Ethical frameworks may fill part of the gap, but on their own they are unlikely to change organisational practices.

AHRC Recommendation: Human Rights Impact Assessments

Under the EU’s GDPR, individuals have the right, with some exceptions, not to be subjected to a decision “based solely on automated processing, including profiling” where that decision produces a legal or similarly significant effect. The AHRC does not recommend the same approach in Australia.

Instead, it proposes a legislative framework for use of AI in decision-making by Government which requires that:

  • Government agencies undertake a human rights impact assessment (HRIA) – focused on risk – before adopting any new AI-informed decision-making system to make administrative decisions; and
  • the HRIA process incorporates a public consultation on the proposed new system for making decisions. [1]

AHRC Recommendation: What to do where AI is used by government

The AHRC recommended that, where an AI-informed decision-making system is adopted by Government, the following should apply:

  • individuals should be made aware when a decision that affects them has been made using AI-informed decision making;
  • the Government agency should provide both a plain English and a technical explanation of the AI-informed decision-making process; and
  • external merits review before an independent tribunal generally should be available in respect of any AI-informed administrative decisions.[2]

AHRC Recommendation: Ban ‘black-box’ algorithms

Machine learning models, particularly deep learning models, are frequently called “black box” models because it is usually unclear how a model arrives at a given decision. Explainability research seeks to eliminate this ambiguity around model assembly and model outputs by generating a “human understandable explanation that expresses the rationale of the machine”.

This type of transparency is important for building trust in AI systems, ensuring that individuals understand why a model arrives at a given decision. If we can better understand the why, we will be better equipped to avoid AI risks such as bias and discrimination.
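As a purely illustrative sketch (not drawn from the AHRC report), the snippet below shows what a per-decision explanation can look like for a simple, interpretable model: the contribution of each input feature to one individual decision. The feature names and data are invented for illustration.

```python
# Hypothetical sketch: explaining one decision of an interpretable model.
# Feature names and data are invented; this is not the AHRC's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["declared_income", "reported_debt", "payment_history_score"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)

# Explain a single AI-informed decision: which features pushed it which way?
applicant = X[0]
contributions = model.coef_[0] * applicant
decision = model.predict(applicant.reshape(1, -1))[0]

print("Decision:", "favourable" if decision == 1 else "unfavourable")
for name, c in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"  {name}: contribution {c:+.3f}")
```

For a genuinely black-box model (for example, a deep neural network), no equivalent per-feature breakdown is directly available, which is the gap explainability techniques try to fill.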

The technical explanation of an AI-informed decision should be in a form that can be assessed and validated by a person with relevant technical expertise.  The technical explanation may consist of different factors, such as:

  • the original data set;
  • how the system was trained on that data set;
  • any risk mitigation strategies adopted;
  • the factors, or combination of factors, used to determine an outcome or prediction;
  • any evaluation or monitoring of individual or system outcomes; and
  • any testing, or post-deployment evaluation, carried out in relation to the model.

If the government agency using the AI cannot issue an explanation in those terms, then it should not use that form of AI.  In effect, this is a ban on the use of ‘black box’ algorithms as part of increased transparency measures, including notification of the use of AI and strengthening a right to reasons or an explanation for AI-informed administrative decisions.[3]
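To make the factors listed above concrete, here is a minimal, hypothetical sketch of how an agency might capture that technical explanation as a structured record kept alongside a deployed model. The field names and sample values are assumptions for illustration only; the AHRC does not prescribe any particular format.

```python
# Hypothetical record of the "technical explanation" factors listed above.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TechnicalExplanation:
    dataset_description: str            # the original data set
    training_procedure: str             # how the system was trained on that data set
    risk_mitigations: List[str]         # any risk mitigation strategies adopted
    decision_factors: Dict[str, float]  # factors used to determine an outcome or prediction
    monitoring: List[str]               # evaluation or monitoring of individual or system outcomes
    testing_summary: str                # testing or post-deployment evaluation of the model

record = TechnicalExplanation(
    dataset_description="De-identified claims data, 2015-2020 (hypothetical)",
    training_procedure="Gradient-boosted trees with 5-fold cross-validation",
    risk_mitigations=["Protected attributes excluded", "Pre-release bias audit"],
    decision_factors={"declared_income": 0.42, "reported_debt": -0.31},
    monitoring=["Quarterly outcome-disparity review across demographic groups"],
    testing_summary="Hold-out accuracy 0.91; cohort-level error analysis",
)
print(record)
```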

AHRC Recommendation: AI used by the private sector

The Commission considered, but ultimately rejected, an economy-wide requirement that businesses using AI must give customers the right of review by a human.  It did support greater transparency, recommending a legislative requirement for the private sector to inform consumers when a business materially uses AI in a decision-making process that affects the legal or other similarly significant rights of the individual.

AHRC Recommendation: Rebuttable presumption of liability

Probably the most interesting recommendation, and in global terms the most innovative, is a legislated presumption of liability for a decision, designed to head off a defence of “it wasn’t me, it was the technology”. The report notes that, as with other forms of decision-making, the question of liability will usually be straightforward for AI-informed decision-making. However, some complexities can arise, either where an AI-informed decision-making system operates largely autonomously, or where numerous parties are involved in designing, developing and using the system.

To address this potential ‘gap’, the Commission recommends the creation of a rebuttable presumption that legal liability for any harm that may arise from an AI-informed decision should be apportioned primarily to the legal person that is responsible for making the AI-informed decision itself.

The report notes concerns that a developer of an AI tool should not necessarily be liable because the AI may have been taught by the purchaser or ‘learnt on the job’ in ways the developer did not anticipate or have any control over. The report, in something of an easy out, concludes that “if liability is to be shared among multiple parties, there should still be fair apportionment based on the level of responsibility for any errors or problems.” How such apportionment would be framed legally, or work in practice, is not addressed.

AHRC Recommendations: Moratorium on use of facial recognition and other biometric technologies

The AHRC also recommended a moratorium on the use of facial recognition and other biometric technologies in decision-making that has a legal, or similarly significant, effect for individuals, or where there is a high risk to human rights, such as in policing and law enforcement, until comprehensive federal and state-based legislation governing those uses is in place.

This is consistent with ethical practices.  For example, last year IBM’s CEO Arvind Krishna shared that IBM has sunset its general purpose IBM facial recognition and analysis products, emphasizing that “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”[4]

Conclusion

Government decision-making by AI is already here in Australia: section 495A of the Migration Act 1958 (Cth) permits the responsible Minister to “arrange for the use, under the Minister’s control, of computer programs” to make a decision, exercise a power or comply with any obligation. Similar permissions are provided for under section 6A of the Social Security (Administration) Act 1999 (Cth), and section 4B of the Veterans’ Entitlements Act 1986 (Cth).

Australia’s experience to date with the use of AI/ML has not been good and there has been significant erosion of the trust needed for the effective use of AI/ML.

The AHRC’s report is a very important contribution to the thinking around the use of AI/ML in Australia, and how it should be balanced against the potential impact on the rights and freedoms of individuals.  The recommendations made are consistent with the principles included in most ethical frameworks around AI/ML:

  • Fairness;
  • Ethical decisions;
  • Accountability; and
  • Transparency.

Also included are some practical recommendations that would give any principles-based approach real teeth, including:

  • The establishment of an AI Safety Commissioner;
  • The opportunity for external merits review before an independent tribunal in respect of any AI-informed administrative decisions;
  • More explicit transparency requirements; and
  • The presumption of liability for AI-based decisions.

During the consultation period prior to the release of this report, Australians consistently reported that they wanted AI to be fair and accountable.  The recommendations in this report are a great starting point for that.  It remains to be seen what will happen to this excellent piece of work.

Resources:

Human Rights meet Artificial Intelligence: AHRC reports on AI (gtlaw.com.au)

[1] Recommendation 2.

[2] Recommendation 8.

[3] Recommendations 3-7.

[4] To read more about this, check out IBM’s policy blog, relaying its point of view on “A Precision Regulation Approach to Controlling Facial Recognition Technology Exports”.

Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.