Yes, You Can Face Legal Consequences For (Mis)Using Generative AI

In 2023, Brian Hood (an Australian mayor) made headlines for what would have been a world-first test case. He planned to sue OpenAI (the company behind ChatGPT) for defamation after the chatbot generated false information about him, claiming he had been involved in a bribery scheme and had spent time in jail. Neither claim was true. He has since abandoned the lawsuit, but Australian defamation lawyers are gearing up for fights with Meta and Google as more AI features roll out.

But consequences for poor use of generative AI aren’t reserved for the tech giants. Australian government departments and even individuals have already faced legal consequences for poor practices around generative AI. Let’s take a look at two of them:

Department of Families, Fairness & Housing Breached Privacy Using ChatGPT

In September 2024, the Office of the Victorian Information Commissioner (OVIC) published an investigation report outlining its findings that the Department of Families, Fairness & Housing (the Department) breached privacy laws when one of its workers used ChatGPT to help draft a Protection Application Report (PA Report).

The Low-Quality Content Generated

Before we dig into the privacy concerns that come with using ChatGPT, here are some snippets from the OVIC report discussing the low quality of the content:

  • “ChatGPT failed to properly understand the relevant context, and generated inappropriate and inaccurate content as a result”.
  • “a theme throughout the PA Report was the use of language and terminology that were not standard for reports of this nature, and which were not in keeping with the clear and concise style of writing required for reports under Child Protection’s Court report writing guide.” 
  • “at times, the language was overly sophisticated, complex and descriptive with the use of unusual sentence structure and linking words.” 
  • “The content inaccurately described the actions of the father and mother, as well as inaccurately describing the care of the child and risks relating to them.” 

There was also a “Conflicting analysis about a child’s toy… First, the PA Report referenced how the child’s father used a particular toy, a doll, for sexual purposes. The report later referenced the same toy as a notable strength, in that the parents had provided the child with “age-appropriate toys”, which was used to underscore their efforts to nurture the child’s development needs.” This analysis was described in the report as “so inappropriate that it cannot reasonably be expected to have been provided by a human child protection practitioner”.

Privacy Compliance Issues

The OVIC found that the Department had used ChatGPT in a manner that resulted in the collection, use and disclosure of inaccurate personal information, and the unauthorised disclosure of personal and sensitive information to OpenAI. This breached Victoria’s privacy law (the Privacy and Data Protection Act 2014 (Vic)), specifically Information Privacy Principle (IPP) 3, which relates to the accuracy of personal information (data quality), and IPP 4, which relates to data security.

The OVIC deemed these contraventions to be ‘serious’ and issued a compliance notice against the Department. The compliance notice bans Department staff from using ChatGPT and a long list of other generative AI platforms. The Department is required to implement technical measures to enforce the ban, and to report back to OVIC every six months on the ban’s efficacy.

If the Department fails to comply with the notice, it could face fines of up to half a million dollars. 

Lawyer Facing Potential Disciplinary Action After Legal AI Hallucinates Case Law

In July 2024, a lawyer submitted a list of authorities (i.e., a list of previous legal cases) to the Federal Circuit and Family Court of Australia in a family law matter. The lawyer had used AI-enabled legal software to prepare the list, and the list was inaccurate: the authorities submitted did not exist, something the lawyer could easily have verified in a short amount of time.

The possible consequences of this include: 

  • The lawyer may need to pay the legal costs for the other party’s appearance at court that day. 
  • The lawyer may be referred to the Victorian Legal Services Board and Commissioner. The judge gave the lawyer an opportunity to outline why the “conduct in tendering the [incorrect] list of authorities should not be referred”. We don’t have further information at this stage as to the outcome.

You can read the court orders here.

Key Takeaways For Australian Organisations

Here are three brief takeaways for Australian organisations looking to reduce the risk of legal compliance issues relating to generative AI use:

  1. Know the risks and take steps to reduce or eliminate them. You should be well-versed in the risks your team’s use of generative AI poses to your organisation, and you should have measures in place to reduce them—ranging from policies and technical interventions to an outright ban. An artificial intelligence impact assessment can help with this (get your free template here). 
  2. Provide detailed and practical training to your team. The OVIC decision noted that vague policies without real-world examples were not sufficient to show that the Department had complied with Victoria’s privacy law. 
  3. Remember that sharing personal and sensitive information with generative AI programs and other software can be deemed a ‘disclosure’, which can trigger data breach notification obligations.

If you’re concerned about your compliance with privacy laws in Australia, reach out. Our experienced team is available to work with you. 


Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.