Managing ChatGPT and Privacy in Australian Organisations

ChatGPT promises to revolutionise the way we write and work. In late May 2023, Thomson Reuters announced a partnership with Microsoft (a major investor in OpenAI, the company behind ChatGPT) to create a contract-drafting AI for Word. Meanwhile, Microsoft’s Copilot promises ‘a whole new way to work’.

But ChatGPT and generative AI are also set to drastically change the face of privacy.  

Concerns have already been raised about the potential privacy risks that come with ChatGPT.

While we expect clearer guidance on handling privacy issues with generative AI to develop over time, there are many questions that organisations looking to take advantage of this exciting new technology need to address now.

In this post, we’ll outline some key privacy concerns you should consider before using (or continuing to use) ChatGPT.  

ChatGPT and Privacy: Key Concerns

ChatGPT Processes and Uses the Information You Share 

ChatGPT stores the information you (and your employees and other stakeholders) share when you engage with it, and uses that information to train its models and improve its services. The only way around this is to use the paid version of ChatGPT and opt out of this collection.

The OpenAI Privacy Policy notes that it may share the information it collects with vendors and service providers, with its affiliates, and as part of business transfers. This is an incredibly broad (and vague) data-sharing provision that offers limited protection to anyone who uses the platform.

In other words, ChatGPT collects, processes, and uses all the information you share with it. The platform’s FAQs advise against sharing sensitive information and highlight that OpenAI staff may read any prompts you input.
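To make this concrete, here is a minimal sketch of a typical integration using the official `openai` Python library (the `draft_reply` helper and the customer-complaint scenario are hypothetical, for illustration only). Everything placed in the `messages` payload leaves your environment and, unless you have opted out, may be retained and used to train future models:

```python
# A hypothetical helper that drafts a reply to a customer complaint.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def draft_reply(complaint_text: str) -> str:
    # Everything in `messages` is transmitted to OpenAI's servers, where
    # (absent an opt-out) it may be stored and used for model training.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You draft polite customer service replies."},
            # Risk: if complaint_text contains a customer's name, address, or
            # account details, that personal information now sits with a third
            # party, outside your organisation's control.
            {"role": "user", "content": complaint_text},
        ],
    )
    return response.choices[0].message.content
```

The same applies to prompts typed into the ChatGPT web interface: whatever your staff paste in is information you have disclosed to OpenAI.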

Generative AI May Breach Your Privacy Policy 

Since ChatGPT collects, stores, processes, and uses the information you feed it, there is a risk that your organisation may breach its own privacy policy and/or privacy laws. And the risk isn’t insignificant: a report from Cyberhaven found that around 11% of the data employees paste into ChatGPT is confidential or sensitive.

The Cyberhaven article reporting these findings gives the following example:

“A doctor inputs a patient’s name and details of their condition into ChatGPT to have it draft a letter to the patient’s insurance company justifying the need for a medical procedure. In the future, if a third party asks ChatGPT “what medical problem does [patient name] have?” ChatGPT could answer based on what the doctor provided.”

This example shows how easily sensitive information can be shared with ChatGPT. Disclosing it in this way is also very likely to breach the doctor’s privacy policy, as well as a suite of medical confidentiality laws.

Data Deletion Requests  

While Australia does not have a ‘right to erasure’ (unlike the GDPR and the CPRA), the Privacy Act 1988 (Cth) requires covered organisations to take reasonable steps to destroy personal information once it is no longer needed. Specifically, APP 11.3 states:

“An APP entity must take reasonable steps to destroy or de-identify the personal information it holds once the personal information is no longer needed for any purpose for which the personal information may be used or disclosed under the APPs.” 

Complying with this requirement is complicated where data has been shared with ChatGPT: the platform’s FAQs state that it cannot delete individual prompts.

This means Australian organisations would need to request that all of their data be deleted. And given OpenAI’s limited exposure to local laws, it’s uncertain whether it would honour such a request.

ChatGPT Raises the Stakes for Cybersecurity 

Finally, ChatGPT raises the stakes for organisational cybersecurity. It can write malicious code, and it’s likely to increase the sophistication of phishing attempts. As a result, now is a good time to consider additional security training for your team.

Key Lessons About ChatGPT and Privacy for Organisations

If you haven’t banned the use of ChatGPT and other generative AI tools in your organisation, you should implement a policy and processes that address the associated privacy risks:

  1. Start with a Privacy Impact Assessment (PIA). Understanding how the data you collect and store may be used in the context of generative AI should be your first move, and undertaking a PIA will help identify issues and appropriate mitigations.
  2. Update your internal policies to outline how the platform can be used and the dangers associated with it. You should also detail what information your team should not share with the platform and provide clear, easy-to-understand information about ChatGPT’s privacy practices and how they may affect your business. It should be clear that almost anything shared with the ChatGPT platform may ultimately make its way into the public domain.
  3. Limit access to personal and sensitive data across your organisation, and redact it from prompts wherever possible (see the sketch after this list).
  4. Regularly review how the platform is used. Your internal processes should include regular audits to confirm that data-sharing policies are being followed, as well as updated PIAs, particularly as the technology develops and changes.
  5. Train your team. As technologies like ChatGPT develop, your people will remain one of your biggest privacy and security risks. Arrange regular training to minimise that risk and to foster a culture of privacy. It’s very tempting to use this incredibly powerful tool, but it’s important to always remember the risks.
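On point 3, one practical control is to redact obvious personal identifiers before a prompt ever leaves your network. The sketch below is a minimal illustration in Python (the patterns and the `redact` helper are hypothetical and far from exhaustive; a production deployment would sit behind a proper data loss prevention tool):

```python
import re

# Hypothetical, deliberately simple redaction rules. These will miss names,
# addresses, and most other identifiers; they only illustrate the approach.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU phone numbers
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),           # tax file number layout
}

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers before the prompt is sent out."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Call Jane on 0412 345 678 or email jane@example.com"))
# -> Call Jane on [PHONE REDACTED] or email [EMAIL REDACTED]
```

Note that the name ‘Jane’ survives the filter, which is exactly why redaction should complement, not replace, the policy and training measures above.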

Privacy in the ChatGPT Age 

Privacy 108 provides tailored privacy and security solutions to organisations operating in Australia. We can work with you to deliver training, upskill your team, or help mature your organisation’s privacy program.

Contact us to find out more.  


Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.