ChatGPT Wrote This Blog Post: How Could This Affect Organisational Privacy?

“Every technology, including AI like ChatGPT, comes with potential risks and benefits. Balancing innovation with privacy and security is crucial. For business owners, it is essential to be aware of these issues and implement robust practices to mitigate privacy and ethical risks. AI should complement human decision-making, enhancing privacy, security, and ethical standards, not undermine them.” – ChatGPT  

This is a follow-up post to our earlier article about ChatGPT and privacy. Be sure to read that in conjunction with this post to understand some of the broader risks.  

Privacy Concerns When Using ChatGPT to Generate Content for Your Organisation 

Forget About Any Right to Correction 

ChatGPT is an advanced model, but it’s not perfect – and it can’t replicate human reasoning. As a result, it can produce incorrect answers through hallucinations (where it just makes up information) and failures in ‘reasoning’. This can be a privacy problem.  

We saw an example of this earlier this year when ChatGPT wrote that the Mayor of Hepburn Shire (outside Melbourne) had served prison time for bribery. In reality, he was the whistleblower in a foreign bribery matter. He threatened ChatGPT’s parent company, OpenAI, with a defamation lawsuit if it did not correct the information.  

We haven’t seen any further developments in this case, so it’s possible that the information was corrected. However, correcting information drafted by ChatGPT isn’t easy.  

One author took it upon himself to try to correct information ChatGPT had generated about him, with very limited success. You can read more about that in PC Mag’s article “How to Tell ChatGPT When It’s Wrong”. 

ChatGPT itself doesn’t have any functionality that allows users to submit corrections.  

So, why is this a privacy problem?  

Many privacy laws around the world, including Australia’s, grant individuals the right to correct personal information that is inaccurate, out of date, incomplete, irrelevant, or misleading. They also oblige organisations to keep the personal information they hold accurate – so relying on ChatGPT to ensure the correctness of information you use or share is very unwise. 

ChatGPT Has Already Had Breaches That Exposed Personal Information 

ChatGPT stores the prompts and information you share with it by default. This poses a significant risk to businesses – and it’s why companies like Apple, Amazon, and Samsung have banned their teams from using it. They recognise how easy it is for team members to accidentally share more information than they should with the platform, whether proprietary or personal.  

This risk is amplified by the possibility of ChatGPT itself being breached. ChatGPT login credentials are already being sought and traded on the dark web, and its servers likely hold a trove of personal information that would be an attractive target for hackers.  

What Privacy Risk Is There to Organisations Using ChatGPT to Generate Content? 

Any organisation using ChatGPT in its processes runs the risk of its team inadvertently sharing personal information with the platform. Organisations that handle higher volumes of sensitive personal information – particularly in financial services, healthcare, and law – carry a correspondingly higher risk if their teams use the platform.  

Financial services firms in the US, for example, have already banned ChatGPT (due to the risk of running afoul of third-party messaging laws).  

Meanwhile, the use of models like ChatGPT in healthcare is becoming widespread, and AI tools are outperforming doctors in some respects. This, too, carries a considerable risk of inadvertently sharing sensitive information – it could be as simple as failing to remove one instance of a patient’s name from a cut-and-pasted paragraph.  
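For teams that are permitted to use the platform, even a lightweight pre-submission scrub can catch the most obvious identifiers before text ever leaves your systems. The Python sketch below is an illustration only – the scrub function, its regex patterns, and the known_names list are our own assumptions, and no simple script is a substitute for a proper de-identification process:

```python
# A minimal sketch of scrubbing obvious personal identifiers from text
# before it is pasted into ChatGPT. The patterns and the known_names list
# are illustrative assumptions, not a complete de-identification tool.
import re

def scrub(text: str, known_names: list[str]) -> str:
    # Replace known names (e.g. a patient or client) with a placeholder.
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    # Redact email addresses.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Redact phone-number-like digit runs (allowing spaces and dashes).
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", "[PHONE]", text)
    return text

paragraph = "Jane Citizen (jane@example.com, 0412 345 678) was seen on 3 May."
print(scrub(paragraph, ["Jane Citizen"]))
# -> "[NAME] ([EMAIL], [PHONE]) was seen on 3 May."
```

A sketch like this only catches identifiers you anticipate; indirect identifiers (dates, locations, rare conditions) can still re-identify a person, which is why a privacy impact assessment should sit behind any approved use.  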

Each organisation should carefully consider the risks and either:  

  • banning use of the platform if the risks are too high; or  
  • training their team on how to use it correctly if the risks are acceptable.  

Privacy By Design Incompatibility 

ChatGPT was not designed with privacy in mind. OpenAI has bolted on some privacy features after the fact – like the option not to store prompts. But, as is usually the case when privacy is bolted on rather than built in, the effectiveness of these features is questionable.  

It’s critical that your organisation conduct a thorough privacy impact assessment and third-party vendor assessment before allowing your team to use it.  

If your organisation needs help maturing its third-party vendor assessment process, reach out. Our team of privacy professionals would love to help. 


Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.