I-MED’s Data Headache: Unauthorised Access & Unauthorised Disclosure Double Whammy
The last ten days of September were tough for Australia’s largest medical imaging provider, I-MED. In that window, Crikey broke two stories about the radiology company, which is currently up for sale – each relating to data it holds. In this article we dig into each story and tease out some important takeaways for Australian organisations.
Background: Gathering Datasets To Train Artificial Intelligence
Many of us are becoming more familiar with artificial intelligence and using it as part of our work. Some of you might even know that AI models are usually ‘trained’ on datasets – and the gathering of that training data is often controversial.
Legal claims and regulatory probes relating to this are appearing across the globe, including copyright infringement claims, as well as regulator warnings and investigations into the gathering and use of personal information for training purposes. The US Federal Trade Commission, for example, has warned businesses that there’s “nothing intelligent about obtaining artificial consent” in response to organisations quietly updating their privacy policies to allow data to be used for training AI.
Regulators are also releasing guidelines relating to the gathering of data to train AI, including Australia’s regulator, the Office of the Australian Information Commissioner (OAIC), which recently released its Guidance on Privacy and Developing and Training Generative AI Models.
OAIC Probing I-MED For Sharing Patient Images To Train An AI
Prior to releasing its guidance, the OAIC reportedly launched an investigation into I-MED over its use of healthcare data through a partnership with harrison.ai.
The partnership between I-MED and harrison.ai reportedly involved sharing chest x-rays to train an AI model, now in use and known as Annalise.ai. A quick check of the Annalise.ai website shows that it’s an artificial intelligence model designed to help radiologists deliver better care by improving the speed and accuracy of detection, resulting in fewer false positives and false negatives.
It’s a noble cause, no doubt, and one that many I-MED patients might well have consented to had they been asked – but the issue is that I-MED may not have a clear legal basis for sharing the health data. It is reported that patients were not told their data might be used this way, nor asked for valid consent. Instead, I-MED appears to be relying on the dataset being ‘de-identified’ for its use.
To clarify, Australian privacy law allows for the use and disclosure of personal information for the primary purpose for which it was collected – in this case, likely diagnostic purposes.
Beyond that, an organisation can generally only disclose your personal information for a secondary purpose if you would reasonably expect the disclosure and it is directly related to the primary purpose (the stricter test that applies to sensitive information such as health data), if you consent to that secondary use, if it’s required or authorised under an Australian law or a court or tribunal order, or if it’s reasonably necessary for enforcement-related activities.
However, information that has been de-identified is not considered personal information and is therefore not subject to the Privacy Act 1988 (Cth). It seems I-MED may have relied upon this when it disclosed the data to harrison.ai.
It’s worth noting that regulators and many individuals are wary of the use of de-identified sensitive data to train artificial intelligence because of the risk of re-identification. With the rapid development of AI, that nervousness is a valid concern, and relying on de-identification as the basis for disclosure is widely considered a legal grey area.
Regardless of the outcome of the OAIC probe, it’s worth noting the customer sentiment surrounding the disclosure. If you use data in ways that your customers don’t expect and haven’t been alerted to, you risk damaging goodwill – even if your use and disclosure is legally permissible.
I-MED Data Accessed By Hacker
Crikey broke another story about I-MED in late September, reporting that a hacker had gained access to I-MED accounts via credential stuffing. The three compromised accounts reportedly had passwords of just three to five characters, and there was no multi-factor authentication (MFA) in place. Via these accounts, the hacker could access medical imaging and patient files dating back to 2006.
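To put those password lengths in perspective, here is a small, purely illustrative Python sketch (nothing to do with I-MED’s actual systems) comparing the number of possible passwords at different lengths, assuming a 62-character alphabet and a hypothetical offline guessing rate:

```python
# Illustrative only: how small the guessing space is for very short passwords.
# Assumes a 62-character alphabet (a-z, A-Z, 0-9) and a hypothetical
# offline guessing rate of one billion attempts per second.
ALPHABET_SIZE = 62
GUESSES_PER_SECOND = 1e9

def keyspace(length: int) -> int:
    """Total number of candidate passwords of exactly `length` characters."""
    return ALPHABET_SIZE ** length

for length in (3, 5, 12):
    candidates = keyspace(length)
    seconds = candidates / GUESSES_PER_SECOND
    print(f"{length:>2} characters: {candidates:.2e} candidates, "
          f"~{seconds:.2e} seconds to exhaust")
```

A three-character password has fewer than 240,000 possibilities, and even five characters is under a billion – fractions of a second at that assumed rate – whereas twelve characters pushes the space out to roughly 3 × 10^21. Credential stuffing, the vector reported here, doesn’t even need to guess: it replays passwords already leaked from other breaches, which is exactly the scenario MFA is designed to blunt.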
Given the sensitivity of the data I-MED holds on its patients, this is concerning. We’ve written about multi-factor authentication in an earlier post, and we cannot overstate how crucial MFA is in today’s digital environment. The number of breaches that could likely have been prevented had MFA been in place is disappointing. Similarly, it is hard to understand why so many companies end up paying the high costs of data breaches (including ransoms) rather than the comparatively low cost of implementing MFA.
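For readers who haven’t seen it in practice, here is a minimal sketch of one common second factor – time-based one-time passwords (TOTP) – using the open-source pyotp library. It illustrates the general mechanism only, not any particular vendor’s product or anything about I-MED’s environment:

```python
# Minimal TOTP sketch using the third-party pyotp package (pip install pyotp).
import pyotp

# Each user is issued a secret at enrolment, usually presented as a QR code
# that an authenticator app scans and stores.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app derives a fresh six-digit code from the secret and
# the current time roughly every 30 seconds.
print("Code the authenticator app would show:", totp.now())

# At login, the server checks the submitted code in addition to the password,
# so a stolen or guessed password alone is no longer enough.
submitted_code = totp.now()  # stand-in for the code a user would type in
print("Code accepted:", totp.verify(submitted_code))
```

Even a basic second factor like this would have forced the attacker in the reported incident to present something beyond a short or reused password.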
It will be interesting to see how the OAIC responds to this news, on top of its existing probe into the use of ‘de-identified’ health data in the development of AI. Hopefully there will be clear guidance for Australian organisations on the OAIC’s expectations in this rapidly developing and complex area.
If your organisation needs assistance with its privacy disclosures or security posture, reach out. Our team of privacy consultants is available to help.