The AFP in trouble again for its use of AI … with no privacy impact assessment

The OAIC is again investigating the Australian Federal Police over AI surveillance concerns, this time involving an AI system called Auror.

What is Auror?

Some might mistake Auror the AI system for a Harry Potteresque witch or wizard: a highly trained law enforcement officer detaining rogue witches and wizards who dabble in dark magic for the Ministry of Magic.

Sadly in this context, Auror has nothing to do with the Department for the Regulation and Control of Magical Creatures.

Auror is billed as a ‘retail crime intelligence and loss prevention platform’ targeted at helping retailers reduce loss, prevent crime and improve store safety. It is a New Zealand-developed crime intelligence database used in more than 40% of Australian stores, allowing users to report crime incidents. Yes, they’re watching you when you pop in after work on Friday to collect your Bachelor’s Handbag (and for those not in the know, that’s urban slang for a supermarket cooked chook and the Macquarie Dictionary’s People’s Choice Word of the Year for 2022).

Users such as Coles and Woolworths upload CCTV footage and other profile data (gender, build, defining tattoos and other features) along with offence details (the nature of the offence, the modus operandi, where the offence occurred and what property was involved, because much of the crime in question is theft). The system then processes the data so that store managers who are Auror users are alerted if the same offender trips the cameras entering their store. It is claimed that the system can predict and prevent crime before it happens – a kind of ‘Oh no, not you again’ amplified globally.
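For the technically minded, the match-and-alert loop described above can be made concrete. The sketch below is a purely illustrative Python outline of that kind of flow, assuming a simple feature-matching approach; every name in it (OffenderProfile, match_score, alert_on_entry) and the threshold are invented for illustration and say nothing about Auror’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch only - none of these names or values come from Auror.

@dataclass
class OffenderProfile:
    profile_id: str
    features: dict        # e.g. {"gender": ..., "build": ..., "tattoos": ...}
    offence_details: dict  # nature of offence, modus operandi, location, property

def match_score(profile: OffenderProfile, camera_features: dict) -> float:
    # Toy similarity: the fraction of recorded features seen again at the door.
    shared = [k for k, v in profile.features.items()
              if camera_features.get(k) == v]
    return len(shared) / max(len(profile.features), 1)

def alert_on_entry(database: list[OffenderProfile],
                   camera_features: dict,
                   threshold: float = 0.9) -> list[str]:
    # Profiles scoring above the (invented) threshold get flagged to the
    # store manager - the 'Oh no, not you again' moment.
    return [p.profile_id for p in database
            if match_score(p, camera_features) >= threshold]
```

The real platform reportedly layers facial recognition on top of this kind of matching, which is precisely where the privacy concerns discussed below sharpen.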

What has the AFP (allegedly) done?

So where does the Australian Federal Police come into the picture?

Via a Crikey investigation, which reported on the basis of documents released under freedom of information: “Emails released to Crikey in an FOI request show that AFP staff were quick to take full advantage of the trove of data available through Auror long before the agency’s higher-ups became aware of its use, carried out privacy and security reviews or formally partnered with the company.”

The ACT police have also admitted using the ‘retail crime prevention’ software. ACT Policing Chief Police Officer Neil Gaughan was asked about it during budget estimates hearings on 19 July 2023 and admitted that over 100 staff had been using the system. However, he confirmed that the agency had stopped using the system (or perhaps only the CCTV footage from it) until a privacy impact assessment had been done.

Assurances have been given that ACT CCTV footage has not been input into Auror’s platform and that the software has been used only to view footage uploaded by retailers. Furthermore, ACT Policing apparently does not use the facial recognition or AI capabilities of the system: ‘It’s just a read only for us … a substitute for going to businesses to collect the CCTV.’

Nothing to see here? We’ll see what the OAIC decides in its investigation of the AFP’s use.

Why is this a problem?

So what’s the gripe?

Well, this isn’t the first time that the AFP have had their knuckles rapped over AI technology.

You will remember that when Clearview AI unlawfully scraped more than three billion images from social media platforms and other publicly available sources for its facial recognition tool (collecting sensitive information without consent, collecting personal information by unfair means, and so on), the AFP were also sprung using that unlawfully collected personal information without having conducted a Privacy Impact Assessment.

The OAIC conducted an investigation and was not pleased.

A determination was made finding that the AFP had breached the federal agency Privacy Code by failing to conduct a privacy impact assessment for a high privacy risk project – as required by clause 12 of the Code. The AFP was also found to have breached APP 1.2 by failing to take reasonable steps to implement practices, procedures and systems relating to the entity’s functions or activities that would ensure compliance with the Code.

Issues with use of facial recognition

Let’s remind ourselves of where the compliance situation sits in this space: facial imaging collects biometric data, which is sensitive information under the Privacy Act 1988. For the supermarket chains to collect that information in accordance with their obligations under Australian Privacy Principle 3.3, they need to be able to demonstrate both that the collection is reasonably necessary for one or more of their functions or activities and that the data subject has consented.

Is there consent?

Consent in this arena is a much-contested space, mostly implied in a pretty shaky kind of way when you continue into the supermarket after passing a sign that says something along the lines of “Cameras and monitoring technologies in use within this store”. There are all sorts of issues with that, and they become apparent if you scratch just an itsy bit under the surface and start asking some simple questions. Did you actually see the sign? If it’s a service station and you’ve already filled the car, how are you going to pay for your petrol without going into the store (and is that really consent when you’ve got no choice)? If it’s the only place to buy milk for 50 kilometres, is consent freely given? And what have you actually consented to if all you know is that there are ‘cameras and monitoring technologies’?

Lawful and fair collection of personal information

The AFP’s legal obligations as a law enforcement body don’t actually directly involve consent, but under APP 3.5 it may only collect personal information by lawful and fair means (which becomes problematic in the consent context above, when that information is being collected via the supermarket chains).

Under APP 3.4(d)(ii), the AFP’s obligation as an enforcement body was also to form a reasonable belief that the collection of the information was reasonably necessary for, or directly related to, one or more of the entity’s functions or activities. Leaving aside the lawful and fair means issue, how would the AFP as an entity have formed a reasonable belief if, as it seems, the senior folks in the organisation weren’t even aware of what was going on?

So it looks like a problem and, taken with the Clearview AI debacle, it might be endemic.

The importance of Privacy Impact Assessments

The whole scenario points to many potential failures, but most significantly it underlines the importance of not initiating a data collection and processing activity without first conducting a Privacy Impact Assessment: an exercise that establishes and documents what is actually going on, provides guardrails for staff, and lets the agency demonstrate compliance if the regulator comes calling.

To quote ACT Policing Chief Police Officer Neil Gaughan, “When we first started using DNA, many people thought that was the end of the world as we knew it.” True that, but when law enforcement agencies started using DNA, there wasn’t any meaningful privacy regulation, and they weren’t using DNA evidence that had been collected from Woolworths, gleaned from folks out doing their shopping.

Ian is a privacy, IT and software contracts lawyer with over 30 years of experience as a lawyer and over 20 years of experience advising on the legal aspects of data management and processing.