5 Developments in Australian Privacy To Keep An Eye On This Year

Late last year, our team got together to discuss the privacy happenings of 2024, look ahead to 2025, and predict what’s on the horizon (be sure to check out the video!). In this post, we dig into some of the privacy changes we know are coming and outline what they mean for organisational privacy in 2025. The 5 changes we’ve picked are:

  • Australian Privacy Act – second tranche of reforms
  • Australian Online Children’s Code
  • Mandatory AI guardrails
  • Bunnings’ appeal
  • Harms-focused OAIC enforcement.

Privacy Reforms – The Second Tranche is Coming

The first tranche of privacy reforms was pushed through on 29 November 2024, the final day of parliamentary sittings for 2024. In case you missed it, the first tranche introduced the following changes:

  • A new statutory tort for serious invasions of privacy, alongside exemptions;
  • Increased OAIC enforcement powers;
  • Criminalisation of doxing; and 
  • A compliance notice regime.

You can read our detailed coverage for more information. 

For organisations responding to these changes, we suggest planning now for the second tranche of privacy reforms as well. While we don’t know when those changes will come into effect, we do know that Commissioner Kind is ‘eagerly awaiting’ them.

Here’s what may be coming: 

  • Fair and Reasonable: Introduction of an overarching principle that processing of personal information must be ‘fair and reasonable’.
  • Amended Definitions and Expanded Scope: Changes to the definition of personal information from “about” to “relates to,” removal of the small-business exemption, and restriction of the employee records exemption to extend privacy protections to private sector employees.
  • Consent and Fair Handling: Clarification that consent must be voluntary, informed, current, specific, and unambiguous; and requires that collecting, using, and disclosing personal information remain fair and reasonable in the circumstances.
  • Purpose and Retention Requirements: Mandate that organisations define and document their purposes for handling personal information at or before collection, and document retention periods.
  • Introduction of Controllers and Processors: Align with international standards by distinguishing between those who determine how personal information is processed and those who process it on their behalf.
  • Enhanced Individual Rights: Expanded rights for individuals, including the right to request explanations of personal information held, object to processing, seek erasure, correct inaccuracies, de-index search results, and opt out of direct marketing or targeted ads.

Privacy Commissioner Carly Kind also expects changes that will ensure Australian organisations build better privacy and technical security into operations, potentially in line with the GDPR’s privacy by design requirements (in Article 25). At the moment, Australian organisations are encouraged but not expressly required to implement privacy by design.  
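
What might ‘fair and reasonable’ handling, documented purposes, and documented retention periods look like in practice? Here’s a minimal sketch in code, purely to make the idea concrete: the field names, purpose, and retention period below are invented for illustration and are not drawn from the Privacy Act, the reform proposals, or the GDPR.

    from dataclasses import dataclass
    from datetime import timedelta

    # Hypothetical sketch: field names, purpose and retention period are
    # invented for illustration, not drawn from the Act or the reforms.

    @dataclass(frozen=True)
    class CollectionRecord:
        """Documents why personal information is collected and how long it is kept."""
        fields: tuple          # the items of personal information collected
        purpose: str           # defined and documented at or before collection
        retention: timedelta   # documented retention period

    SIGNUP = CollectionRecord(
        fields=("email",),     # data minimisation: only what the purpose needs
        purpose="Create and administer the user's account",
        retention=timedelta(days=730),
    )

    def collect(data: dict, record: CollectionRecord) -> dict:
        """Accept only the fields documented for this purpose."""
        extra = set(data) - set(record.fields)
        if extra:
            raise ValueError(f"No documented purpose for collecting: {extra}")
        return {key: data[key] for key in record.fields}

    profile = collect({"email": "user@example.com"}, SIGNUP)

The point is structural: if every collection path has to cite a documented purpose and retention period, privacy is designed in from the start rather than bolted on later.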

Although the second tranche is slated for some time in 2025, an election is coming, so all bets are off as to whether and when we might see draft legislation introducing all (or any) of the above ‘agreed in principle’ amendments.

An Australian Online Children’s Code 

The OAIC must create an Online Children’s Code by the end of 2026, which means we’re likely to see drafts and consultation periods before the end of this year. We’ll have a better idea of what’s coming once those drafts land, but there are already indicators of what might be included: the OAIC has said that ‘to the extent possible, we will look to align the code with the UK’s Age Appropriate Design Code’.

The UK’s online children’s code outlines 15 standards for managing children’s privacy, which can be summarised as follows: 

  1. Best Interests of the Child
    Design products and services in ways that safeguard and promote children’s welfare above any commercial considerations.
  2. Data Protection Impact Assessments (DPIAs)
    Complete a DPIA specifically focused on children’s data protection risks. Use findings to inform product features and compliance strategies.
  3. Age-Appropriate Application
    Assess the age range of likely users and tailor protective measures. Apply heightened protections to younger users.
  4. Transparency
    Offer clear, child-friendly explanations of how and why personal data gets collected and used. Avoid confusing or legalistic language.
  5. Detrimental Use of Data
    Avoid using children’s personal data in ways that could harm their well-being or exploit their vulnerabilities.
  6. Policies and Community Standards
    Uphold published policies and standards—such as community guidelines—so that children and parents trust that services live up to stated commitments.
  7. Default Settings
    Switch on the most privacy-friendly settings by default. Give children and their parents the freedom to adjust settings later if needed.
  8. Data Minimisation
    Collect only what you need. Limit both the amount of personal data gathered and how long you keep it.
  9. Data Sharing
    Do not share children’s personal data unless a compelling reason exists to do so. Weigh any potential risks to the child.
  10. Geolocation
    Switch off geolocation services by default. Provide prominent notices when tracking location and let children and parents control whether location data stays visible or stored.
  11. Parental Controls
    Make any parental monitoring features evident. Offer an appropriate explanation when such tools track a child’s online interactions.
  12. Profiling
    Avoid or limit profiling of children if it might expose them to harmful content or reinforce negative patterns. Use safeguards to protect vulnerable users.
  13. Nudge Techniques
    Refrain from using interface designs that pressure children to make poor privacy decisions, such as encouraging them to disclose unnecessary data.
  14. Connected Toys and Devices
    Embed privacy controls into connected toys and devices to protect children’s data in real time. Communicate any collection activities clearly.
  15. Online Tools
    Provide accessible, user-friendly tools to help children (and parents) exercise their privacy rights, including options to report concerns or request changes to data use.

Given that we know significant reforms are coming, we suggest organisations creating products and services for children familiarise themselves with the UK standards and begin aligning with them, especially for products still in development. Doing so now reduces the cost of ‘bolt-on’ privacy solutions later and allows for better outcomes for the organisation and its young users.

More information and resources from the UK ICO are available here.
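
For engineering teams wondering where to start, several of the standards (particularly 7, 10 and 12) translate directly into defaults. Here’s a minimal, hypothetical sketch of what ‘most privacy-friendly by default’ might look like for a service likely to be used by children; the setting names are ours, not the UK Code’s or the OAIC’s.

    from dataclasses import dataclass

    # Hypothetical setting names, invented for illustration; they are not
    # taken from the UK Code or from anything the OAIC has published.

    @dataclass
    class ChildAccountSettings:
        geolocation_enabled: bool = False       # standard 10: location off by default
        profiling_enabled: bool = False         # standard 12: no profiling by default
        profile_visibility: str = "private"     # standard 7: most protective default
        parental_controls_visible: bool = True  # standard 11: monitoring is evident

    # New accounts simply take the defaults; a child (or parent) can relax
    # individual settings later, rather than having to hunt for opt-outs.
    settings = ChildAccountSettings()
    assert settings.geolocation_enabled is False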

Mandatory AI Guardrails 

In September 2024, the Australian Federal Government signalled the introduction of ten AI “guardrails” for organisations that develop or use AI in high-risk scenarios. The guardrails may begin as voluntary measures, though they are likely to become mandatory in high-risk settings.

Final details have not yet been shared, but the consultation period has ended. We anticipate that the resulting changes may require human oversight in AI development and deployment, along with measures to make users aware of AI-driven decisions, mechanisms for challenging AI outcomes, and transparency throughout the AI supply chain.

We covered this earlier in more detail here.
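
Because the final guardrails haven’t been published, any illustration is speculative. That said, ‘human oversight’ requirements elsewhere often take the shape of a review gate: a high-risk automated outcome has no effect until a person signs off. Here’s a minimal sketch under that assumption; every name in it is hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    # Speculative sketch of a human-oversight gate; every name is hypothetical.

    @dataclass
    class AIDecision:
        subject_id: str
        outcome: str      # what the model recommends
        high_risk: bool   # flagged by a (hypothetical) upstream risk assessment

    def finalise(decision: AIDecision, human_approved: Optional[bool] = None) -> str:
        """High-risk AI outcomes take effect only after a person signs off."""
        if decision.high_risk and human_approved is not True:
            return "pending human review"   # no automatic effect for the individual
        return decision.outcome

    loan = AIDecision("applicant-42", "decline", high_risk=True)
    print(finalise(loan))                        # -> pending human review
    print(finalise(loan, human_approved=True))   # -> decline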

However, AI regulation may be subject to a rethink, given recent moves in the USA to remove guardrails and put a spotlight on AI development and innovation.

Review of OAIC’s Bunnings Determination

We covered the OAIC’s decision from its investigation of Bunnings’ use of CCTV and facial recognition technology (FRT) in a previous post.

The OAIC’s decision covers many interesting interpretations of the Australian Privacy Principles, including:

  • That ‘collection’ includes even a transient, automated holding of information lasting only milliseconds, where no human ever interacts with the personal information collected; and
  • The standard for determining whether a collection was ‘necessary’ (an element of one of the exceptions Bunnings relied on to argue that consent was not required).

Judicial clarification of any part of the Australian Privacy Principles is very important to the development of privacy law in Australia; to date, guidance from the courts on their interpretation and application has been limited. The recent settlement of the case the OAIC brought against Meta / Facebook (covered here) closed another door that could have provided much-needed precedent.

Bunnings has said it will appeal the decision. We will watch with interest to see both the points of appeal and the progress of that case.

Harms Focused OAIC Enforcement

The Bunnings decision also highlights a broader development we expect to see this year: harms-focused enforcement from the OAIC. In its Statement of Intent dated 30 October 2024, the OAIC stated it would be moving to become a ‘harm-focused regulator’:

“The OAIC will move to a new organisational structure that will support it to be a more effective and harm-focused regulator. The OAIC will complete a transition to this new structure by early 2025.”

In the Bunnings decision, the OAIC considered whether the collection of sensitive personal information (in the form of biometrics) was necessary in the circumstances. In determining this, it weighed the use of FRT to prevent violence in a retail setting against the potential harm from the technology, which is seen as a form of pervasive, non-specific surveillance. The OAIC found that the introduction of high-risk FRT was not appropriate and required Bunnings to cease using it. Indeed, the Commissioner described FRT and the surveillance it enables as ‘one of the most ethically challenging new technologies in recent years’.

With this in mind, we’ve compiled a list of other ethically challenging technologies that may attract the attention of the OAIC or other regulators: 

  • Disinformation From Generative AI: Large language models and image-generation tools can create convincing but deceptive content, raising concerns about misinformation, identity theft, and privacy breaches.
  • Deepfake Tools: Software that produces realistic audio or video impersonations often obscures reality, undermines trust, and threatens reputations.
  • Internet of Things (IoT) Devices: Smart home systems, wearable devices, and connected vehicles collect extensive personal data, sometimes without users fully understanding the scope or purposes of the collection (smart devices are also covered by the new cyber security legislation).
  • Location Tracking: Apps and services that monitor individuals’ whereabouts can enable invasive profiling, targeted ads, or data sharing without clear oversight. We referred to this in our list of the biggest penalties under the GDPR in 2024; check out that post for more on this topic.
  • Neural Data: Emerging brain-computer interfaces that gather brain signals for controlling devices or therapies raise serious concerns about privacy, consent, and the potential for exploitation. This data has already been added to a number of US privacy laws, including California’s.

Sign up for our newsletter to receive twice-monthly updates covering all the privacy happenings in Australia and around the world.


Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.