Useful AI Governance Resources For Australian Businesses
There has been a huge increase in the number of people ‘sneaking’ Artificial Intelligence (AI) into work – so much so that there’s now a name for it: Shadow AI.
We would hazard a guess that many people use AI without authorisation not because they're bad employees who do whatever they want, but because most organisations don't have any form of AI Governance policy. Employees don't know how or when to seek approval to introduce AI tools, and they're less likely to understand the risks – so they just do it.
And it’s proving to be expensive for organisations around the globe.
The Case for Effective AI Governance
Here’s what IBM’s 2025 Cost of a Data Breach report said on AI Governance (page 34):
“AI adoption has outpaced oversight. This year’s research quantifies that governance gap and the costs it carries. Most organizations said they didn’t have governance policies to mitigate or manage the risk to AI. For those that do, less than half have strict approvals for AI deployments. That deficiency had consequences. Not only do these organizations leave themselves open to security, operational and reputational risks, but they’ve paid a steeper cost than average when breached.”
The report noted that a majority of organisations didn't have mechanisms to detect shadow (unauthorised) AI use – and that data breaches involving shadow AI cost, on average, USD 670,000 more than other data breaches and took longer to discover. It also revealed that AI-related data breaches became increasingly common in 2025.
In other words, a lack of effective AI Governance can have expensive consequences for organisations – while also making a data breach more likely.
Useful Resources on AI Governance
If you’re unsure where to start when it comes to wrapping your head around AI governance, we’ve compiled a list that may help:
Australian Resources
The Director’s Guide to AI Governance
The Australian Institute of Company Directors has published detailed guides and supporting documents covering AI Governance. The resources were created in partnership with the Human Technology Institute (HTI) at the University of Technology Sydney.
The resource includes:
- An introduction to AI, which is a helpful place to start, especially for those who are new to AI in an organisational context.
- The guide to AI governance, which details eight elements of AI governance.
- A summary of the eight elements of AI governance.
- A governance checklist.
- A free webinar that discusses how these resources can be implemented.
Get the Director’s Guide to AI Governance.
Privacy 108’s AI Resources
We've also published a number of AI governance resources of our own. Here are the links:
- AI Impact Assessment (a downloadable tool)
- The EU AI Act’s Impact on Australia
- Guidance on AI Note Takers for Meetings and Minutes
- Training AI with Personal Information
- AI Guardrails Proposed for Australia
AI Impact Navigator
The National AI Centre has launched the AI Impact Navigator. It’s designed to help Australian organisations manage and report on the real-world social, environmental, and economic impacts of their AI systems. Building on the Voluntary AI Safety Standard, the Navigator provides tools and templates to help companies report on their AI’s impact on customers, the workforce, investors, and the community. This resource is particularly valuable for legal and privacy professionals seeking to demonstrate corporate transparency and responsible AI governance beyond traditional frameworks.
Global Resources
USA: The NIST AI Risk Management Framework
The US National Institute of Standards and Technology (NIST) published its AI RMF in 2023, alongside an incredibly detailed Playbook (147 pages). The broad goal of the AI RMF is to provide organisations “with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.”
The Playbook specifically lays out actions organisations can take to improve AI governance (in detail).
Links:
- AI RMF Playbook (the document will automatically download when you click this link)
- View the documents without downloading here.
The University of Turkey’s AI Governance Framework
This AI Governance Framework is designed to align with the OECD’s AI system lifecycle and supports compliance with the EU’s AI Act (note that it is not intended to be a comprehensive compliance tool). We like the AI Governance Task List that comes with this resource. It’s detailed but relatively straightforward.
The AI Governance Library
A website that links out to a wide range of AI Governance resources, including some listed here. We recommend checking it out once you’ve digested the information contained in the resources above.
Find the AI Governance Library.
The IAPP’s AI Governance Centre
We've listed this one last because it requires a membership to access most of the resources. If you do join the IAPP, though, you'll also receive their weekly AI Governance email newsletter.
What Your AI Governance Documents Should Cover
The Australian Government’s Business webpage on AI contains a helpful list about how to use AI responsibly. We’ve detailed it here, since you can use it as a starting point for what your AI governance documentation should cover:
- Processes and Guidelines: Define the specific tasks for which AI will and will not be used, and establish procedures for ensuring the accuracy and integrity of its output.
- Risk Management: Detail a process for identifying, assessing, and mitigating potential harms to your business, customers, and the community. This includes deciding if the benefits of an AI tool outweigh the risks and creating a clear plan to manage them.
- Privacy and Security: Outline how your business will protect private and confidential data when using AI tools, including due diligence on supplier terms and conditions and ensuring compliance with privacy laws.
- Testing and Monitoring: Establish protocols for thoroughly testing AI tools before deployment and continuously monitoring their performance.
- Output Validation: Define the process for checking AI-generated content for accuracy, relevance, and bias before it is used or shared.
- Transparency: Specify when and how to inform customers about the use of AI, and provide a mechanism for them to provide feedback or challenge AI-driven decisions.
- Documentation: Maintain a clear record of all AI tools in use, including their purpose, responsible parties, features, limitations, and a summary of testing and risk management efforts.
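As a purely hypothetical illustration of the Documentation point above, an AI tool register can be kept as structured data so it's easy to review and audit. The field names and the example entry below are our own suggestions, not a prescribed format:

```python
from dataclasses import dataclass, field

# A hypothetical register entry for one AI tool in use.
# Field names are illustrative suggestions, not a prescribed standard.
@dataclass
class AIToolRecord:
    name: str                     # the tool in use
    purpose: str                  # what it is approved for
    owner: str                    # responsible party
    limitations: list[str] = field(default_factory=list)
    testing_summary: str = ""     # summary of testing and risk management
    approved: bool = False        # has this tool been formally signed off?

# An example register with one (made-up) entry.
register = [
    AIToolRecord(
        name="Example meeting note-taker",
        purpose="Drafting internal meeting minutes only",
        owner="Head of Operations",
        limitations=["Not approved for client-confidential meetings"],
        testing_summary="Accuracy spot-checked against manual minutes",
        approved=True,
    ),
]

# A simple governance check: flag any recorded tool lacking sign-off.
unapproved = [tool.name for tool in register if not tool.approved]
print(unapproved)  # prints []
```

Even a register this simple makes shadow AI easier to spot: any tool in use that doesn't appear in the register is, by definition, unauthorised.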
Finally, if you're just getting started with your AI governance framework, we encourage you to begin by understanding the risk and creating a policy – rather than 'bolting on' technical solutions to uncover shadow AI. Your organisation's policy, and the training you provide your team, will offer more robust protection and more pragmatic solutions that address risk at the appropriate level (i.e. the project level for shadow AI use, and the organisational level for the risks that adoption – or a lack of adoption – poses).
If you need help managing your organisation’s AI adoption (authorised or otherwise), reach out for a free consultation by emailing hello@privacy108.com.au. Our team of privacy professionals is available to help.