
There has been a huge increase in the number of people ‘sneaking’ Artificial Intelligence (AI) into work – so much so that there’s now a name for it: Shadow AI.
We would hazard a guess that the reason many people use AI without authorisation isn't that they're bad employees doing whatever they want, but that most organisations don't have any form of AI Governance policy. Because employees don't know how or when to seek approval to introduce AI tools, and are less likely to understand the risks, they just go ahead.
And it’s proving to be expensive for organisations around the globe.
Here’s what IBM’s 2025 Cost of a Data Breach report said on AI Governance (page 34):
“AI adoption has outpaced oversight. This year’s research quantifies that governance gap and the costs it carries. Most organizations said they didn’t have governance policies to mitigate or manage the risk to AI. For those that do, less than half have strict approvals for AI deployments. That deficiency had consequences. Not only do these organizations leave themselves open to security, operational and reputational risks, but they’ve paid a steeper cost than average when breached.”
The report noted that a majority of organisations didn't have mechanisms to detect shadow (unauthorised) AI use, and that data breaches involving shadow AI cost, on average, USD 670,000 more than other data breaches and took longer to discover. It also revealed that AI-related data breaches became increasingly common in 2025.
In other words, a lack of effective AI Governance can have expensive consequences for organisations – while also making a data breach more likely.
If you’re unsure where to start when it comes to wrapping your head around AI governance, we’ve compiled a list that may help:
The Australian Institute of Company Directors has published detailed guides and supporting documents covering AI Governance. The resources were created in partnership with the Human Technology Institute (HTI) at the University of Technology Sydney.
Get the Director’s Guide to AI Governance.
The National AI Centre has launched the AI Impact Navigator. It’s designed to help Australian organisations manage and report on the real-world social, environmental, and economic impacts of their AI systems. Building on the Voluntary AI Safety Standard, the Navigator provides tools and templates to help companies report on their AI’s impact on customers, the workforce, investors, and the community. This resource is particularly valuable for legal and privacy professionals seeking to demonstrate corporate transparency and responsible AI governance beyond traditional frameworks.
The US National Institute of Standards and Technology (NIST) published its AI RMF in 2023, alongside an incredibly detailed Playbook (147 pages). The broad goal of the AI RMF is to provide organisations “with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.”
The Playbook lays out, in detail, the specific actions organisations can take to improve AI governance.
Links:
AI RMF Playbook (document will automatically download when you click this link)
View the documents without downloading here.
This AI Governance Framework is designed to align with the OECD’s AI system lifecycle and supports compliance with the EU’s AI Act (note that it is not intended to be a comprehensive compliance tool). We like the AI Governance Task List that comes with this resource. It’s detailed but relatively straightforward.
The AI Governance Library is a website that links out to a wide range of AI Governance resources, including some listed here. We recommend checking it out once you've digested the information contained in the resources above.
Find the AI Governance Library.
We listed this one last because it requires a membership to access the resources. However, if you join the IAPP, you can get their weekly AI Governance email newsletter.
The Australian Government’s Business webpage on AI contains a helpful list on how to use AI responsibly, which you can use as a starting point for what your AI governance documentation should cover.
Finally, if you’re just getting started with your AI governance framework, we encourage you to begin by understanding the risk and creating a policy, rather than ‘bolting on’ technical solutions to uncover shadow AI. Your organisation’s policy and the training you provide your team will offer more robust protection and more pragmatic solutions that address risk at the appropriate level (i.e. the project level for shadow AI use, and the organisational level for the risks that adopting, or failing to adopt, AI poses).
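If you do later add a lightweight technical check alongside your policy, it doesn’t need to be elaborate. Below is a minimal, illustrative Python sketch that flags requests to well-known consumer AI services in an exported proxy or DNS log. The CSV column names (‘user’, ‘domain’) and the domain list are assumptions you would replace with whatever your own logging and your own policy actually cover; it is a starting point for a conversation with affected teams, not a surveillance tool.

```python
# Minimal sketch: count hits to well-known AI services per user from an
# exported proxy/DNS log. The CSV layout ('user' and 'domain' columns) and
# the domain list are illustrative assumptions -- adapt both to your own
# environment and to the tools your AI policy actually covers.
import csv
from collections import Counter

# Illustrative list of consumer AI endpoints; extend to suit your policy.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}

def flag_ai_traffic(log_path: str) -> Counter:
    """Return a per-user count of requests to known AI domains."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").lower().strip()
            # Match the domain itself or any of its subdomains.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for user, count in flag_ai_traffic("proxy_log.csv").most_common():
        print(f"{user}: {count} request(s) to AI services")
```

The point of a check like this isn’t to catch people out; it’s to show you where shadow AI is already in use so your policy, approvals, and training can address it.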
If you need help managing your organisation’s AI adoption (authorised or otherwise), reach out for a free consultation by emailing hello@privacy108.com.au. Our team of privacy professionals is available to help.