Boardroom Expectations vs The Privacy Pro’s Reality for AI Risk
Despite having had years to adjust to life with AI, research suggests there's still a sizeable divide between boardroom expectations and the privacy reality of how organisations use AI. Our experience reflects this: many of the organisations we work with are struggling to bridge the governance gap and reduce or manage the risks of AI use.
In this blog post, we outline 5 of the key risks boards face as we move into 2026, and what can be done to manage them.
5 Key AI Governance Risks Boards Should Know About
Risk 1: The Gap Between Board Members and Privacy Pros
It’s not uncommon for board members and privacy professionals to see AI adoption quite differently. Boards may see AI as a productivity tool or tech-enhanced autocomplete, while privacy professionals consider it to be a complex risk vector – particularly where autonomous decision-making is involved.
Boards must recognise and address this risk, focusing on increasing education and AI literacy and getting the right people in the room when making decisions relating to AI infrastructure and policies.
But privacy professionals will play a role in this shift. In our earlier piece covering privacy predictions for 2026, we noted that the privacy professional role is likely to shift towards connecting design choices to downstream legal, ethical, reputational and social impacts. We think this shift will mean more organisations adopt an approach that links their privacy teams to executive decision-making and board-level risk – and that's a change we welcome.
So, what should boards be aiming for here? A good starting point is having systems in place that let the board confidently sign off on significant and/or high-risk AI deployments, as well as on the broader governance documents that permit AI implementation throughout the organisation. To achieve this, boards will need to understand the risks, the benefits, and the supporting tech stack.
- We published an earlier list of AI Governance Resources that may be of interest here.
Risk 2: Shadow AI Use
Shadow AI is “the unsanctioned use of any artificial intelligence (AI) tool or application by employees or end users without the formal approval or oversight of the IT department.” One common example is employees using ChatGPT, Gemini or other generative AI tools without permission.
The reality – and the risks boards must face – is that shadow AI adoption is extremely commonplace. Research varies on just how common, but in our experience, almost every organisation has some employees using unauthorised AI tools. These pose very real risks to an organisation’s intellectual property, confidential information, and customer data. (Learn more about the risks of ChatGPT and similar tools.)
On the other hand, boards should also consider whether the policies they put in place will stifle innovation or harm employee morale.
Risk 3: Lack of Governance of AI Agents
AI Agents, also called Agentic AI, are (simplistically) proactive AI tools. They can be used to execute tasks autonomously – and they’re currently used to complete tasks like triaging security alerts, updating CRMs, and processing financial transactions.
The risk boards need to govern is that these agents often operate without a clear operational owner. Agentic AI tools are able to adapt their objectives and make multi-step decisions in real time, which comes with significant risks. So, for organisations using these tools, decision provenance is going to be key. You will need to have systems in place to ensure that, at the very least, there's an audit trail for the AI agent's decisions. Ideally, though, you would also have an accountable human providing oversight, so you don't end up with a workforce of AI tools operating unsupervised. Privacy professionals will need to plug the gaps here, ensuring boards are alert to the risks, while also implementing systems to manage a digital workforce (if adopted).
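For teams wondering what an audit trail for agent decisions might look like in practice, here is a minimal sketch: an append-only log capturing what the agent did, on what inputs, and who (if anyone) approved it. All of the field names (`agent_id`, `rationale`, `human_approver`) and the function itself are illustrative assumptions, not a standard – map them to whatever your agent platform actually records.

```python
import json
from datetime import datetime, timezone

def log_agent_decision(agent_id, action, inputs, rationale, approver=None):
    """Append one decision record to a simple JSONL audit trail.

    This is an illustrative sketch only; real agent platforms will
    have their own logging hooks and field names.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,        # which agent acted
        "action": action,            # what it did (e.g. a CRM update)
        "inputs": inputs,            # the data the decision was based on
        "rationale": rationale,      # the agent's stated reasoning
        "human_approver": approver,  # the accountable human, if any
    }
    with open("agent_audit_trail.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a CRM update made autonomously by a (hypothetical) agent
entry = log_agent_decision(
    agent_id="crm-agent-01",
    action="update_crm_record",
    inputs={"customer_id": "C123", "field": "status"},
    rationale="Customer replied confirming renewal",
    approver="j.smith",
)
```

Even a simple record like this answers the questions a board or regulator will eventually ask: what happened, why, and who was accountable.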
Risk 4: Not having a ‘Speak-Up Culture’
Given the very real risks that AI poses to organisations – from poorly executed decisions of AI agents to discrimination in autonomous decisions – it’s critical that boards encourage a ‘speak up’ culture. Essentially, you want a culture where all workers feel that they would be heard if they flagged any kind of issue with an AI tool – including Shadow AI.
We’d predict that almost all of these concerns could be managed by privacy professionals within the organisation. That said, there should be a key person accountable for passing on ‘near misses’ or emerging risks to the board or the board’s risk committee.
Risk 5: The Dearth of Accountability for AI
Similar to the AI Agent risk outlined above, we see organisations often overlook assigning accountabilities for AI use and the consequences of it. For example, who would be responsible in your organisation if any of the following happened:
- An entry-level worker leaked customer or confidential information to a free generative AI version (which can then use that data for training purposes)?
- A large-scale data breach occurred because customer data that was supposed to remain on a private version of the website was accidentally published while training a chatbot?
- An AI hiring tool ‘learned’ to automatically exclude Aboriginal or Torres Strait Islander candidates from consideration?
Fragmented accountabilities like this can cause privacy responsibilities to become a pass-the-parcel exercise, which increases risks. Boards should be working towards ensuring clear accountabilities for specific AI risks in 2026.
If your organisation needs help understanding and managing its AI risks, our team can help. Privacy 108 offers a comprehensive suite of privacy legal and consulting services, delivered by our team of privacy and security experts. This includes AI risk impact assessments, AI risk management, and policy creation. Reach out to us at hello@privacy108.com.au for an obligation-free consultation, or get in touch here.
We also recently added IAPP’s AI Governance Professional Training to our suite of privacy certification courses. The gap between the boardroom and privacy reality is partly operational, and partly a matter of knowledge and skills.
You can learn more about this course and our upcoming training sessions here.
Or complete the form below to register.