Most Common Privacy Failures at Organisations (and How to Overcome Them)

ISACA’s 2023 Privacy in Practice report included a list of the most common privacy failures in organisations. In this post, we’ll look into these common privacy failures – and discuss how to overcome them.  

Common Privacy Failures at Organisations 

ISACA identified the following eight common privacy failures:  

  1. Lack of training/poor training.  
  2. Not practicing privacy by design.  
  3. Data breach/leakage. 
  4. Not performing a risk analysis. 
  5. Social engineering.  
  6. Bad or nonexistent detection of personal information.  
  7. Noncompliance with applicable laws and regulations.  
  8. Ethical decision making.  

Source: Privacy in Practice 2023 Report by ISACA 

How to Overcome These Common Privacy Failures 

Lack of training/poor training.  

On its face, the solution to overcome this common privacy failure seems simple – schedule training and awareness activities. However, there are some factors to consider:  

  • Assess your privacy training needs by identifying what gaps exist and benchmarking existing privacy practices against industry best practices before designing training. This ensures your training is more effective because it reflects your organisation’s actual needs and knowledge gaps. 
  • Regular privacy training is an opportunity for organisations to build awareness about privacy, the role each team member plays in safeguarding it, and the importance of protecting privacy.   
  • Privacy training is more engaging, and translates better to real-life situations, if it includes examples of incidents that have happened or could plausibly occur. 

Read more about developing effective training programs. 

Not practicing privacy by design.  

Privacy by design is a framework and systems design model that identifies possible risks to the rights of data subjects and minimises them before they can cause damage – often in a way that creates win-wins, instead of pitting privacy against other departments.  

ISACA’s study revealed some interesting trends seen in organisations that adopted privacy by design, including that they are: 

  • More likely to separate privacy training from security training.  
  • More likely to use artificial intelligence or automation.  
  • More confident in their organisation’s ability to protect its sensitive data.  

We’ve written before about Privacy by Design and how it can be implemented.  

Data breach/leakage. 

Data breaches are challenging to avoid altogether in the current landscape. However, with so many data breaches being caused by human error, there is plenty of room for improvement.  

We examined Australian data breaches caused by human error in our coverage of the OAIC Data Breach Report: June – December 2023. 

Not performing a risk analysis. 

Many organisations overlook risk analysis before undertaking projects – and the consequences can be significant. (Just ask the Australian Federal Police, which has come under the OAIC’s investigative lens twice as a result of poor risk analysis.) 

Privacy Impact Assessments can establish and document what is actually going on (or what will happen), provide guardrails for staff, and demonstrate compliance if the regulator comes calling. They also aren’t especially complicated or expensive to undertake, particularly when weighed against the alternatives (fines, OAIC investigations, and poor public perception, for example).  

And remember, a privacy risk assessment is quite different to a security risk assessment: you need to do both. 

Social engineering.  

The OAIC’s Data Breach Report: June – December 2023 revealed that almost 20% of breaches were caused by social engineering. Social engineering is a type of cyber attack where criminals use psychology and social pressure to gather information and gain account access. And no companies or humans are immune to these attacks – in fact, there have been successful social engineering attacks against the likes of Google and Facebook (to the tune of $100 million USD), CEOs of utilities providers, and Uber. 

Instilling a healthy degree of scepticism across your organisation is one of the best ways to defend against social engineering attacks. You should also:  

  • Encourage team members to be extremely suspicious of unsolicited communications. They should not click links, open attachments, or give out personal information unless they are certain who they are dealing with.  
  • Require verification of sender identity before taking any action. This should include a careful check of email addresses, logging in through a known URL (rather than clicking on links), and, where information updates are requested, calling or meeting with the person or company contact making the request.  
  • Be extremely cautious in circumstances where urgency or alarm is created. Team members should be trained to take a deep breath, slow down, think critically, and (crucially) get someone else involved in ‘urgent’ circumstances. 
  • Train your team about the risks their social media presence poses and how they can protect themselves and your organisation. The more information they share online, the more information criminals will have to target their attacks.  
  • Take steps to implement security controls that reduce the risk of social engineering attempts reaching their targets in the first place. Human error is inevitable, so using technology to reduce exposure is very important.  

Bad or nonexistent detection of personal information.  

This privacy failure essentially refers to the mismanagement of personal information within the organisation. Creating and maintaining a data inventory within a single database is the most effective protection against this failure.  

This will help everyone in the organisation not only understand all the data you hold but also identify what is personal information (often one of the biggest challenges for organisations). 

Your data inventory can also help minimise the data you collect and hold, and establish retention and deletion requirements. 
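As a simple illustration of what a data inventory entry might capture (the field names and retention logic below are hypothetical, not a reference to any particular tool or standard), here is a minimal sketch:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class InventoryRecord:
    """One hypothetical entry in a simple data inventory."""
    dataset: str
    contains_personal_info: bool
    purpose: str
    collected_on: date
    retention_days: int

    def is_overdue_for_deletion(self, today: date) -> bool:
        # Flag records held past their stated retention period.
        return today > self.collected_on + timedelta(days=self.retention_days)

# Example: a marketing list collected two years ago with a one-year retention period.
record = InventoryRecord(
    dataset="newsletter_signups",
    contains_personal_info=True,
    purpose="marketing",
    collected_on=date(2022, 1, 1),
    retention_days=365,
)
print(record.is_overdue_for_deletion(date(2024, 1, 1)))  # True
```

Even a sketch like this shows how an inventory makes over-retention visible: once every dataset records what it contains, why it was collected, and how long it should be kept, overdue records can be flagged automatically.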

We only need to think about Optus, Medibank and Latitude to appreciate the risk associated with over-retention of data. 

There is a suite of tools that streamline the entire data discovery, mapping, lineage, and management process. We encourage organisations to consider these automation tools, wherever possible. They can reduce the risk of human error and oversight, streamline the data management process to deliver an immediate ROI, and improve data quality.   

You can read more about data inventory management in our earlier posts.  

Noncompliance with applicable laws and regulations.  

Legal compliance can be complex for organisations, particularly those with operations outside of Australia (such as customers in the EU or the US states with privacy laws, or global companies headquartered in Australia).  

However, noncompliance with applicable laws and regulations is also indicative of poor privacy practices. There are key trends within privacy regulation, such as transparency and consent, adequate security, accuracy, and data inventory management. And, generally, maintaining high privacy standards can help organisations meet many or most of their legal compliance obligations.  

Organisations looking to overcome or avoid this privacy failing should work with their privacy team to improve compliance and strengthen privacy practices.  

Ethical automated decision making. 

Automated decision making poses significant ethical challenges for organisations. Automated or AI-enhanced decision making carries the risk of introducing or perpetuating bias, alongside a lack of transparency. However, it also promises real efficiency gains.  

Before introducing automated decision making, organisations should consider:  

  • Implementing robust data governance practices to ensure data quality, identify and mitigate bias, and protect privacy. 
  • Investing in AI models that are more transparent and understandable, allowing for human oversight and intervention. 
  • Establishing ethics committees and frameworks to guide AI development and deployment, incorporating ethical principles into decision-making processes. 
  • Building diverse and inclusive teams to identify and address potential biases from different perspectives. 
  • Educating users about how AI works and the potential for bias, promoting transparency and building trust. 
  • Regularly monitoring AI systems for bias, discrimination, and security vulnerabilities, and auditing their performance against ethical principles. 
  • Collaborating with external stakeholders, including researchers, advocacy groups, and policymakers, to develop and implement ethical AI standards. 
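To make the monitoring point above concrete, here is a minimal sketch of one common fairness check, demographic parity – whether different groups receive favourable decisions at similar rates. The groups, decisions, and figures below are illustrative assumptions, not taken from any specific framework:

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: group A approved 2/3, group B approved 1/3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(round(parity_gap(decisions), 2))  # 0.33 – a large gap that warrants review
```

A check like this is only a starting point – real monitoring would use much larger samples and several fairness metrics – but it shows that regular, automated auditing of outcomes by group is straightforward to set up.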

You can read more about the ethics of AI in our discussion of the AI Ethical Framework. 

Privacy Consulting by Privacy 108 

Privacy 108 offers a comprehensive suite of privacy legal and consulting services, delivered by our team of privacy and security experts. 


Wherever you are on your organisation’s privacy maturity journey, we can provide the advice and support you need to implement and operationalise your privacy program. 

Reach out:


Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.