
Europe’s AI Act is first-of-its-kind legislation governing AI systems and, from August 2025, more of the Act’s key provisions come into force. With penalties for non-compliance of up to 7% of worldwide annual turnover, it’s worth spending some time getting up to speed on what’s happening with the EU AI Act.
The EU AI Act has seen significant progress over the past year. Some of the main developments include the following:
July 2024: The AI Act was officially published in the Official Journal of the European Union, marking its formal adoption.
August 2024: The Act entered into force, although many of its requirements are being phased in over time.
February 2025: The first phase of implementation took effect, banning AI systems that pose unacceptable risks and introducing AI literacy requirements.
April 2025: Not directly related to the AI Act, but the European Commission released its AI Continent Action Plan to promote AI innovation.
July 2025: The European Commission finalised its voluntary General-Purpose AI Code of Practice. It was originally slated for completion in May 2025.
July 2025: Meta reportedly declined to sign the voluntary Code of Practice.
August 2025: Several provisions of the AI Act will come into effect, including rules for general-purpose AI models, governance structures, confidentiality measures, and penalties for non-compliance.
One of the first key changes was that AI systems posing an unacceptable risk were banned within the EU.
The EU AI Act segments AI systems based on risk and imposes different levels of control depending on that risk. AI systems identified as posing an unacceptable risk, and therefore falling within the prohibited group, include those that manipulate human behaviour, exploit the vulnerabilities of specific groups, or employ subliminal techniques beyond a user’s awareness. They also include AI systems used for social scoring and those that predict an individual’s risk of committing a crime based on profiling or personality traits.
Additionally, systems that create or expand facial recognition databases by untargeted scraping of images from the internet or CCTV footage, those that infer emotions in workplaces or educational institutions (except for medical or safety reasons), and those that use real-time facial recognition in publicly accessible spaces are all banned.
These bans are designed to protect individuals from the most intrusive and harmful uses of AI and ensure that AI technologies are developed and used responsibly.
The EU AI Act also sets out AI literacy requirements aimed at fostering an understanding of AI technologies among EU citizens: the goal is a knowledgeable community that can navigate the complex AI landscape, maximising AI’s benefits while minimising its risks.
AI literacy is defined as the ability to understand the fundamentals of AI, its capabilities, limitations, and potential societal impacts.
The Act also encourages educational institutions to integrate AI literacy into their curricula, so that students engage with AI concepts from an early age. This includes understanding algorithms, data ethics, and the implications of AI decisions for privacy and security.
The literacy requirements also extend to workplace training programs. Companies deploying AI systems are required to provide their employees with adequate training to comprehend the AI tools they are using, enabling them to discern the ethical ramifications of AI-driven decisions. This ensures that employees are equipped to identify and mitigate any potential biases or risks associated with AI technologies in their operations.
Moreover, the Act promotes public awareness campaigns to demystify AI for the general public. These initiatives aim to cultivate a well-informed citizenry capable of engaging in informed debates about AI policies and practices. By raising public awareness, the EU aims to empower individuals to critically assess AI applications and advocate for responsible AI governance.
The AI literacy requirements are part of the broader strategy to ensure the responsible development and use of AI.
The General-Purpose AI Code of Practice is intended to aid compliance with the EU AI Act. It’s worth noting that adherence is voluntary and does not, of itself, constitute compliance with the Act. In other words, it’s better to view the Code of Practice as a set of best practices than as a legal compliance checklist.
There are three chapters in the Code of Practice: Transparency, Copyright, and Safety and Security.
The first two chapters are intended to be relevant to all providers of general-purpose AI models, while the third, on Safety and Security, is relevant only to providers of the most advanced models (those classified as posing systemic risk).
Briefly, the Transparency chapter asks providers to maintain a ‘Model Documentation Form’ with details on the model, its properties, distribution, and training data. This should be kept up to date and shareable so that downstream providers and regulators can review it and get an accurate picture of the model’s data quality and integrity.
The Copyright chapter asks providers to implement a copyright policy, respect rights reservations (such as machine-readable opt-outs from text and data mining), and avoid circumventing technological protection measures when collecting training data. It also outlines measures for mitigating copyright-infringing outputs, among other things.
The Safety and Security chapter outlines state-of-the-art practices for managing systemic risks within advanced AI models. Key aspects include rigorous systemic risk identification, detailed analysis through various model evaluations, and the estimation of potential harm. Ultimately, these more advanced models should only be deployed if their systemic risks are deemed acceptable.
The rules for general-purpose AI models, together with the Act’s governance structures, confidentiality measures, and penalties for non-compliance, come into effect from 2 August 2025.
To receive updates like this via email, subscribe to our bi-monthly newsletter. You can unsubscribe at any time.