What’s Happening with the EU AI Act: A 2025 Update
Europe’s AI Act is first-of-its-kind legislation governing AI systems and, from August 2025, more of the Act’s key provisions come into force. With penalties for non-compliance ranging from 1.5% to 7% of worldwide annual turnover, it’s worth spending some time getting up to speed on what’s happening with the EU AI Act.
- We covered the contents of the AI Act in more detail here.
What’s happened over the last 12 months?
The EU AI Act has seen significant progress over the past year. The main developments include the following:
July 2024: The AI Act was officially published in the Official Journal of the European Union, the final formal step before it could enter into force.
August 2024: The Act entered into force, although many of its requirements are being phased in over time.
February 2025: The first phase of implementation took effect, banning AI systems that pose unacceptable risks and introducing AI literacy requirements.
April 2025: Not AI Act related, but Europe released its AI Continent Action Plan to promote AI innovation.
July 2025: The European Commission finalised its voluntary General-Purpose AI Code of Practice. This was originally slated for completion in May 2025.
July 2025: Meta reportedly declined to sign the voluntary Code of Practice.
August 2025: Several provisions of the AI Act will come into effect, including rules for general-purpose AI models, governance structures, confidentiality measures, and penalties for non-compliance.
The First Phase of Implementation
Banned AI Systems
One of the first key changes was the banning, within the EU, of certain AI systems that pose an unacceptable risk.
The EU AI Act segments AI systems by risk and imposes different levels of control depending on that risk. Systems identified as posing unacceptable risk, and so falling within the prohibited group, include those that manipulate human behaviour, exploit vulnerabilities of specific groups, or employ subliminal techniques beyond a user’s awareness. They also include AI systems used for social scoring and those that predict an individual’s risk of committing a crime based on profiling or personality traits.
Additionally, systems that create or expand facial recognition databases by scraping images from the internet or CCTV footage, systems that infer emotions in workplaces or educational institutions (except for medical or safety reasons), and systems that use real-time facial recognition in public places are all banned.
These bans are designed to protect individuals from the most intrusive and harmful uses of AI and ensure that AI technologies are developed and used responsibly.
AI Literacy Requirements
The EU AI Act sets out AI literacy requirements aimed at fostering a solid understanding of AI technologies across the EU. The goal is a knowledgeable, proactive community that can navigate the complex landscape of AI, maximising its benefits while minimising its risks.
AI literacy is defined as the ability to understand the fundamentals of AI, its capabilities, limitations, and potential societal impacts.
The Act also encourages educational institutions to integrate AI literacy into their curricula, so that students engage with AI concepts from an early age. This includes understanding algorithms, data ethics, and the implications of AI decisions for privacy and security.
The literacy requirements also extend to workplace training programs. Companies deploying AI systems are required to provide their employees with adequate training to comprehend the AI tools they are using, enabling them to discern the ethical ramifications of AI-driven decisions. This ensures that employees are equipped to identify and mitigate any potential biases or risks associated with AI technologies in their operations.
Moreover, the Act promotes public awareness campaigns to demystify AI for the general public. These initiatives aim to cultivate a well-informed citizenry capable of engaging in informed debates about AI policies and practices. By raising public awareness, the EU aims to empower individuals to critically assess AI applications and advocate for responsible AI governance.
The AI literacy requirements are part of the broader strategy to ensure the responsible development and use of AI.
July’s AI Governance Codes of Practice
The AI Governance Code of Practice is intended to aid compliance with the EU’s AI Act. It’s worth noting that adherence is voluntary and does not, of itself, constitute compliance with the AI Act. In other words, it’s better to view the code of practice as a set of best practices as opposed to a legal compliance checklist.
There are three chapters in the Code of Practice:
- Transparency.
- Copyright.
- Safety and Security.
The first two chapters are intended to be relevant to all providers of general-purpose AI models. Meanwhile, chapter three on Safety and Security is only relevant to providers of the most advanced models, those classified as posing systemic risk.
Briefly, the Transparency chapter asks providers to maintain a ‘Model Documentation Form’ with details on the model, its properties, distribution, and training data. This should be kept up to date and shareable, so downstream providers and regulators can review it and get an accurate picture of the AI’s data quality and integrity.
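To make the idea concrete, here is a minimal sketch of how a provider might keep such documentation as structured, shareable data. The field names below are illustrative assumptions, not the official Model Documentation Form template from the Code of Practice.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative only: these fields are assumptions, not the official
# Model Documentation Form template from the Code of Practice.
@dataclass
class ModelDocumentationForm:
    model_name: str
    provider: str
    release_date: str
    distribution_channels: list   # e.g. API access, open weights
    modalities: list              # e.g. text, image
    training_data_summary: str    # high-level description of data sources
    known_limitations: str

# A hypothetical filled-in form for a fictional model.
form = ModelDocumentationForm(
    model_name="example-model-1",
    provider="Example AI Ltd",
    release_date="2025-08-02",
    distribution_channels=["API"],
    modalities=["text"],
    training_data_summary="Public web text and licensed corpora (summary).",
    known_limitations="May produce inaccurate output; not for high-risk use.",
)

# Serialising to JSON keeps the form easy to share with downstream
# providers and regulators on request.
print(json.dumps(asdict(form), indent=2))
```

Keeping the form as structured data rather than free text makes it straightforward to version, validate, and hand over on request.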
The Copyright chapter asks providers to put in place a copyright policy that complies with EU copyright law, including commitments not to circumvent technological protection measures and to honour rights reservations. It also outlines measures for mitigating copyright-infringing outputs, among other things.
The Safety and Security chapter outlines state-of-the-art practices for managing systemic risks within advanced AI models. Key aspects include rigorous systemic risk identification, detailed analysis through various model evaluations, and the estimation of potential harm. Ultimately, these more advanced models should only be deployed if their systemic risks are deemed acceptable.
- You can find more on the AI Act Code of Practice here.
The August 2025 Implementation Phase
The following rules come into effect from August 2, 2025:
- Notified Bodies: Member states must establish a notifying authority responsible for designating and monitoring conformity assessment bodies. These bodies will be tasked with overseeing high-risk models, among other things.
- General-Purpose AI Models: Providers must inform the Commission within two weeks if a general-purpose AI model meets certain conditions, so the Commission can determine whether the model poses systemic risk. The Act’s article on GPAI models explains how a model is classified as having systemic risk: models with high-impact capabilities pose systemic risk, and models trained using a large amount of computation are presumed to have high-impact capabilities.
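The compute-based presumption above comes with a concrete number in the Act: a model trained with more than 10^25 floating-point operations is presumed to have high-impact capabilities. A minimal sketch of that single rule:

```python
# The AI Act presumes a general-purpose AI model has "high-impact
# capabilities" (and therefore systemic risk) when the cumulative
# compute used for its training exceeds 10^25 floating-point
# operations. This helper encodes only that presumption; the
# Commission can also designate models on other criteria.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the compute-based presumption applies (strictly greater)."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e24))  # False: below the threshold
print(presumed_systemic_risk(2e25))  # True: above the threshold
```

Note that this is a presumption, not the whole test: a provider can contest it, and the Commission can designate models as systemic-risk on other grounds.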
- Governance: The EU’s AI Office is to be set up with the goal of improving the EU’s knowledge of, and skills in, AI. It should be supported by the EU’s member states.
- Confidentiality: The AI Act requires all parties involved in applying the regulations to respect the confidentiality of all information and data they obtain, including protecting intellectual property rights and trade secrets. It also outlines data deletion obligations.
- Penalties: Penalties for breaches of the AI Act will depend on factors such as the seriousness and duration of the infringement, its impact, and the steps providers take to fix the problem. Fines for non-compliance range from 1.5% to 7% of worldwide annual turnover, while providing incorrect or misleading information can result in fines of up to 1% of annual turnover.
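To put the percentages above in perspective, here is a back-of-the-envelope calculation for a hypothetical company (the EUR 10 billion turnover figure is invented for illustration; the applicable percentage depends on the infringement).

```python
# Rough illustration of the fine ranges quoted above, for a
# hypothetical company with EUR 10 billion in worldwide annual turnover.
turnover = 10_000_000_000  # EUR, hypothetical figure

low = 0.015 * turnover                 # 1.5% -> roughly EUR 150 million
high = 0.07 * turnover                 # 7%   -> roughly EUR 700 million
misleading_info_cap = 0.01 * turnover  # 1%   -> roughly EUR 100 million

print(f"Non-compliance fines: EUR {low:,.0f} to EUR {high:,.0f}")
print(f"Misleading information cap: EUR {misleading_info_cap:,.0f}")
```

Even at the bottom of the range, the exposure is substantial, which is why compliance planning ahead of the August 2025 deadline matters.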
To receive updates like this via email, subscribe to our bi-monthly newsletter. You can unsubscribe at any time.