More Change in the Winds: The European Commission’s proposed changes to the EU’s AI Act
In November, the European Commission issued a Digital Omnibus that included changes to the EU AI Act, which was in the process of being implemented.
This blog focuses on what the EU AI Act was designed to do, the background to that legislation and the reasons for the move away from the ‘safety first’ position embedded in it, together with an overview of some of the key proposed changes in the Digital Omnibus on AI.
We have published a separate post on the proposed changes to the GDPR.
Background to the EU AI Act
The EU AI Act emerged from a growing recognition in the EU of both the transformative impact and benefits of artificial intelligence across society, together with the need to address potential risks associated with its deployment. The Act was the result of lengthy and often-heated debate in an attempt to strike a balance between fostering innovation and implementing robust safeguards against potential risks associated with AI, such as discrimination, privacy breaches, or threats to safety.
To achieve these aims, the Act introduced a risk-based framework that categorises AI systems according to their potential impact, with more stringent requirements for high-risk applications.
It imposed obligations on providers and users of AI systems to conduct risk assessments, maintain transparency, and ensure human oversight.
Additionally, the Act sought to enhance accountability by mandating clear documentation and traceability, while supporting the development of trustworthy AI that aligns with EU values and ethical standards.
We have written previously about the EU AI Act here:
The EU AI Act and its impact on Australia
What’s happening with the EU AI Act: A 2025 Update
Overall, this legislative approach to protecting people in the EU from harm and keeping them safe reflects the broader EU commitment to upholding the fundamental rights of individuals while at the same time understanding and supporting technological advancement.
There was some acceptance of this approach among US tech companies, with a recognition of the potential risks posed by some types of AI. However, the election of the Trump administration in the US marked a significant change in direction.
Shift away from AI Regulation
The Trump administration’s approach to AI regulation now emphasises deregulation, innovation acceleration, and centralised federal control to avoid a patchwork of state laws – rather than safety and risk management.
In particular, it seeks to limit state-level AI regulation through executive orders, litigation, and proposed legislation establishing the pre-eminence of federal law in this space, aiming for a unified federal standard so as to maintain perceived US global AI dominance.
The administration is also interested in strengthening export controls, especially to limit China’s access to advanced AI technology. It also aims to prevent ideological bias in federal AI systems and prioritises national security and economic competitiveness.
The EU has reacted to the Trump administration’s deregulatory and innovation-focused AI regulation approach by shifting its own stance on AI regulation.
Under pressure from the Trump administration, US tech companies, and European industry groups, the EU has proposed delaying parts of its AI Act, including postponing stricter rules on high-risk AI systems from 2026 to 2027, as well as other changes discussed in more detail below. These changes aim to ease compliance burdens and boost competitiveness for EU companies against US and Chinese firms.
However, this shift has drawn strong criticism from privacy advocates and civil society groups within the EU, who warn it represents the largest rollback of digital fundamental rights in EU history. They argue the easing could weaken protections under GDPR and the original AI Act, allowing greater access to sensitive personal data for AI training.
The European Commission defends these changes as necessary technical simplifications to help European businesses grow and innovate while maintaining high standards for fundamental rights, data protection, and safety. This reaction reflects a pragmatic recalibration balancing innovation competitiveness with regulatory safeguards, influenced by external pressures from the Trump administration’s AI policy stance and industry lobbying.
Key Proposed Changes to EU’s AI Act
So, what are the current proposals for changes to the EU AI Act? Some of the major proposed changes include:
Extended legal basis for processing sensitive data for bias mitigation
While the EU AI Act currently only mentions that sensitive data may be processed (under certain conditions) to ensure bias detection and correction in relation to high-risk AI systems, the proposal expands this possibility to other actors and AI systems, as well as to AI models.
Simplified requirements for AI systems qualifying for the Article 6(3) derogation
Article 6(3) of the AI Act states that an AI system will not be considered high risk, even if it falls within scope of Annex III, if it “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.” While Article 6(4) of the existing AI Act requires providers of such systems to register themselves and these systems in an EU database of high-risk AI systems, the proposed regulation would eliminate this registration requirement.
Shifted AI literacy obligations
Currently, the EU AI Act imposes an obligation on all providers and deployers of AI systems to take measures to ensure “a sufficient level of AI literacy” for staff. Acknowledging this approach may be challenging in practice, especially for smaller companies, the Commission proposes to delete this obligation and instead, require that Member States and the Commission encourage providers and deployers to take such measures.
Simplified rules for SMEs and SMCs
In an effort to enable a smooth transition of micro, small and medium-sized enterprises (“SMEs”) into small mid-cap enterprises (“SMCs”), the Commission proposes to expand some of the benefits granted under the EU AI Act to SMEs and thus reduce the compliance burden imposed on SMCs. For clarity, the proposal would also introduce definitions of SMEs and SMCs, which are aligned with previous Commission recommendations.
The Commission also proposes that all SMEs should benefit from a simplified way to comply with the obligation to establish a quality management system, and not only microenterprises as is currently the case under the EU AI Act, and that both SMEs and SMCs only need to implement the quality management system in a manner proportionate to the size of their organisation.
Clarified rules on conformity assessments
The existing AI Act imposes different conformity assessment rules for AI systems that fall within scope of Annex I.A of the Act, and those that fall within scope of Annex III. The Commission proposal clarifies that, where a single AI system falls within scope of both Annex I.A and Annex III, the provider must follow the conformity assessment rules that apply by virtue of falling within scope of Annex I.A.
Clarified Supervision and Enforcement System
The proposal includes several provisions aiming at clarifying the role of the AI Office as well as expanding its supervision and enforcement powers. In particular, it clarifies that the AI Office would have exclusive competence for the supervision and enforcement of Annex III AI systems that are based on general-purpose AI models, where that model and system are developed by the same provider, as well as for AI systems that constitute or are integrated into a designated very large online platform or very large online search engine within the meaning of the Digital Services Act. The Commission also would be required to undertake pre-marketing conformity assessments of any such systems that are classified as high risk and are subject to third-party conformity assessment pursuant to Article 43 of the Act.
These amendments mark a significant change from the existing AI Act, which arguably gives Member State market surveillance authorities shared competence over the supervision and enforcement of such systems.
Updated Timelines
Importantly, the Commission proposes to amend the timeline for the entry into force of certain provisions, including obligations related to high-risk AI systems. Currently, the rules applicable to high-risk AI systems would start applying on December 2, 2027 for such systems covered by Annex III, and as of August 2, 2028 for those systems covered by Annex I. Likewise, the Commission proposes to amend Article 111(2) to state that the AI Act will apply to operators of high-risk AI systems placed on the market or put into service before these dates only if, as of those dates, these systems are subject to significant changes in their design.
It is also proposed to push back to February 2, 2027 the application of the transparency obligation in Article 50(2) of the EU AI Act, which applies to providers of AI systems (including general-purpose AI systems) generating synthetic audio, image, video or text content, to the extent their AI system was placed on the market before August 2, 2026.
What happens next?
Overall, these proposed reforms reflect a shift in the EU’s approach to AI regulation: from a strict ‘safety first’ stance to a more balanced framework that seeks to maintain high standards of safety and ethics without stifling innovation. As the legislative process continues, further refinements to the Act are likely, with an ongoing focus on supporting both the responsible use of AI and the region’s competitiveness in the global AI landscape.
The final text of the Digital Omnibus on AI is likely to evolve during negotiations with the European Parliament and the Council of the EU (“Council”). Definitely keep an eye on this space for further developments.