New AI Regulation in the EU: A risk-based approach with teeth
Artificial intelligence (AI) is transforming the world in many ways. Many countries are considering how to make the most of the opportunities this transformation offers while managing the challenges that come with the use of AI. On April 21, 2021, the European Commission unveiled its long-awaited proposal for a regulation laying down harmonized rules on artificial intelligence and amending certain Union legislative acts (the AI Regulation), a key piece of the Commission’s ambitious European Strategy for Data.
According to the Commission, the proposed AI Regulation will ensure that proportionate and flexible rules address the specific risks posed by AI systems and set the highest standard worldwide. It is hoped that the regulation will establish a system that enables Europeans to trust what AI has to offer.
The proposed AI Regulation builds on preparatory work by the Commission and its advisers. The European Strategy on AI was published in 2018. After extensive stakeholder consultation, the High-Level Expert Group on Artificial Intelligence (AI HLEG) developed its Ethics Guidelines for Trustworthy AI in 2019 and an Assessment List for Trustworthy AI in 2020. In parallel, the first Coordinated Plan on AI was published in December 2018 as a joint commitment with Member States. (This plan has been updated as part of the new package and is discussed further below.)
The Commission’s White Paper on AI, published in 2020, set out a clear vision for AI in Europe: an ecosystem of excellence and trust, setting the scene for the AI Regulation. The White Paper was accompanied by a ‘Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics’, which concluded that current product safety legislation contains a number of gaps that need to be addressed, notably in the Machinery Directive.
The outcome of this work is a detailed roadmap for the regulation of AI that is consistent with, and integrates wherever possible into, other existing legislation such as product safety legislation. The Commission went to great lengths to avoid inconsistencies and duplication, and aims to minimize additional burdens for all concerned.
AI Regulation: What is AI?
The term “AI system” is defined broadly and in a technology-agnostic way as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
AI Regulation: Risk-based approach
The proposed regulation reflects a risk-based approach to AI that recognizes the potential of AI and the many benefits it presents, while remaining keenly aware of the dangers these new technologies pose to European values and to fundamental rights and principles.
AI Regulation: Unacceptable risk
Systems that pose an unacceptable risk will be banned. These are AI systems considered a clear threat to the safety, livelihoods and rights of people. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance to encourage dangerous behaviour by minors) and systems that allow ‘social scoring’ by governments.
AI Regulation: High Risk
The term ‘high-risk AI system’ is not defined. Articles 6 and 7 contain the criteria for determining whether a system should be considered high risk. Examples of AI systems identified as high-risk include AI technology used in:
- Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
- Educational or vocational training, that may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
- Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
- Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
- Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
High-risk AI systems will be subject to strict obligations before they can be placed on the market. These include:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimise risk;
- High level of robustness, security and accuracy.
The effect of these obligations is that high-risk AI systems will be subject to scrutiny both before they are placed on the market or put into service and throughout their life cycle, including through a mandatory risk management system, strict data and data governance requirements, technical documentation and record-keeping requirements, and requirements for post-market monitoring and the reporting of incidents.
All remote biometric identification systems (e.g. facial recognition systems) are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle.
Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Any such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.
AI Regulation: Limited and Minimal Risk
Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.
Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.
Extra-territorial application of AI regulation
The obligations under the proposal affect all parties involved: the provider, importer, distributor and user. In particular, the regulation applies to:
- providers that place on the market or put into service AI systems, irrespective of whether those providers are established in the European Union or in a third country;
- users of AI systems in the EU; and
- providers and users of AI systems that are located in a third country where the output produced by the system is used in the EU.
In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation. The European AI Board will assist the national supervisory authorities and the Commission to ensure consistent application of the regulation, issue opinions and recommendations, and collect and share best practices among Member States.
Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.
The regulation foresees steep administrative fines for various types of violation, with maximums for companies ranging from 2% to 6% of total annual worldwide turnover, depending on the severity of the infringement.
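To illustrate how the turnover-based ceilings scale, here is a minimal sketch in Python. The tier amounts reflect Article 71 of the April 2021 proposal (up to €30 million or 6% of worldwide annual turnover for prohibited practices and certain data governance violations, €20 million or 4% for non-compliance with other obligations, and €10 million or 2% for supplying incorrect information to authorities, whichever is higher); the function and tier names are illustrative only, and the figures may change during the legislative process.

```python
# Sketch of the administrative fine ceilings in Article 71 of the
# proposed AI Regulation (April 2021 text). Amounts are EUR; the
# percentage applies to total worldwide annual turnover.

FINE_TIERS = {
    # tier name: (fixed ceiling in EUR, share of worldwide annual turnover)
    "prohibited_practices_or_data_governance": (30_000_000, 0.06),
    "other_obligations": (20_000_000, 0.04),
    "incorrect_information_to_authorities": (10_000_000, 0.02),
}

def maximum_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the fine ceiling for a violation tier: the fixed amount
    or the turnover-based figure, whichever is higher."""
    fixed, share = FINE_TIERS[tier]
    return max(fixed, share * worldwide_turnover_eur)
```

For a company with €1 billion in worldwide turnover, for example, the ceiling for a prohibited-practice violation would be €60 million, since the turnover-based figure exceeds the fixed amount.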
When will it apply?
The regulation, once adopted, will come into force 20 days after its publication in the Official Journal. It will apply 24 months after that date, although some provisions will apply sooner. This long “grace period” increases the risk that, notwithstanding the Commission’s efforts to make the regulation future-proof, some of its provisions will be overtaken by technological developments before they even apply.
What happens next?
The proposal now goes to the European Parliament and the Council for further consideration and debate. Given the controversial nature of AI and the large number of stakeholders and interests involved, it seems fair to assume there will be many changes before the law is finalised.
There will likely be many amendments — and hopefully, some further clarifications — in the European Parliament and discussions with member states.
For example, on 21 June 2021 the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) released a Joint Opinion on the proposed AI Regulation, suggesting many changes to the European Commission’s proposal.
Other commentators have identified three main areas where amendment is needed:
- the regulation of emotion recognition,
- the regulation of biometric classification systems and
- the protection against commercial manipulation.
In the meantime, we have at least a starting point for regulation of an area that brings so much opportunity together with potential harm.
Implementation of the AI Plan and Regulation
Coordination of Member State AI plans will strengthen Europe’s leading position in human-centric, sustainable, secure, inclusive and trustworthy AI.
In 2018 the European Commission adopted the Coordinated Plan on Artificial Intelligence, developed together with the Member States to maximise the impact of investments at EU and national levels and to encourage synergies and cooperation across the EU. One of the key actions towards these aims was to encourage Member States to develop their own national AI strategies.
In April 2021, an updated Coordinated Plan on Artificial Intelligence was released together with the proposed regulatory framework discussed above. The key aims of the 2021 Coordinated Plan are to accelerate investment in AI, act on AI strategies and programmes and align AI policy to avoid fragmentation.
Turning strategy into action, the 2021 Coordinated Plan’s key message is that the Commission and Member States should:
- accelerate investments in AI technologies to drive resilient economic and social recovery aided by the uptake of new digital solutions;
- act on AI strategies and programmes by implementing them fully and in a timely manner, to ensure that the EU fully benefits from first-mover advantages;
- align AI policy to remove fragmentation and address global challenges.
To achieve this, the updated plan sets out four key sets of policy objectives, supported by concrete actions and indicating possible funding mechanisms and timelines, to:
- set enabling conditions for AI development and uptake in the EU;
- make the EU the place where excellence thrives from the lab to market;
- ensure that AI works for people and is a force for good in society;
- build strategic leadership in high-impact sectors.
European Commission Ethics Guidelines for AI
When thinking about the use of AI, it is also worth remembering the Ethics Guidelines for Trustworthy AI developed by the AI HLEG for the Commission. These should also be considered in any AI implementation in the EU.
Article 6 refers to products or components covered by existing EU product safety legislation listed in Annex II to the proposal, such as EU legislation on machinery, toys, lifts, pressure equipment or medical devices, to name a few. Article 7 refers to AI systems used in the areas set out in Annex III that the Commission considers high risk, and sets out the criteria to be taken into account when updating that Annex.