AI Regulation Around The World
In May, hundreds of AI experts warned that AI (if left unchecked) could pose an existential risk to humanity. Meanwhile, Big Tech leaders (including OpenAI’s CEO) have called for greater regulation of AI. Europe has answered the rallying calls for AI regulation with a new proposed law, but different governments around the world are taking different approaches. Let’s look at them – and dig into what that means for AI in Australia.
AI Regulation in Europe
Europe’s AI Act (currently in approved draft form) looks set to become a first-of-its-kind law, assuming it passes its final hurdles at the Council of the European Union and amongst the member states.
The AI Act introduces a tiered system that requires stronger protections as the level of risk to privacy and safety rises. Certain applications of AI would be prohibited outright, including social scoring and systems deemed to conflict with EU values.
Supporting material suggests that most AI systems will fall into lower-risk categories.
The AI Act also asks organisations to be transparent about certain AI use, including:
- Notifying humans when they are interacting with an AI system (in many cases);
- Notifying humans when emotional recognition or biometric categorisation systems are applied to them; and
- Labelling deep fakes in most cases.
Europe’s AI Act will apply to anyone who develops or deploys AI within the EU. Organisations based outside the EU will fall under the law too where they place AI systems on the EU market or their systems’ output is used within the EU.
Europe’s AI Penalties
Penalties under Europe’s AI Act are likely to be hefty. The AI Act provides for fines of up to €40 million or 7% of a company’s worldwide annual turnover, whichever is higher.
The Act does leave room for proportionality in its penalties, however.
AI Regulation in the USA
The USA does not have a comprehensive AI law, but several existing laws touch on aspects of AI, and the White House has released its Blueprint for an AI Bill of Rights. The framework applies to automated systems that have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.
“On his first day in office, the President ordered the full Federal government to work to root out inequity, embed fairness in decision-making processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.”
The Blueprint for an AI Bill of Rights outlines five principles that should guide the design and deployment of AI in the US. They are:
- Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
- Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
- Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
- Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
- Human Alternatives, Consideration, & Fallback: You should be able to opt-out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
AI Regulation in Asia
Reuters recently reported that the 10-member Association of Southeast Asian Nations (ASEAN) intends to create a Guide on AI Governance and Ethics. Their coverage notes that the ASEAN AI Guidance may be published in late 2023, or (more likely) at the Digital Ministers’ Meeting in early 2024.
In May, the Cyberspace Administration of China released its draft AI Regulations. Dezan Shira and Associates report that the draft law lays out ground rules for generative AI (like ChatGPT). As you’d expect from an authoritarian regime, it is designed to control the type of content generative AI can create and to counter information distortion. However, other provisions are reportedly quite progressive, including transparency rights that would make it easier to identify potential or actual rights violations where AI systems are used.
What This Means for Australia
The Australian Government released a discussion paper – Safe and Responsible AI in Australia – on 1 June. The paper focuses on potential governance mechanisms to position Australia as a global leader in responsible AI.
It is open for comment until 26 July. We will keep you updated about any further developments in future posts.
In the meantime, if you need assistance with your organisation’s privacy or security, reach out. Our privacy team would love to help.