AI guardrails proposed for Australia
The Australian Federal Government announced in September 2024 that it will release ten AI ‘guardrails’ for companies developing or using artificial intelligence in high-risk settings.
These guardrails will initially apply on a voluntary basis, with the Government proposing to make them mandatory for the development and deployment of AI in high-risk settings in the future.
Although we haven’t seen the detail yet, the guardrails are expected to include:
- Ensuring human oversight and control of AI systems
- Informing users about AI-enabled decisions and interactions
- Providing avenues for individuals affected by AI to challenge outcomes
- Promoting transparency across the AI supply chain to address risk effectively
We understand the Government will consult on making these guardrails mandatory, with a discussion paper outlining regulatory options expected soon.
This follows a June 2023 public consultation on “Safe and Responsible AI in Australia” which received over 500 responses. The Government’s interim response to this public consultation in January 2024 concluded that existing laws are insufficient and committed to a risk-based, technology-neutral approach to AI regulation.
In February 2024 the Government appointed a group of 12 AI experts, which has been advising it on transparency, testing and accountability in high-risk AI settings.
Australia is now joining other developed nations in exploring AI regulation. The EU has already enacted broad regulations, while the US and UK are still deliberating. China has implemented strict guidelines, requiring companies to obtain approvals before offering AI services.
Stay tuned for updates. We’ll be watching closely as the discussion paper is released.
For more on AI development, check out our previous blog posts.