Australia’s AI Safety Institute: Great new initiative or keeping up with the pack?

Late in 2025, the Australian Government announced the establishment of a national AI Safety Institute (AISI) as part of the new National AI Plan. (We covered the National AI Plan in more detail here.)

The Institute, supported by government funding of $29.9 million, will roll out in early 2026. Recruiting has commenced (with most positions closing for applications in January or early February 2026).  

According to the government, its establishment shows that ‘Australia intends to be a global leader in safe, trustworthy, and values-aligned AI.’  But does it really? In particular, how effectively will the Institute be able to operate in its mostly advisory capacity and with no mandatory requirements or AI specific regulation to support its role?

This article explores the Institute’s role, its place within Australia’s broader AI strategy, and what we know so far about staffing, collaboration, and international engagement and whether the AISI is really likely to make Australia a global leader in AI or just another one of the ‘middle-power’ pack.

Outline of Australia’s AI Safety Institute

The AISI will act as a central, expert hub to evaluate advanced AI models, including  monitoring, testing, and assessing emerging AI risks.  It will also advise on policy and regulatory compliance and collaborate with  both the  National AI Centre and with international partners to set global testing protocols and safety standards via membership of the global International Network of Safety Institutes.

Ultimately it is designed to ensure the safe, responsible development and deployment of artificial intelligence, without impacting innovation which, translated, means no pesky regulation to slow things down. 

Australia’s National AI Plan

The AI Safety Institute was included in Australia’s National AI Plan – released in 2025.  That plan  elected to pursue a voluntary approach to how Australia will adopt, regulate, and benefit from AI  relying on existing laws (in privacy, consumer protection and intellectual property , for example) to fill in the space. This was largely in response to Productivity Commission recommendations to pause major new AI rules while gaps in the existing legal framework are audited, stressing the risk of constraining what it estimated could be a $100-billion-plus boost to the economy.  The economic and ‘productivity’ benefits clearly outweigh concerns around AI harms at least in the Australian policy space.

Key objectives of the National AI Plan, in addition to the establishment of the AI Safety Institute, include:

  • Infrastructure investment: Expansion of data centres and digital infrastructure, supported by $1 billion from the National Reconstruction Fund.
  • Support for SMEs and non-profits: Funding through the “AI Adopt Program” to help smaller organisations integrate AI safely and effectively. 
  • Workforce training: Initiatives to upskill workers and ensure meaningful consultation in the design of AI systems. 

We covered the National AI Plan in more detail here.

Role of Australia’s AI Safety Institute

The  new AI Safety Institute is the mechanism through which the government intends to operationalise safety, governance, and oversight—ensuring AI “serves Australians, not the other way around.”  The core functions of the AI Safety Institute include:

1. Technical Evaluation of AI Systems: The Institute will conduct assessments of emerging and high-risk AI systems, including foundation models, generative AI, and autonomous decision-making tools. This includes evaluating safety, robustness, fairness, and compliance with Australian law.

2. Advising Government on Regulation: As AI evolves, legislation will need to adapt. The Institute will support policymakers by identifying regulatory gaps, advising on updates, and ensuring consistency across sectors.

3. Supporting Enforcement and Compliance: The government has emphasised that the Institute will help ensure AI companies comply with Australian legal standards, including privacy principles of fairness, transparency, and consumer protections.

4. Publishing Research and Guidance: The Institute will release technical research, safety benchmarks, and best-practice guidance for industry, academia, and the public—similar to the role played by the UK AI Safety Institute and the U.S. AI Safety Institute.

Collaboration with the National AI Centre

The Institute will work closely with the National AI Centre (NAIC), which already plays a major role in industry engagement, capability building, and AI Week activities.

This collaboration is expected to include:

  • Joint research initiatives on responsible AI
  • Shared industry outreach, particularly for SMEs and not-for-profits
  • Coordinated national events, including National AI Week
  • Alignment of safety standards with innovation programs

The NAIC’s existing networks—spanning industry, academia, and government—will help the Institute scale quickly and ensure its work is embedded across the Australian AI ecosystem.

International Collaboration

AI governance is inherently global, and Australia has signalled its intention to be an active participant in international safety efforts.

The AI Safety Institute will collaborate with:

  • The International Network of AI Safety Institutes, a growing coalition of countries working on shared safety benchmarks and research
  • Bilateral and multilateral partners, including the UK, U.S., Canada, and the EU
  • Global standards bodies, particularly those developing AI assurance and testing frameworks

The government has emphasised that international cooperation is essential to addressing cross-border risks, aligning safety standards, and ensuring AI development reflects democratic values.

For Australian organisations operating internationally—or using global AI models—this alignment will be critical.

Funding and Staffing

Recruitment for the founding team began in late 2025 and continues into early 2026, with the government seeking a mix of technical, regulatory, and strategic expertise.

Based on publicly available job advertisements and government statements, the Institute is seeking candidates with a range of skills including:

  • Experience working in or with other AI Safety Institutes (e.g., UK, U.S., Singapore)
  • Technical expertise in AI model evaluation, including red-teaming, adversarial testing, and safety benchmarking
  • Backgrounds in AI governance, ethics, or risk management
  • Regulatory and policy experience, particularly in emerging technology
  • Research capability in machine learning, safety science, or computational social science
  • Experience in standards development, including ISO/IEC AI standards
  • Stakeholder engagement and public-sector advisory skills

This blend of technical and governance expertise reflects the Institute’s dual mission: to provide deep technical capability while shaping national policy and regulatory frameworks.

While the full structure has not yet been published, early indications suggest the Institute will include:

  • A Technical Evaluation Division
  • A Policy and Regulatory Advisory Unit
  • A Research and Standards Team
  • A Partnerships and Engagement Office

This mirrors the structure of other global AI safety bodies and supports the proposed key functions of Australia’s AI Safety Institute.

What can we learn from other AI Safety Institutes?

Creation of a national AI Safety Institute is not a unique proposition. The UK and US both created AI Safety Institutes in late 2023 to give government an in-house, technically capable “test and evaluate” function for frontier AI models.  However, those institutes differ in mandate, style and political constraints. Although Australia is more likely to follow the UK AI Safety Institute (UK AISI) it is worth considering the US experience and what that might mean for Australia.

Creation of UK AISI and US AISI

The UK AISI  was announced as the evolution of the Frontier AI Taskforce, which was set up in April 2023 to focus on “frontier” models after rapid capability jumps like GPT-4.  Its stated mission is to “minimise surprise to the UK and humanity from rapid and unexpected advances in AI” by building the sociotechnical infrastructure (metrics, evaluations, governance tools) needed for advanced AI oversight.  Politically, it was also designed to make the UK a visible global leader on AI safety, tying directly into the Bletchley Park AI Safety Summit and political ambition to make the UK the “geographical home” of AI safety regulation. 

UK AISA is not a separate body and is located in the UK Department for Science, Innovation and Technology.  Its initial funding was £100 million, framed as the largest public AI-safety-specific commitment at that time.  Staff numbers are not officially fixed in public documents, but the Taskforce and then the Institute are consistently described as a “globally recognised research team at the heart of government,” indicating a relatively small, high-skill technical unit rather than a large regulator.

In contrast, the US AI Safety Institute (US AISI) was created under President Biden’s AI Executive Order and housed in NIST (the US government standards organisation) again in 2023.   The purpose is to develop measurement, benchmarking and evaluation frameworks that can underpin regulatory action across US federal agencies and sectors, in line with NIST’s broader standards role.

US AISI is also a coordination mechanism between government, labs and industry: a way to channel safety requirements into the development of US-based foundation models while retaining competitiveness. In February 2024 the US government launched the AI Safety Institute Consortium (AISIC), which has grown to nearly 300 member organisations, including major labs and tech companies such as Google, Anthropic and Microsoft, giving it a very large external “networked” footprint even if the core institute remains relatively lean. 

For an Australian context, both institutes were set up to close the gap between high-level AI principles and the hard engineering work of actually testing real, powerful models before and after deployment, with the US AISA more closely linked to industry. Both are small in-house expert bodies plugged into larger departmental or standards structures, not standalone mega-regulators.

What are the key functions of UK AISI and US AISI?

The key functions of the UK AISI are very similar to those of the Australian  and the UK AISI, though with more of a focus on ‘frontier AI models.’  Key functions include

  • Evaluate frontier AI models: design, run and publish safety and capability evaluations, including red-teaming, alignment tests and misuse risk assessments on models provided by leading labs. 
  • Conduct foundational AI safety research: launch exploratory projects on topics like interpretability, robustness, agentic behaviour and control, aiming to shift the debate from “speculative and philosophical” to “scientific and empirical.” 
  • Build infrastructure for governance: develop benchmarks, testing protocols and other sociotechnical tooling that can be reused by regulators, companies and other governments. 
  • Facilitate information exchange: act as a hub between policymakers, international partners, companies, academia and civil society, including via formal information-sharing channels.
  • International engagement: position the UK at the center of the emerging global network of AI Safety Institutes, including expansion to San Francisco to be physically close to major labs. 

The functions of the US AISI have some overlap but also some differences.

There seems to be a greater emphasis on the development of measurement and evaluation standards: a key function is to create frameworks, tests and benchmarks that agencies and industry can adopt, consistent with NIST’s broader mission.   Focus is also on supporting Federal government agencies, providing co-ordination across US agencies to support implementation of the AI Executive Order by providing technical advice and standardised tests that regulators can plug into their own sectoral mandates.  This also translates into concentration on high-risk domains for the US Federal Government.  An example is the Testing Risks of AI for National Security (TRAINS) Taskforce, which concentrates on national security–relevant misuse and systemic risk.

The US AISI will also operate the AI Safety Institute Consortium, which already has 300 member organisations to co-develop evaluations, best practices and sector-specific guidance. The US AISA will enable pre-deployment access agreements: negotiate MOUs with frontier labs (e.g. Anthropic, OpenAI) so that models can be tested for safety and national-security risks before release.  This is a particularly important function given the concentration of frontier AI in the USA.

From an Australian perspective, the important pattern is that neither institute writes most of the rules; they generate the tests, evidence and infrastructure that other parts of government and industry then use.

What have the UK and US AISI’s achieved to date?

The UK AISI ran early government-backed evaluations of frontier models following the Bletchley Summit, contributing empirical evidence about risks from large-scale systems. 

Perhaps more importantly it has developed and released “Inspect,” an open-source AI evaluation framework (MIT licence) with more than 100 pre-built tests and tools for monitoring and visualising model behaviour, now used internationally by governments, companies and academics.

The UK AISI has also been active internationally, helping shape the global “network of AI Safety Institutes,” formalised at the AI Seoul Summit in May 2024, which now includes institutes or planned bodies from Japan, EU, Canada, Australia and others. 

In contrast, the US AISI rapidly built the AI Safety Institute Consortium (AISIC) to 290-plus members, giving the US a very broad, multi-stakeholder forum for safety work anchored in NIST.  It has also embedded AI safety testing and standards into the implementation of the AI Executive Order, ensuring that agencies would have reference tests and benchmarks rather than each building bespoke tools from scratch.

These achievements show what “early wins” for an AI SI can look like: concrete tools (like Inspect), institutional networks (consortia) and targeted taskforces.  It is not clear that any of these achievements are on the agenda for Australia’s new institute.

What has been the public reaction to AI SIs?

Many AI governance and policy commentators have praised the UK for “actually doing things” in the safety space: committing substantial public money, producing open-source tooling, and running real evaluations rather than relying only on voluntary industry self-assessment.

Internationally, the UK AISI is generally viewed as a model for other governments, but there is ongoing debate about whether its “frontier-model-centric” mandate is too narrow relative to broader socio-economic harms. There is also scepticism about the UK’s broader AI governance posture: critics point to under-resourced regulators and worry that the Institute could become a technical fig leaf if its work is not coupled to enforceable obligations. 

In the US, supporters see the institute as an overdue move to give the US government in-house technical capacity, aligned with NIST’s longstanding credibility on standards and testing.

Critics argue that the US effort has been hamstrung by politics: the institute sits inside a highly contested policy environment, and there is concern that industry influence and partisan dynamics could blunt its impact or delay strong safety requirements. 

The large consortium model attracts mixed reactions: it provides breadth and buy-in, but raises questions about how much genuine constraint on powerful companies can emerge from a forum they strongly shape. 

Takeaways for an Australian AI Safety Institute

So, what can the new Australian AISI learn from the experience of overseas institutes so far?

The model of a technical institute inside government—small, expert, and plugged into existing regulators—rather than a new mega-agency – as implemented by the UK and US show that this is politically feasible and operationally useful. 

Early credibility can be developed via concrete deliverables: open-source evaluation tools, independent model testing, and transparent reporting on risks, not just strategy papers. 

However, public trust depends on clear separation between the safety institute’s technical assessments and any political or commercial pressures, a recurring concern in both the UK and US debates. And also a concern in Australia where the Productivity Commission report heralded a new policy direction for ensuring AI safety.

What This Means for Australian Privacy, Governance, and Compliance Professionals

The establishment of Australia’s AI Safety Institute marks a significant milestone in the nation’s digital-governance journey. 

For Australian organisations —especially those handling personal data or deploying – AI systems—the AI Safety Institute may become a key source of guidance, expectations, and technical standards. Its work could help shape how organisations assess AI risks, design governance frameworks, and demonstrate compliance in an increasingly complex regulatory environment.  Through collaboration with the National AI Centre, the Institute may help build capability in sectors that traditionally lack access to technical expertise, such as not-for-profits and SMEs.

Internationally, integrating into the emerging international network of AI Safety Institutes is an opportunity for Australia to punch above its weight by sharing evaluations, tools and testbeds rather than trying to do everything alone. 

However, the extent to which it will be able to achieve any of these outcomes depends on leadership, funding, regulatory empowerment and political support for its work.  Australia’s history in the funding and empowerment of regulators in the privacy and adjacent spaces suggests that there could be challenges ahead for the AISI, particularly if it is going to be establishing a leadership position in global AI.

The UKAISI refers to ensuring AI is advanced in a safe, secure and beneficial way.  Australia’s commitment is to safe, trustworthy, and values-aligned AI.  It will be interesting to see whether ultimately there is a difference between ‘beneficial’ and ‘values-aligned’ AI particularly where economic values take precedence over more human values.

Resources we used

Privacy, security and training. Jodie is one of Australia’s leading privacy and security experts and the Founder of Privacy 108 Consulting.