Australia announced the AI Safety Institute (AISI) in November 2025 with $29.9M in funding. It goes live in early 2026. The job is to fill a gap: the country needs somewhere to independently evaluate advanced AI systems before they're released.
This guide is part of our comprehensive overview, Understanding Australia's National AI Plan and Its Approach to AI Regulation, where the AI Safety Institute represents the government's commitment to keeping Australians safe while fostering AI innovation.
What makes AISI different? It’s advisory. Not regulatory. AISI will test models, monitor risks, and share findings. But it won’t enforce compliance. That stays with the existing regulators—OAIC for privacy, ACCC for consumer protection, eSafety Commissioner for online harms.
The core work breaks into three parts. Pre-deployment testing of frontier AI models. Upstream risk assessment where they evaluate capabilities at design stage. And downstream harm analysis tracking what happens in the real world. Plus identifying regulatory gaps—the places where existing Australian laws don’t cover AI-specific risks.
This fits into the National AI Plan’s three-pillar framework around opportunities, benefits, and safety. AISI joins an international network with the UK and US safety institutes.
For you, AISI means practical guidance on when to engage with safety evaluation, what testing methodologies to expect, and how safety insights inform your compliance obligations. Early preparation is recommended even though AISI isn't operational yet. Preparing now means smooth interaction once it is.
What Is the Australian AI Safety Institute?
The Australian Government announced AISI in November 2025 as a whole-of-government hub for monitoring, testing, and sharing information on emerging AI technologies, risks, and harms. It goes live in early 2026 with $29.9M in funding.
AISI sits within the Department of Industry, Science and Resources. It's part of Australia's National AI Plan under the "Keep Australians Safe" pillar. That complements the National AI Centre (NAIC), which handles adoption guidance.
The advisory function is what matters. AISI provides expert guidance and testing capability. It doesn’t have regulatory enforcement authority. Specialist regulators retain enforcement powers under existing laws.
AISI’s core mandate: pre-deployment testing of advanced AI systems, upstream risk assessment at design stage, and downstream harm analysis of deployed systems. Plus identifying regulatory gaps where existing Australian laws don’t adequately address AI-specific risks.
How Much Funding Does the AI Safety Institute Receive?
The National AI Plan (December 2025) announced a $29.9 million commitment to establish the AI Safety Institute.
The money covers establishment costs, operational capacity through the early years, and technical infrastructure for testing. It pays for staffing AI safety experts, building partnerships with international safety institutes, and creating pre-deployment testing capability for Australian AI developers.
This sits within the broader National AI Plan investment. It's modest compared to the UK AISI's research budget, but sufficient for the advisory and testing mandate in Australia's context.
The funding reflects Australia’s light-touch regulatory philosophy—advisory guidance rather than extensive regulatory bureaucracy.
What Are the AI Safety Institute’s Key Functions?
AISI operates through three primary functions.
First, pre-deployment testing of advanced AI systems. This is voluntary evaluation of frontier AI models before public release. The methodologies come from UK and US safety institutes—red teaming, capability elicitation, safety cases.
Second, monitoring and analysis of AI risks and harms. This includes upstream work evaluating AI capabilities, training datasets, and system architecture at design stage. Plus downstream work monitoring real-world impacts of deployed AI systems, tracking incidents, and analysing harm patterns.
Third, information sharing with government, industry, and international partners.
Monitoring tracks both capability trends (what advanced AI can do) and harm patterns (actual adverse outcomes). Information sharing enables evidence-based policymaking without creating new regulatory requirements.
AISI doesn’t replace existing regulators. It enhances their AI-specific capability. Portfolio agencies and regulators remain best placed to assess AI uses and harms in their specific sectors.
For regulatory gap identification, AISI uses a systematic process to spot where existing Australian laws fail to address AI-specific risks. That informs recommendations to specialist regulators.
What Is the Difference Between Upstream and Downstream Risk Assessment?
Upstream AI risks come from model capabilities and from how AI models and systems are built and trained in ways that can create or amplify harm. This is proactive evaluation at the AI design and development stage, before deployment.
Downstream AI harms are the real-world effects people experience when an AI system is used. This is reactive monitoring tracking actual outcomes.
Upstream identifies risks based on what AI could do. Downstream tracks what AI has done.
Both approaches inform AISI’s regulatory gap identification and recommendations to specialist regulators. Upstream enables early intervention—safer by design. Downstream validates whether upstream predictions matched reality.
For you, upstream assessment determines whether pre-deployment testing is recommended. Downstream analysis may trigger post-deployment review.
Upstream methodology covers capability elicitation, dataset analysis (training data risks, bias patterns), and architecture review (safety measures, alignment techniques).
Downstream methodology includes incident monitoring, harm pattern analysis, and deployed system audits.
These are complementary approaches. Upstream predictions get tested against downstream reality. That creates a feedback loop for methodology refinement.
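To make the distinction concrete, here is a minimal Python sketch of how a governance team might record both views and close the feedback loop. The class names, fields, and harm categories are illustrative assumptions, not AISI artefacts.

```python
from dataclasses import dataclass, field

@dataclass
class UpstreamAssessment:
    """Design-stage evaluation: what the system could do."""
    model_name: str
    capability_findings: list[str]      # capability elicitation results
    dataset_risks: list[str]            # training-data bias or provenance issues
    architecture_notes: list[str]       # safety measures, alignment techniques
    predicted_harms: set[str] = field(default_factory=set)

@dataclass
class DownstreamObservation:
    """Post-deployment monitoring: what the system has done."""
    model_name: str
    incident_description: str
    harm_category: str                  # e.g. "privacy", "consumer", "online_safety"

def compare_predictions(upstream: UpstreamAssessment,
                        observations: list[DownstreamObservation]) -> dict[str, list[str]]:
    """Feedback loop: which predicted harms materialised, and which observed
    harms were never predicted (a signal for methodology refinement)."""
    observed = {o.harm_category for o in observations
                if o.model_name == upstream.model_name}
    return {
        "predicted_and_observed": sorted(upstream.predicted_harms & observed),
        "predicted_not_observed": sorted(upstream.predicted_harms - observed),
        "observed_not_predicted": sorted(observed - upstream.predicted_harms),
    }
```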
How Does Pre-Deployment Testing Work?
Pre-deployment testing is voluntary. AI developers submit frontier models to AISI for safety testing before public release.
Testing methodologies come from UK and US safety institutes. Red teaming involves adversarially probing models to uncover vulnerabilities. Capability elicitation determines maximum model capability. Safety case review examines a structured argument demonstrating system safety within its deployment context.
AISI tests for dangerous capabilities in these domains: cybersecurity (offensive hacking, vulnerability discovery), CBRN (chemical/biological/radiological/nuclear knowledge), autonomous replication (AI self-propagation), and persuasion (manipulation at scale).
Evaluation produces a risk assessment report—model capabilities identified, safeguards tested, recommendations for deployment conditions or additional controls.
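To illustrate how those domains and report elements might be organised by a developer preparing for evaluation, here is a short Python sketch. The enum mirrors the domains listed above; the finding fields and report shape are assumptions, since AISI hasn't published a report format.

```python
from dataclasses import dataclass
from enum import Enum

class CapabilityDomain(Enum):
    """Dangerous-capability domains in AISI's testing scope."""
    CYBERSECURITY = "offensive hacking, vulnerability discovery"
    CBRN = "chemical/biological/radiological/nuclear knowledge"
    AUTONOMOUS_REPLICATION = "AI self-propagation"
    PERSUASION = "manipulation at scale"

@dataclass
class EvaluationFinding:
    domain: CapabilityDomain
    capability_observed: str    # what the model could be elicited to do
    safeguard_tested: str       # mitigation probed during red teaming
    safeguard_held: bool        # did the safeguard withstand the probe?

def summarise(findings: list[EvaluationFinding]) -> dict:
    """Roll findings up into the shape of a risk assessment report:
    capabilities identified, safeguards tested, outstanding recommendations."""
    return {
        "capabilities_identified": [f.capability_observed for f in findings],
        "safeguards_tested": [f.safeguard_tested for f in findings],
        "recommendations": [
            f"Add controls for {f.domain.name.lower()} before deployment"
            for f in findings if not f.safeguard_held
        ],
    }
```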
Testing takes weeks, not days. Comprehensive evaluation requires thorough adversarial testing and capability mapping.
Results inform whether AISI recommends deployment, conditional deployment with safeguards, or flagging concerns to specialist regulators.
The voluntary collaboration model creates no legal requirement for pre-deployment testing. But there's strong incentive. It demonstrates responsible development and may influence how regulators interpret existing laws such as the Australian Consumer Law.
For developer preparation: document AI governance processes, prepare safety case materials, identify potential high-risk capabilities, establish liaison with AISI team.
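As a minimal sketch, that preparation could be tracked as an internal readiness checklist like the Python helper below; the item names are hypothetical, not an AISI submission format.

```python
# Hypothetical readiness checklist mirroring the preparation steps above.
PREPARATION_CHECKLIST = {
    "governance_documented": "AI governance processes written down and current",
    "safety_case_drafted": "Structured safety argument for the deployment context",
    "high_risk_capabilities_mapped": "Potential dangerous capabilities identified",
    "aisi_liaison_named": "A contact point nominated for AISI engagement",
}

def readiness_gaps(status: dict[str, bool]) -> list[str]:
    """Return the checklist items that still need work."""
    return [desc for key, desc in PREPARATION_CHECKLIST.items()
            if not status.get(key, False)]

if __name__ == "__main__":
    print(readiness_gaps({"governance_documented": True,
                          "safety_case_drafted": False}))
```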
For example, the UK and US AI Safety Institutes conducted a joint pre-deployment evaluation of OpenAI's o1 model and Anthropic's upgraded Claude 3.5 Sonnet.
How Does AISI Identify Regulatory Gaps?
AISI uses a systematic process. It analyses upstream risk assessments and downstream harm data to identify areas where the existing laws it monitors fail to adequately address AI-specific risks.
Australia’s current regulatory framework applies existing laws to AI—Privacy Act, Australian Consumer Law, Online Safety Act—rather than creating AI-specific regulation.
AISI’s role: test whether existing laws provide adequate coverage for AI risks discovered through safety evaluation. Where gaps exist, make recommendations to relevant specialist regulators.
Gap identification methodology follows four steps. First, identify AI-specific risk through testing and monitoring. Second, map to existing legal frameworks. Third, assess adequacy of current provisions. Fourth, recommend reforms if coverage is insufficient.
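Here is a minimal sketch of those four steps as a single assessment function, assuming placeholder framework names; AISI's actual criteria and outputs haven't been published.

```python
# Step 2 mapping: example existing frameworks and their regulators.
EXISTING_FRAMEWORKS = {
    "privacy": "Privacy Act (OAIC)",
    "consumer": "Australian Consumer Law (ACCC)",
    "online_safety": "Online Safety Act (eSafety Commissioner)",
}

def assess_gap(risk: str, legal_area: str, adequately_covered: bool) -> dict:
    """Steps 1-4: identified risk -> mapped framework -> adequacy check -> recommendation."""
    framework = EXISTING_FRAMEWORKS.get(legal_area, "No obvious existing framework")
    recommendation = (
        "No reform needed; existing coverage is adequate"
        if adequately_covered
        else f"Recommend reform to the regulator responsible for: {framework}"
    )
    return {"risk": risk, "mapped_framework": framework,
            "adequate": adequately_covered, "recommendation": recommendation}

print(assess_gap("AI-generated content without disclosure", "consumer", False))
```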
Recommendations flow to specialist regulators. OAIC for privacy gaps. ACCC for consumer protection gaps. eSafety for online harms gaps.
Monitor the gap identification process. Today’s identified gap may become tomorrow’s compliance requirement.
Example gap areas flagged in policy analysis: automated decision-making transparency, AI-generated content disclosure, high-risk AI system requirements, and liability frameworks for AI harms.
AISI’s advisory role means it recommends but doesn’t create new regulations. Regulators and Parliament make final decisions.
Gap identification is ongoing as AISI evaluates systems. Recommendations feed into medium-term policy development (2026-2028).
Where Do You Report AI Safety Concerns in Australia?
AISI will establish a reporting mechanism when operational in early 2026. Details aren’t public as of January 2026.
Interim approach: report through existing specialist regulator channels based on harm type. Privacy concerns go to OAIC. Consumer harm goes to ACCC. Online safety goes to eSafety Commissioner.
AISI reporting will likely cover advanced AI capability concerns (unexpected model behaviours, safeguard failures), pre-deployment testing requests, and incident notifications for deployed systems.
Expected process: online reporting portal, confidential submission option for commercially sensitive concerns, triage to appropriate response pathway (AISI analysis, referral to specialist regulator, public guidance).
Types of reportable concerns: discovery of dangerous capabilities during development, safeguard bypass techniques, unexpected model behaviours, and downstream harms from deployed systems.
Who should report: AI developers (internal testing findings), security researchers (vulnerability discoveries), organisations deploying AI (incident notifications), and public (observed harms).
Reporting to AISI doesn't replace existing notification obligations. Privacy breaches still go to OAIC, consumer law violations to the ACCC.
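As a small illustration, the routing logic above could look like the following sketch, assuming hypothetical concern categories; it keeps AISI reporting and existing regulator notifications parallel rather than substitutable.

```python
# Interim channels from the guidance above; the AISI categories are assumptions.
INTERIM_CHANNELS = {
    "privacy": "OAIC",
    "consumer": "ACCC",
    "online_safety": "eSafety Commissioner",
}
AISI_CONCERNS = {"dangerous_capability", "safeguard_bypass", "unexpected_behaviour"}

def route_report(concern_type: str) -> list[str]:
    """Return every channel a concern should go to; reporting to AISI never
    replaces an existing notification obligation."""
    destinations = []
    if concern_type in AISI_CONCERNS:
        destinations.append("AISI (once its reporting mechanism launches)")
    if concern_type in INTERIM_CHANNELS:
        destinations.append(INTERIM_CHANNELS[concern_type])
    return destinations or ["Triage manually: AISI or nearest specialist regulator"]
```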
Website and contact details expected at industry.gov.au/aisi (not yet live as of January 2026—monitor for early 2026 launch).
How Should CTOs Engage with the AI Safety Institute?
Proactive engagement is recommended when you’re developing frontier AI models with potential dangerous capabilities, deploying high-risk AI systems in sensitive domains (healthcare, finance, critical infrastructure), or discovering unexpected model behaviours during testing.
Pre-launch preparation (before early 2026): review UK AISI research publications to understand testing methodologies, document AI governance processes (responsible AI frameworks, risk assessments, vendor due diligence), and prepare safety case materials if developing advanced systems.
Post-launch engagement pathway (early 2026 onward): monitor industry.gov.au for AISI contact details and submission processes, consider voluntary pre-deployment testing for frontier models, and establish liaison relationship for ongoing safety consultation.
Decision framework for engagement follows three steps. First, assess AI system risk profile (capabilities, deployment context, potential harms). Second, review NAIC’s governance guidance to determine if advanced safety evaluation is warranted. Third, engage AISI for systems exceeding standard risk thresholds.
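Those three steps can be sketched as a simple internal triage helper; the indicator names and thresholds below are assumptions for illustration, not AISI criteria.

```python
# Hypothetical high-risk indicators drawn from the thresholds discussed in this guide.
HIGH_RISK_INDICATORS = {
    "frontier_model", "autonomous_high_stakes_decisions",
    "scaled_harm_potential", "novel_architecture",
}

def engagement_recommendation(indicators: set[str], naic_guidance_sufficient: bool) -> str:
    """Step 1: risk profile; step 2: NAIC guidance check; step 3: AISI engagement."""
    if not indicators & HIGH_RISK_INDICATORS:
        return "Standard governance via NAIC guidance"
    if naic_guidance_sufficient:
        return "NAIC guidance plus internal safety review"
    return "Engage AISI for voluntary pre-deployment safety evaluation"

print(engagement_recommendation({"frontier_model"}, naic_guidance_sufficient=False))
```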
Voluntary collaboration benefits: demonstrates responsible development practices, may influence specialist regulator interpretation of existing laws, early identification of safety issues before deployment, and access to international safety institute methodologies.
Risk threshold indicators suggesting AISI engagement: frontier model development (large language models, multimodal AI), autonomous decision-making in high-stakes domains, AI systems with potential for scaled harm, and novel architectures without established safety precedents.
What to prepare for engagement: documented AI governance framework, technical specifications (model architecture, training data sources, capability assessments), safety case materials (if available), and deployment context description.
AISI engagement complements (doesn’t replace) privacy impact assessments, security reviews, vendor due diligence, and ethics reviews.
For international developers: Australian-based AI developers should engage AISI regardless of global deployment plans. International developers deploying in Australia should monitor for AISI guidance on local safety expectations.
Frequently Asked Questions
Does Australia have an AI Safety Institute?
Yes. Australia established AISI in November 2025 with $29.9M funding. AISI becomes operational in early 2026 as a whole-of-government hub for AI safety evaluation, monitoring, and information sharing. It operates as an advisory body within the Department of Industry, Science and Resources.
Is AISI a regulatory body?
No. AISI has advisory functions, not regulatory enforcement authority. AISI conducts safety evaluations and makes recommendations but cannot compel compliance. Specialist regulators (OAIC for privacy, ACCC for consumer protection, eSafety Commissioner for online harms) retain enforcement powers under existing Australian laws.
When does AISI start operations?
Early 2026. The Australian Government announced AISI’s establishment in the National AI Plan (December 2025). Exact operational start date not yet confirmed—monitor industry.gov.au for updates on website launch, reporting mechanisms, and pre-deployment testing submission processes.
What is the International Network for Advanced AI Measurement, Evaluation and Science?
Global collaboration of national AI safety institutes (formerly “International Network of AI Safety Institutes”). Members include Australia, UK, US, and other jurisdictions. The network shares testing protocols, risk frameworks, and evaluation methodologies. Australian AISI gains access to UK and US safety research and participates in joint testing exercises.
Can I use UK AISI research methodologies while waiting for Australian AISI to launch?
Yes. UK AISI publishes extensive research on safety evaluation. You can adopt UK methodologies (safety cases, red teaming protocols, capability elicitation frameworks) for internal testing. Australian AISI is expected to align with international best practices, making UK research valuable preparation.
What happens if AISI finds risks during pre-deployment testing?
AISI provides a risk assessment report to the developer with findings and recommendations. Options: deployment approved with identified safeguards, conditional deployment pending additional controls, recommendation against deployment, or referral to specialist regulators if risks trigger existing legal obligations.
How does AISI relate to NAIC?
Complementary functions. AISI focuses on safety evaluation (testing, monitoring, risk assessment), while NAIC provides AI adoption and governance guidance. AISI safety insights inform NAIC's "Guidance for AI Adoption" framework. Use NAIC guidance for standard AI implementations and engage AISI for advanced systems requiring specialised safety evaluation.
Is pre-deployment testing mandatory?
No—voluntary collaboration model. AISI encourages but doesn’t require pre-deployment testing. However, voluntary testing may demonstrate responsible development practices that influence specialist regulator interpretation of existing laws.
What AI systems should undergo pre-deployment testing?
Frontier models with potential dangerous capabilities (cybersecurity offensive tools, CBRN knowledge, autonomous replication, persuasion at scale), high-risk deployments in sensitive domains (healthcare diagnosis, financial credit decisions, critical infrastructure control), and novel architectures without established safety precedents.
How much does AISI pre-deployment testing cost?
Not yet announced. UK AISI voluntary collaboration agreements with major developers (Anthropic, OpenAI) don't appear to charge fees. Australian AISI funding ($29.9M) suggests government-supported capability. Monitor industry.gov.au for any fee structure when operational details are released in early 2026.
What’s the difference between AISI testing and security audits?
AISI focuses on AI-specific safety risks (dangerous capabilities, alignment failures, scaled harms), while security audits address traditional cybersecurity (vulnerabilities, access controls, data protection). Both are valuable. AISI evaluation complements security reviews by covering AI-specific risk categories not addressed in standard security frameworks.
Can startups engage with AISI or is it only for large enterprises?
AISI’s mandate covers all Australian AI developers regardless of organisation size. Initial focus is likely on frontier model developers and high-risk deployments (often larger organisations), but AISI guidance and reporting mechanisms should be accessible to startups. Monitor early 2026 operational announcements for SME engagement pathways.