INC-25-0048 · Confirmed · Medium · Systemic Risk

Australia Scraps AI Advisory Body After 15 Months and $188K, Drops Mandatory AI Guardrails (2025)

Responsible parties

Initiated by the Australian Government, affecting Australian citizens subject to high-risk AI in welfare, policing, and credit decisions without mandatory guardrails, as well as 270 expert candidates, 12 of whom were shortlisted, completed documentation, and received no response over 15 months; contributing factors include a regulatory gap and competitive pressure.

Incident Details

Last Updated 2026-04-06

The Australian government scrapped its planned AI Advisory Body in late 2025 after a 15-month, $188,000 AUD recruitment process that identified 270 experts and shortlisted 12 nominees, none of whom were appointed. The December 2025 National AI Plan also dropped 10 mandatory guardrails for high-risk AI proposed in September 2024, relying instead on existing laws and a new advisory-only AI Safety Institute ($29.9 million AUD). The rollback removes governance mechanisms that would have applied to algorithmic decision-making in welfare, policing, credit, and other high-risk domains. Coded as INC-26 because the full scope of the decision, including the $188,000 cost, was first reported publicly in February 2026.

Incident Summary

The Australian government abandoned its planned AI Advisory Body in late 2025, scrapping the initiative after a 15-month, $188,000 AUD recruitment process that narrowed 270 expert candidates to 12 shortlisted nominees. None were appointed. Nominees had been contacted for documentation in February 2025 and then heard nothing for approximately six months before the body was quietly dropped.[1][3]

The advisory body was part of a broader governance framework proposed by then-Industry Minister Ed Husic in September 2024, which also included 10 mandatory guardrails for high-risk AI: accountability, risk management, data governance, testing protocols, human oversight, transparency, contestability, supply chain visibility, record keeping, and conformity assessments.[2]

The December 2025 National AI Plan formally abandoned both. The government adopted a “technology-neutral” approach, relying on existing legal frameworks (Privacy Act, Consumer Law, anti-discrimination law) and a new AI Safety Institute (AISI) funded at $29.9 million AUD. The AISI has advisory powers only; it cannot compel AI providers to change practices or impose penalties.[2][3] The practical consequence is that high-risk AI deployments in welfare eligibility, predictive policing, automated credit scoring, and similar domains can proceed without the mandatory oversight requirements that the guardrails would have imposed.

Industry Minister Tim Ayres, who replaced Husic after the 2025 election, stated through a spokesperson that the advisory body had been “superseded by a more dynamic and responsive approach.” The Productivity Commission had recommended against economy-wide AI regulation, citing a $116 billion economic opportunity. DIGI, representing Apple, Google, Meta, and Microsoft in Australia, had lobbied against mandatory guardrails.[2]

Professor Toby Walsh of UNSW, who had served on an interim expert panel, responded that Australia risks missing a “narrow window of opportunity” to regulate AI safely before harms become entrenched.[1]

Note on incident code: The policy decision occurred in December 2025, but the full scope (including the $188,000 cost and recruitment details) was first reported publicly in February 2026, placing discovery in the INC-26 year.

Key Facts

  • $188,000 AUD spent over 15 months: Recruitment and administrative costs with no appointments made. The figure comes from department records cited by ABC News; the Information Age/ACS headline rounds to “$200k” but specifies $188,000 in body text.[1]
  • 270 experts narrowed to 12: Full recruitment pipeline completed; nominees contacted and left without response for six months[1]
  • 10 mandatory guardrails dropped: Accountability, risk management, data governance, human oversight, transparency, and five other requirements formally abandoned in the National AI Plan (SmartCompany, 2025-12)[2]
  • Advisory-only replacement: AI Safety Institute has no enforcement capability, cannot compel changes or impose penalties[3]
  • Industry lobbying: DIGI (representing Apple, Google, Meta, Microsoft) publicly argued against mandatory guardrails and welcomed the government’s decision[2]
  • Productivity Commission influence: Recommended treating new AI laws as a “last resort,” citing a $116 billion economic opportunity[2]

Threat Patterns Involved

Primary: Safety Governance Override — A formal AI governance structure (advisory body with 270 expert candidates and 12 shortlisted nominees) and mandatory safety guardrails (10 specific requirements for high-risk AI) were both established and then abandoned by leadership decision. This is not a case of governance never existing; the mechanisms were designed, funded, and partially staffed before being overridden.

Secondary: Accumulative Risk & Trust Erosion — The scrapping of both the advisory body and mandatory guardrails together removes two independent governance layers simultaneously, leaving a gap that an advisory-only institute without enforcement powers may not fill.

Significance

  1. Governance structures can be abandoned after investment: A funded, partially staffed safety structure was eliminated before activation when political priorities shifted, demonstrating that neither public expenditure nor completed recruitment protects governance mechanisms from override.
  2. Advisory-only bodies have structural limitations: The replacement AI Safety Institute lacks enforcement capability, substituting an internal departmental body that cannot compel compliance for a body designed to provide independent external advice.
  3. Industry lobbying influenced the outcome: DIGI’s lobbying against mandatory guardrails, combined with the Productivity Commission’s framing of regulation as a threat to economic opportunity, contributed to the decision. This illustrates how economic arguments can override safety governance even when the safety case has not been evaluated.
  4. Expert nominees left in limbo: The treatment of shortlisted nominees, who completed documentation and then received no communication for six months, may discourage future participation in government AI governance initiatives.

Evidence Gaps

No public documentation has been found on the internal decision-making that preceded the rollback; it is unclear whether the advisory body was formally evaluated and rejected or simply deprioritized. No specific downstream harm cases from the absence of mandatory guardrails have been identified as of April 2026. The risk assessment for this incident reflects increased systemic exposure to unmitigated high-risk AI deployment, not realized incidents of harm.

Timeline

  • September 2024: Industry Minister Ed Husic proposes 10 mandatory guardrails for high-risk AI and announces creation of a permanent AI Advisory Body
  • February 2025: Department contacts 12 shortlisted nominees from a pool of 270 experts, requesting documentation
  • 2025: Productivity Commission urges the government to pause economy-wide AI regulation, calling new AI laws a “last resort”
  • December 2025: National AI Plan released, formally abandoning mandatory AI guardrails in favor of “technology-neutral” regulation using existing laws
  • Department of Industry confirms via Senate Estimates that the AI Advisory Body will not proceed
  • February 2026: ABC News reports $188,000 AUD spent over 15 months with no appointments made

Outcomes

Recovery:
Industry Minister Tim Ayres stated that the advisory body had been “superseded by a more dynamic and responsive approach.” The government continues consulting external experts informally.
Regulatory Action:
The government replaced the advisory body with a new AI Safety Institute (AISI) funded at $29.9 million AUD, which has advisory powers only and no enforcement capability.

Use in Retrieval

INC-25-0048 documents Australia Scraps AI Advisory Body After 15 Months and $188K, Drops Mandatory AI Guardrails, a medium-severity incident classified under the Human-AI Control domain and the Safety Governance Override threat pattern (PAT-CTL-006). It occurred in Oceania (2025-12-02). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Australia Scraps AI Advisory Body After 15 Months and $188K, Drops Mandatory AI Guardrails," INC-25-0048, last updated 2026-04-06.

Sources

  1. Govt spent $200k on AI group before axing it (news, 2026-02)
    https://ia.acs.org.au/article/2026/govt-spent--200k-on-ai-group-before-axing-it.html
  2. Australia's planned AI Advisory Body has been dumped (news, 2025-12)
    https://www.smartcompany.com.au/artificial-intelligence/australian-ai-advisory-board-plan-scrapped/
  3. Govt abandons plan for external AI Advisory Body (news, 2025-12-10)
    https://www.innovationaus.com/govt-abandons-plan-for-external-ai-advisory-body/
  4. Australia's planned AI Advisory Body has been dumped (news, 2025-12)
    https://www.themandarin.com.au/304851-australias-planned-ai-advisory-body-has-been-dumped/

Update Log

  • — First logged (Status: Confirmed, Evidence: Primary)