Overview: "Why Iran is just the beginning of AI warfare"
This ABC News Daily episode (host Sam Hawley) features Toby Walsh, Chief Scientist at the AI Institute, UNSW, discussing how artificial intelligence is already reshaping modern warfare — using recent conflicts (including the Iran strike) as a case study — and why this marks a major strategic and moral inflection point. The conversation covers current battlefield uses of AI, a US government dispute with AI firm Anthropic, the limits of human oversight and explainability, and the urgent need for international guardrails.
Key takeaways
- AI is actively being used in modern conflicts to gather intelligence, prioritise and select targets, plan logistics, and speed decision-making — enabling thousands of target decisions in short timeframes.
- These capabilities change the character of war (speed, scale, and risk) and raise serious legal, ethical and moral concerns — especially around handing life-or-death choices to machines.
- Explainability and human oversight are major weaknesses: models can produce ranked targets quickly but struggle to justify or trace the exact evidence behind recommendations.
- There is an ongoing political fight in the US over which AI companies the military can use. Anthropic tried to impose red lines (no large-scale domestic surveillance, no autonomous lethal weapons); the US administration pushed back and moved contracts toward other vendors.
- International agreements/guardrails are possible and historically precedented (chemical, biological, nuclear, cluster munitions, blinding lasers), and are urgently needed to prevent worst-case outcomes (e.g., drone swarms attacking civilians).
- AI-driven platforms change strategic balance: cheaper autonomous systems (drones, uncrewed vessels) let smaller or less wealthy actors project force and defend effectively (example: Ukraine vs Russia, Iran’s drone programs).
Topics discussed
- How AI was reportedly used in the Iran strikes: target identification, prioritisation (over 1,000 targets mentioned for day one), and strike planning.
- Anthropic vs US government controversy: Anthropic’s red lines, its removal from some US military contracts, and the PR/political fallout.
- The limits of AI explainability and the challenge of ensuring meaningful human oversight at the speed AI enables.
- The prospects for international norms and treaties to limit autonomous lethal weapons and domestic surveillance.
- Strategic implications for national defence procurement (move toward uncrewed/autonomous systems) and for countries like Australia.
Examples and evidence cited
- Reported use of AI in:
  - Target selection and logistics in the Iran conflict (thousands of targets identified quickly).
  - Conflicts such as Gaza and Ukraine, where AI tools have shaped planning and operations.
  - A Venezuelan operation (alleged; reported/claimed rather than confirmed).
- Anthropic’s Claude model was used by US military teams; Anthropic attempted to forbid two uses (domestic surveillance and autonomous weapons), triggering a government response and contract shifts toward other vendors (e.g., OpenAI).
Risks and concerns highlighted
- Speed vs oversight: AI can generate massive, fast recommendations leaving little time for considered human judgment.
- Explainability: current models are poor at explaining why they recommended a specific target or action, complicating accountability and legal compliance.
- Ethical and legal issues: delegating lethal decisions to machines challenges international humanitarian law and moral norms.
- Proliferation: non-state actors and smaller states can obtain powerful AI-enabled attack capabilities (easier to deploy swarms and autonomous platforms).
- Erosion of norms: adversarial regimes that disregard international law increase the danger of misuse.
Policy recommendations and actions suggested
- Negotiate international guardrails that limit the worst uses of AI in warfare (especially autonomous lethal weapons and mass surveillance).
- Insist on meaningful human control / human-in-the-loop for lethal decisions.
- Improve transparency, auditability and explainability requirements for military AI systems.
- Update procurement strategy to prioritise defence against autonomous threats and invest in uncrewed/AI-capable platforms where appropriate.
- Use public conscience and diplomacy (as with past bans on chemical/biological weapons) to generate political will for limits.
Notable quotes and framing
- Toby Walsh: AI is likely “one of the most radical shifts” in warfare — described as potentially the “third revolution in warfare” (after gunpowder and nuclear weapons).
- Historical analogy: public outrage drove limits on chemical weapons after WWI — a similar public/diplomatic response could constrain AI weaponisation.
- Strategic point: affordable, autonomous platforms lower the barrier to projecting force, changing the global balance of power.
Production details
- Guest: Toby Walsh, Chief Scientist, AI Institute, University of New South Wales.
- Host: Sam Hawley (ABC News Daily).
- Produced by Sydney Pead; audio production by Anna John; supervising producer David Cody.
- Series: ABC News Daily (Not Stupid segment referenced at the start).
Summary produced to help readers grasp the episode’s core arguments without listening to the full audio.
