Overview: Legendary Hacker Matt Suiche on Cyberwar in the Age of AI
This Bloomberg Odd Lots episode features Matt Suiche — founder of ONDB, legendary former hacker, and longtime cybersecurity expert — discussing the evolving intersection of kinetic warfare, cyber operations, and AI. The conversation covers recent Iran–Israel tensions and cyber activity, physical attacks on cloud infrastructure, how AI is changing software development and offensive/defensive cyber capabilities, the security risks of autonomous AI agents, and Suiche’s view that data (not software) will be the durable economic asset in the AI era.
Key topics discussed
- Recent cyber and hybrid actions in the Israel–Iran conflict:
  - Reported Israeli cyber operations (e.g., a hijacked prayer app and traffic-light manipulation reported in Tehran), used mainly for reconnaissance and disruption rather than outright destruction.
  - Kinetic strikes (drones) targeting cloud/data-center infrastructure, significant because they show how physical actions can directly disrupt cloud services.
- Historical cyber precedents (context): Stuxnet, wiper attacks against Aramco, Shadow Brokers-era leaks, supply-chain compromise, and insider risks (e.g., a contractor selling zero-days).
- AI’s role in offense, defense, and software development:
  - AI-assisted bug discovery and faster software creation (the cost of building software is collapsing).
  - Limits today: hallucinations, reliability issues, and insufficient maturity for fully autonomous critical decisions.
- Autonomous AI agents:
  - Definition: loops that call LLM APIs and third-party tools and execute actions autonomously.
  - Security risks: agents granted excessive permissions create broad attack surfaces and can produce accidental destructive outcomes.
- Economics and product implications:
  - The “SaaSpocalypse” thesis: AI commoditizes software; Suiche argues data will be the lasting, monetizable asset.
  - New market models: programmatic access to high-quality/private data (API marketplaces, micropayments/stablecoins) for agent consumption.
- Practical operational points: agent UX (CLI vs. web UI), developer/ops workflow changes, and reversibility/forensics for agent-generated code and protocols.
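The agent definition above (“loops that call LLM APIs and third-party tools and execute actions autonomously”) can be sketched as a minimal loop. This is an illustrative skeleton, not any specific framework: `call_llm` is a hypothetical placeholder for a real LLM API call, and the tool registry is invented for the example.

```python
# Minimal sketch of an autonomous agent loop: repeatedly ask an LLM what to do,
# execute the chosen tool, and feed the result back until the LLM finishes.
# `call_llm` and the TOOLS entries are hypothetical placeholders.

def call_llm(messages):
    # Placeholder: a real implementation would call an LLM API here and
    # return either {"action": name, "args": {...}} or {"final": text}.
    return {"final": "done"}

TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "read_file": lambda path: f"contents of {path}",
}

def run_agent(task, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "final" in decision:
            return decision["final"]
        tool = TOOLS[decision["action"]]       # the agent picks a tool...
        result = tool(**decision["args"])      # ...and executes it autonomously
        messages.append({"role": "tool", "content": result})
    return "step limit reached"
```

The loop makes the security concern concrete: whatever the `TOOLS` dictionary exposes is exactly what the model can do without human review, which is why the scoping discussed later in the episode matters.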
Main takeaways
- Cyber in wartime remains primarily intelligence, reconnaissance, and disruption, but kinetic strikes against infrastructure (e.g., data centers) change the calculus — physical attacks can be cheaper and more disruptive than some expensive cyber exploits.
- Low-cost drones create asymmetric effects: inexpensive kinetic tools (Shahed-style drones, ~USD 20k) can knock out cloud availability zones, causing cascading disruption for services that assume centralized cloud resilience.
- AI reduces the marginal cost of building software, accelerating creation but also weakening incentives to invest in security and auditing, which raises systemic risk.
- Data, not software, is likely the most valuable long-term asset in an agentic-AI economy; access to authoritative, high-quality datasets and programmatic APIs will be monetizable.
- Autonomous agents introduce a familiar but amplified security problem: the same autonomy that enables capability enables destructive behavior if permissions and controls are not carefully designed.
- Legacy security lessons still apply: do not hand blanket permissions to agents; implement security-by-design, least privilege, auditability, and forensics for agent activity.
Notable quotes / insights (attributed)
- “Data is the only durable asset in the AI economy.” — Matt Suiche
- “The cost of building software is going toward zero.” — Matt Suiche
- “If you give all permissions to an agent it becomes Murphy’s law: if something can go wrong, it will.” — Matt Suiche
- “An AI agent is just another service or piece of software from a security standpoint.” — Matt Suiche
Incidents & examples discussed
- Reported hijacking of a prayer app to send messages (deception/disruption).
- Reported tampering with traffic lights in Tehran (reconnaissance/positioning of targets).
- Drones striking data centers (AWS availability zones reportedly affected); downstream effects included game-service disruption (e.g., Fortnite) and companies rerouting deployments (Vercel reportedly moved traffic to India).
- Historical context: Stuxnet (PLC targeting), Aramco wiper attacks, Shadow Brokers leaks, and insider sale of government zero-days (L3Harris example).
- AI misuse examples: jailbreaks and data leaks from LLMs (reported incidents such as alleged misuse of Claude).
Security implications & recommendations (actionable)
For enterprises, security teams, and policy makers:
- Treat agents as services: apply the same security model as for any other service — least privilege, rate limits, scoped credentials, logging, and rollback capabilities.
- Never hand blanket permissions to AI agents; design granular access control and approval workflows before granting agents write or destructive capabilities.
- Adopt security-by-design for agent architectures: integrate data governance, traceability/audit trails, and explicit safety constraints from day zero.
- Harden physical infrastructure threat models: include low-cost kinetic threats (drones, sabotage) in cloud and data-center risk assessments and continuity planning.
- Protect high-value data as a strategic asset: implement secure, controlled programmatic access (API gateways, monetization models, contracts) rather than broad unrestricted access.
- Prepare for increased operational costs and asymmetric economic attacks (e.g., disruption to memory supply or chokepoints like the Strait of Hormuz, affecting hardware and cloud costs).
- Monitor vendor policies and data retention: be cautious with cloud/AI vendors that collect prompts and input data — assess retention, privacy, and regulatory implications.
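The “treat agents as services” and least-privilege recommendations above can be sketched as a permission-checked tool wrapper. This is a hedged illustration with hypothetical tool names, not a real framework: each agent gets an explicit allowlist of capabilities, destructive actions additionally require an approval hook, and every call is logged for audit.

```python
# Sketch of least-privilege tool gating for an AI agent, following the
# recommendations above: scoped allowlist, approval workflow for destructive
# actions, and an audit log. Tool names are hypothetical examples.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

DESTRUCTIVE = {"delete_record", "deploy"}  # actions needing human approval

class ScopedToolbox:
    def __init__(self, agent_id, allowed, approver=None):
        self.agent_id = agent_id
        self.allowed = set(allowed)   # scoped credentials: explicit allowlist
        self.approver = approver      # approval-workflow hook (callable or None)

    def call(self, tool, fn, *args):
        if tool not in self.allowed:
            log.warning("%s denied %s", self.agent_id, tool)
            raise PermissionError(f"{self.agent_id} may not call {tool}")
        if tool in DESTRUCTIVE and not (self.approver and self.approver(tool, args)):
            raise PermissionError(f"{tool} requires human approval")
        log.info("%s called %s%r", self.agent_id, tool, args)  # audit trail
        return fn(*args)

# Usage: a read-only reporting agent can search but cannot delete or deploy.
box = ScopedToolbox("report-bot", allowed={"search"})
box.call("search", lambda q: f"hits for {q}", "quarterly revenue")
```

The design choice here mirrors the episode’s point that an agent is “just another service”: the gate sits outside the model, so even a fully compromised or hallucinating agent cannot exceed its allowlist.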
Short-term watchlist (what to monitor next)
- Further evidence of kinetic attacks against cloud/data-center facilities and resulting impacts on major cloud providers and dependent services.
- New disclosures or operational uses of AI in offensive cyber operations (weaponization, automated exploit discovery).
- Commercial rollout of enterprise-grade agent frameworks and how vendors address identity, permissions, and auditing.
- Market shifts around SaaS business models and new data-access monetization platforms (API marketplaces, micropayment systems).
- Incidents of prompt/data retention leaks or AI jailbreaks exposing sensitive or proprietary information.
About Matt Suiche / ONDB (brief)
- Matt Suiche: French hacker-turned-cybersecurity entrepreneur with a history of analyzing major leaks and critical-infrastructure attacks.
- ONDB: Suiche’s startup focused on data infrastructure for agentic AI — positioned to provide trusted, monetizable, and secure programmatic access to high-quality data for AI agents.
Final note
The episode stresses continuity with past cybersecurity lessons while highlighting new friction points introduced by low-cost kinetic weapons and AI agents. Organizations should treat agentic AI as a new class of software/service with explicit identity, permissions, and fail-safes — and reassess physical and supply-chain risks to cloud-reliant architectures.
