175. Ex-Director of GCHQ: China, Russia, and the Threats Facing the UK (Jeremy Fleming)

Summary

by Goalhanger

1h 10m · February 9, 2026

Overview

This episode of The Rest is Politics features Jeremy Fleming — former Director of GCHQ and ex-MI5 officer — speaking with Alastair Campbell and Rory Stewart. Fleming reviews his career, explains how UK intelligence evolved after the Cold War and 9/11, and gives a detailed assessment of the security threats the UK faces today: state actors (China, Russia, Iran, North Korea), cybercrime, the accelerating risks from AI, and institutional weaknesses in national decision-making and resilience. The conversation mixes operational insight, strategic critique, and practical recommendations for policy and public resilience.

Key topics discussed

  • Jeremy Fleming’s background (accidental entry to MI5, later head of GCHQ).
  • Evolution of UK intelligence: post‑Cold War transition, 9/11, and post‑7/7 transformation into a tech/data business.
  • Relationship and collaboration between MI5, MI6 and GCHQ; role of Five Eyes.
  • Contemporary threat landscape: China, Russia, Iran, North Korea.
  • Cybercrime and ransomware as major national‑economic threats.
  • Artificial intelligence as an accelerant of threats and the need for global governance.
  • Institutional challenges: National Security Council (NSC) decision‑making, ministerial capacity, incentives in systems.
  • Past policy failures: insufficient resilience, lax handling of Russian money, insecure internet design.
  • Practical advice on recruitment into intelligence agencies and the range of skills required.

Main takeaways

  • The intelligence agencies remain essential: human intelligence provides intent while signals/data/cyber provides scale; the balance shifts with the threat.
  • Post‑7/7 the UK significantly increased investment in tech, data and regional capacity; 9/11 and 7/7 were pivotal moments for modernisation.
  • China should be treated as a strategic/threat actor — long‑term aim: regime and party security — but engagement and trade remain necessary; China’s threat differs from Russia’s proximate, militarised threat.
  • Russia is a proximate, aggressive actor with direct military risk to Europe and a pattern of covert and disruptive operations.
  • Cybercrime is no longer niche: roughly 600,000 UK businesses suffered cyber attacks last year; nationally significant incidents (200+) have increased and can cost firms billions (example: Jaguar Land Rover ≈£2bn loss).
  • AI increases reach and sophistication of criminal and hostile state activity (better phishing, hyper‑personalisation, potential bio/chemical misuse). AI governance is urgent — it should be thought of internationally, not just domestically.
  • National resilience and diversification (supply chains, critical infrastructure, standards) are under‑emphasised; over‑dependence on US tech/security umbrellas has tradeoffs and risks.
  • Accountability and public legitimacy matter: agencies must “earn their license to operate” via statutory frameworks, transparency where possible, and oversight.
  • Institutional friction: ministers often lack time, training and continuity to make deep strategic choices; incentives and system architecture shape outcomes more than formal structures.

Notable insights & quotes

  • “Agencies have to earn their license to operate.” — Fleming on transparency, statutory footing and public trust.
  • “We are in a world where geopolitics has been turned upside down… China approaching peer level with America.” — framing the multipolar context.
  • “600,000 businesses in the UK had some sort of cyber attack last year.” — scale of the cyber problem.
  • Jaguar Land Rover example: cyber disruption can halt supply chains and cost firms billions.
  • “AI is an accelerant for the sorts of threats we already see.” — AI enables more convincing attacks, in more languages, at scale.
  • China’s strategic goal: “the long‑term security of the party, the continuation of the state.” — motive behind espionage and suppression of dissidence.
  • Five Eyes: an alliance built on trust and decades of cooperation, not a formal organisation with a charter.

Threat assessments (concise)

  • China: long‑term, systemic competitor; sophisticated espionage (cyber and industrial IP theft), influence operations (diaspora pressure, unofficial policing), and geopolitical ambition. Different in character from Russia — less proximate military threat today but very capable technologically and economically.
  • Russia: proximate, militarily aggressive (Ukraine), open use of force and disruptive covert activity across Europe; threat to NATO and to allied systems.
  • Cybercrime / ransomware: major economic and societal threat; attackers include criminal groups and state‑affiliated actors (some Russian‑speaking, North Korean, Iranian). Law enforcement capacity lags scale of attacks.
  • AI: multiplier effect for existing threats (fraud, disinformation, targeted social engineering) and raising longer‑term risks (bio/chemical misuse, highly plausible deepfakes). Governance and public preparedness are urgent.
  • North Korea: a cyber state, reportedly funding parts of its budget through cybercrime and extortion.

Institutional and policy issues

  • NSC and ministerial decision‑making: ministers are time‑poor, often lack domain expertise and continuity; the structure can produce short‑term, headline‑driven choices rather than sustained strategic planning.
  • Incentives matter: complex systems (government, GCHQ, large organisations) run on incentives more than perfectly designed plans — fix incentives to improve outcomes.
  • Five Eyes remains valuable but is an alliance, not a formal body; information sharing is selective, and UK‑US relations are mutually dependent.
  • Politicisation of intelligence leadership (e.g., rapid turnover of NSA heads) risks eroding trust and effectiveness.
  • Past weaknesses: inadequate scrutiny and regulation around beneficial ownership and foreign money enabled problematic capital flows (e.g., Russian oligarch money in UK). The internet’s insecure design has had long‑term consequences; similar challenges loom with AI.

Recommendations and action items (from discussion)

  • Strengthen resilience and diversification across critical sectors (energy, tech, bioscience, defence) and set industrial priorities where sovereignty matters.
  • Harden cyber defences for businesses and public services; increase standards, awareness and rapid response capacity.
  • Build international governance/standards for AI — likely via like‑minded coalitions to set norms, safety standards and enforcement.
  • Improve public education and workforce readiness for fast technological change; prepare society for rapid adoption and disruption.
  • Improve transparency, statutory oversight, and accountability of intelligence agencies to maintain public trust.
  • Prioritise strategic, long‑term national security discussion at ministerial and NSC levels; align incentives across Whitehall to sustain multi‑year policies.
  • Close legal/regulatory gaps on beneficial ownership, foreign investment screening and other economic vulnerabilities.

Practical notes for prospective intelligence applicants

  • GCHQ and other agencies need a wide range of skills: analysts, scientists, mathematicians, linguists, engineers, cyber specialists, and many non‑technical roles (logistics, security, estates).
  • Tech competence is increasingly important for many roles, but values, commitment to mission, and passing vetting/security thresholds remain essential.
  • Recruitment is public: apply online; roles are advertised and open to a broad set of applicants.

Bottom line

Jeremy Fleming’s interview blends operational realism with strategic urgency. The UK faces multiple, overlapping threats — state and criminal — amplified by technological change. Addressing them requires better resilience, clearer long‑term strategy, upgraded cyber and AI governance, and sustained political will to align incentives across government, industry and allies. The intelligence community remains central, but it must continue to earn public trust and operate within strengthened legal and oversight frameworks.