The third golden age of software engineering – thanks to AI, with Grady Booch


by Gergely Orosz

1h 17m · February 4, 2026

Overview

In this episode Gergely Orosz interviews Grady Booch, a founding figure in software engineering, about how AI fits into the long arc of the field. Booch frames software history as successive rises in levels of abstraction and argues we are in a third "golden age" — one that started around the turn of the millennium and is being accelerated by AI tools. He explains why AI code generation is not the death of software engineering, what problems and risks are now central (security, supply-chain, ethics, system scale), which skills will become more valuable, and what foundations engineers should study to thrive.

The three golden ages of software engineering

First golden age (late 1940s – late 1970s)

  • Core abstraction: algorithmic / process-oriented view (assembly, Fortran, numerical/business automation).
  • Context: software tightly coupled to hardware; emergence of software as a distinct entity (Grace Hopper).
  • Driving forces: scientific/mathematical needs, early defense projects (Whirlwind, SAGE), and economic pressure to reuse software across hardware.
  • Problems: early complexity, division of labor, and the seed ideas for formal methods and open sharing of code.

Second golden age (late 1970s – 1990s)

  • Core abstraction: objects and classes (object-oriented programming/design).
  • Context: micro-miniaturization, personal computers, distributed systems, rise of platforms and libraries.
  • Driving forces: need to manage growing complexity, reuse at higher abstraction layers, and internet protocols that enabled service architectures.
  • Outcomes: robust platforms, larger ecosystems, emergence of modern software engineering practices and open-source movement at scale.

Third golden age (from ~2000s; accelerating now)

  • Core abstraction: libraries, packages, services, and now AI-assisted synthesis — moving from writing code to composing, integrating, and governing large systems.
  • Context: massive proliferation of software into all aspects of civilization, cloud platforms, service ecosystems, and accessible AI tools that shorten the path from intent to executable code.
  • New primary concerns: software scale/management, safety/security (supply-chain attacks), systemic economic risks (platform concentration), and ethical implications.

Why AI code generation is not the end of software engineering

  • Historical pattern: every time a new abstraction/tool appeared (compilers, high-level languages, libraries), fears about obsolescence surfaced — but engineers who moved up abstractions won out.
  • Booch’s core argument:
    • Software engineering ≠ just typing code. Engineers balance technical, economic, and ethical forces; those decision problems are not solved by current AI.
    • Current AI excels at automating repetitive patterns and accelerating common tasks (especially web/CRUD patterns), but not at broader system-level design, governance, risk, and ethical trade-offs.
    • AI is a new level of abstraction/tooling, freeing engineers from tedium and enabling more creativity — not replacing the need for systems thinking.
  • Direct response to Dario Amodei: Booch strongly rejects the claim that "software engineering will be automatable in 12 months," arguing it deeply understates the breadth of the discipline (he bluntly called the claim "utter bullshit" in the discussion).

What will change — jobs and skills

  • Likely to be automated first:
    • Repetitive, messy infrastructure and pipeline tasks (CI/CD scaffolding, standard infra provisioning).
    • Simple, single-purpose applications and routine UI/CRUD coding that can be prompted and generated.
  • Increasingly valuable skills:
    • Systems-level thinking: architecture, decomposition, emergent behavior, multi-agent systems.
    • Security, supply-chain hygiene, risk management and governance.
    • Ethical reasoning and policy design around software impacts.
    • Integration and orchestration of libraries, services, and AI agents.
  • Result: some roles will shrink or shift; demand will grow for those who manage complexity, durability, and societal consequences of software.

Recommended foundations and resources

Booch emphasizes studying systems, architecture, and interdisciplinary thinking:

  • Systems theory and complexity:
    • Herbert A. Simon — The Sciences of the Artificial.
    • Work from the Santa Fe Institute on complexity and complex systems.
  • Architectures and multi-agent ideas:
    • Marvin Minsky — Society of Mind.
    • Rodney Brooks — subsumption architecture (embodied agents).
    • Early AI architectures: blackboards, global workspaces, Hearsay-II-style systems.
  • Broader interdisciplinary grounding:
    • Neuroscience/embodied cognition (when building systems that interact with the physical world).
    • Software engineering classics and design patterns; study historic large systems for longevity lessons.
  • Practical: use AI assistants to learn libraries faster and prototype, but pair that with foundational knowledge so you can evaluate and secure generated artifacts.

Notable insights & quotes (paraphrased)

  • "The history of software engineering is one of rising levels of abstraction."
  • "The best technology evaporates — it becomes part of the air we breathe." (meaning: well-engineered tech becomes invisible)
  • On AI predictions from industry leaders: Booch characterizes sweeping claims that AI will fully automate software engineering in months as misguided and overly reductive.
  • Practical: AI reduces distance between intent (natural language) and executable code, enabling hobbyists and domain experts to create more throwaway/short-lived software — and sometimes valuable, enduring systems.

Actionable recommendations (for engineers and teams)

  • Strengthen foundations: study systems theory, architecture, and complexity science rather than only new tools.
  • Move up the stack: focus on system design, integrations, governance, security, and ethics — problems that won’t be fully automated.
  • Use AI tools pragmatically: treat them as accelerators for learning, prototyping, and removing tedium; always verify for security/maintainability.
  • Learn to manage supply-chain and production risks: invest in code review, testing, dependency verification, and guardrails (automation + verification).
  • Embrace the opportunity: encourage cross-discipline creators and hobbyists — new ideas and products will emerge from wider participation.
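
The dependency-verification point in the list above can be made concrete with a few lines of Python: before trusting a third-party or AI-generated artifact, compare its digest against a pinned value. This is a minimal sketch; the helper name `verify_artifact` and the workflow are illustrative assumptions, not anything named in the episode.

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True iff the file's SHA-256 digest matches the pinned value.

    Illustrative helper (not from the episode): reads the file in chunks
    so large artifacts don't need to fit in memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

In practice this kind of check is what lockfile hashes and pip's `--require-hashes` install mode automate for you; the point is that generated or downloaded code is verified against a known-good pin rather than trusted on arrival.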

Bottom line

Grady Booch frames today’s AI surge as another major rise in abstraction — not an apocalypse. The role of software engineering will shift toward managing larger systemic, social, and ethical problems, and those with deep foundations in systems thinking and architecture will be the most valuable. AI is a powerful new tool that accelerates productivity and creativity, but it doesn’t remove the need for human judgment about what to build, how to secure and maintain it, and whether we should build it at all.