Goldman CIO Marco Argenti on the Warp-Speed Improvements in AI

by Bloomberg

March 30, 2026

Overview

This episode of Bloomberg’s Odd Lots features Marco Argenti, CIO of Goldman Sachs, discussing how rapid advances in AI have moved the industry from “experimentation” to large-scale production use. Argenti explains how Goldman has implemented firm-wide AI (GSAI), the technical and organizational work required to make it safe and productive, the measurable outcomes they track, and the longer-term implications for vendors, developers, regulators, and competitive advantage.

Key topics discussed

  • The shift from AI experimentation to production-grade, agentic AI across Goldman Sachs.
  • GSAI assistant deployment and usage metrics.
  • Developer productivity tools (agentic developer assistants, Claude Code, Copilot-like tools).
  • Importance of data quality, curation, and integration.
  • Token economics, cost control and centralization of model access.
  • Build vs. buy dynamics and termination of some third-party contracts.
  • Forward-deployed engineers and working directly with model providers.
  • Security, info barriers, model risk management, and regulatory engagement.
  • How talent needs and developer roles are changing.

Main takeaways

  • AI is no longer a toy: Goldman considers this a production technology with real impact on workflows and timelines.
  • Broad adoption: Goldman rolled out a GSAI assistant to ~47,000 employees and logs well over a million prompts per month.
  • Productivity is measured by output and timelines: accelerated project completion (e.g., cloud migrations finishing ahead of schedule) is the concrete ROI signal Argenti watches.
  • Data is the bottleneck: high-quality, curated data (and a curated data pipeline) is decisive for good AI outcomes.
  • Centralize model access and routing: a model gateway that routes each query to the Pareto-optimal model (cost vs. quality) is critical for cost control and performance.
  • Tokens will be a material cost line: per-token prices may fall, but usage (tokens consumed) will likely rise, making tokens a significant ongoing cost to manage.
  • Build vs buy is shifting: small, previously outsourced apps are increasingly built internally because dev tooling lowers time/cost to build; large, regulated, at-scale systems remain complex buys.
  • Existing regulatory frameworks for model risk apply: banks have long managed neural nets; LLMs are larger but fit into extended model risk management, tiering, and human-in-the-loop controls.

Notable production use cases at Goldman

  • GSAI assistant: answers complex, multi-dimensional client or internal questions by retrieving internal and external data and planning a response, rather than returning a simple chat reply.
  • Developer workflows: agentic developer assistants speed up cloud migration, code spec creation, and delivery—changing developers’ roles toward planning, product thinking, and supervision.
  • Vibe-coded/rapid prototypes: employee-built travel assistants (rebooking, disruptions) and faster cloud migrations of legacy apps, often developed in hours.
  • Large-scale projects: examples of cloud migration projects completing months ahead of schedule due to AI-enabled productivity gains.

Implementation details & technical controls

Data and platform

  • Legend AI lakehouse: a data platform that lets GSAI retrieve from curated sources, forming the query→answer pipeline.
  • Curation matters: curated and structured data yields disproportionately better AI answers.
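The curated query→answer pipeline described above can be sketched in a few lines. This is a minimal illustration, not Goldman's actual stack: the `Document` fields, the term-overlap scoring, and the curation weighting are all invented for the example; a real system would use a proper retriever over the lakehouse.

```python
# Hypothetical sketch of a query -> retrieve -> answer pipeline over
# curated sources. All names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Document:
    source: str      # e.g. "lakehouse/research" (illustrative path)
    text: str
    curated: bool    # curation flag set by the data pipeline

def retrieve(query: str, corpus: list[Document], top_k: int = 2) -> list[Document]:
    """Rank documents by naive term overlap, preferring curated sources."""
    terms = set(query.lower().split())
    def score(doc: Document) -> float:
        overlap = len(terms & set(doc.text.lower().split()))
        return overlap * (2.0 if doc.curated else 1.0)  # curated data weighted up
    return sorted(corpus, key=score, reverse=True)[:top_k]

def answer(query: str, corpus: list[Document]) -> str:
    """Assemble a grounded answer context from the retrieved snippets."""
    hits = retrieve(query, corpus)
    context = " | ".join(f"[{d.source}] {d.text}" for d in hits)
    return f"Q: {query}\nContext: {context}"
```

The point of the sketch is the shape, not the scoring: answers are grounded in retrieved, curated sources rather than generated from the model alone, which is why curation quality dominates answer quality.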

Model routing and token management

  • Model gateway: central routing that picks an appropriate model per query to balance cost and quality (Pareto frontier).
  • Centralized access: guarding against “wild west” direct API usage prevents unexpected token bills and security risks.
  • Optimization philosophy: prioritize usage and creative experimentation, while the platform team optimizes cost/quality behind the scenes.
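The gateway idea above can be sketched as a routing function that picks the cheapest model clearing a per-request quality bar, i.e. a point on the cost/quality Pareto frontier. The model names, prices, and quality scores below are made up for the sketch; a real gateway would also handle metering, authentication, and fallback.

```python
# Illustrative model-gateway routing: choose the cheapest model that
# meets a per-request quality bar. Catalog entries are invented.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD, hypothetical
    quality: float             # offline eval score in [0, 1], hypothetical

CATALOG = [
    Model("small-fast", 0.0002, 0.62),
    Model("mid-tier",   0.0020, 0.80),
    Model("frontier",   0.0150, 0.93),
]

def route(min_quality: float, catalog: list[Model] = CATALOG) -> Model:
    """Return the cheapest model whose quality clears the bar."""
    eligible = [m for m in catalog if m.quality >= min_quality]
    if not eligible:
        # No model clears the bar: fall back to the highest-quality one.
        return max(catalog, key=lambda m: m.quality)
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)
```

Because the caller only states a quality requirement, the platform team can reprice or swap models behind the gateway without touching application code, which is the "isolate the user from token anxiety" philosophy in practice.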

Security, governance, and regulatory compliance

  • Info barriers: enforced across the system with ID/badge-based session controls so an agent/session only accesses allowed data.
  • Model risk management: inventory, risk tiering, human-in-the-loop controls; higher-risk applications get stronger controls.
  • Code controls: AIs can propose code (pull/merge requests) but cannot auto-approve or auto-deploy; standard CI/CD checks, code signing, and human approvals remain mandatory.
  • Regulators: banks apply existing regulatory patterns (used for neural nets) — classification, controls, and supervision — to LLMs/agentic systems.
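The badge-based info-barrier control described above amounts to scoping each agent session to its user's entitlements. A minimal sketch, with invented badge IDs and dataset names:

```python
# Sketch of badge/ID-scoped session access control ("info barriers"):
# an agent session can only read datasets its user is entitled to.
# Entitlement data here is invented for illustration.
ENTITLEMENTS = {
    "badge-1001": {"public_research", "equities_desk"},
    "badge-2002": {"public_research"},
}

class Session:
    """An agent session bound to one user's entitlements at creation."""

    def __init__(self, badge_id: str):
        self.allowed = ENTITLEMENTS.get(badge_id, set())

    def fetch(self, dataset: str) -> str:
        # Deny by default: anything outside the entitlement set is blocked.
        if dataset not in self.allowed:
            raise PermissionError(f"info barrier: {dataset} not permitted")
        return f"rows from {dataset}"
```

Binding the entitlement set at session creation, rather than per tool call, is one simple way to ensure an agent cannot escalate mid-conversation; real deployments would also log every access for supervision.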

Business & market implications

  • Vendors: not all legacy SaaS vendors are equally threatened — impact depends on whether the underlying process will change. Stable, regulated core processes (e.g., general ledger) are less likely to be replaced quickly; developer tools and UX layers are more vulnerable.
  • Build vs. buy: small, focused apps are increasingly built in-house because of dramatically reduced development time; large-scale enterprise systems remain complex to replace.
  • Competition: banks can lose some of the “90%” work to commoditized AI, but the final 10% (proprietary data, cross-asset insights, client relationships, bespoke structuring) is where premium value remains.

Talent, roles, and culture changes

  • Skill shift: Argenti frames new essentials as “explain, delegate, supervise.” Workers need to:
    • Describe desired outcomes clearly,
    • Break work into tasks that agents/specialized models can execute in parallel,
    • Supervise and validate outputs.
  • Developers move up the stack: more product/architecture/managerial work; less repetitive toil. Measured output and quality/timelines become key metrics.
  • Forward-deployed engineers: product-oriented vendor engineers embedded to accelerate adoption and reduce intermediary slowdowns.
  • Employee experience: initial excitement and “slot-machine” prompting behavior may create fatigue, but many engineers report renewed enthusiasm as AI removes repetitive work.

Risks and challenges

  • Token cost management: large-scale agentic workflows multiply token usage; central governance and routing required to avoid shocks.
  • Security & control: must prevent local, unapproved model execution and ensure code signing, CI controls, and info barriers are enforced.
  • Model explainability: regulators don’t demand perfect explainability; they expect appropriate controls, tiering, and monitoring (the bank is not starting from scratch).
  • Speed vs. velocity: rapid prototyping delivers speed but can hit security, scalability, or maintainability walls; sustained velocity requires investment in platform, controls, and integration.

Practical recommendations for executives (implicit in Argenti’s approach)

  • Treat AI as production infrastructure: invest in a central AI/platform team for model routing, monitoring, and security.
  • Centralize model access and billing: build a model gateway and metering to control token spend and optimize cost/quality.
  • Invest heavily in data quality and curation: it’s the primary determinant of AI usefulness.
  • Define risk tiers and controls: expand model risk management frameworks to include LLM/agentic behavior, human-in-the-loop gating for critical decisions.
  • Reevaluate buy vs build per use case: prefer build for small, rapid apps; be cautious about replacing large-scale systems without robust testing and controls.
  • Upskill talent: train employees to explain intent, decompose work, and supervise outputs — managerial skills become more broadly required.

Bottom line

Marco Argenti emphasizes that AI has moved from experimental novelty to a core, productivity-enhancing platform across Goldman Sachs. The benefits are concrete (faster delivery, better client responses, internal productivity), but capturing them requires serious investment in data infrastructure, centralized model governance, integration, security controls, and workforce reskilling. Commodity parts of workflows will get automated, but proprietary data, cross-asset insight, and deep client relationships remain the source of differentiated value for banks.

Notable quotes

  • “This is not the drill. This is real. It’s not the age of experimentation anymore.”
  • “Data quality is really the determinant between good AI and not so good AI.”
  • “We gave our GSAI assistant to 47,000 people… we are way above a million prompts per month.”
  • “My philosophy is to try to isolate the developer or the user from the token anxiety.”
  • “Clients are really paying us for that extra 10%.”