Overview of Interview: Bret Taylor of Sierra and OpenAI
This bonus episode of Boss Class (produced by The Economist) is an interview with Bret Taylor — co‑founder and CEO of Sierra (an AI‑agent vendor for customer service) and chair of OpenAI. Taylor discusses the near‑term and medium‑term future of AI agents: why firms should experiment now, how to deploy agents safely and usefully, what business models will emerge, and what this means for jobs and managers. The conversation blends practical deployment advice (guardrails, metrics, pricing) with broader industry perspective (vendor landscape, role of foundation models).
Who Bret Taylor Is
- Former architect of early Google Maps infrastructure; ex‑CTO of Facebook; former co‑CEO of Salesforce; past chair of Twitter.
- Current CEO of Sierra (AI agents for customer service) and chair of OpenAI — placing him at both application and foundation‑model layers of the stack.
Key takeaways
- AI agents will become a core part of digital customer interactions. Taylor’s prediction: in a few years (c. 4–5) there will be off‑the‑shelf agents for many common business processes.
- We’re in an “early innings” phase comparable to the late‑1990s web: lots of custom builds now, vendor solutions will mature later.
- Experiment now: AI is deflationary, so early adopters can cut costs and reinvest, while waiting cedes that advantage to faster movers.
- Models are imperfect and non‑deterministic — deployment requires engineering, procedural controls, and layered monitoring.
- Narrow, well‑defined use cases are easier and safer to deploy (engineering problem); broader AGI goals remain scientific problems.
- Good business metrics focus on outcomes (e.g., CSAT combined with self‑service rate), not only technical measures.
- New business models: outcomes‑based pricing (pay when the agent actually solves the customer’s problem) is a viable alternative to license models.
- Jobs will change rather than disappear overnight — roles (e.g., call‑center staff) can evolve into AI‑adjacent jobs (e.g., “AI architects”).
Topics discussed
Adoption & strategy
- Why boards/CEOs are pushing AI adoption: cost reduction, competitive reinvestment, and changing customer touchpoints (ChatGPT as a new “front door”).
- Advice: experiment now, but be pragmatic; avoid committing to long‑term custom builds where a mature vendor solution is likely to emerge.
Risk, robustness & regulation
- Two core model issues: imperfection (hallucination) and non‑determinism (same prompt → different outputs).
- Regulated industries are starting with low‑risk tasks (e.g., appointment booking) and building experience before tackling high‑risk advisory roles.
- Consistency can be a strength: AI may be more consistent than humans in many interactions.
Monitoring & guardrails
- Defense‑in‑depth approach:
  - Preventative controls (guardrails / standard operating procedures).
  - Real‑time supervision: “supervisor models” that detect hallucinations or SOP deviations live.
  - Post‑conversation evaluation: flagging low‑sentiment or risky conversations for human review.
- Use AI to triage conversations and surface the needle‑in‑a‑haystack cases for humans to review — an efficient human‑in‑the‑loop.
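The defense‑in‑depth layers above can be sketched as a simple three‑stage pipeline. Everything here (the function names, the keyword guardrail, the sentiment threshold) is illustrative, not Sierra’s actual implementation — in practice the supervisor layer would itself be a model, not a keyword check.

```python
# Illustrative sketch of layered agent monitoring; all names and
# thresholds are hypothetical, not Sierra's actual system.

def guardrail_check(draft_reply: str, banned_topics: list[str]) -> bool:
    """Preventative layer: block replies that touch disallowed topics."""
    return not any(topic in draft_reply.lower() for topic in banned_topics)

def supervisor_flags(draft_reply: str, knowledge_base: set[str]) -> list[str]:
    """Real-time layer: stand-in for a supervisor model; flags
    sentences not grounded in the knowledge base (crude proxy)."""
    claims = [s.strip() for s in draft_reply.split(".") if s.strip()]
    return [c for c in claims if c not in knowledge_base]

def needs_human_review(sentiment: float, flags: list[str]) -> bool:
    """Post-conversation layer: queue low-sentiment or flagged chats."""
    return sentiment < 0.3 or bool(flags)

# One reply passing through all three layers
banned = ["legal advice", "medical advice"]
kb = {"Returns are accepted within 30 days"}
reply = "Returns are accepted within 30 days."

assert guardrail_check(reply, banned)                       # layer 1: allowed
flags = supervisor_flags(reply, kb)                         # layer 2: grounded
assert not needs_human_review(sentiment=0.9, flags=flags)   # layer 3: no review
```

The point of the layering is that each stage is cheap relative to the one after it, so only a small, pre‑triaged fraction of conversations reaches a human reviewer.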
Customer experience & human handoff
- Sierra agents identify as AI and often include admissions like “I occasionally make mistakes” to build trust.
- AI agents can produce high customer satisfaction (multilingual, patient, consistent).
- Handoffs can be configured: co‑pilot model (AI collects data, human finalizes) or autonomous agent, depending on business choice.
Business model & product
- Outcomes‑based pricing: charge only when the agent solves the issue; escalate to a human for free.
- Firms are likely to buy purpose‑built agents for tasks (audit agent, CS agent, lead‑gen agent) rather than raw models.
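As a rough sketch of how outcomes‑based billing differs from a seat license, charging only for conversations the agent resolves end to end (the per‑resolution price and the conversation records below are invented for illustration):

```python
# Hypothetical billing comparison: outcomes-based vs. seat licenses.
# The price and the conversation data are invented for illustration.

PRICE_PER_RESOLUTION = 2.00   # charged only when the agent solves the issue
SEAT_LICENSE_MONTHLY = 500.00 # flat fee regardless of outcomes

conversations = [
    {"resolved_by_agent": True},
    {"resolved_by_agent": True},
    {"resolved_by_agent": False},  # escalated to a human: no charge
]

def outcomes_invoice(convos: list[dict]) -> float:
    """Bill only for conversations the agent resolved without escalation."""
    resolved = sum(1 for c in convos if c["resolved_by_agent"])
    return resolved * PRICE_PER_RESOLUTION

print(outcomes_invoice(conversations))  # 2 resolutions x $2.00 = 4.0
```

The design choice this encodes is alignment of incentives: the vendor earns nothing on escalations, so it is paid exactly in proportion to the work it takes off the human team.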
Practical recommendations (for leaders)
- Start with narrow, high‑value, low‑risk use cases to gain experience and reduce downside.
- Define business outcomes first (e.g., CSAT + percentage resolved without human) rather than technical KPIs alone.
- Prepare to reinvest cost savings into growth/experience — consider the competitive risk of waiting.
- Build layered monitoring: preventative guardrails, real‑time supervisor models, and post‑hoc review queues.
- Consider outcomes‑based procurement — pay for solved customer problems rather than seat licenses.
Practical recommendations (for practitioners & frontline workers)
- Reskill with a “beginner’s mindset”: learn to use and orchestrate agents; roles will evolve (e.g., call‑center teams becoming AI architects).
- Focus on tasks within your company likely to receive reinvestment; position yourself to benefit from those changes.
- Use AI as a creative foil and productivity tool (critique, summarization), but retain writing/thinking when that process is valuable.
Risks & mitigations
- Risk: hallucinations and unpredictable outputs. Mitigation: narrow domain, supervisor models, human review, and audit trails.
- Risk: reputational/legal exposure in advisory/regulatory domains. Mitigation: keep humans in the loop for high‑risk decisions; restrict agents to information collection or clearly labeled support.
- Risk: workforce disruption. Mitigation: proactive reskilling, internal mobility, and transparent transition plans.
Outlook (4–5 year view)
- A mature vendor ecosystem will likely emerge with ready‑made agents for common domains (customer service, finance audits, legal workflows).
- Foundation models will lower marginal costs of producing software, but many businesses will still buy hardened, audited solutions (collective hardening, compliance, support).
- Net effect: increased automation of transactional work, higher consistency, and a shift in spending toward higher‑value human interactions.
Notable quotes
- “If it were 1995…every company needs a website. I think in 2026, every company needs an AI agent.”
- “Models are imperfect. More challenging: models are non‑deterministic…that makes testing and robustness very challenging.”
- “Think of it as defense in depth…AI monitoring the AI” (on supervision and layered controls).
- “Most companies don’t want to be software companies; most companies just want the job done.”
Quick checklist for an AI‑agent pilot
- Pick a narrow, high‑volume, low‑risk process (returns, booking, form‑filling).
- Define outcome metrics: CSAT + % resolved without escalation.
- Implement guardrails & SOPs; ensure agent explicitly identifies as AI.
- Deploy supervisor model for real‑time safety checks; queue flagged conversations for human review.
- Choose pricing aligned with outcomes where possible.
- Plan for reskilling and role evolution for impacted staff.
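The two outcome metrics in the checklist can be computed from conversation logs along these lines; the field names and sample records are assumptions for the sketch, not a standard schema.

```python
# Sketch of the two pilot outcome metrics from the checklist:
# average CSAT and % resolved without escalation (self-service rate).
# Field names and sample data are hypothetical.

logs = [
    {"csat": 5, "escalated": False},
    {"csat": 4, "escalated": False},
    {"csat": 2, "escalated": True},
]

def pilot_metrics(records: list[dict]) -> dict:
    """Report the pair of outcome metrics together, since either one
    alone can be gamed (e.g. high CSAT by escalating everything)."""
    n = len(records)
    avg_csat = sum(r["csat"] for r in records) / n
    self_service_rate = sum(1 for r in records if not r["escalated"]) / n
    return {"avg_csat": avg_csat, "self_service_rate": self_service_rate}

print(pilot_metrics(logs))
# avg CSAT = (5 + 4 + 2) / 3, self-service rate = 2 of 3 conversations
```

Reporting the pair together reflects the advice above: CSAT measures whether customers are happy, the self‑service rate measures whether the agent is actually doing the work.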
Overall, Taylor is optimistic about agents’ benefits (efficiency, consistency, multilingual service) but stresses realistic engineering, strong monitoring, outcome focus, and humane transitions for workers during rapid change.
