OpenAI Leadership Reshuffle, AI Unicorns, and White-Collar Work

Summary of OpenAI Leadership Reshuffle, AI Unicorns, and White-Collar Work

by The Jaeden Schafer Podcast

10 min · January 24, 2026

Episode overview

This episode covers recent AI industry moves: OpenAI’s enterprise leadership reshuffle, big funding rounds for AI infrastructure and inference startups, a new benchmark testing AI performance on white-collar work, and a startup using AI agents to automate calendar scheduling. The host connects these items to broader trends: enterprise competition, the rising value of inference/real‑time infrastructure, and realistic limits of current agent capabilities.

OpenAI leadership reshuffle and enterprise strategy

  • Barret Zoph has been appointed to lead OpenAI’s enterprise sales efforts. He previously headed post‑training at OpenAI, left in 2024 to co‑found Thinking Machines Lab, and recently returned to OpenAI.
  • Context and implications:
    • OpenAI is under pressure in the enterprise market—Sam Altman and CFO Sarah Friar have flagged enterprise growth as a priority.
    • Reported market‑share shifts (Menlo Ventures): OpenAI’s enterprise LLM usage reportedly fell from roughly 50% in 2023 to about 27% by the end of 2025.
    • Competitors: Anthropic (~40% share) and Google Gemini are making inroads.
    • OpenAI has expanded partnerships (e.g., ServiceNow) to bolster large‑customer traction.
  • Why it matters: The hire signals a renewed push to reclaim enterprise momentum and suggests OpenAI will prioritize enterprise sales and integrations in 2026.

LiveKit reaches unicorn status (real‑time voice/video infra)

  • Key facts:
    • LiveKit raised $100M at a $1B valuation; the round was led by Index Ventures with participation from Altimeter, Redpoint, and others.
    • Founders: Russ d’Sa and David Zhao (the company originated as an open‑source project during the pandemic).
    • Customers include OpenAI (powers ChatGPT voice mode), xAI, Salesforce, Tesla, emergency services, mental‑health providers.
  • Significance:
    • Real‑time voice/video infrastructure is becoming a critical layer as voice AI goes mainstream.
    • Managed cloud offerings for low‑latency, interruption‑free audio/video are highly valued by teams building large AI products.
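
To make this concrete, here is a rough sketch of the server‑side step an app typically performs when using LiveKit as its real‑time layer: minting a short‑lived access token that a client then uses to join a room. It assumes the livekit-api Python package; the key, secret, identity, and room name are placeholders, and helper names can vary between SDK versions, so treat it as illustrative rather than canonical.

```python
# Rough sketch: mint a LiveKit room access token server-side; a client (browser,
# mobile app, or voice agent) then uses this JWT plus the LiveKit server URL to
# join the real-time audio/video room. Assumes the `livekit-api` package; the
# key, secret, identity, and room name below are placeholders.
import os

from livekit import api

token = (
    api.AccessToken(os.environ["LIVEKIT_API_KEY"], os.environ["LIVEKIT_API_SECRET"])
    .with_identity("caller-123")          # unique participant identity
    .with_name("Support Caller")          # display name shown to other participants
    .with_grants(api.VideoGrants(room_join=True, room="support-room"))
    .to_jwt()
)

print(token)
```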

Inference startups and Infrax (commercializing vLLM)

  • Infrax raised $150M in seed funding at an $800M valuation, co‑led by a16z and Lightspeed.
  • Focus: commercializing popular open‑source inference tools (notably vLLM) and making model inference faster, cheaper, and more scalable; a minimal vLLM usage sketch follows this list.
  • Related moves: other Berkeley‑origin spinouts (Radiax Arc, SGLang) highlight the same trend.
  • Why this matters: The market is shifting from headline training breakthroughs toward inference and production deployment where unit economics and latency drive real business value. Investors are backing that layer heavily.
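
For a concrete sense of the layer being funded here, below is a minimal sketch of offline batched generation with the open‑source vLLM library. This is the standard pattern from vLLM’s documentation, not anything specific to the startup’s product; the model name and prompts are placeholders, and a GPU plus `pip install vllm` are assumed. The “faster, cheaper” value proposition largely comes from techniques such as continuous batching and PagedAttention that vLLM implements under the hood.

```python
# Minimal sketch: offline batched inference with the open-source vLLM library.
# Model name and prompts are placeholders; assumes `pip install vllm` and a GPU.
from vllm import LLM, SamplingParams

prompts = [
    "Explain why batching requests lowers per-token inference cost.",
    "Give one reason latency matters for voice assistants.",
]
sampling = SamplingParams(temperature=0.7, max_tokens=64)

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # any vLLM-supported HF model
outputs = llm.generate(prompts, sampling)      # requests are batched automatically

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text.strip())
```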

Mercor “APEX Agents” benchmark — white‑collar work performance

  • Study: Mercor’s APEX Agents benchmark tested leading models on real white‑collar tasks (consulting, law, investment banking, etc.).
  • Results:
    • Top scores were low: Gemini 3 Flash ~24%, GPT‑5.2 ~23%; many others around ~18%.
    • Major failure mode: operating across multiple domains, i.e., integrating information from emails, documents, internal policies, and tools like Slack and Drive while maintaining coherence across contexts.
  • Key takeaway:
    • Current agent models are far from reliably replacing high‑value professional roles. They perform like “interns” that need heavy supervision: able to do isolated tasks well but poor at complex, cross‑domain workflows.
    • Implication for business: deploy agents for specific, well‑scoped tasks; avoid expecting full autonomous knowledge‑work replacement today.

BlockKit — agent‑to‑agent calendar scheduling

  • Founder: a former Sequoia partner launched BlockKit; the company raised a $5M seed round led by Sequoia.
  • Product approach:
    • Uses AI agents that negotiate directly with each other (rather than relying on scheduling links or manual coordination); a toy negotiation sketch follows this section.
    • Integrations: invoked via email or Slack.
    • Capabilities: respects user‑specific rules (which meetings are non‑negotiable), priorities, tone, flexibility, and urgency heuristics.
  • Traction: used by 200+ companies (examples: Brex, TogetherAI, several VC firms).
  • Why it matters: Agent‑to‑agent coordination addresses a high‑friction everyday workflow (scheduling) and illustrates practical, time‑saving applications of agents even before full autonomy is possible.
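
To make the agent‑to‑agent idea concrete, here is a toy Python sketch of two scheduling agents negotiating a slot under per‑user rules. It is purely illustrative and not based on BlockKit’s actual implementation; in a real product the rules would be natural‑language preferences interpreted by an LLM rather than hard‑coded thresholds.

```python
# Toy illustration of agent-to-agent scheduling (not BlockKit's real implementation).
# Each agent knows its user's calendar and rules; one proposes slots, the other
# accepts the first proposal that also satisfies its own constraints.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class SchedulingAgent:
    owner: str
    busy: list = field(default_factory=list)  # list of (start, end) datetime pairs
    earliest_hour: int = 9                    # user rule: no meetings before this hour
    latest_hour: int = 17                     # user rule: no meetings after this hour

    def is_free(self, start: datetime, duration: timedelta) -> bool:
        end = start + duration
        window_start = start.replace(hour=self.earliest_hour, minute=0)
        window_end = start.replace(hour=self.latest_hour, minute=0)
        if start < window_start or end > window_end:
            return False
        return all(end <= b_start or start >= b_end for b_start, b_end in self.busy)

    def propose(self, day: datetime, duration: timedelta):
        """Yield candidate start times this agent's user would accept."""
        slot = day.replace(hour=self.earliest_hour, minute=0)
        while slot + duration <= day.replace(hour=self.latest_hour, minute=0):
            if self.is_free(slot, duration):
                yield slot
            slot += timedelta(minutes=30)

def negotiate(proposer: SchedulingAgent, responder: SchedulingAgent,
              day: datetime, duration: timedelta):
    """Return the first slot acceptable to both agents, or None."""
    for slot in proposer.propose(day, duration):
        if responder.is_free(slot, duration):
            return slot
    return None

if __name__ == "__main__":
    day = datetime(2026, 1, 26)
    alice = SchedulingAgent("alice", busy=[(day.replace(hour=9), day.replace(hour=11))])
    bob = SchedulingAgent("bob", busy=[(day.replace(hour=13), day.replace(hour=14))],
                          latest_hour=16)
    print(negotiate(alice, bob, day, timedelta(minutes=30)))  # -> 2026-01-26 11:00:00
```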

Key takeaways and recommendations

  • Enterprise is a top battleground in 2026: expect OpenAI to push partnerships and sales hires to regain share; customers should evaluate vendor roadmaps and integrations.
  • Inference and real‑time infrastructure are where the money — and practical ROI — are moving: founders and investors should prioritize latency, cost, and scalability solutions.
  • Don’t over‑hype agent replacement of professionals yet: benchmark results show limits on multi‑domain, multi‑source tasks. Use agents to augment and speed up discrete workflows, not to fully replace expert roles.
  • Low‑friction agent applications (e.g., calendar scheduling, email triage, voice UIs) are realistic near‑term wins for productivity—teams should pilot these to capture immediate value.

If you want to explore further: look into enterprise adoption data, compare vendor integrations for voice/real‑time infra, and test agent pilots on narrowly scoped processes before broader rollout.