20VC: Benchmark's Newest General Partner Ev Randle on Why Margins Matter Less in AI | Why Mega Funds Will Not Produce Good Returns | OpenAI vs Anthropic: What Happens and Who Wins Coding | Investing Lessons from Peter Thiel and Mamoon Hamid

by Harry Stebbings

1h 25m · November 10, 2025

Overview

Harry Stebbings interviews Everett (Ev) Randle — Benchmark’s newest general partner and former investor at Kleiner Perkins, Founders Fund, and Bond. The conversation covers lessons learned from icons (Peter Thiel, Mary Meeker, Mamoon Hamid), how to evaluate and build AI businesses (metrics, moats, product ↔ labs competition), fund strategy trade-offs (capital velocity vs. boutique craft), OpenAI vs Anthropic, the size of code-generation and AI-inference markets, and Benchmark’s investment philosophy and constraints. Ev argues for a new taxonomy for AI companies, shifting focus from traditional SaaS margin metrics to absolute gross profit per customer and terminal margin profiles.

Key takeaways

  • Margins are the wrong primary metric for many AI apps. Focus instead on absolute gross profit dollars per customer and expected terminal gross-margin structure.
  • AI app economics differ materially from SaaS (higher inference costs, different pricing and contract sizes); use a new taxonomy and KPIs for AI companies.
  • Growth velocity and usage matter more than early gross margins — product usage accelerates model improvement and moat formation.
  • Moat remains largely technological/talent-driven, not purely distributional; building exceptional AI products is very hard and requires rare talent.
  • Fund size determines strategy: megafunds optimize for capital velocity (deploying large amounts of capital quickly), while smaller boutique firms can optimize for cash-on-cash returns and high-touch founder partnerships.
  • Labs (OpenAI, Anthropic) set baseline product expectations — AI apps must be differentiated enough versus lab-provided offerings.
  • The AI-inference and code-generation markets are rapidly scaling into “golden categories” that can add billions of ARR; this expands upside for many apps and infra players.
  • Avoid confusing inputs (e.g., ownership % or early-stage stigma) with outputs (money-on-money returns). Different fund shapes can both win — but play the game your fund size allows.

Topics discussed

Lessons from mentors and prior firms

  • Mary Meeker: use quantitative models to tell a long-term narrative — visualize where a company could be in 8–10 years rather than getting lost in near-term growth rates.
  • Peter Thiel / Founders Fund: organizational design and conviction tests (encouraging partners to invest personally to surface conviction).
  • Mamoon Hamid: see excellence up close early; develop sharp taste in product × market × people (e.g., B2B consumer-like products).

OpenAI vs Anthropic and labs vs apps

  • OpenAI (ChatGPT) is viewed as the more defensible consumer anchor; Anthropic may have a B2B edge in commercialization and some model encodings.
  • The choice between OpenAI and Anthropic at their last-round prices was nuanced — Ev slightly favored OpenAI but would be happy with either.
  • Apps must be superior to what labs offer at base pricing; otherwise users will default to lab apps.

AI metrics & taxonomy

  • Traditional SaaS metrics (80% gross margins, NRR >120%) can mislead when applied to AI apps.
  • Propose evaluating: terminal gross margin profile, absolute gross profit per customer, contract sizes, and the degree of labor-cost displacement.
  • Example: an AI app with lower percentage margins but 4–5× higher gross profit dollars per customer can be vastly more valuable.
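The arithmetic behind this point can be sketched in a few lines. The numbers below are invented for illustration — the episode does not specify contract sizes or margins — but they show how a lower-margin AI app can out-earn a classic SaaS profile on absolute gross profit per customer:

```python
def gross_profit_per_customer(acv: float, gross_margin: float) -> float:
    """Gross profit dollars per customer: annual contract value times gross margin."""
    return acv * gross_margin

# Classic SaaS benchmark: ~80% margin on a hypothetical $50k contract.
saas = gross_profit_per_customer(acv=50_000, gross_margin=0.80)     # ~$40,000

# Hypothetical AI app: inference costs drag margin down to ~45%, but the
# contract displaces labor budgets, so the ACV is far larger.
ai_app = gross_profit_per_customer(acv=400_000, gross_margin=0.45)  # ~$180,000

print(f"SaaS: ${saas:,.0f}  AI app: ${ai_app:,.0f}  ratio: {ai_app / saas:.1f}x")
```

On these assumed inputs the AI app generates roughly 4.5× the gross profit dollars per customer despite the much lower percentage margin — the shape of trade-off Ev argues investors should be underwriting.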

Code generation & AI-inference markets

  • Code generation went from zero to multi-billion-dollar ARR very quickly — it could become a net-new category adding billions per year.
  • AI-inference cloud (CoreWeave, Nebius, others) became huge despite initial skepticism — high demand can overcome “commodity” worries.
  • Identify “golden categories” where the total market adds $1B+ ARR per year.

Fund strategy: capital velocity vs craft

  • Mega funds (e.g., Tiger Global) emphasize capital velocity and large-check deployment; smaller funds like Benchmark focus on high-touch founder relationships and cash-on-cash returns.
  • Conway’s Law for VC: fund size/team structure shapes what you can and should invest in.
  • Ev argues many firms have shifted toward capital-velocity priorities; that’s rational for their fund economics but changes the ecosystem.

Governance & boards

  • Boards have fiduciary responsibilities; founders shouldn’t expect sycophantic boards. Tough governance decisions are sometimes necessary.
  • “Firing founders” debate: context matters; governance vs. founder-centric philosophies are evolving.

Notable quotes & soundbites

  • “I think we should not be placing that much emphasis on margins today. We need a new taxonomy for AI companies.”
  • “If your average gross profit per customer can be 4 or 5x that of a normal SaaS company, then you actually have much more absolute dollars of gross profit per customer.”
  • “Conway’s Law — you ship your org chart. I am a huge believer … you ship your fund size.”
  • “If you’re writing billion-dollar checks, that is your main product. Go talk to the principals, the junior partners, and the associates at those firms … Capital velocity is not a North Star of those firms — it is the North Star.”
  • “The labs set the baseline in terms of customer experience. They’re your competition at your base layer.”

Actionable recommendations (for investors & founders)

For investors:

  • Re-evaluate early-stage AI investments using terminal gross-margin scenarios and absolute gross profit per customer, not just percent margin.
  • Emphasize product usage and metrics that drive model improvement — usage begets better models and stronger moats.
  • Be mindful of fund-size constraints: pick a strategy that maps to the capital you must deploy (mega-checks vs. high-touch early bets).
  • Use models as conviction tests (base/bear/bull underwriting), but don’t overfit growth curves — models are a sanity-check more than a precise forecast.
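The base/bear/bull underwriting idea above can be sketched as a toy scenario model — a conviction test rather than a forecast, in the spirit of Meeker's 8–10 year lens. Every input below (growth rates, exit multiples, probabilities, starting ARR) is an invented assumption, not from the episode:

```python
def terminal_value(arr: float, growth: float, years: int, exit_multiple: float) -> float:
    """Compound ARR for `years` at `growth`, then apply a revenue exit multiple."""
    return arr * (1 + growth) ** years * exit_multiple

# Hypothetical scenario set for a company at $20M ARR, viewed over 8 years.
scenarios = {
    "bear": {"growth": 0.10, "exit_multiple": 4.0,  "prob": 0.25},
    "base": {"growth": 0.40, "exit_multiple": 8.0,  "prob": 0.50},
    "bull": {"growth": 0.80, "exit_multiple": 12.0, "prob": 0.25},
}

arr_today, horizon = 20e6, 8
expected = sum(
    s["prob"] * terminal_value(arr_today, s["growth"], horizon, s["exit_multiple"])
    for s in scenarios.values()
)
print(f"probability-weighted terminal value: ${expected / 1e9:.1f}B")
```

The point of such a model is not the headline number but the sensitivity: if the outcome only clears the bar in the bull case, the model has surfaced a lack of conviction.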

For founders / operators:

  • Prioritize real user engagement and usage cadence — that turbocharges model improvements and defensibility.
  • Think about pricing that captures a share of formerly human-labor budgets; absolute customer value matters more than percent margin.
  • Build differentiated workflows, integrations, or data advantages that labs can’t easily replicate via an API.
  • Prepare governance: expect boards to exercise fiduciary duties; build robust reporting and governance norms early.

Quick-fire highlights

  • Most-changed mind: the AI inference/cloud business model — once dismissed as a commodity, now clearly valuable.
  • Biggest miss: passing on OpenAI’s $32B round — structural concerns obscured the product’s massive growth potential.
  • Who to back for highest cash-on-cash historically (Ev’s quick take): Founders Fund (ability to incubate large, differentiated outcomes).
  • Biggest personal worry for Benchmark: stasis — need to evolve while staying true to core North Stars (founder partnership + high money-on-money returns).
  • Ev’s ranking: People > Product > Market (people are upstream and hardest to substitute).

Final outlook

Ev is optimistic about AI’s ability to sustain GDP growth and create enormous economic value over the next decade, while cautioning that the industry will see many pump-and-dump cycles and false positives. He believes different VC models can succeed simultaneously (mega capital-velocity players and high-touch boutique firms), but fund structure should inform strategy. The core message: change the mental models used to evaluate AI companies — measure absolute value created and product usage, not just historical SaaS metrics.