Overview of “Confronting the CEO of the AI company that impersonated me”
This episode of Decoder (The Verge) is a long-form interview between host Nilay Patel and Shishir Mehrotra, CEO of Superhuman (formerly Grammarly). The conversation centers on AI product design, creator rights, and a recent controversy: Grammarly’s short-lived "Expert Review" feature, which generated editing suggestions “inspired by” named public figures (including Nilay), apparently without their permission. The episode mixes a technical/product discussion of Superhuman’s roadmap, especially Superhuman Go and its agent/platform strategy, with a pointed exchange about attribution, impersonation, legal risk, and the broader consequences of AI for creators and the information economy.
Key takeaways
- Superhuman repositioned from Grammarly (a corporate rename) to a broader AI-native productivity suite: Grammarly (writing assistant), Coda (docs), Superhuman Mail (email client), and Superhuman Go (platform for custom AI agents).
- Expert Review: a buried Grammarly feature that produced suggestions “inspired by” named authors/editors without explicit permission. After criticism, the company first offered an email-based opt-out and then removed the feature; Julia Angwin later filed a class-action lawsuit.
- Shishir apologizes, calls the feature “bad,” says it had low usage and was taken down before the lawsuit was filed; he frames the issue as a design failure rather than malicious impersonation.
- Tension between “attribution” and “use of likeness”: Shishir argues the feature included clear disclosures and links and that attribution is different from impersonation; critics say using names for commercial product functionality without consent violates likeness/right-of-publicity norms in some jurisdictions.
- Business model for creator participation: Superhuman plans a platform economics model (example given: a 70/30 revenue split, like many app-store models) where creators can build and monetize agents; Shishir positions agent-building as a new creator revenue path (e.g., subscriptions).
- Broader industry context: comparison to YouTube’s past copyright battles and Content ID. Shishir argues platforms should go beyond minimum legal compliance and build systems that pay and empower creators; Nilay argues AI is accelerating extraction and devaluing creative work.
Background & timeline of the Expert Review controversy
- Feature: “Expert Review” synthesized advice based on public work and listed named experts (e.g., Nilay Patel, Casey Newton, Julia Angwin, bell hooks).
- Discovery: a buried feature with low usage; reporters and authors discovered the names and objected to the lack of consent/permission.
- Company response: initially offered an email-based opt-out for named experts, then removed the feature entirely. Shishir says it was off-strategy and removed before the lawsuit was filed.
- Legal action: Julia Angwin filed a class-action lawsuit alleging unauthorized commercial use of names/identities. Superhuman contests the legal claims and characterizes the feature as attribution, not impersonation.
Product & business model details discussed
- Superhuman’s scope: AI-native productivity tools across many apps and surfaces; claims ~40 million daily active users and “a million unique apps/agents seen daily.”
- Superhuman Go: a platform for proactive, personal AI agents that run where users work (Chrome, docs, email, etc.). Goal: let companies and creators create agents (sales agents, support agents, creator agents).
- Monetization: Superhuman referenced an app-store-style payment model with a 70/30 split for creators offering paid agents. Shishir frames this as analogous to app-store/YouTube revenue sharing.
- Creator tooling: the proposed workflow for creators building an agent involves (a) writing a manual/guide of style/tone, (b) setting triggers and prompts, and (c) iterative training with acceptance/feedback metrics (a hypothetical sketch follows this list).
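The episode describes this workflow only at a high level, so the following is a minimal sketch of what an agent definition might look like, assuming a hypothetical Python interface; every name here (StyleAgent, record_feedback, and so on) is illustrative, not Superhuman’s actual SDK or API.

```python
from dataclasses import dataclass, field

@dataclass
class StyleAgent:
    """Hypothetical creator agent; fields are illustrative, not a real Superhuman API."""
    name: str
    style_guide: str                                    # (a) the creator's manual of style/tone
    triggers: list[str] = field(default_factory=list)   # (b) events that wake the agent
    prompt_template: str = ""                           # (b) instruction sent to the model per trigger
    accepted: int = 0                                   # (c) suggestions the creator approved
    rejected: int = 0                                   # (c) suggestions the creator rejected

    def record_feedback(self, accepted: bool) -> None:
        """Iterative training signal: track whether each suggestion was kept."""
        if accepted:
            self.accepted += 1
        else:
            self.rejected += 1

    def acceptance_rate(self) -> float:
        """Quality metric a creator might watch before publishing the agent."""
        total = self.accepted + self.rejected
        return self.accepted / total if total else 0.0

# Example: an editor codifies their voice, wires up a trigger, and reviews output.
agent = StyleAgent(
    name="example-style-editor",
    style_guide="Short declarative sentences; skeptical of vendor claims.",
    triggers=["draft_saved"],
    prompt_template="Rewrite the draft in the style described in the guide.",
)
agent.record_feedback(accepted=True)
agent.record_feedback(accepted=False)
print(f"{agent.name}: {agent.acceptance_rate():.0%} accepted")
```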
Legal and ethical issues discussed
- Attribution vs. impersonation/right-of-publicity:
  - Shishir: the feature included explicit “inspired by” disclosures and links, so it doesn’t meet the impersonation standard.
  - Critics (Nilay): using someone’s name for a commercial feature without consent can trigger name-and-likeness claims under New York and California law; attribution labeling doesn’t neutralize commercial-use concerns.
- Copyright: inputs vs. outputs:
  - Output (LLM-generated content resembling a copyrighted work) has clearer legal lanes for takedowns/claims.
  - Input (training on a creator’s corpus without permission) is legally unsettled and could change model economics if courts require licenses.
- Platform precedent: Shishir repeatedly invoked YouTube’s Viacom case and Content ID as an example of building creator-focused tooling beyond minimal legality; Nilay argued that AI compresses and intensifies the extractive risk in new ways.
Broader themes and implications
- Creator economy stress: many creators report traffic loss to AI summaries/overviews and worry about diminished ad/subscription revenue; platforms and AI are intensifying pressure.
- Perception of AI: public polls show negative sentiment; people fear job loss and loss of control. Shishir attributes much of the distrust to worries about employment and agency.
- Futures for creators: Shishir and Nilay both see multiple paths (subscriptions, direct agent monetization, newsletters, products/merch) but disagree on how viable and time-consuming these are and whether they fairly compensate the ingestion of past work.
- Platform strategy: the tension between permissive model usage and platform responsibility to creators is unresolved; executives see a need for systems that align creators’ economic interests with platforms’ growth.
Notable quotes
- Shishir: “We changed the name of our corporate entity from Grammarly to Superhuman… to broaden the scope of what we do.”
- Shishir on the Expert Review removal: “It was not a good feature. It wasn’t good for experts. It wasn’t good for users. It was fairly buried. We decided to kill it pretty quickly.”
- Nilay: “AI is polling behind ICE and only slightly above the Democratic Party,” a line used to convey how poorly AI fares in public opinion.
- Shishir comparing remedies: “The law doesn’t require us to do this, but we chose to do a lot more… Content ID was launched to support creators.”
Practical implications & recommendations
For creators
- Monitor platform experiments that cite or “model” your work; ask platforms for transparency about how names/works are used.
- Consider building direct-to-fan offerings (newsletters, paid agents, memberships) where you control pricing and consent.
- If experimenting with agentization, be prepared to codify your editorial style: write guidelines, set triggers, and iteratively review suggestions (a minimal review-loop sketch follows this list).
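As a concrete illustration of that review loop, here is a minimal sketch: the suggestion source, the review() rule, and the acceptance threshold are all assumptions made for the example, not anything specified in the episode.

```python
from collections import deque

ACCEPT_THRESHOLD = 0.8   # assumed bar before trusting the agent with less oversight
WINDOW = 20              # judge only the most recent reviews

recent = deque(maxlen=WINDOW)

def review(suggestion: str) -> bool:
    """Stand-in for the human editor's judgment: in practice, you read the
    suggestion against your written style guidelines."""
    return "passive voice" not in suggestion  # toy rule for the example

def ready_to_relax_review() -> bool:
    """Gate looser oversight on a rolling acceptance rate."""
    return len(recent) == WINDOW and sum(recent) / WINDOW >= ACCEPT_THRESHOLD

# Iterative loop: score each suggestion, keep refining until the bar is met.
for draft in ["Tight lede, active voice.", "It was decided that passive voice is fine."]:
    accepted = review(draft)
    recent.append(accepted)
    print(f"{'kept' if accepted else 'rejected'}: {draft!r}")
print("ready to relax review:", ready_to_relax_review())
```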
For platforms / product teams
- Avoid deploying features that use creator names/identities without explicit consent—legal and reputational risk is high.
- Provide clear opt-in/opt-out controls and transparent attribution with links and provenance; better yet, make creator opt-in the default for monetizable features (see the consent-record sketch after this list).
- Consider creator-first monetization (rev share, subscriptions, Content ID–style tooling) rather than relying only on minimal legal defenses.
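To make the opt-in and provenance recommendation concrete, here is a minimal sketch of what a per-creator consent record might look like; the schema, field names, and gating function are assumptions for illustration, not anything described in the episode.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class CreatorConsentRecord:
    """Hypothetical per-creator record: consent is explicit and off by default."""
    creator_name: str
    source_urls: list[str]            # provenance: where the cited work lives
    opted_in: bool = False            # opt-in, not opt-out, is the default gate
    opted_in_at: Optional[datetime] = None
    revenue_share: float = 0.70       # creator's cut, per the 70/30 model discussed

    def grant_consent(self) -> None:
        self.opted_in = True
        self.opted_in_at = datetime.now(timezone.utc)

def may_use_in_feature(record: CreatorConsentRecord) -> bool:
    """A monetizable feature should refuse to run without explicit opt-in."""
    return record.opted_in

record = CreatorConsentRecord(
    creator_name="Jane Example",
    source_urls=["https://example.com/janes-column"],
)
assert not may_use_in_feature(record)   # no consent yet, so the feature stays off
record.grant_consent()
assert may_use_in_feature(record)
```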
For policymakers / industry
- Input/training vs output issues remain unsettled legally; expect litigation to shape costs and model architectures.
- Standards for name/likeness commercial use in AI products need clarification across jurisdictions.
What to watch next
- Superhuman Go product launches in the coming months—look for agent templates, creator onboarding UX, and monetization flows.
- Outcome and court developments in class-action lawsuits and other AI copyright/likeness litigation that could reshape model/data economics.
- Platform responses and tooling (e.g., attribution systems, payments, likeness detection/content ID analogs for AI) aimed at creator protection and revenue sharing.
Final note
The episode captures a live tension at the heart of today’s AI product debates: rapid product experimentation and model capability vs. creator consent, attribution, and economic fairness. Shishir positioned Superhuman as trying to build a platform that creators can join and monetize; Nilay pressed on consent, attribution, and whether creators’ past work has been unfairly leveraged. The conversation is a useful case study of how product decisions, legal ambiguity, and public trust interact in an AI-native world.
