Overview of How Claude Code Claude Codes
This Vergecast episode (host: David Pierce) marks the one-year anniversary of Claude Code (Anthropic’s coding/agent product) and covers two main conversations: an in-depth interview with Claude Code creator Boris Cherny about the product’s evolution and where “vibe coding” is headed, and a privacy-focused discussion with Hayden Field about how to think about giving AI access to your data and devices. The episode closes with a Vergecast hotline question about whether to upgrade a phone mid-cycle ahead of looming RAM-driven component price increases.
Key segments
Interview — Boris Cherny (Anthropic) on Claude Code and CoWork
- What Claude Code is now: an LLM-driven developer tool/agent that writes, tests, and debugs code and can interact with browsers/APIs to complete tasks. It is increasingly reliable for both developers and non-developers.
- Sudden capability jump: Boris describes a step-change around the Opus 4.5/4.6 model releases when Claude Code went from assisting on portions of work to producing and validating whole features reliably (he went from writing some code to “not writing code anymore”).
- Users and surfaces:
- Originally a terminal/engineer-focused product, Claude Code is now used broadly across engineers, product managers, data scientists, and enterprise teams at Spotify, Shopify, Netflix, NVIDIA, Snowflake, etc.
- Anthropic built multiple interfaces (terminal, IDE extensions, iOS/Android apps, desktop and web apps) and created CoWork to serve non-developer users better.
- Product design and tooling strategy:
- “Code → tool use → computer use”: models progress from writing code, to calling tools/APIs, to directly controlling computer/browser environments, so they can complete tasks without massive upfront context.
- Agents = LLMs that can use tools. Tool access lets the model fetch context on demand instead of needing everything in the prompt (see the sketch after this list).
- Anthropic focuses on safety, security, and enterprise-grade controls (VM sandboxes, folder permissions, deletion protection).
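To make the “agents = LLMs that can use tools” idea concrete, here is a minimal sketch of a tool-use loop built on Anthropic’s public Messages API (Python SDK). The read_file tool, model name, and prompt are illustrative assumptions; this is not Claude Code’s internal implementation.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# One illustrative tool: let the model read a local text file on demand
tools = [{
    "name": "read_file",
    "description": "Return the contents of a local text file.",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

messages = [{"role": "user", "content": "Summarize notes.txt for me."}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # model name is illustrative
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        break  # the model answered directly; no more tool calls
    # Echo the assistant turn, run each requested tool, and feed results back
    messages.append({"role": "assistant", "content": response.content})
    results = []
    for block in response.content:
        if block.type == "tool_use" and block.name == "read_file":
            results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": read_file(**block.input),
            })
    messages.append({"role": "user", "content": results})

print(response.content[0].text)  # final text answer
```

The point of the loop: the model asks for context (here, a file) only when it needs it, rather than requiring everything to be stuffed into the initial prompt.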
- Surprising uses / real examples:
- Migrating notes into Obsidian, organizing screenshots, recovering photos, genome analysis, navigating government websites (buying clamming licenses), paying parking tickets, helping with taxes (with caveats).
- Third-party studies cited in the episode suggest Claude Code now writes a substantial share of commits worldwide (private commits are not counted).
- UI and future:
- Current UI is still chat/agent-like but Anthropic is experimenting; Boris says the “UI of the future has not been discovered yet.”
- Proactivity is a promising area but must avoid intrusive behavior.
Practical product points from Boris:
- Claude Code can be configured heavily; it can also reconfigure itself on request.
- CoWork uses the same agent back-end but presents a safer, more user-friendly surface for non-engineers.
- Built-in safeguards: folder-level access, virtual machine sandbox, runtime classifiers, and defenses against prompt injection (improved with model alignment).
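As a generic illustration of the folder-level access idea (our sketch, not Anthropic’s actual implementation; the FolderScope class and paths are hypothetical), a guard like this normalizes paths before comparing them so symlinks and “..” tricks cannot escape the granted folders:

```python
from pathlib import Path

class FolderScope:
    """Hypothetical guard: allow file access only inside user-granted folders."""

    def __init__(self, allowed_roots: list[str]):
        # Expand ~ and resolve symlinks up front so comparisons are canonical
        self.roots = [Path(r).expanduser().resolve() for r in allowed_roots]

    def check(self, path: str) -> Path:
        p = Path(path).expanduser().resolve()  # normalize before comparing
        if not any(p.is_relative_to(root) for root in self.roots):
            raise PermissionError(f"{p} is outside the granted folders")
        return p

scope = FolderScope(["~/project"])          # hypothetical user grant
scope.check("~/project/notes.txt")          # allowed
try:
    scope.check("~/.ssh/id_ed25519")        # denied: outside the granted folder
except PermissionError as e:
    print(e)
```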
Privacy & security — Hayden Field (The Verge)
- Big-picture advice: treat AI tools like any service that requests a lot of data — but with a sharper eye because many players are newer, less time-tested, and operating without settled regulation.
- Personal risk model: think about your own tolerance and the sensitivity of the data before connecting mail, calendar, files, or installing agents that can access your machine.
- Enterprise vs consumer:
- Enterprise deployments (paid, contractually protected, often HIPAA/enterprise-compliant) provide stronger guarantees than consumer/free offerings.
- Free products often use user data more aggressively (“if you’re not paying, you’re the product”).
- Terms, fine print, and ambiguity:
- Companies may claim “we don’t train on your integrations,” then add clauses about content you copy/paste or consumer product exceptions. Read the specifics — policies can and do change.
- Practical privacy rules:
- Don’t give blanket access to everything if it’s sensitive.
- Prefer paying / enterprise plans when handling critical data.
- Expect to iterate: double-check outputs (especially for consequential items like taxes/legal/medical), and move from full checking to spot-checking as confidence grows (see the sketch after this list).
- Consider distributing data across specialized agents (e.g., use Google’s assistant for Gmail/calendar if you already trust Google) rather than one all-powerful assistant.
- Analogy: treat AI access decisions like seatbelts vs “YOLO” — many people choose convenience but do so knowingly.
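One way to operationalize that full-check-to-spot-check progression (our heuristic, not something prescribed in the episode; the function name and the 10x multiplier are illustrative):

```python
import random

def select_for_review(outputs: list, error_rate: float, floor: float = 0.05) -> list:
    """Pick a random subset of agent outputs for manual spot-checking.

    Review rate scales with the observed error rate (assumed to be
    tracked elsewhere) but never drops below a small floor.
    """
    rate = max(min(10 * error_rate, 1.0), floor)  # heuristic: review ~10x the error rate
    return [o for o in outputs if random.random() < rate]

# e.g., with a 2% observed error rate, roughly 20% of outputs get a manual check
to_review = select_for_review(["draft 1", "draft 2", "draft 3"], error_rate=0.02)
```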
Hotline — Phone upgrade and RAM shortage (host + Allison Johnson, senior phone reviewer)
- Question: Should someone with an iPhone 15 Pro Max upgrade mid-cycle to avoid future RAM-driven price hikes?
- Advice:
- If your current phone still meets your needs (likely for another 2+ years), waiting is sensible — phones are “boring” right now and incremental upgrades are small.
- If you already needed an upgrade soon (battery/repair/failure), consider buying now — component shortages mean buying when you actually need it is reasonable.
- People who buy flagship yearly are more likely to consider upgrading sooner; casual upgraders can often wait.
- Manufacturers (esp. Apple) typically resist direct price hikes; they are more likely to adjust configurations instead (e.g., dropping the cheapest variants).
Notable quotes
- Boris Cherny: “I don’t write any code anymore. Claude Code: 100% of my coding.”
- Boris on agent progression: “Code → tool use → computer use.”
- Hayden Field: “Treat AI tools the exact same as you would treat any other service that was requesting a lot of data from you — and maybe with a sharper eye.”
Practical takeaways & recommendations (actionable)
- If you want to try vibe coding with Claude Code / CoWork:
- Start small: use it for low-risk busywork (organizing screenshots, consolidating notes, drafting emails, canceling subscriptions).
- Keep manual checks for consequential tasks (taxes, legal, medical).
- Use CoWork’s safer surfaces for non-developer tasks rather than giving terminal-level access.
- For personal data safety:
- Decide your risk tolerance: don’t add sensitive mail/calendar/files unless you understand the product’s training/retention policies and trust the vendor.
- Prefer paid/enterprise plans for sensitive data (they typically promise not to use content for training and have contractual controls).
- Use folder-level access and sandboxed environments where possible; audit permissions regularly.
- Be cautious about copy/pasting sensitive info into consumer chat sessions (product terms sometimes permit training on pasted content).
- Consider distributing responsibilities across ecosystem-specific assistants (e.g., Google for Gmail/calendar) rather than one cross-platform agent if that matches your threat model.
- For product teams / designers:
- Ship conservative defaults: clear transparency, easy revocation of access, visible provenance (which agent/identity performed actions), and sandboxing (see the sketch after this list).
- Test proactivity boundaries carefully — users dislike unexpected actions.
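To make “conservative defaults” concrete, here is a hypothetical default policy object; every field name and value is our invention, for illustration only:

```python
# Hypothetical conservative defaults for an agent product (illustrative only)
DEFAULT_AGENT_POLICY = {
    "granted_folders": [],          # no file access until the user opts in
    "network_access": "ask",        # prompt before each outbound request
    "proactive_actions": False,     # never act without an explicit request
    "action_log": "visible",        # attribute every action to the agent identity
    "revocation": "one_click",      # access can be withdrawn at any time
    "sandbox": "vm",                # run tool execution in an isolated VM
}
```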
Who this episode is best for
- Developers and product people exploring LLM-based coding workflows and agents.
- Non-technical users curious whether an AI assistant can actually automate desktop busywork.
- Anyone weighing privacy trade-offs of connecting email/files to consumer AI services.
- Gadget buyers concerned about component-driven price changes.
Quick “gotchas” listeners should remember
- Model improvements can be sudden (a single model release can change capabilities substantially).
- “Not training on your integrations” can coexist with clauses that allow training from content you manually paste or other exceptions — read terms.
- Enterprise contracts give stronger protection than consumer/free products.
- Always verify outputs for high-stakes tasks; spot-checking will likely replace full manual verification over time as confidence grows.
If you want to experiment: try one low-risk automation (e.g., have CoWork organize a folder or draft replies to your top 3 emails), audit the outputs manually, then iterate on permission scope based on comfort.
