980: AI Coding Explained

Summary of 980: AI Coding Explained

by Wes Bos & Scott Tolinski - Full Stack JavaScript Web Developers

52m · February 18, 2026

Overview of 980: AI Coding Explained

Wes Bos and Scott Tolinski unpack the current AI-coding landscape: editors and interfaces, models, agents, skills, slash commands, hooks/plugins, MCPs (Model Context Protocol servers), and practical trade-offs. The episode is a practical tour for developers who want to understand which files and conventions matter, how to structure context, and how to use these features effectively without wasting time or money.

Key topics covered

  • The types of AI-coding interfaces: editor extensions (VS Code/Copilot, Cursor, Zed), terminal TUIs/CLIs (Claude Code, Charm Crush), and full GUIs/desktop apps (OpenCode desktop, Codex app).
  • Models matter: different models behave differently and you should choose models by task (creative vs precise).
  • Agents, subagents, and agents.md: what they are, how they differ, and how to use them.
  • Skills vs agents vs slash commands: when to use each.
  • Hooks, plugins, and MCPs: automating events, bundling behavior, and connecting AI to external services.
  • Practical trade-offs: context stuffing, cost, speed, and how to get 80/20 value without over-architecting.

Tools & interfaces (what they used / recommended)

  • Editor-based GUIs: OpenCode (desktop & web), Cursor, VS Code (Copilot chat), Zed
  • TUIs/CLIs: Claude Code, Charm Crush
  • Desktop/chat-like apps: Codex app, OpenCode desktop
  • Other mentions: Pi, Charm, Kiro
  • Advice: prefer tools that let you switch models (avoid tools locked to a single vendor/model if you want flexibility).

Models — which to use for what

  • Codex 5: recommended by hosts for precise, detailed JavaScript work (more exact, reliable).
  • Opus 4.6: preferred for exploratory, creative, brainstorming tasks (more “creative” output).
  • GPT-5.3 Codex, Grok, and other models are worth experimenting with; models leave "tells" (stylistic artifacts) and behave differently.
  • Tip: model speed and cost vary (e.g., Anthropic “fast mode” can be faster but much more expensive). Tool implementation can affect latency even for the same model.

Agents, subagents, and agents.md

  • Agent (high level): the AI that modifies files and performs tasks on your repo.
  • Subagent: a spawned agent that runs in parallel on a specific subtask (e.g., reviewing a Svelte file) while you continue interacting with the main agent.
  • agents.md (or tool equivalents like Cursor rules or copilot-instructions.md): a repo-level markdown file that primes the AI each session with essential context (project tech stack, constraints, conventions).
    • Don’t overstuff agents.md — too much context leads to slower/muddier results. Keep it minimal and essential.
  • Agents can be configured with:
    • default context / instructions
    • scoped tool access
    • specific model bindings
    • output style preferences (e.g., terse / bullet lists)
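
As an illustration of the "minimal and essential" advice, an agents.md along these lines might look like the following. The stack and rules here are hypothetical placeholders, not taken from the episode:

```markdown
# Project context

- Stack: SvelteKit + TypeScript, pnpm, Vitest (example stack, adjust to yours)
- Run `pnpm lint` and `pnpm test` before finishing a task.
- Prefer existing helpers in `src/lib` over writing new utilities.
- Don't touch generated files or anything under `/migrations`.
```

A handful of lines like this gives every session the essentials without the context bloat the hosts warn about.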

Skills, slash commands, and when to use which

  • Skills: modular capabilities/instructions that extend AI behavior for specific tasks. The AI will only pull them into context when needed (helps reduce context bloat).
    • Good for one-off or clearly-defined tasks (e.g., how to run Remotion, accessibility checks).
    • Example: “Superpowers” — an entire workflow of skills (TDD, git worktrees, code review). It worked but was expensive and produced mediocre output in the example.
  • Slash commands: function-like, argument-capable prompts you invoke manually (ideal for repeatable, interactive prompts like “scaffold page X”).
    • Good for quick, controlled actions; the hosts like slash commands a lot since they tie in well with keyboard shortcuts and a Stream Deck.
  • When to choose:
    • Use a skill for a single, repeatable job that doesn’t need continuous conversational state.
    • Use an agent when you expect iterative back-and-forth, audits + fixes, or a longer-running workflow.
    • You can combine: agents can call skills when appropriate.
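
To make the slash-command idea concrete: in Claude Code, a slash command is just a markdown file under `.claude/commands/`, with `$ARGUMENTS` standing in for whatever you pass when invoking it. This `/scaffold-page` command is a hypothetical sketch of the "scaffold page X" example:

```markdown
<!-- .claude/commands/scaffold-page.md (hypothetical example) -->
Scaffold a new page named $ARGUMENTS:

1. Create the route file following the conventions of existing pages.
2. Add a matching test file.
3. Run the linter and fix anything it flags.
```

You would invoke it as `/scaffold-page settings`, and the prompt runs with `settings` substituted in.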

Hooks, plugins, and MCPs

  • Hooks: event-driven scripts (on save, pre/post tool use, etc.) used for linting, formatting, running TypeScript checks. Useful to protect code quality when agents modify files.
  • Plugins: bundle agents, skills, hooks, and MCP servers; shareable across teams. Example: Svelte OpenCode plugin that bundles Svelte-specific tooling and subagents.
  • MCP (Model Context Protocol): a standard that lets AI tools access external systems through host-provided servers (docs lookup, Playwright for browser automation, Sentry integration, etc.).
    • Use MCPs to integrate context (logs, docs, tests) and to allow AI to interact with external services in controlled ways.
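
As a sketch of the hook idea: in Claude Code, hooks are declared in `.claude/settings.json` and can run a shell command when an event fires. Assuming a TypeScript project, this (illustrative) config runs a type check after the agent edits or writes a file:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx tsc --noEmit" }
        ]
      }
    ]
  }
}
```

If the check fails, the agent sees the error output and can fix its own mistake before you ever review the diff.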

Workflows / practical tips & pitfalls

  • Interfaces:
    • GUIs are preferred for higher-information tasks (image dragging, nice diffs, integrated terminal). TUIs/CLIs can be great for quick CLI fixes.
    • Many people use a split workflow: AI GUI on one side with your primary editor (Zed/VS Code) on the other for fast edits & review.
  • Tab completion vs chat/agents:
    • Tab completion is great for small edits, especially CSS and design-y quick fixes.
    • Chat/agents are better for larger, logic-heavy changes and multi-file refactors.
  • Avoid context stuffing: adding massive project dumps to agents.md or a session degrades quality and increases cost.
  • Don’t over-engineer: agents + skills + hooks + MCPs are powerful but can be expensive and time-consuming to tune. The 80/20 often gets you most benefit with minimal setup.
  • Monitor cost: automated long-running workflows (like the example that ran for 3.5 hours, $26) can yield underwhelming results. Start small and iterate.
  • Tool/model independence: favor tools that let you change the underlying model to adapt as models evolve.
  • Cloud agents: running agents in the cloud lets work continue while your local machine sleeps — very handy for long-running jobs and mobile inspection.

Notable quotes / concise insights

  • “Agent is basically the AI going off and changing the files and doing stuff for you.”
  • “Agents.md is the place for the things that should be there in every session — but keep it minimal.”
  • Tab completion = spot edits (especially CSS). Agents/chat = flow/logic/back-and-forth work.
  • “If you want to switch models, your tool should sit in front of the model choice.”

Actionable checklist (what to try next)

  • Pick your base tool (editor plugin vs GUI vs CLI) based on how you like to interact: GUI for rich context and diffs; CLI/TUI for quick terminal tasks.
  • Start with a short agents.md: stack, main conventions, do/don’t rules (no more than the essentials).
  • Use a Svelte/React plugin or MCP when working in framework-specific code for better validation and outputs.
  • Create a few slash commands for common scaffolding tasks (add route, run tests, lint auto-fix).
  • Add hooks for pre/post changes to run linters, tsc, or tests to prevent AI-generated slop.
  • Experiment with two models: one for precise code (Codex-5 style) and one for creative brainstorming (Opus-style) — measure which suits which task.
  • Use MCPs to surface docs, logs, or run browser automation when needed (Playwright).
  • Monitor cost and iterate — stop long-running agents early if output is poor.
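
To try the Playwright item from the checklist: in Claude Code, project-level MCP servers can be declared in a `.mcp.json` file. A sketch, assuming the `@playwright/mcp` package:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

With this in place, the agent can drive a real browser to verify the pages it just changed.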

Recommendations & resources mentioned

  • Tools: OpenCode (desktop/web), Cursor, VS Code with Copilot chat, Claude Code, Charm Crush, Kiro
  • Models: Codex 5 (precise JS), Opus 4.6 (creative), GPT-5.3 Codex, Grok, Anthropic variants
  • Skills/collections: skills.sh, “Superpowers” repo (powerful but can be costly)
  • Integrations: Sentry (for production error visibility)
  • Tip: try the latest models again if you were disappointed last year — outputs have improved and feel different.

Final takeaway

AI-coding tooling is rapidly evolving into composable parts: models, agents, skills, hooks, plugins, and MCPs. Learn the distinctions, start minimal, and combine features only when they clearly add ROI. Experiment with a small set of good habits (clean agents.md, a few slash-commands, hooks for quality) and expand as the tools and models prove their value.

Thanks for reading — try one small experiment this week (e.g., add a terse agents.md + one slash command) and measure the time/cost saved.