988: Cloudflare’s Next.js Slop Fork

by Wes Bos & Scott Tolinski - Full Stack JavaScript Web Developers

47m · March 18, 2026

Overview

This episode is an interview with Steve Faulkner (Director of Engineering — Workers at Cloudflare) about VNext: Cloudflare’s “slop fork” — a Vite-based reimplementation/port of Next.js created largely with LLM assistance. The conversation covers why the project was attempted, how Steve used AI agents and tooling to build it, practical lessons about AI-assisted engineering, quality/security trade-offs, and what this implies for framework portability and the future of software development.

Key takeaways

  • VNext is Cloudflare’s port of Next.js to Vite; the community nicknamed it a “slop fork” (Steve embraced the term).
  • The project was driven by practical hosting needs plus experimentation with what modern LLMs can do for large engineering tasks.
  • Most of the heavy lifting was done interactively with LLM agents (Claude Opus 4.5, later 4.6, driven through OpenCode), combined with fast tooling (Vite, Vitest, Playwright).
  • Steve’s process emphasized test-driven compatibility: porting relevant Next tests into a Vite/Vitest setup and iterating until behavior matched.
  • AI can accelerate large, repetitive migration and porting tasks, but guardrails (tests, linting, reviews) and human oversight remain critical.
  • Security was, predictably, imperfect early on; Cloudflare received and triaged vulnerability reports and used AI agents to find and fix issues iteratively.

What VNext is (and why they built it)

  • VNext = a Vite-native port of the Next.js runtime and developer experience built to run well on Cloudflare’s Workers/edge.
  • Motivation: Next has many Node/Vercel-specific assumptions that complicate hosting on other runtimes; Cloudflare explored multiple options (including Open Next) and experimented with reimplementing compatibility via AI to shorten the implementation time.
  • Practical target: give users Next compatibility in scenarios where a Vite-first approach and Cloudflare’s edge platform are advantageous.

How it was built — process and tooling

High-level process

  • Steve started with a plan (markdown files) and iteratively guided the LLM, focusing first on "porting the tests" rather than trying to run the entire Next test suite unchanged.
  • He selectively chose which Next tests mattered (Next’s test suite is ~8k tests) and migrated those tests into a Vitest + Playwright environment.
  • The work was iterative: small, quick loops for fixes plus longer deep-dive sessions; he used sessions that sometimes ran tasks overnight.

Models, apps, and agents

  • Primary model: Claude Opus, run through OpenCode (started on Opus 4.5, migrated to 4.6 mid-project).
  • Client tooling: OpenCode desktop app (not terminal UI), VS Code for editing.
  • Helpful external agents/skills: Agent Browser (Vercel’s browser-automation wrapper), Context7 (Upstash’s library-docs index), and Exa search for external code/context lookups.
  • Session management: an agents.md for agent instructions and a discoveries.md for recurring ecosystem issues and accumulated knowledge.

Build/test stack

  • Test harness: Vitest + Playwright for behavioral testing, plus a playground for debugging (App Router playground).
  • Linting/type guardrails: the TypeScript language server, Oxc tools (oxlint, oxfmt), and tsgo (the Go port of the TypeScript compiler) where applicable; fast tooling was prioritized for quick feedback loops.
  • Vite specifics: VNext builds on the official @cloudflare/vite-plugin; it used Vite v7 by default, with an eye toward v8 performance improvements.
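
A minimal sketch of the Vite setup described above, assuming the published `@cloudflare/vite-plugin` package (options omitted; check the plugin docs for the current API):

```typescript
// vite.config.ts — illustrative config fragment, not taken from the VNext repo.
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  // The Cloudflare plugin runs server code in the Workers runtime during
  // dev and build, which is what makes a Vite-first Next port viable there.
  plugins: [cloudflare()],
});
```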

AI workflow lessons & best practices

  • Use tests, linting, and small contained tasks as guardrails; allow occasional “free pass” exploratory runs for redesign suggestions.
  • Iteration and correction are powerful: agents improve rapidly when given corrective feedback and updated context, and asking an agent to redo work is far cheaper than asking a human for a rewrite.
  • Save useful sessions by compacting them into markdown files to preserve context and lessons.
  • Use agents to automate browser-driven debugging (Agent Browser plus Playwright screenshots), but watch out for tool limits: large screenshots can corrupt agent sessions.
  • Keep a discoveries.md to track ecosystem incompatibilities and recurrent issues to avoid re-solving the same problems.

Code quality, maintainability & security

  • Generated code issues: LLM-generated code sometimes used large template-string code generation for client bundles (hard to lint/typecheck), and produced verbose or unidiomatic code — acceptable for compatibility experiments but requires refactoring for long-term maintainability.
  • Examples of problematic outputs: LLM-created direct SQL queries bypassing an ORM, unexpected dynamic import loops, and template-string based client code that is hard to maintain or for LLMs to reason about later.
  • Security: early security reports and triage happened (including external bug bounties). Cloudflare used LLMs to triage/fix vulnerabilities and also built AI agents that proactively found issues in the repo.
  • Maintenance cadence: many rapid releases/fixes followed launch (dozens of updates in the first weeks).
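
To make the template-string concern concrete, a hypothetical contrast (not code from VNext): logic generated inside a template string is invisible to the typechecker and linter, whereas emitting only data and keeping the logic in a real, imported module stays checkable. Both helpers below are illustrative names.

```typescript
// Anti-pattern: client logic generated as a template string. The typechecker
// and linter see only an opaque string, so bugs inside it go undetected.
function emitClientStub(routeId: string): string {
  return `
    export function hydrate() {
      window.__ROUTE__ = "${routeId}";
    }
  `;
}

// Preferable: generate a thin, data-only file and keep the hydrate logic in a
// normal typechecked module that imports this data.
function emitRouteData(routeId: string): string {
  return `export const routeId = ${JSON.stringify(routeId)};\n`;
}

console.log(emitRouteData("blog/[slug]"));
```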

Broader implications

  • Migration barrier drops: with AI, porting apps between frameworks can be quick and inexpensive — Steve suggested that if you dislike Next, it may be cheaper (tokens/time) to migrate to a different framework using LLMs than to fight the framework’s limits.
  • New incentives for framework design: frameworks may evolve to be “AI-authorable” (clearer guardrails, typed/simpler patterns) and we may see AI-first frameworks or languages — likely with stricter typing/guardrails (Steve mentions Rust/Go-like characteristics).
  • Domains beyond dev: medicine is a likely next big domain to be reshaped by LLMs, albeit with heavy regulation and safety concerns.
  • Ethical/strategic note: Steve is both excited and cautious — large transformative benefits exist alongside real risks.

Practical recommendations / Action items

  • If you want stability: use Open Next (battle-tested recommendation from Cloudflare).
  • If you want to experiment with VNext: point an LLM/agent at the repo and ask it to migrate your app (Steve encouraged community testing, PRs, and bug reports).
  • Use guardrails: strong test suites, linters, formatters, and fast TypeScript tooling (Vitest, Oxc tools) to reduce regressions from AI outputs.
  • Keep logs and compaction files (discoveries.md, agents.md) to record recurring ecosystem issues and agent instructions.
  • Review AI-generated outputs thoroughly — run automated tests and do human reviews for critical production code.

Notable quotes

  • On the term “slop fork”: “When I saw slop fork I almost dropped my phone — this is the greatest term ever coined.”
  • On AI-human interplay: “AI is an accelerator — it helps if you know what you need to do. If you don’t, it can amplify the wrong direction. The human still needs to set direction.”

How to try / where to give feedback

  • Try VNext by pointing an LLM/agent at the VNext repo and instructing it to migrate your project — Steve asked for feedback and PRs; expect active maintenance and rapid fixes.
  • If you want production-ready stability now, use Open Next.
  • Cloudflare encourages community reports, PRs, and security disclosures (they’re actively triaging/fixing).

Closing notes / plugs from the episode

  • Steve’s short plugs: try VNext; try Cloudflare Workers and the Cloudflare developer platform.
  • Cultural note: Steve frames this era as a rapid technological revolution (exciting but compressive), urging technologists to shepherd the tech responsibly.