Overview of The rise of the professional vibe coder (a new AI-era job) | Lazar Jovanovic
Host Lenny Rachitsky interviews Lazar Jovanovic — Lovable’s first official “Vibe Coding Engineer.” Lazar explains what a professional vibe coder does, why non‑technical people can excel in the role, and shares concrete workflows, files/templates, and frameworks he uses to build production-ready apps fast using AI tools (Lovable, Cursor, Claude Code, ChatGPT/Codex, etc.). Key themes: AI is an amplifier, clarity > raw coding, the new value is judgment/taste, and practical techniques to avoid “AI slop” and unblock builds.
Key takeaways
- Vibe coding: using AI agents to build internal and external production apps quickly — a role that sits between product, design, and engineering.
- Primary skills to invest in: clarity (how you ask), judgment/taste (design, UX, copy), and exposure to excellent work (to develop taste).
- Coding syntax matters less; the new scarce skills are planning, specifying, and steering AI agents.
- Practical workflow: run multiple parallel builds to explore, pick a winner, then spend dedicated time building PRDs + tasks for the agent to execute reliably.
- Non‑technical backgrounds can be an advantage: less constrained assumptions, more creative experimentation.
- Engineers won’t disappear — elite engineers will be required for scaling, infra, security, maintenance.
What a Vibe Coder does (day‑to‑day)
- Owns idea → prototype → production using AI-first builders.
- Builds both external product features (e.g., templates, Shopify integration, merch store) and internal tools (feature-adoption dashboards, enterprise integrations, community tools).
- Acts as idea-to-shipping engine: clarifies requirements, directs AI agents, and validates outputs.
- Reports loosely to growth / works cross-functionally across departments.
Lazar’s core principles & metaphors
- Genie / Aladdin metaphor: AI has a limited context (token window) and won’t “read your mind.” Be specific; otherwise you get a literal and unusable result.
- Two limits:
- Machine limit: token/context window — the agent can only consider limited input at a time.
- Human limit: vague prompts and assumptions — be explicit, include references and examples.
- “Coding = calligraphy”: hand-coding as an artisanal skill; most output will be produced by agents.
- “AI is an amplifier”: it magnifies both skill and mistakes — if you lack judgment, you’ll produce garbage faster.
Lazar’s practical workflow (how he builds reliably)
1. Exploration (parallel experiments)
- Open multiple projects/tabs (he runs ~5+ in parallel).
- Approach each differently: brain dump (voice), refined text prompt, design screenshot/reference (Mobbin, Dribbble), and actual code snippets/templates.
- Compare outputs to identify the “winner” direction.
2. Spend time planning (80% planning / 20% execution)
- Once a direction is chosen, spend a concentrated day creating PRDs and docs to steer the agent instead of iterative trial-and-error.
- Create the following files (sources of truth) and upload them to the project so the agent reads them:
- masterplan.md — high level intent, who it’s for, desired feeling.
- implementation-plan.md — sequencing and order of build steps.
- design-guidelines.md — style details (colors, fonts, component constraints, example snippets).
- user-journeys.md — flows and state transitions (what happens after registration, etc.).
- tasks.md (or plan.md) — actionable checklist of tasks/subtasks for the agent to execute.
- rules.md / agent.md — agent behavior rules (what to read first, how to behave, test instructions).
- Make the agent read these files before taking action.
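For a sense of shape, here is a hypothetical rules.md in the spirit described above (the filenames match Lazar's list, but the contents are illustrative, not his actual file):

```markdown
# rules.md
- Before any task: read masterplan.md, design-guidelines.md, and tasks.md.
- Work on exactly one task from tasks.md at a time; check it off when done.
- After each task, report: what changed, which files, and how I should test it.
- Never change design tokens outside what design-guidelines.md allows.
- If a request is ambiguous, ask one clarifying question instead of guessing.
```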
3. Delegate and monitor
- Treat the agent as a teammate: “do the next task, then report what you did and how to test.”
- Read agent output (not just code) — that’s your source of truth for status and correctness.
- Regularly update docs to keep the agent’s context dynamic.
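The delegation loop above can be phrased as a standing prompt. This wording is a sketch in the spirit of the episode, not a quoted prompt:

```markdown
Read rules.md first. Then:
1. Open tasks.md and pick the first unchecked task.
2. Implement only that task — nothing else.
3. Reply with: what you changed, which files, and exact steps for me to test it.
4. Wait for my confirmation before starting the next task.
```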
4. Maintain small context windows
- Queue single tasks for the agent rather than huge multipronged asks (keeps token consumption low and accuracy higher).
- When debugging, point the agent exactly at the file/edge function and provide logs so it uses tokens on thinking/execution, not re-reading the whole codebase.
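A pointed debugging prompt, following the small-context advice above (the file path and logs here are invented for illustration):

```markdown
The bug is in supabase/functions/send-invite/index.ts.
Here is the console output from the last failed run:
[paste logs]
Do not re-read the rest of the codebase. Explain the likely cause,
then propose a minimal fix to this one function.
```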
Debugging framework — “4x4” (how Lazar unblocks builds)
Attempt each of these once, in order:
- Use the agent’s built-in “try to fix” — many small issues are auto-resolved.
- Add observability: have the agent add console logs / debug outputs; run in preview sandbox and capture logs.
- External AI diagnostics: export repo (GitHub), feed to Codex / ChatGPT / diagnostic model or use RepoMix to compress repo and ask for analysis. Use these tools for diagnosis; be cautious about automatic edits.
- Revert/reflect: if the problem is your own prompt or assumptions, revert to an earlier version, take a breath, and re-prompt with better context. Then incorporate what you learned into rules.md so the agent behaves better next time.
After fix: ask the agent to summarize what could have been asked differently — then save that guidance into the project rules to avoid repeat mistakes.
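RepoMix packs a repo into a single file that an external model can analyze. As a rough DIY sketch of the same idea (this is not RepoMix's actual implementation; the extension and directory lists are assumptions to tweak per project), a few lines of Python can concatenate a project's source files into one paste-able blob:

```python
from pathlib import Path

# File types worth including; adjust per project (an assumption, not a RepoMix default).
SOURCE_EXTS = {".ts", ".tsx", ".js", ".css", ".md", ".json"}
# Directories that only add noise and burn tokens.
SKIP_DIRS = {"node_modules", ".git", "dist", "build"}

def pack_repo(root: str) -> str:
    """Concatenate source files under `root` into one annotated text blob."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if any(d in path.parts for d in SKIP_DIRS):
            continue  # skip vendored/generated trees entirely
        if path.is_file() and path.suffix in SOURCE_EXTS:
            header = f"===== {path.relative_to(root)} ====="
            parts.append(f"{header}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(pack_repo("."))
```

Paste the result into ChatGPT/Codex with a diagnosis-only request ("explain the failure, don't edit"), which matches the caution above about automatic edits.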
Tips & tactical tricks (high ROI)
- Build multiple versions in parallel — faster exploration, helps find the right direction and saves credits/time in the long run.
- Give the agent code snippets when you want pixel-perfect or mechanical fidelity; code is often interpreted more precisely than natural language.
- Use references: screenshots, Figma components, Dribbble/Mobbin examples — attach them to the prompt.
- Use markdown (.md) files for plan/docs — agents read MD well.
- Spend more time on “exposure time” — consume great design, UX, and copy to level up taste.
- Focus on emotional design (fonts, microcopy, motion) — these are harder for AI to get right by default and are high leverage.
- To demonstrate skill (e.g., when applying for a role), ship small Lovable apps (lovable.app) to show capability rather than relying on a resume.
Career guidance & what skills to invest in
- High-value skills going forward:
- Judgment, taste, and emotional design (UX, copy, fonts).
- Human-to-human skills and domain expertise (sales, community, enterprise workflows).
- Problem decomposition, specification writing, and PRD craftsmanship.
- Roles that will remain essential:
- Elite engineers for infra, scaling, security, and long-term maintainability.
- Designers and PMs who can define what “magical” looks like.
- Nontechnical backgrounds can be advantages — less biased constraints, more creative experimentation.
- To become a professional vibe coder: build in public, make and share projects, demonstrate outcomes (deployable apps), and apply to companies using these tools.
Actionable checklist (what to do next)
- Try a 5‑tab experiment: build 3–5 prototypes in parallel (brain dump, refined prompt, design reference, code template).
- Pick a winner and spend a day writing:
- masterplan.md
- implementation-plan.md
- design-guidelines.md
- user-journeys.md
- tasks.md
- rules.md / agent.md
- Run the agent: “Execute next task from tasks.md; report what you did and how I should test.”
- When stuck: add console logs, export to GitHub, consult Codex, revert if needed, then add lessons to rules.md.
- Start building and sharing publicly — create a small Lovable app to demonstrate capability.
Notable quotes
- “Coding is going to be like calligraphy — rare and artistic.”
- “AI is an amplifier: if you don’t know what you’re doing, you’ll produce garbage faster.”
- “The ceiling on the AI is what the model sees before it acts — what are you exposing it to?”
- “Master the clarity of the ask. That’s the emerging core skill.”
Resources & where to find Lazar
- Tools mentioned: Lovable (lovable.dev), Cursor, Claude Code, ChatGPT/GPTs, Codex, RepoMix, Mobbin, Dribbble, Figma.
- Lazar’s prompts / PRD GPTs: he’s published GPTs (Lovable PRD/prompt generators) in the ChatGPT store — search for “Lovable PRD generator” or his name.
- Find Lazar: LinkedIn (most responsive); he also posts on YouTube and builds public Lovable apps.
- Companies hiring vibe coders: Lovable is hiring across roles (see Lovable jobs page).
If you want to build today: stop consuming and ship five tiny projects — then pick one and write the PRDs/rules before you let the AI agent loose.
