Overview of ⚡️ 10x AI Engineers with $1m Salaries — Alex Lieberman & Arman Hezarkhani, Tenex
This episode is a conversation between hosts swyx and Alessio and guests Alex Lieberman and Arman Hezarkhani of Tenex (10X), an AI-first engineering firm that hires and compensates engineers based on output (story points) rather than hours. The discussion covers the company's origin, why output-based compensation matters in the AI era, hiring and interview practices, the tech stack and agent usage, real project examples, scaling constraints, and the longer-term technical bottlenecks to building fully autonomous AI engineers.
Key takeaways
- 10X was founded after Arman rebuilt a product org to be AI-first following a 90% reduction in headcount and saw a massive productivity leap.
- The core idea: compensate engineers for output (story points) to align incentives with rapid AI-driven productivity instead of hourly billing.
- The company expects multiple engineers to earn $1M+ in cash within a year under their output-based model.
- Major constraints for growth today are human: hiring enough skilled engineers and matching them to processes without degrading delivery.
- Technical bottlenecks to fully autonomous AI engineers include context engineering and controlling entropy (compounding errors) in closed loops.
Founding story & motivation
- Arman’s previous company had to rebuild as AI-first after drastically shrinking its engineering team; production-ready code output increased significantly.
- Alex experienced firsthand the leverage of LLMs in product conversations and partnered with Arman to build a business model that rewards high-output AI-enabled engineers.
- The belief: pay for output to give elite engineers “unlimited upside” while avoiding perverse incentives that hourly models create.
Business model and compensation
- Compensation is primarily tied to story points (units of output) rather than hours.
- Their model is intended to allow engineers to earn far more than traditional salary/hour rates when they deliver much higher throughput.
- They claim it’s likely multiple engineers will make $1M+ in cash next year from this model.
- To avoid gaming or short-termism, they hire people who are “long-selfish” (care about long-term client relationships) and team-oriented coders who care about quality.
Hiring and interview process
- They kept take-home assignments even in the AI era, but made them extremely difficult so they act as a filter for top talent.
- Process: two screening calls → difficult take-home → take-home review → 1–2 follow-up meetings. Fast if candidate passes the take-home (can be done in about a week).
- Interview questions probe deep thinking about building AI engineers (e.g., what bottleneck you’d solve with infinite resources).
- Biggest hiring pain: finding enough high-quality engineers who fit the culture and incentives.
Notable projects / examples
- Retail computer vision: ported and quantized multiple models to run in parallel on Raspberry Pi 4 and Jetson Nano hardware, delivering in-store heatmaps, shelf-stock detection, queue/line detection, and theft/body-analysis prototypes; the prototype was built in roughly two weeks.
- Snapback Sports: built a trivia mobile app in a month that reached #20 on the App Store globally (non-AI example showing speed).
- Influencer fitness chatbot: an engineer built a working prototype in roughly four hours that acted as a personalized fitness/nutrition coach, which made the team the frontrunner for the final build.
Tech stack, agents & tooling
- Default stack: TypeScript full-stack (React frontend, TypeScript backend such as Express), with shared types/schemas to enable structured agent workflows.
- Preference for the higher structure TypeScript provides, so agents can iterate autonomously against type errors and schema enforcement (a minimal sketch of this pattern follows this list).
- They use multiple coding agents; no single favored agent — they monitor model performance frequently (e.g., Claude Code, Codex) and adapt based on task-specific strengths.
- Practical stance: formal evals are useful but a lot comes down to “feel” and real-world performance for a given workflow.
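To make the shared types/schemas point concrete, here is a minimal sketch of that pattern, assuming a zod schema reused as both the compile-time type and the runtime validator on an Express route. The schema, route, and field names are illustrative, not Tenex's actual code.

```typescript
// Sketch of the shared types/schemas pattern: one zod schema is both the
// compile-time type and the runtime validator, so coding agents get fast
// feedback from type errors and structured 400 responses.
import express from "express";
import { z } from "zod";

// Shared contract (in a real repo this would live in a shared package
// imported by both the React frontend and the backend).
export const CreateTaskInput = z.object({
  title: z.string().min(1),
  storyPoints: z.number().int().positive(),
});
export type CreateTaskInput = z.infer<typeof CreateTaskInput>;

// Express route that enforces the same schema at runtime.
const app = express();
app.use(express.json());

app.post("/tasks", (req, res) => {
  const parsed = CreateTaskInput.safeParse(req.body);
  if (!parsed.success) {
    // Structured validation errors give an agent something concrete to iterate on.
    return res.status(400).json({ errors: parsed.error.flatten() });
  }
  // parsed.data is typed as CreateTaskInput from here on.
  return res.status(201).json({ ok: true, task: parsed.data });
});

app.listen(3000);
```

The design choice is that the frontend imports the same schema, so a change to the contract surfaces as a type error on both sides rather than a silent runtime mismatch.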
Process safeguards and roles
- Two primary roles in client engagements: AI engineer (implements) and technical strategist (responsible for net revenue retention (NRR), client retention, and account growth).
- Technical strategists sign off on engineering plans before sprints; they act as a guardrail against sandbagging or quality issues.
Constraints & scaling challenges
- Primary limit: human capital. They are currently “human-bound” — recruiting and scaling delivery processes are the main bottlenecks.
- Secondary ambitions: build proprietary technology in the long term, but for now focus on people + process.
- Matching new hires to existing processes to maintain quality as the company scales is a key operational challenge.
Views on technical barriers to fully autonomous AI engineers
- Two framed bottlenecks:
- Context engineering: supplying models the right, relevant context and getting them to attend to the right parts.
  - Entropy/control: preventing compounding error in autonomous coding loops; even a small per-step error rate compounds and derails agents over time (see the toy calculation after this list).
- The conversation suggests improvements can come from both model-level advances and application-layer context engineering.
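A back-of-the-envelope illustration of the entropy point (the numbers are illustrative, not from the episode): if each autonomous step is correct with probability p and nothing catches mistakes, the chance the loop is still on track after n steps is p^n, which decays quickly.

```typescript
// Toy model of compounding error in an autonomous coding loop (illustrative only).
// If each step is correct with probability p and errors are never caught,
// the chance the agent is still on track after n steps is p^n.
function onTrackProbability(perStepSuccess: number, steps: number): number {
  return Math.pow(perStepSuccess, steps);
}

for (const p of [0.99, 0.95, 0.9]) {
  console.log(
    `p=${p}: after 50 steps -> ${(onTrackProbability(p, 50) * 100).toFixed(1)}%`
  );
  // p=0.99 -> ~60.5%, p=0.95 -> ~7.7%, p=0.9 -> ~0.5%
}
```

Even a 1% per-step error rate leaves the agent on track only about 60% of the time after 50 uncorrected steps, which is why verification and tight feedback loops matter for long-running agents.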
Culture & language in the AI community
- Debate over new acronyms and hype (example: MCP, the Model Context Protocol) — Arman is skeptical of buzzwording when it's used mainly to raise capital.
- The hosts are organizing debates and sessions to clarify practical uses vs. hype, encouraging learning by contrast.
Practical recommendations & resources mentioned
- For non-technical learners: watch 3Blue1Brown’s LLM lecture and Andrej Karpathy’s materials on how ChatGPT works; take handwritten notes to internalize concepts.
- If recruiting/assessing AI engineers: use hard take-homes + short technical process; ask thought-provoking questions about building an AI engineer and bottlenecks.
Notable quotes
- “The kernel of insight that started all of 10X was, how do we hire the best engineers in the world? How do we offer them unlimited upside by compensating them for output rather than hours?”
- “Today, it’s human bound, 100% — recruiting. The thing that keeps us up at night is how can we hire enough good engineers fast enough?”
- On fully autonomous agents: “Controlling entropy…if there is some error rate…that entropy will build and multiply and derail the agent.”
Bottom line
10X is betting on an output-based compensation model to capture AI-driven engineering leverage. They combine aggressive hiring/filtering, structured TypeScript-centric stacks, and a two-role client governance model (engineer + technical strategist) to deliver fast, high-impact solutions. Their immediate challenges are recruiting and process scaling; their technical focus is on context engineering and preventing compounding errors in autonomous loops. The episode offers concrete examples of speed gains and pragmatic guidance for companies and non-technical learners getting into AI engineering.
