Linear Digressions
by Ben Jaffe and Katie Malone
Episodes
Benchmarking AI Models
How do you know if a new AI model is actually better than the last one? It turns out answering that question is a lot messier than it sounds. This week we dig into the world of LLM benchmarks — the standardized tests used to compare models — exploring two canonical examples: MMLU, a 14,000-question multiple-choice gauntlet spanning medicine, law, and philosophy, and SWE-bench, which throws real GitHub bugs at models to see if they can fix them. Along the way: Goodhart's Law, data contamination, canary strings, and why acing a test isn't always the same as being smart.
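To make the mechanics concrete, here's a toy sketch (our illustration, not any real evaluation harness) of two ideas from the episode: grading a multiple-choice benchmark by simple accuracy, and filtering training data on a canary string. The canary GUID below is made up for illustration; real benchmarks publish their own.

```python
# Toy benchmark mechanics: accuracy scoring and canary-string decontamination.
# Hypothetical marker; real benchmarks embed a published, unique GUID.
CANARY = "canary-guid-0000-illustrative"

def accuracy(model_choices: list[str], answer_key: list[str]) -> float:
    """Fraction of questions where the model's chosen letter matches the key."""
    hits = sum(m.strip().upper() == g.strip().upper()
               for m, g in zip(model_choices, answer_key))
    return hits / len(answer_key)

def decontaminate(training_docs: list[str]) -> list[str]:
    """Drop any training document containing the benchmark's canary string,
    so benchmark questions don't leak into the training corpus."""
    return [doc for doc in training_docs if CANARY not in doc]

# e.g. accuracy(["A", "C", "B"], ["A", "D", "B"]) -> 0.667
```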
The Hot Mess of AI (Mis-)Alignment
The paperclip maximizer — the classic AI doom scenario where a hyper-competent machine single-mindedly converts the universe into office supplies — might not be the AI risk we should actually lose sleep over. New research from Anthropic's AI safety division suggests misaligned AI looks less like an evil genius and more like a distracted wanderer who gets sidetracked reading French poetry instead of, say, managing a nuclear power plant. This week we dig into a fascinating paper reframing AI misalignment through the lens of bias-variance decomposition, and why longer reasoning chains might actually make things worse, not better.
- "The Hot Mess Theory of AI Misalignment: How Misalignment Scales with Model Intelligence and Task Complexity" — Anthropic AI Safety. https://arxiv.org/abs/2503.08941
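For a feel of what the bias-variance framing means here, a toy numerical sketch (our illustration, not the paper's code): a coherently misaligned "evil genius" shows up as bias, a "hot mess" shows up as variance. The scoring scale and numbers are invented.

```python
import numpy as np

def decompose(scores: np.ndarray, aligned_target: float) -> tuple[float, float]:
    """Split mean squared deviation from the aligned target into bias^2 + variance."""
    bias_sq = (scores.mean() - aligned_target) ** 2
    variance = scores.var()
    return bias_sq, variance

rng = np.random.default_rng(0)
# "Evil genius": consistently off-target in the same direction (high bias).
evil_genius = rng.normal(loc=-2.0, scale=0.1, size=1000)
# "Hot mess": on-target on average but erratic (high variance).
hot_mess = rng.normal(loc=0.0, scale=2.0, size=1000)

for name, s in [("evil genius", evil_genius), ("hot mess", hot_mess)]:
    b, v = decompose(s, aligned_target=0.0)
    print(f"{name}: bias^2={b:.2f}, variance={v:.2f}")
```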
The Bitter Lesson
Every AI builder knows the anxiety: you spend months engineering prompts, tuning pipelines, and chaining calls together — then a new model drops and half your work evaporates overnight. It turns out researchers have been wrestling with this exact dynamic for 30 years, and they keep arriving at the same uncomfortable answer. That answer is called the Bitter Lesson — and understanding it might be the most important thing you can do for whatever you're building right now. From Deep Blue to AlexNet to modern LLMs, scale keeps beating sophistication, and knowing which side of that line your work falls on makes all the difference.
Links:
- Richard Sutton, "The Bitter Lesson"
- Alon Halevy, Peter Norvig, and Fernando Pereira, "The Unreasonable Effectiveness of Data"
- Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, "ImageNet Classification with Deep Convolutional Neural Networks"
From Atari to ChatGPT: How AI Learned to Follow Instructions
It's RAG time: Retrieval-Augmented Generation
Today we are going to talk about the feature with the worst acronym in generative AI: RAG, or Retrieval-Augmented Generation. If you've ever used something like "Chat with My Docs," if you have an internal AI chatbot that has access to your company's documents, or you've created one yourself on some kind of personal project and uploaded a bunch of documents for the AI to use — you have encountered RAG, whether you know it or not. It's an extremely effective technique: it works super well for taking general-purpose models like ChatGPT or Claude and turning them into AIs that are aware of all the specific information that makes them truly useful in a huge variety of situations. RAG is pretty interesting under the hood, so I thought it would be fun to spend a little while talking about it. You are listening to Linear Digressions. RAG was first introduced in this paper from Facebook AI Research in 2020: https://arxiv.org/pdf/2005.11401
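If you want a feel for what's happening under the hood, here's a minimal sketch of the retrieve-then-generate loop. The `embed()` and `llm()` functions are hypothetical placeholders for whatever embedding model and chat model you'd actually call; this is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: in practice, call an embedding model here."""
    raise NotImplementedError

def llm(prompt: str) -> str:
    """Placeholder: in practice, call ChatGPT, Claude, etc. here."""
    raise NotImplementedError

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rag_answer(question: str, documents: list[str], k: int = 3) -> str:
    # 1. Retrieval: rank document chunks by similarity to the question.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    # 2. Augmentation: stuff the top-k chunks into the prompt as context.
    context = "\n\n".join(ranked[:k])
    prompt = (
        f"Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # 3. Generation: the general-purpose model answers, grounded in your docs.
    return llm(prompt)
```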
Chasing Away Repetitive LLM Responses with Verbalized Sampling
One of the things that LLMs can be really helpful with is brainstorming or generating new creative content. They're called generative AI, after all — they're not just for summarization and question-and-answer tasks. But if you use LLMs for creative generation, you may find that their output starts to seem repetitive after a little while. Say you ask for a poem, some dialogue, or a joke: ask once and you'll get something that sounds pretty reasonable, but ask the same thing 10 times and you might get 10 things that sound kind of the same. Today's episode is about a technique called verbalized sampling, a way to mitigate this repetitiveness — this lack of diversity in LLM responses on creative tasks. One of the things I really love about it is that understanding why the repetitiveness happens, and why verbalized sampling works as a mitigation, gives you some genuinely interesting insight into what's going on with LLMs under the surface. The paper discussed in this episode is Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity https://arxiv.org/abs/2510.01171
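To make the idea concrete, here's a minimal sketch of verbalized sampling: instead of asking for one answer (which tends to collapse to the most typical response), you ask the model to spell out several candidates with probabilities, then sample from that verbalized distribution. The `llm()` function and the JSON reply format are assumptions for illustration, not the paper's exact prompt.

```python
import json
import random

def llm(prompt: str) -> str:
    """Placeholder: call your chat model of choice here."""
    raise NotImplementedError

def verbalized_sample(task: str, n: int = 5) -> str:
    # Ask the model to verbalize a distribution over candidate answers
    # rather than emitting a single (mode-collapsed) answer.
    prompt = (
        f"Generate {n} different responses to the task below, each with a "
        f"probability reflecting how likely that response is. Reply as JSON: "
        f'[{{"response": "...", "probability": 0.2}}, ...]\n\nTask: {task}'
    )
    candidates = json.loads(llm(prompt))
    responses = [c["response"] for c in candidates]
    weights = [c["probability"] for c in candidates]
    # Sampling from the verbalized distribution recovers diversity that
    # plain repeated prompting loses to mode collapse.
    return random.choices(responses, weights=weights, k=1)[0]
```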
We're Back
It's been (*checks watch*) about five and a half years since we last talked. Fortunately nothing much has happened in the AI/data science world in that time. So let's just pick up where we left off, shall we?
So long, and thanks for all the fish
All good things must come to an end, including this podcast. This is the last episode we plan to release, and it doesn’t cover data science—it’s mostly reminiscing, thanking our wonderful audience (that’s you!), and marveling at how this thing that started out as a side project grew into a huge part of our lives for over 5 years. It’s been a ride, and a real pleasure and privilege to talk to you each week. Thanks, best wishes, and good night! —Katie and Ben