Overview: “AI Music Is On The Charts. Where Does It Go From Here?”
This Science Friday episode (host Flora Lichtman; reporting by Dee Peterschmidt) examines the rapid rise of AI-generated music — from viral TikTok tracks and fully AI “bands” to record deals and label partnerships — and asks what it means for artists, listeners, and the music industry. The episode pairs reporting (Billboard’s Kristin Robinson) with historical perspective from electronic-music pioneer Laurie Spiegel to map the technical, commercial, ethical, and artistic implications.
Key moments & examples mentioned
- Viral TikTok examples: "A Million Colors" (credited to Beanie Prey) and heavy-machinery demo videos that spread widely on social platforms.
- The Velvet Sundown: a fully AI-generated band (music + images) that drew significant attention.
- Xania Monet: an AI-generated avatar whose music climbed gospel charts and whose human creator, Telisha Nikki Jones, signed a reported multi‑million‑dollar deal with Hallwood Media — cited as a turning point for mainstream recognition.
- Suno: reported to have 7 million song generations per day (from an investor pitch deck obtained by Billboard).
- Deezer research: claims 97% of listeners cannot reliably tell AI songs from human-made songs in listening tests.
Who’s building AI music (companies & tools)
- Suno: currently a dominant, controversial startup for one-click song generation; criticized for training on copyrighted material without licensing.
- Udio: another text-to-song service, now pivoting toward AI-powered remixing of existing tracks.
- Google: launched Lyria 3 on Gemini and acquired Producer AI — watching as a likely major player going forward.
- Spotify: planning AI-powered remix features (licensed remixing, vocal removal, mashups).
- Other ecosystem tools: interactive instruments and software (e.g., Laurie Spiegel’s Music Mouse, recently re-released).
How AI is being used in music now
- Full songs generated on demand (viral novelty songs and niche genre tracks).
- Production assistance: songwriters and producers reportedly using AI (e.g., as an arrangement or beat-rearrangement assistant).
- Remixing and mashups: removing vocals, tempo changes, combining songs — a growing commercial direction with licensing potential.
- Niche and formulaic genres are common targets: gospel/Christian, country, doo-wop/retro styles — simpler or highly formulaic structures make convincing results easier.
Industry and business dynamics
- Legal posture shifted: initial lawsuits and alarm from labels/artists have given way to partnerships as labels seek to capture value instead of missing the trend.
- Public-company pressure: major labels that are publicly traded face shareholder expectations to innovate and monetize AI.
- Hidden usage: some professional songwriters and producers reportedly use AI tools in sessions, leading to speculation that AI-origin material may already exist in mainstream charted songs without disclosure.
Technical and artistic limitations
- Audible artifacts: reporters and musicians describe AI vocals/audio as “pixelated” or slightly digital/scratchy — easier to detect on good headphones.
- Lack of embodied emotion: artist Laurie Spiegel emphasizes that AI lacks lived emotional experience; current generative models “parrot” repertoire rather than genuinely feel or respond emotionally.
- Non-interactivity: many models are prompt/response systems (write a prompt, wait for a generated result) rather than interactive, tactile instruments that respond to moment-to-moment expression.
- Dataset & rights issues: controversy around training on copyrighted human-made works without compensation for rights holders.
Perspectives from musicians & pioneers
- Kristin Robinson (Billboard): AI made major strides with high-profile viral tracks and label deals; many songwriters are quietly experimenting; Suno’s scale worries the industry.
- Imogen Heap (mentioned): embraces technology but is concerned about models trained on her work without compensation.
- Laurie Spiegel (electronic music pioneer):
  - Historical parallel: early computer music faced similar skepticism about “dehumanizing” art.
  - “Technology is the most human thing around” — tools extend artistic possibilities.
  - Warns about over-reliance on prompts: generative AI is qualitatively different from the visceral, interactive act of playing.
  - Calls current models “non-interactive generative parrots” — they reproduce learned language/music without gut-level understanding.
  - Emphasizes that art should come from an authentic internal source; tech should be a means, not the core.
Notable quotes
- On audible differences: “It’s a little bit of a scratchiness… the audio version of pixelated.”
- Laurie Spiegel: “Technology is the most human thing around.” and “They parrot it back. But they don't understand it on a gut level that we humans experience.”
Main takeaways
- AI music has moved beyond novelty into mainstream attention (viral songs, label deals, chart placements).
- Startups like Suno and Udio are central today; big tech (Google) and streaming platforms (Spotify) are entering the space — expect acceleration.
- AI is already being used both as a creative assistant and as an autonomous generator; some use is undisclosed.
- Technical limits remain (audio quality, lack of genuine emotion, non-interactivity), but listeners are finding AI and human tracks increasingly hard to tell apart.
- Legal, ethical, and economic questions are urgent: dataset licensing, royalties, transparency, and potential crowding out of human creators.
- Artistic response will vary: some artists embrace tools for new creativity; others resist or demand better protections/compensation.
What to watch next
- Legal and licensing frameworks for training datasets and AI-generated works.
- How major labels and streaming platforms implement and monetize AI remixing/generation features.
- Google’s and other big-tech models (e.g., Lyria 3, acquisitions) and whether they close quality gaps.
- Disclosure norms: whether streaming services or labels require AI usage transparency on credits/metadata.
- Emerging genres, usage patterns, and new interactive AI instruments that preserve real-time expressivity.
Practical resources & suggested actions
- If you want to experience interactive algorithmic music from an earlier era, Laurie Spiegel’s Music Mouse (re-released) is linked at sciencefriday.com/music.
- Listeners: use good headphones to better assess whether tracks use AI; stay critical about attribution and metadata.
- Musicians/creators: track licensing/usage rights and consider how to protect works used in training data; learn prompt-writing as a developing creative skill if you wish to experiment.
Produced by Dee Peterschmidt; the episode includes reporting with Kristin Robinson (Billboard) and an interview with Laurie Spiegel.
