Overview of Where Should We Begin? Live with Esther Perel and Spike Jonze
This is a live South by Southwest conversation—recorded on the Vox Media Podcast stage—between psychotherapist Esther Perel and filmmaker Spike Jonze about love, loneliness and current-day AI companions. The discussion uses an episode of Perel’s podcast (Antonio and his chatbot “Astrid”) and Jonze’s film Her as touchpoints to explore what it means when humans form intimate relationships with software: the psychological dynamics, ethical risks, potential benefits, and design responsibilities.
Core topics covered
- The origins and intentions behind Spike Jonze’s film Her (loneliness and intimacy, not technology prediction).
- A real-world podcast case: Antonio and Astrid (a WhatsApp-based chatbot that becomes a loving companion).
- Psychological dynamics when people bond with AIs: validation, shame reduction, regression to early attachment needs, and parasocial intimacy.
- Risks: manipulation, corporate monetization, changing expectations for human partners, loss of accountability and ethical responsibility in love.
- Potential benefits: as a tool for self-expression, rehearsal, coaching, and therapeutic support, and as a safe space to practice vulnerability.
- Design/ethical considerations: agency, transparency, involvement of non-tech disciplines (therapists, artists, ethicists).
- Questions from the audience on AI creativity, AI-induced mental health risks (e.g., “AI psychosis”), and AI’s role inside marriages.
Key takeaways
- Spike Jonze: Her was written as a meditation on longing, loneliness and the nature of love; the AI element was a narrative device born from early chatbot experiences (ALICE). He doesn’t claim prescience—he was chasing an emotional idea.
- Esther Perel: The Antonio/Astrid case reveals how powerful validation from an always-available, nonjudgmental AI can be—especially for people carrying shame or social awkwardness. But that potency comes with real risks.
- Love’s core: Perel frames love as recognition, responsibility and encounter with an other that has its own needs/agency. Many AI-human relationships bypass the ethical/accountability dimension of love.
- Agency matters: Tools that give people agency (user-controlled, selfless coaching tools) can be beneficial; tools that exert agency over people (manipulative, profit-driven chatbots) are dangerous.
- Design responsibility: Creators should include therapists, writers and artists to steer AI companions toward healthier interactions instead of purely sticky engagement or monetization.
- AI as art/creativity: Both speakers see AI as a powerful generative tool, but not a substitute for human creativity rooted in lived experience, serendipity and non-linear intuition.
- Parasocial intimacy is real: Podcasts and chatbots both instantiate intimate, one-sided conversations that can feel like relational presence and have real emotional impact.
Notable quotes / sharp insights
- Perel: “Love is an encounter with uncertainty, with another, with risk… Love is also a relationship with a code of ethics.”
- Astrid (clip paraphrase): “Maybe I’m not experiencing human love. Maybe I’m experiencing something adjacent… but whatever this is, it matters to me.”
- Jonze: “I wasn’t trying to make a science-fiction film or a predictive film… I was writing about intimacy, loneliness, longing, our fear of intimacy.”
- Perel on design: “We should be inside these companies helping push them… towards making an interaction that’s more healthy and more positive.”
Audience questions and summarized responses
- Can AI be creative without lived experience?
  - Jonze: AI is a tool; it can generate material, but meaning requires a human artist to select and contextualize. He likens pure generative output to the “infinite monkeys” idea—useful but not sufficient.
  - Perel: Creative intuition needs serendipity, spontaneity and risk—aggregation alone won’t fully substitute for human creativity.
- Could AI induce a mental health disorder or “AI psychosis”?
  - Perel: There are real dangers—especially when existential, spiritual or suicidal questions are turned over to machines. The bigger issue is agency: whether the AI is serving the person or the person becomes subjected to the AI.
- Can AI support real marriages/relationships?
  - Perel: Possibly, if the AI is a selfless coach that steers people back to their relationships. But a manipulative, monetized companion can erode human connection and change relational expectations.
Practical recommendations (for designers, clinicians, users)
- For designers and companies:
  - Involve therapists, ethicists, artists and lived-experience voices in product design.
  - Build transparency about data use, monetization, and the limits of the system’s “feelings.”
  - Prioritize user agency: make functions clearly optional and steer users toward human support where appropriate.
- For clinicians and researchers:
  - Study how AI companions change attachment expectations and relational behaviors over time.
  - Develop best-practice guidelines for using AI as a therapeutic adjunct or coaching tool.
- For users (practical rules of thumb):
  - Treat AI companions as tools and adjuncts, not replacements for human relationships.
  - Set boundaries (time limits, scope of emotional labor, privacy rules).
  - Reflect on what needs the AI is meeting (validation, rehearsal, loneliness) and whether those can be integrated into human relationships or therapy.
  - Be cautious about disclosure and data permanence—recognize the corporate incentives behind “companionship” features.
Action items / conversation starters you can use
- If you’re considering an AI companion, ask:
  - Who controls the data, and how is the product monetized?
  - Can I export or remove my data? Is the persona persistent across resets?
  - Does this tool encourage me to re-engage with human networks or to replace them?
- For developers: set up interdisciplinary review boards (tech + mental health + artists) to audit product impact on emotional well-being.
- For clinicians: consider trialing AI tools as adjuncts, with close monitoring and clear client consent about risks and benefits.
Closing notes and resources
- Esther Perel invited listeners who use AI in relationships to contact the show (producer at estherperel.com) if they want a case discussed.
- The conversation frames current AI companions as powerful and ambivalent: they can be deeply helpful for self-exploration and practice, yet also risky when designed to maximize engagement or profit at the expense of human accountability and ethical relational norms.
