Overview of AI: The new frontier for mental health support?
This episode of Rapid Response (host Bob Safian), recorded live at the Innovation at Brown showcase, examines the rapidly expanding role of AI in mental health. Guests are Ellie Pavlick (director of ARIA, Brown’s NSF‑funded AI Research Institute on Interaction for AI Assistants) and Soraya Darabi (lead partner at VC firm TMV, an investor in mental‑health AI startups such as Slingshot AI). The conversation covers why people already turn to chatbots for emotional support, where the benefits and harms may lie, the technical and ethical limits of current foundation models, and what responsible development, evaluation, and governance might look like.
Key points and main takeaways
- Real demand is driving AI mental‑health usage: surveys and usage data show that therapy and mental‑health support are among the top use cases for large language models.
- ARIA (Brown) exists because people are already using AI for mental health and academic/public leadership is needed to set research priorities, safety standards, and evaluation frameworks.
- Foundation models were not built specifically for mental‑health work; using them as one‑size‑fits‑all tools is risky and likely suboptimal.
- There is measurable upside: AI can increase access and scale (e.g., CBT‑style coaching, triage of non‑emergencies, symptom monitoring) and can reach people who avoid clinicians (some men, people with cost/access barriers).
- Major risks include dependency, poor triage of acute crises, untested “empathy” claims, harm to vulnerable developmental stages (children/teens), and difficulty distinguishing high‑quality from unsafe products.
- Evaluation is the hardest problem: mental health outcomes are complex and qualitative; standard AI leaderboards and single metrics won’t suffice.
- Both guests express cautious optimism: the field should prototype and deploy responsibly while accelerating rigorous interdisciplinary research and participatory design.
Topics discussed
- ARIA institute: mission, $20M in NSF backing, multidisciplinary team (computer science, developmental psychology, neuroscience).
- Venture perspective: TMV’s investments (Slingshot AI, Daylight Health) and market rationale (huge unmet need; access & stigma reduction).
- Use cases and examples:
  - Chatbots used as informal therapy and journaling aids.
  - AI‑assisted CBT and reminders to reframe thinking.
  - 911/triage systems and call‑center screening to route emergencies.
  - AI companionship and romantic/virtual relationships (ethical/social implications).
- Technical limits: foundation models’ opacity, lack of domain‑specific training and evaluation, guardrails that are currently guess‑and‑check.
- Social and developmental concerns: effects on youth, substance‑abuse cases, and acute psychological distress.
- Governance and process: participatory design, clinician involvement, regulators, and public discourse.
Notable quotes and insights
- Ellie Pavlick: The institute chose to study this precisely because “people are already using it” and “the worst thing would be to have no scientific leadership around this.”
- Soraya Darabi: Framing the market and moral case — “1 billion out of 8 billion people struggle with some sort of mental health issue” and many cannot get care.
- On evaluation: “It’s not going to look like what current AI evaluation looks like — which is a leaderboard.” (Ellie)
- On empathy: We lack precise scientific definitions of “empathy” and “understanding”; some therapeutic effects (e.g., CBT) may be approximated by AI, while other relational qualities may not be.
Questions that need research (highlighted by guests)
- What specific outcomes constitute “success” for AI mental‑health tools (short‑term symptom relief, long‑term recovery, reduced hospitalization, improved functioning)?
- How do interactions with AI compare neurologically, behaviorally, and emotionally to human therapy or journaling?
- Which populations benefit (or are harmed) — e.g., youth, people with severe mental illness, those in acute crises?
- What architectures, interfaces, or dedicated models are preferable to generalist foundation models for mental‑health tasks?
- How can dependency, mis‑triage, and other harms be detected and prevented in real‑world deployments?
Practical recommendations / action items
- For researchers and funders:
  - Prioritize interdisciplinary studies that include clinicians, psychologists, ethicists, patients, and technologists.
  - Define multi‑dimensional success metrics (clinical outcomes, safety, equity, engagement, long‑term effects).
- For startups and product teams:
  - Build domain‑specific models or interfaces rather than assuming a single general LLM is “good enough.”
  - Prototype quickly but pair deployments with rigorous real‑world evaluation, monitoring, and escalation pathways to human care.
  - Involve patients and clinicians in participatory design to surface real needs and risks early.
- For policymakers and funders:
  - Support public research institutes (like ARIA) to set standards and fund independent evaluation.
  - Develop regulatory/quality frameworks for apps that claim therapeutic effects and ensure appropriate triage for crises.
- For clinicians and health systems:
  - Consider AI as augmentative (triage, monitoring, CBT tools), not as a substitute for high‑touch care in severe cases.
  - Train frontline systems (call centers, customer service) to route mental‑health signals appropriately.
Quick verdict / tone
Both guests align around cautious optimism: AI presents huge potential to expand access to mental‑health support and to automate scalable therapeutic practices (like CBT coaching and triage), but only if development, evaluation, and deployment are governed by multidisciplinary science, clear success metrics, and participatory design. The current period is a narrow window to “get it right”; failure to do so risks harms that could close that window.
Who should listen / why it matters
- Clinicians and mental‑health researchers — to understand where AI is being deployed and what evidence gaps remain.
- Tech builders and investors — to learn about responsibilities, evaluation challenges, and market needs.
- Policymakers and funders — to appreciate the scale of demand and need for oversight and public research.
- General public — to gain a balanced view of AI mental‑health tools: they can help many, but are not yet a panacea.
For further context: the episode was recorded before recent lawsuits alleging chatbot‑related harms; those cases underscore the timeliness and urgency of ARIA’s work and the need for coordinated research, product safeguards, and regulation.
