Overview of Big Technology Podcast — "Can AI Become Conscious? — With Michael Pollan"
This episode (hosted by Alex Kantrowitz) features bestselling author Michael Pollan discussing his new book, A World Appears: A Journey Into Consciousness. The conversation explores what consciousness is, whether and how machines might become conscious, experiments and theories aimed at cracking the "hard problem," and related ethical, spiritual, and scientific implications. Pollan combines neuroscience, philosophy, first‑person experience (including psychedelics), and reporting on current AI and biology research to argue for a cautious, often skeptical stance on machine consciousness while acknowledging the research value of trying to build it.
Key takeaways
- Pollan is skeptical that current AI systems (LLMs) are conscious. He argues consciousness involves bodily feelings, vulnerability and mortality, and analog biological processes that are not reducible to computation or information alone.
- The "brain-as-computer" metaphor is powerful but limited: brains lack a clean software/hardware split, neurons operate analogically and chemically, and experience is embodied.
- Feelings (affective states) are central to consciousness in Pollan’s view and in many modern theories (e.g., Antonio Damasio, Mark Solms). Feelings are not just information; they are embedded in mortal, vulnerable bodies.
- Attempts to engineer machine consciousness are scientifically valuable because they can test competing theories and teach us about consciousness, even if they ultimately fail to produce "real" subjective experience.
- Testing consciousness in machines is hard. The Turing Test addresses linguistic imitation, not subjective experience; Pollan proposes alternative experimental designs (e.g., withholding cultural/human-feeling training data and probing a system about consciousness).
- Plant and psychedelic research broaden the discussion: plants display complex, adaptive behaviors and can be "anesthetized" in experiments; psychedelics and meditation can defamiliarize ordinary conscious experience and inform inquiry.
- Philosophical alternatives (panpsychism, idealist/receiver metaphors) remain on the table because materialist/physicalist reduction has not yet solved the hard problem of subjective experience.
Topics discussed
- What consciousness is and why it's remarkable (self-awareness, theory of mind, reflective processing).
- Limits of computational metaphors for the brain (software/hardware separation, analog chemical modulation of neurons).
- The role of feelings, homeostasis, and bodily vulnerability in consciousness.
- Contemporary AI behavior that mimics human relationships (companionship, people falling in love with chatbots) and the risk of being "fooled" by good simulations.
- Experimental attempts to build conscious machines:
  - Mark Solms' "felt uncertainty" / free‑energy/homeostasis approach (an avatar in a video game that experiences conflict between competing needs).
  - Engineers building robots with soft sensors and vulnerability to encourage embodied feelings.
  - Global workspace and modular architectures as possible routes to machine consciousness.
  - Neuromorphic computing and brain organoids as other long‑range avenues.
- Testing machine consciousness: why the Turing Test is insufficient and a proposed thought experiment (train a model without human-feeling cultural context and test whether it can meaningfully discuss consciousness).
- Ethics and moral consideration: whether and when we should grant moral status to machines, as compared with animals and underserved humans.
- Spiritual and religious implications: solving (or failing to solve) the hard problem could either demystify consciousness or bolster revival of animist/panpsychist views; psychedelics and mystical experiences as ways of knowing.
- Plant neurobiology and sentience: examples of plant perception (hearing, responding to caterpillar sounds, kin recognition, movement with apparent intent), and experiments showing plants respond to anesthesia.
Notable quotes and framing lines
- Pollan: "Consciousness is…a precious gift" — a phenomenon many take for granted but which is central and wondrous.
- Mark Solms (as summarized): consciousness framed as "felt uncertainty" — subjective experience arising when competing homeostatic needs conflict.
- Norbert Wiener (quoted): "The price of metaphor is eternal vigilance" — warning against equating brains and computers too literally.
- Pollan (on simulation vs. reality): "A weather simulation will never get you wet" — simulations can mimic appearances but not necessarily the full reality.
- Demis Hassabis (paraphrased): "Information is the most fundamental unit of the universe" — an information‑centric view that, if true, supports the possibility of machine consciousness emerging from computation.
People, experiments and theories referenced
- Michael Pollan — author and episode guest (A World Appears; How to Change Your Mind).
- Alex Kantrowitz — host.
- Mark Solms — neuroscientist / psychoanalyst; theory: feelings originate in brainstem; consciousness arises from conflicts (felt uncertainty).
- Antonio Damasio — influential researcher linking feelings to consciousness.
- Demis Hassabis — CEO of Google DeepMind; the idea that information may be foundational.
- Blake Lemoine — former Google engineer who argued for machine personhood (mentioned contextually).
- Christof Koch — worked on neural correlates of consciousness; critiques purely reductive approaches.
- Sherry Turkle — sociologist on human/computer relationships.
- Plant neurobiology researchers — experiments showing plant perception, kin recognition, chemical signaling and anesthetic responses.
- Experimental projects: avatar/game-based homeostasis models; robots with soft sensors designed to enable vulnerability/feelings; neuromorphic computing and brain organoids mentioned as future paths.
Note: The transcript may contain mis-transcribed names; for instance, the roboticist rendered as "Kingston Man" is cited in the conversation as someone building a vulnerable robot.
Distinctions Pollan emphasizes
- Sentience vs. Consciousness: sentience = basic responsiveness/ability to register positive vs negative states; consciousness = richer, self‑reflective experience (though lines can blur).
- Simulation vs. Reality: good simulations (LLMs) can convincingly mimic first‑person reports of feeling without necessarily having subjective experience.
- Map vs. Territory (information as model vs. the thing itself): is information merely our representation of reality, or the fundamental fabric of reality?
Implications and recommended actions (for researchers, ethicists, policy makers, and curious listeners)
- Continue interdisciplinary experiments (AI, neuroscience, embodied robotics, plant biology) because trying to build conscious systems can reveal what consciousness requires.
- Develop better tests for subjective experience beyond Turing-style linguistic imitation: e.g., experimental protocols that limit cultural/contextual data and probe for signs of embodied feeling or genuine uncertainty.
- Avoid premature anthropomorphism of LLMs; treat conversational behavior as potentially deceptive simulation.
- Prioritize ethical considerations for existing morally relevant beings (humans and animals) even as we debate obligations toward hypothetical conscious machines.
- Broaden public and scientific discussion of consciousness to include first‑person methods (meditation, psychedelics, phenomenology) alongside third‑person neuroscience.
- Be open to revising scientific frameworks: keep philosophical positions (panpsychism, emergent materialism, information‑theoretic accounts) in debate rather than ruled out prematurely.
Further reading / listening
- Michael Pollan — A World Appears: A Journey Into Consciousness (the new book discussed in the episode).
- Michael Pollan — How to Change Your Mind (background on psychedelics and subjective experience).
- Work of Antonio Damasio, Mark Solms, and Christof Koch on feelings and neural correlates of consciousness.
- Sherry Turkle on human–computer relationships.
- Coverage and criticism of Blake Lemoine's claims about LaMDA.
This episode is a good primer on why consciousness remains a hard, multidimensional problem and why attempts to build conscious machines are scientifically informative even if they do not (yet) produce subjective experience.
