The ICE Age of A.I.

by Puck | Audacy

February 5, 2026 · 23 min

Overview of The Powers That Be — "The ICE Age of A.I."

This episode investigates how U.S. Immigration and Customs Enforcement (ICE) and the Department of Homeland Security are combining massive data collection with AI tools to identify, track, and predict people of interest — including immigrants and U.S. citizens. Reporter Ian Kreitzberg separates facts from social-media-fueled conspiracies (Palantir-as-sci‑fi‑boogeyman, TikTok‑enabled tracking, a single “master database”), and the hosts also discuss a Puck poll about college students’ real-world use of AI chatbots for schoolwork, mental health, and even romantic relationships.

Key points and main takeaways

  • ICE is ingesting an expanding set of data sources, including third‑party data brokers and, reportedly, data-sharing agreements covering Medicaid and IRS records. Transparency about the specifics is very limited.
  • Companies like Palantir provide software to aggregate, sift, and make predictions from those datasets. Palantir and others insist they don’t “own” a master government database; they provide analytics tools — but the line between tool and centralized system is fuzzy.
  • The concern isn’t only “mystery tech” but familiar problems magnified by scale: biased/inaccurate biometric systems (facial recognition), privacy violations, and unregulated data-broker economies.
  • Example: ICE’s Mobile Fortify facial‑recognition app has produced documented misidentifications.
  • Governance and oversight are weak: FOIA requests and civil‑rights legal challenges are ongoing, but Congress has been slow to enact meaningful AI or privacy safeguards in the U.S.
  • Poll highlights (college students, ~1,000 respondents):
    • 59% say they “always” or “sometimes” use AI chatbots for schoolwork, research, or presentations.
    • 37% say they know someone their age who uses a chatbot for mental‑health or emotional support.
    • 21% say they know someone who has used an AI chatbot for a romantic/intimate relationship.
    • (The hosts also cite outside figures, e.g., a Pew finding that 30% of teens use chatbots daily, and note some potentially conflicting phrasing across the poll’s questions.)

Topics discussed

  • ICE’s data-aggregation practices and vendor contracts
  • Palantir’s role vs. the “master database” narrative
  • Types of data sources (government databases, data brokers, apps)
  • Biometric tech risks: facial recognition accuracy and bias
  • Legal, privacy, and governance gaps in U.S. AI regulation
  • FOIA requests and civil‑rights advocacy for transparency
  • Polling results on college students’ AI chatbot usage for academics, mental health, and romance
  • Societal roots of AI-driven intimacy: social‑media-driven isolation + anthropomorphized chatbots

Notable quotes and insights

  • “What we do know is that ICE is pulling in data from an ever‑expanding number of sources.” — sums up the central factual baseline.
  • On Palantir: “We’re not making a master database. We’re just helping ICE kind of sift through all the data sources that ICE has access to.” — illustrates the semantic and accountability confusion.
  • On privacy: “Privacy probably stopped existing a long time ago.” — framing the broader structural problem of the data-broker economy.
  • The risk framework: it’s not just technical ability (can they?) but constitutional and governance questions (should they? who audits accuracy and bias? how are agents trained?).

Examples & anecdotes from the episode

  • Viral Portland, Maine video: an ICE agent photographs a woman filming him, tells her “we have a nice little database,” and calls her “a domestic terrorist”; the hosts use it to illustrate public fear and the murkiness around what data agents actually hold.
  • Misidentification case: the Mobile Fortify app twice misidentified a woman as an illegal immigrant.

Implications and recommendations

  • For policymakers:
    • Increase transparency around vendor contracts, data sources, and internal evaluation/audit processes.
    • Enact clearer privacy rules and oversight for government use of commercial data and biometric tools (the EU model is a reference point).
    • Require public-facing audits of accuracy, bias, and downstream harms before deployment.
  • For advocates and journalists:
    • Continue FOIA requests and litigation to pry open how tools are used and evaluated.
  • For institutions and individuals:
    • Universities and mental‑health providers should give clear guidance to students about chatbot limitations and risks.
    • Individuals should be cautious about using chatbots for mental‑health support and consider qualified human care when possible.

Why this matters

  • The episode ties familiar issues (data aggregation, targeted analytics) to modern machine‑learning tooling — highlighting that AI amplifies existing privacy, bias, and governance problems rather than creating wholly new ones.
  • The college‑student poll underscores that AI chatbots have already migrated into intimate parts of daily life (education, mental health, relationships), raising both potential benefits and real harms that policy and institutions have not yet fully addressed.

Credits & sources

  • Episode: The Powers That Be (Puck) — “The ICE Age of A.I.”
  • Guest/reporter: Ian Kreitzberg
  • Polling: Generation Lab (Cyrus Beschloss) / Puck survey of ~1,000 college students
  • Noted vendors & tech: Palantir, data brokers, ICE’s Mobile Fortify app

(Advertiser and production credits are acknowledged in the original audio.)