Data Centers in Space + A.I. Policy on the Right + A Gemini History Mystery

by The New York Times

1h 11m · November 14, 2025

Overview of Hard Fork — "Data Centers in Space + A.I. Policy on the Right + A Gemini History Mystery"

This New York Times Hard Fork episode covers three distinct but connected AI stories: Google’s Project Suncatcher, a proposal to build solar-powered data centers in low Earth orbit; an insider briefing on how the Trump White House approached national AI policy, with former senior AI policy advisor Dean Ball; and a surprising experiment by historian Mark Humphries suggesting that an unreleased Google Gemini model may show step‑function improvements in handwriting transcription and in numeric and tabular reasoning.

1) Project Suncatcher — data centers in space

What Google (and others) are proposing

  • Google published a paper/blog describing "Project Suncatcher": a design for space‑based, highly scalable AI infrastructure.
  • Concept: thin solar arrays and computing clusters in a dawn–dusk low Earth orbit that receive nearly continuous sunlight, potentially up to ~8× the productivity of terrestrial solar (a back‑of‑envelope check follows this list).
  • Prototype plans: Google aims to test two prototype satellites around 2027 in partnership with Planet. Startups (e.g., StarCloud) and established firms (Axiom Space) are also active; public comments suggest interest from figures like Jeff Bezos and Eric Schmidt, and there may be parallel efforts in China.
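
A quick sanity check on the ~8× figure, as a minimal Python sketch. The inputs are illustrative assumptions rather than numbers from the episode or Google's post: a solar constant of ~1361 W/m² above the atmosphere, ~1000 W/m² peak at the surface, a ~17% terrestrial capacity factor, and near‑continuous sun in a dawn–dusk orbit.

```python
# Back-of-envelope: annual energy per m^2 of panel, space vs. ground.
# All figures below are illustrative assumptions, not episode values.

HOURS_PER_YEAR = 8766  # average year, including leap years

# Space: ~1361 W/m^2 solar constant, near-continuous sun in a
# dawn-dusk sun-synchronous orbit (small eclipse fraction, ignored).
space_irradiance_w = 1361
space_duty_cycle = 0.99          # assumed near-continuous illumination

# Ground: ~1000 W/m^2 at peak, but night, weather, seasons, and sun
# angle reduce a fixed panel to roughly a 15-20% capacity factor.
ground_irradiance_w = 1000
ground_capacity_factor = 0.17    # assumed mid-range terrestrial value

space_kwh = space_irradiance_w * space_duty_cycle * HOURS_PER_YEAR / 1000
ground_kwh = ground_irradiance_w * ground_capacity_factor * HOURS_PER_YEAR / 1000

print(f"space:  {space_kwh:,.0f} kWh/m^2/yr")
print(f"ground: {ground_kwh:,.0f} kWh/m^2/yr")
print(f"ratio:  {space_kwh / ground_kwh:.1f}x")  # ~7.9x with these inputs
```

With these assumed inputs the ratio lands near 8, which suggests the headline figure is driven mostly by the absence of night, weather, and atmospheric losses rather than by anything exotic.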

Rationale

  • Terrestrial buildout faces permitting, land, water, and community opposition, plus an energy‑grid capacity problem, as AI compute demand grows exponentially.
  • Space solar promises abundant, near‑constant energy, addressing one of the biggest bottlenecks for large AI deployments.

Technical and economic hurdles

  • Launch and hardware costs: sending chips and infrastructure to orbit is currently many times more expensive than building equivalent data centers on Earth.
  • Radiation: Google tested TPUs under a proton beam and found newer TPUs more resilient than expected, potentially able to survive a five‑year mission.
  • Maintenance & repair: operators expect to rely on robotics for in‑orbit repairs.
  • Latency and data transfer: LEO latency is relatively small, comparable to existing satellite constellations like Starlink, and deemed manageable for many workloads (a quick light‑time calculation follows this list).
  • Environmental and political concerns: space debris, regulatory and geopolitical issues, and public opposition (a new NIMBY variant: "NOMPs—Not On My Planet").
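
For intuition on the latency point, a quick light‑time calculation. The 550 km altitude is an illustrative Starlink‑class figure, not one from the episode; real paths are longer (slant range, inter‑satellite links, ground routing).

```python
# Rough one-way and round-trip light time to a low-Earth-orbit
# satellite. 550 km is an illustrative Starlink-class altitude.

C_KM_PER_S = 299_792  # speed of light in vacuum, km/s
altitude_km = 550

one_way_ms = altitude_km / C_KM_PER_S * 1000
print(f"one-way (straight up): {one_way_ms:.2f} ms")      # ~1.8 ms
print(f"round trip:            {2 * one_way_ms:.2f} ms")  # ~3.7 ms
```

A physical floor of a few milliseconds is negligible for batch workloads like model training, which may be why the hosts treat latency as a manageable constraint.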

How Google frames it

  • Google positions Project Suncatcher as a long‑term moonshot (5–15+ years), in the vein of earlier multi‑decade efforts such as Waymo and quantum computing, and presents it as serious R&D rather than sci‑fi whimsy.

2) AI policy on the right — interview with Dean Ball

Who Dean Ball is

  • Former senior policy advisor for AI and emerging technology in the Trump White House, where he led the drafting of the administration’s AI Action Plan. Now at the Foundation for American Innovation and author of the newsletter Hyperdimensional.

Key takeaways on the administration’s stance and right‑of‑center views

  • The administration shares a set of coherent intuitions: AI is an enormous opportunity; it carries both familiar and novel risks; and it is central to U.S. strategic leadership.
  • Right‑wing factions are diverse:
    • Accelerationists/pro‑industry voices (skeptical of “doomer” rhetoric).
    • Doomer/nationalist voices (e.g., Steve Bannon‑aligned figures) who emphasize existential and catastrophic risk.
    • National security-focused conservatives who prioritize competition with China.
    • Kid‑safety/consumer‑harms conservatives concerned with content harms and youth safety.
  • Industry is not monolithic: hyperscalers (Google, Microsoft, Amazon) hold nuanced positions and may favor export controls that slow foreign competition; frontier labs want chip access and infrastructure; and firms are building out infrastructure partly to create competitive moats.

Federal vs. state regulation

  • Dean Ball argues for federal standards for high‑impact models, defined by thresholds such as training cost and scale and grounded in interstate commerce, to avoid a regulatory patchwork (California currently functions as a de facto national regulator).
  • He supports federal action on tail risks and national security; otherwise he accepts a largely reactive posture for many harms, using liability laws and targeted statutes as harms materialize.

The "woke AI" executive order (procurement)

  • The administration’s executive order targets procured models for government use, asking agencies to avoid models with engineered ideological bias — it governs procurement, not public model training.
  • Ball argues the government can require disclosure of system prompts for procurement, but mandating training changes for public models would raise First Amendment problems.

Outlook

  • Ball thinks incremental policy advances are possible without catastrophic triggers, but acknowledges politically contentious issues will likely fragment into topic‑specific debates (e.g., data centers, kid safety, export controls).
  • He sees real incentives for labs to invest in safety (bankruptcy risk if catastrophic harms occur), and believes some tail‑risk measures should be bipartisan.

3) A Gemini history mystery — Mark Humphries’s experiment

The experiment

  • Mark Humphries (history professor, Wilfrid Laurier University) uses AI to transcribe and extract metadata from handwritten historical records (fur trade, 18th–19th century).
  • In Google’s AI Studio he repeatedly saw an A/B test response from a mystery model (likely an unreleased Gemini variant — possibly Gemini 3) that markedly outperformed prior models.

What stood out

  • Error rate: Humphries reports a word error rate of roughly 1% for the mystery model, comparable to human transcribers and about a fivefold reduction from Gemini 2.5 Pro, which was only around 95% accurate (≈5% error) on his tasks.
  • Tabular and numeric reasoning: the model correctly interpreted compact old ledger notations (e.g., “145” meaning 14 pounds 5 ounces) and converted and aggregated values within historical currency and measurement systems (pounds, shillings, pence); a minimal sketch of these conversions follows this list.
  • This suggests improved numeric, tabular and symbolic reasoning beyond simple token prediction — capabilities notably hard for earlier LLMs.
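
To make these conversions concrete, here is a minimal Python sketch. The “145” parsing rule (final digit = ounces) is inferred from the episode's single example, and the currency arithmetic uses the standard pre‑decimal ratios (12 pence to the shilling, 20 shillings to the pound); none of this is code from the episode.

```python
# Minimal sketch of the two ledger conversions discussed above.
# The "145" -> 14 lb 5 oz rule is inferred from the episode's single
# example (last digit = ounces); real ledgers may need smarter parsing.

def parse_weight(compact: str) -> tuple[int, int]:
    """Split a compact weight like '145' into (pounds, ounces)."""
    return int(compact[:-1]), int(compact[-1])

def to_ounces(pounds: int, ounces: int) -> int:
    return pounds * 16 + ounces  # 16 oz per lb (avoirdupois)

# Pre-decimal currency: 12 pence (d) per shilling (s), 20 s per pound.
def lsd_to_pence(pounds: int, shillings: int, pence: int) -> int:
    return (pounds * 20 + shillings) * 12 + pence

def pence_to_lsd(total: int) -> tuple[int, int, int]:
    pounds, rem = divmod(total, 240)
    shillings, pence = divmod(rem, 12)
    return pounds, shillings, pence

print(parse_weight("145"))                  # (14, 5)
print(to_ounces(*parse_weight("145")))      # 229 oz
# Aggregate two line items: 1/15/6 + 0/8/9 = 2 pounds 4s 3d
total = lsd_to_pence(1, 15, 6) + lsd_to_pence(0, 8, 9)
print(pence_to_lsd(total))                  # (2, 4, 3)
```

The point of the comparison is that earlier models tended to transcribe “145” literally, while the mystery model appeared to apply the period's conversion rules, which is what Humphries read as symbolic reasoning.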

Implications

  • If replicated, historians could trust models with richer data‑extraction tasks (e.g., extracting itemized purchases, quantities, and bookkeeping math) and scale up research that previously required large human transcription efforts.
  • More broadly, improvements that let models reliably handle structured numeric reasoning and domain‑specific symbolic conversions could generalize across knowledge work (legal, financial, scientific domains).
  • The finding supports the view that continued scaling and refinement can produce qualitatively new capabilities, not just incremental gains.

Notable quotes & pithy lines from the episode

  • "The sun is a really freaking good source of energy." — highlights the energy motivation for space solar.
  • "From NIMBYs to NOMPs — Not On My Planet." — on potential public opposition to space data centers.
  • On the procurement executive order: it's about the versions sold to the government, not public model training.
  • Humphries: the model's handling of "145" → "14 lb 5 oz" felt like symbolic reasoning, not mere pattern matching.

Key takeaways / what to watch next

  • Project Suncatcher is a serious R&D moonshot: expect prototypes in the next few years, but wide deployment remains costly and years away; watch for Google’s 2027 test launches and StarCloud/Axiom activity.
  • Space data centers could solve energy constraints for massive AI compute but raise new technical, environmental, regulatory and geopolitical questions.
  • U.S. AI policy is still forming: expect fragmentation across issue domains, with federal action prioritized for tail risk, national security and procurement rules; states will continue to act on immediate harms (e.g., kid safety).
  • Model capabilities continue to surprise domain experts: the Humphries experiment indicates next‑generation models may reliably perform structured, numeric and domain‑specific reasoning — a potential inflection point for knowledge work automation.
  • Keep an eye on Gemini 3’s public release and independent benchmarks, plus further experiments in AI Studio.

Practical actions / recommendations

  • If you’re a policy watcher: track federal procurement rules, California’s SB 53 implementation, and any congressional efforts on export controls and tail‑risk mitigation.
  • If you’re in tech or infrastructure: evaluate long‑term energy and compute projections; consider supply chain, launch cost, and space‑debris/environmental risk analyses.
  • If you’re a researcher or historian: pilot next‑gen models on domain test sets (with held‑out benchmarks) and validate numeric/tabular outputs against human ground truth before automating pipelines (a minimal validation sketch follows this list).
  • If you’re a business leader using AI: plan for faster growth in compute demand and watch how model capabilities could shift knowledge‑work workflows — invest in data governance and safety processes now.
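
As a companion to the researcher/historian item above, a minimal sketch of the kind of ground‑truth validation described: word error rate computed as word‑level Levenshtein distance. The metric is standard; the sample strings are invented for illustration and are not from Humphries's corpus.

```python
# Minimal word-error-rate check: compare a model transcription against
# a human ground-truth transcription. Generic metric, not tied to any
# specific model or tool from the episode.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

# Invented example strings, for illustration only.
truth = "received of mr mctavish fourteen pounds five ounces of beaver"
model = "received of mr mctavish fourteen pounds five ounces of beavers"
print(f"WER: {word_error_rate(truth, model):.1%}")  # 10.0% (1 of 10 words)
```

Running checks like this on a held‑out sample before automating a pipeline gives a concrete error rate to compare against the ~1% human‑level threshold the episode discusses.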

Credits: episode hosts Kevin Roose and Casey Newton; guests Dean Ball and Mark Humphries.