Overview of Managing the Risks of Spaceflight, 40 Years After Challenger
This Science Friday interview (host Ira Flatow) features former NASA astronaut Jim Wetherbee reflecting on what the space program has learned about risk since the 1986 Challenger disaster, how organizational culture and leadership shape safety, and how those lessons apply to modern crewed missions such as Artemis II. Wetherbee, a six-time shuttle flyer and former director of flight crew operations, emphasizes operator control, institutional memory, and the interplay of rules, principles, and human judgment in managing hazardous operations.
Guest background
- Jim Wetherbee: former NASA astronaut (six shuttle flights, five as commander), former director of flight crew operations, and author of Controlling Risk: Thirty Techniques for Operating Excellence.
- Host: Ira Flatow (Science Friday).
Key topics discussed
- The Challenger disaster (1986): its cause (an O‑ring seal in a solid rocket booster that failed in cold launch temperatures) and its impact on NASA risk management.
- How accidents (Apollo 1, Challenger, Columbia) drive culture change, and why that change can erode over generations.
- The role of leadership and institutional memory in safety.
- Operator vs. manager/engineer perspectives on risk and system design.
- The balance between automation and human judgment; why humans must remain in the loop for critical contingencies.
- Commercialization of spaceflight and whether private operators change the risk calculus.
- Readiness for Artemis II (upcoming crewed lunar‑flyby mission) and how crews prepare and control risk.
Main takeaways
- Controlling risk is distinct from merely accepting it: operators must have ways to control and mitigate hazards, not just tolerate them.
- Leadership matters. Safety culture improves after accidents when leaders elevate operational voices, but it can degrade as institutional memory fades and leadership attention wanes.
- Include operators in decision-making. After Challenger, the Rogers Commission recommended elevating the influence of flight operations so that the concerns of pilots, crews, and flight controllers carry more weight.
- Use both rules-based and principles-based approaches: procedures/rules are necessary, but operators also rely on principles, judgment, and techniques to control unforeseen risks in real time.
- Humans remain essential despite automation: computers excel at routine precision, but humans provide judgment, intuition, and the ability to improvise when systems fail.
- Commercialization doesn’t automatically change the physics of risk—what matters is leadership and whether organizations instill a safety culture that values operator input and long-term risk control over short‑term goals.
- Institutional memory is best preserved through stories and vicarious learning (training, debriefs, sharing past tragedies), because only a subset of people will ever learn experientially.
Notable quotes & paraphrased insights
- “We never really want to accept risk. What we want to do is control the risk as an operator.”
- “There are only two ways humans learn: experientially, or vicariously through the experience of others — so tell the stories.”
- “If you really understand the system, when bad things happen you will be able to figure out the answer, even if it’s not in the checklist.”
- “Automation is great 98% of the time; the human must be ready for the other 2%.”
Artemis II — crew readiness & risk control
- Wetherbee reports that his conversations with Artemis II crew members (including commander Reid Wiseman and pilot Victor Glover) indicate they are well trained and aware of the risks.
- Crews participate in cross‑organizational meetings, raise operational concerns, and train to take manual control if needed; examples include design feedback such as windows that aid visual control and contingency alignment during reentry.
Practical recommendations (for organizations operating in high‑risk domains)
- Preserve institutional memory: regular storytelling, formal debriefs, training that includes lessons from past accidents.
- Elevate operator/flight operations input in decision chains—ensure operational voices influence go/no‑go decisions.
- Combine rules/procedures with principles-based training so operators can improvise safely when procedures don’t cover an anomaly.
- Select and sustain leaders who visibly prioritize safety and operational rigor over schedule/throughput pressure.
- Maintain human capability in critical phases: design systems that keep operators in the loop for off‑nominal events and train them to intervene.
- For commercial entities: embed safety culture early; market incentives alone won’t replace leadership and robust risk governance.
Bottom line
Wetherbee’s central message is that managing the risks of spaceflight requires more than technology or checklists: it demands leaders who preserve lessons from tragedy, systems that integrate operator judgment with procedures, and organizational practices that keep humans capable and empowered to control the rare but catastrophic events that automation cannot fully prevent.
