Overview: How scientists predict big winter storms (Shortwave from NPR)
This Shortwave episode explains why meteorologists were able to warn tens of millions of people about a massive, multi-thousand-mile winter storm days in advance. Host Regina Barber and NPR climate reporter Rebecca Hersher trace that forecasting capability to modern computer weather models, decades of continuous Earth observations (satellites, balloons, radars, ships, planes, and buoys), and coordinated, publicly supported data systems. They also flag how funding and staffing cuts could threaten that capacity.
Key takeaways
- The recent winter storm affected roughly half of the U.S., hitting at least 29 states with heavy snow, ice, high winds, and widespread power outages.
- Long lead-time forecasts (several days) are now common because of advances in computer weather models and extensive observational data.
- Multiple models are used together (e.g., the “European” ECMWF model alongside others); forecasters typically blend them via ensembles or weighted averages because different models excel at different scales and phenomena.
- High-quality forecasting depends more on good data than on raw computing power: data must be plentiful, granular (spatially and vertically), and continuous over long periods.
- Key data sources: satellites, weather balloons, radar, aircraft, ships, ocean buoys, and long-term observational records.
- Much of this data and the modeling infrastructure are publicly funded; recent staffing shortages and proposed budget cuts to NOAA, NASA, and federally funded labs like NCAR risk degrading forecasting ability if the cuts go forward.
- As weather becomes more extreme, losing observational capacity or research support would make it harder to maintain current forecast accuracy and lead times.
How the models work (concise)
- Models numerically simulate the atmosphere’s behavior (clouds, winds, temperatures, pressure) using physical equations.
- They are fed with massive observational datasets and produce probabilistic scenarios (e.g., likely snow vs. chance of rain).
- Ensembles of different models (and of perturbed runs of the same model) help forecasters assess uncertainty and produce usable forecasts for regions and cities; a toy sketch follows this list.
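
To make the ensemble and probability points concrete, here is a minimal Python sketch of the idea (my own illustration, not any agency's actual system): it perturbs an uncertain initial condition, runs a toy "model" forward many times, and summarizes the runs as an ensemble mean, spread, and an exceedance probability. The `toy_model` function and all numbers are assumptions for illustration only.

```python
import random
import statistics

def toy_model(initial_temp_c: float, hours: int) -> float:
    """A stand-in 'model': cools the air slightly each hour with random noise.
    Real models integrate physical equations; this is purely illustrative."""
    temp = initial_temp_c
    for _ in range(hours):
        temp += -0.1 + random.gauss(0.0, 0.3)  # slow cooling plus chaotic noise
    return temp

# Ensemble: many runs from slightly different (uncertain) initial conditions.
random.seed(42)
observed_temp_c = 1.5  # uncertain measurement of the current state
members = [
    toy_model(observed_temp_c + random.gauss(0.0, 0.5), hours=72)
    for _ in range(100)
]

mean = statistics.mean(members)
spread = statistics.stdev(members)  # larger spread = less forecast confidence
p_freeze = sum(t <= 0.0 for t in members) / len(members)  # members at/below freezing

print(f"72 h ensemble mean: {mean:.1f} C, spread: {spread:.1f} C")
print(f"Chance of sub-freezing temps (snow rather than rain): {p_freeze:.0%}")
```

The design point mirrors the list above: no single run is trusted; the distribution across members is what turns a deterministic simulation into a probabilistic forecast ("70% chance of snow").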
Why observational data matters
- Plentiful: the atmosphere is complex; many measurements are needed to capture the relevant states and interactions (a small numerical illustration follows this list).
- Granular: observations are needed across space (land, sea, air) and vertically through the atmosphere.
- Continuous: long time series (decades) are crucial to identify patterns, especially for rare extreme events.
- Public infrastructure (government satellites, buoys, balloon launches) provides most of these datasets.
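
As a back-of-the-envelope illustration of the "plentiful" point (my own sketch, not from the episode): if instrument errors are independent, averaging n observations shrinks the uncertainty of the estimated state by roughly a factor of sqrt(n), which is one reason dense observing networks pay off. All values here are hypothetical.

```python
import random
import statistics

random.seed(0)
true_temp_c = -2.0    # hypothetical true atmospheric state
sensor_error_c = 1.0  # each instrument's independent error (std dev)

# Typical error of the averaged estimate vs. number of independent observations.
for n in (1, 10, 100, 1000):
    errors = []
    for _ in range(2000):  # repeat the experiment to estimate typical error
        obs = [true_temp_c + random.gauss(0.0, sensor_error_c) for _ in range(n)]
        errors.append(statistics.mean(obs) - true_temp_c)
    rmse = statistics.mean(e * e for e in errors) ** 0.5
    print(f"n={n:5d} observations -> typical error ~ {rmse:.2f} C "
          f"(theory: {sensor_error_c / n ** 0.5:.2f} C)")
```

This also echoes the "garbage in, garbage out" quote below: a model's initial-condition estimate can only be as good as the volume and quality of observations feeding it.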
Risks and policy context
- The episode notes real-world problems: staffing shortages that have interrupted weather-balloon launches, and proposed budget cuts that could constrain NASA, NOAA, and research centers.
- Reduced data collection or research capacity would likely degrade forecast lead times and accuracy, particularly as extremes increase in frequency and intensity.
Notable quotes
- Kevin Reed (Stony Brook University), paraphrased: predicting an event in New York several days ahead “wasn’t something we could do 50 years ago”; the improvement reflects coordinated observations and better models.
- Rebecca Hersher: “Garbage in, garbage out,” emphasizing that model output quality depends on input data quality.
Practical implications for listeners
- Longer forecast lead times (several days) are possible today because of sustained investment in observations and modeling; those lead times help people prepare (e.g., buy shovels and salt, protect power-dependent equipment).
- Continued public support for weather and Earth-observing programs matters for community safety and infrastructure planning.
Additional resources mentioned
- Related Shortwave episodes: improved storm prediction in the tropics; how Santa Ana winds affect California fire season (linked in the show notes).
