Has the US fallen behind in numerical weather prediction? A response from a NOAA scientist.

Another guest blog courtesy of Thomas M. Hamill. The thoughts and views expressed in this blog do not represent those of WJLA & the StormWatch7 Weather team. - Bob Ryan


Cliff Mass of the University of Washington posted part one of a two-part series on how the US has fallen behind in numerical weather prediction. I think he’s done a great service to our enterprise by reminding us of the importance of numerical guidance. You can look at a satellite image, but that depiction of the clouds right now will not tell you much about whether it will snow or rain tomorrow. For that you need data assimilation, whereby the satellite data is used to adjust our estimate of the current state of winds, temperature, and humidity. And you need numerical weather prediction, a codification of our understanding of how the basic laws of physics such as Newton’s laws apply to the atmosphere. Those satellites and radars can be very expensive, and they’re just the first step in making a forecast for you. Thanks to Cliff for reminding us of the rest.

I’d like to add to the debate that Cliff started about the state of numerical weather prediction in the US. While I am a NOAA employee, this is my own opinion and doesn’t reflect the opinion of NOAA, the Department of Commerce, or the administration. Like Cliff, I’ve worked in this field for several decades, and like Cliff, I itch to see more rapid progress, to see the US become a world leader in weather prediction once again. Better weather prediction pays for itself many times over in improved decisions and saved lives and property.

Cliff is absolutely right to highlight the issue of NOAA’s restricted computational resources. Doing a back-of-the-envelope calculation, the European Centre for Medium-Range Weather Forecasts (ECMWF) has 10 to 100 times more CPUs humming than we do for its 1-day to 10-day forecasts. Time and again, we’ve learned the value of improving the “resolution” of our numerical models, say, describing the weather on a grid where the points are separated by 15 km instead of 30 km. But that’s computationally expensive; doubling the resolution in both the north-south and east-west directions increases the computations by a factor of eight, the extra factor of two coming from marching forward in time by steps half as long as before.
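The back-of-the-envelope scaling above can be sketched in a few lines. This is a simplified illustration of my own, not a NOAA cost model: assume the work is proportional to the number of grid points (which grows as the square of the horizontal refinement) times the number of time steps (which grows linearly, because stability requires shorter steps on a finer grid).

```python
def relative_cost(dx_km, base_dx_km=30.0):
    """Rough computational cost relative to a model with base_dx_km grid spacing.

    Two horizontal dimensions contribute ratio**2 more grid points;
    the shorter time step contributes one more factor of ratio.
    """
    ratio = base_dx_km / dx_km   # e.g. 30 km -> 15 km gives ratio = 2
    return ratio ** 2 * ratio    # more points in x and y, plus shorter steps

print(relative_cost(15.0))  # halving the spacing: 2 * 2 * 2 = 8x the work
print(relative_cost(7.5))   # quartering it: 4 * 4 * 4 = 64x the work
```

In practice the vertical grid is often refined too, which adds yet another multiplicative factor, so the real cost of higher resolution climbs even faster than this sketch suggests.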

Typical GFS Model Output

Still, there’s a lot more to improving the forecast than increasing the resolution. Typically when we increase the resolution, the change permits us to notice deficiencies in the model that we didn’t worry about before. Maybe at the coarser resolution we didn’t expect the model to get water-to-land breezes in the Chesapeake forecast very well. Then we up the resolution and we notice that, say, there is a sea breeze in the afternoon, but it’s too strong. That extra resolution may really show its full benefit after the model is re-coded somewhat, perhaps adjusting how the atmosphere interacts with the more carefully specified terrain.

This is a labor-intensive process, continually maintaining a weather forecast modeling system. To really understand what’s wrong with your model, you need a lot of people looking at its output. You need experts in land-atmosphere interactions to diagnose and correct those problems. You need experts in ocean-atmosphere interactions; in the way the clouds are predicted in the model, down to the level of ice particles and rain droplets. And these experts need to understand each other’s work, for perhaps the errors in land-surface interactions might affect the errors in predicting low cloud cover.

With so many talented scientists needed to maintain and improve upon a single modeling system, we in NOAA need to decide whether it really is in our best interest to continue to develop multiple independent modeling systems as we have done in the past. Just for our global prediction, we have GFS, NMM-B, FIM, NIM, and cubed-sphere models. The acronyms aren’t important, but that we are developing many instead of one or two models is important. Our precious computer power is split many ways to test and simply maintain many models. Our dwindling staff is asked to maintain and improve many models rather than one or two, and consequently the models are not generally state-of-the-art. This same proliferation of models occurs across agencies in the US government. The US Navy has its own set of numerical models. The National Center for Atmospheric Research has theirs. Ditto universities, NASA, and the Department of Energy.

In comparison, with so many people working on a common system, ECMWF staff can delve deeper into the guts of the model and how its components interact. Their staff is more focused on the intricacies of how to develop better methods for representing the weather that happens in between the individual grid points, or the methods for exploiting the satellite data as effectively as possible in the data assimilation process, or the methods for constructing ensembles of forecasts and probabilistic weather forecast guidance. As Cliff showed, the result is a better forecast, no matter whether you’re looking at the skill of extreme precipitation or surface temperature forecasts or hurricane tracks.

How did we in the US get so wedded to this multiple-model idea? First, working together on a common model is very difficult, especially when the collaborators are spread across the country. Configuration management of the forecast model’s code would be challenging, as perhaps dozens of groups would be trying to simultaneously improve different parts of it. Also, all things being equal, two good weather prediction models are better than one, for then you have two pieces of data to evaluate. Consequently, we convinced ourselves that we were better off with many models. However, our yardstick for measuring success shouldn’t be that the combination of models A and B can beat either A or B individually. Instead, our yardstick is what we might have done had the resources for A and B all gone into, say, model A (and ECMWF is a good surrogate for that).

The other reason for multiple models in NOAA and the US, I think, is our too-reductionist way of looking at the weather prediction process. Let’s say tomorrow there’s a record-setting flood somewhere in the western US, and none of the weather prediction models did a good job of forecasting it. If the past is a guide, what NOAA may do is to form a team to work on the heavy precipitation forecast problem in the western US. That team chooses a model and then focuses on how to make that particular model better at forecasting precipitation. In the end they may have a new model that does somewhat better at precipitation forecasting in the western US but which is no better for temperature or wind forecasting, or for that matter precipitation forecasting in the eastern US! But users clamor for a better western US precipitation forecast, so perhaps their new modeling system gets piled onto the suite of existing modeling systems. One more modeling system for NOAA to maintain.

What’s of course wrong with this way of thinking is that the weather is interconnected. Today’s eastern US heat wave may well have not happened were it not for unusually active thunderstorm clusters in the tropical Indian Ocean a week ago. Hence, focusing on local heat-forecasting issues may be wasted time if the real deficiency of your model is its inability to model those Indian Ocean thunderstorm clusters. When we split up our efforts by process and try to develop better hurricane models, better severe storm models, better precipitation forecast models, better aviation forecast models, our reductionist way of doing forecast model development may be self-defeating.

Here’s an interesting story. ECMWF’s hurricane track and intensity forecasts are currently the standard for the rest of the numerical weather prediction enterprise. I asked one of their staff a few years ago how many people they had working on improving hurricanes in their model. The answer, at least at the time: NONE. They had scientists working to improve the representation of thunderstorms in their one model, scientists working to improve the physical description of air-sea interactions in their one model, and so on. Working on these more general problems improved their hurricane forecasts, for (of course!) hurricanes are organized thunderstorms and hurricanes get their energy from the warm ocean. Like a dim star that’s more easily visible with peripheral vision, they were smart enough not to focus on hurricanes directly, smart enough to know that developing a separate new model for hurricane forecasting was counterproductive.

So: it may be boring to repeat what has worked elsewhere, but we don’t need to be radical in the US to improve. ECMWF has shown us what works. Everyone pitching in, working on a common modeling system.

Why don’t we just buy data from the Europeans and be done with it? Save the US taxpayers a lot of money, right? Well, no, for a few reasons. First is that our government has an open-data access policy. Your taxpayer money paid to buy the supercomputers, the satellites and radars, paid our salaries, and the US government then makes sure that the weather data is freely available to all. ECMWF may share data with the NWS for internal use, but the US government can’t then share that data with the rest of you. All the TV stations out there, all the private value-added weather companies, all you internet surfers, right now you all get NWS data basically for free (having paid your taxes). In the brave new world, you would have to pay ECMWF for access to their forecasts, and we’re talking a lot of money, not pocket change. A second reason is national security. Suppose the Europeans didn’t approve of what the US was doing in some military operation and removed US access to the ECMWF forecasts. Then where are we as a country, having let our weather-prediction capacity wither and die? So, we need a homegrown weather prediction capability. Every major country around the world recognizes this, from Korea to Brazil.

Let’s come back to the issue of NOAA’s limited supercomputing facilities. Why has it been so difficult for NOAA to upgrade? Part of this is a generally tight federal budget. Still, for the last five or so years, every other part of NOAA has been squeezed because of cost overruns in fielding a next-generation polar-orbiting weather satellite. While not having that satellite ready would be detrimental to NOAA, we need to learn a lesson from this and do better the next time. Before signing on to deploy complicated and expensive new satellites, we need to conclusively demonstrate that the data they provide will improve weather predictions in proportion to their expected costs. Similarly, we ought to be able to evaluate the return on investment from a much bigger supercomputer. Right now, the supercomputers are far, far cheaper and hence easier to justify. As Cliff mentioned, for perhaps a few percent of the cost of the new satellite, we could build back the computational capacity we need to compete with the Europeans.

NOAA Computer Facility

Let me finish by saying that despite the challenges that Cliff mentions, I’m extremely proud to work for NOAA. Bob Ryan and his team add a lot of value and context to what NOAA provides, but Bob can’t do what he does without us. Weather prediction is an incredibly complicated enterprise. NOAA deploys satellites, weather balloons, radars, and more. Our data assimilation algorithms synthesize this data. Our models and our supercomputers crank out the numerical guidance 24/7. Our forecasters are always on the job and bust their humps in ways you could not believe when severe weather is on the way. All of this costs taxpayers pennies a day. And the data is free to all, without advertising.

Thanks to Cliff Mass for his post and to Bob Ryan and the Storm Watch 7 team for the chance to contribute to this discussion.
