Climate models may never produce predictions that agree with one another, even with dramatic improvements in their ability to imitate the physics and chemistry of the atmosphere and oceans. That's the conclusion of a report by James McWilliams, an applied mathematician and earth scientist at the University of California, Los Angeles. The mathematics of complex models guarantees that they will differ from one another, he argues. Therefore, says McWilliams, climate modelers need to change their approach to making predictions.
I had been under the impression that this was already well known. It's called "sensitive dependence on initial conditions": if we know the weather to within a certain precision ε right now, then after one day we know it to within kε, after two days to within k²ε, and so on, where k is some constant larger than 1. More technically, the Lyapunov exponent of the weather is positive.
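This exponential error growth is easy to see numerically. Here's a minimal sketch using the logistic map x → 4x(1-x) as a stand-in chaotic system (my choice of toy system, not anything from the article): the gap between two trajectories started ε apart grows roughly like k^t ε until it saturates at the size of the attractor.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic
# map x -> 4x(1-x). This map is a standard chaotic toy example (its Lyapunov
# exponent is ln 2 > 0), chosen here purely for illustration.

def logistic(x):
    return 4.0 * x * (1.0 - x)

def separation(x0, eps, steps):
    """Track the gap between two trajectories started eps apart."""
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        gaps.append(abs(a - b))
    return gaps

gaps = separation(0.2, 1e-10, 40)
# The gap roughly doubles each step (k ~ 2 for this map) until it saturates.
for t in (0, 10, 20, 30):
    print(t, gaps[t])
```

With an initial gap of 10⁻¹⁰, about thirty-odd doublings are enough to make the two trajectories completely unrelated, which is why extra digits of precision buy you only a few extra days of forecast.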
I've probably read several dozen versions of the story that is usually told about Edward Lorenz's toy model of the weather. (It would be interesting to see a web page that gives the various ways this particular story has been told, something like this page which gives over a hundred versions of the story of the young Gauss summing 1 + 2 + ... + 100.) The story, if you're not familiar with it, goes like this: Lorenz had a toy model of the weather in his computer, a system of differential equations. (I want to say it was a system of three equations, but I might be confusing it with the Lorenz attractor. Then again, they may actually be the same system.) It was the sixties, so computers were slow. Lorenz had his computer print out the position of the system in phase space at times 0, 1, 2, and so on; one day he was looking at one of these printouts and saw a pattern he wanted to investigate. He fired up the computer again, typed in a line from the printout, and told it to evolve the system from that point. The system evolved differently in the second run than in the first; Lorenz at first thought it was a mistake, but eventually realized that the figures on the printout were rounded versions of the numbers the computer actually held, so retyping them introduced a small error, which was quickly amplified.
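The printout experiment is easy to reenact. Here's a sketch using the three-equation Lorenz-63 system with the standard textbook parameters (σ = 10, ρ = 28, β = 8/3) and a crude forward-Euler integrator; I'm not claiming these match Lorenz's original run, only reproducing the effect: restart the system from a copy of the state rounded to three decimal places, as the printout would have shown it, and watch the two runs part ways.

```python
# Reenacting the rounded-printout experiment on the Lorenz-63 system.
# Parameters and the Euler integrator are illustrative choices, not
# Lorenz's original setup.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # one forward-Euler step of the Lorenz-63 equations
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

def run_pair(a, b, steps):
    # evolve two states side by side, tracking the largest gap seen
    max_gap = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        max_gap = max(max_gap, max(abs(p - q) for p, q in zip(a, b)))
    return a, b, max_gap

warm = run((1.0, 1.0, 1.0), 1000)           # settle onto the attractor first
rounded = tuple(round(v, 3) for v in warm)  # "typing in the printout"

a, b, max_gap = run_pair(warm, rounded, 3000)
print("initial gap:", max(abs(p - q) for p, q in zip(warm, rounded)))
print("largest gap during the run:", max_gap)
```

The initial discrepancy is at most half a unit in the third decimal place; by the end of the run the two trajectories differ by more than the width of the attractor's lobes, i.e., they bear no resemblance to each other.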
The MathTrek article is about climate, not weather, though, and it addresses this point (even anticipating my complaint about Lorenz!). Still, my instinct would have been -- even before reading this -- that the climate is a complex system just like the weather. The Lyapunov time -- the reciprocal of the Lyapunov exponent -- is much larger for climate than for weather. (I am quite confident saying that the average high temperature in Philadelphia next August will be about eighty-three degrees, and I am confident enough in this that when I take my air conditioner down when the summer ends, I will store it in my closet, instead of selling it. But I have a much worse idea what the weather will be on August 2, 2008.) Roughly speaking, climate is the average of weather, and averages change much less quickly than the things being averaged. I am confident that the Phillies will win about 29 of their remaining 55 games and just miss the playoffs, which is something I've gotten quite used to. (I'd be pleasantly surprised if they prove me wrong.) I have no idea whether they'll win tomorrow. (I would have said "I have no idea whether they'll win today," but they're up by four runs right now. It's only the fourth inning, though, so they have time to fall apart.)
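The "climate is the average of weather" point can also be seen in a toy system. Here's a sketch (again using the logistic map as a stand-in chaotic system, my choice rather than anything from the article): two trajectories that disagree completely on any given "day" still have nearly identical long-run averages, because the average is a property of the invariant distribution rather than of any particular orbit.

```python
# Two logistic-map trajectories that are scrambled relative to each other
# pointwise ("weather") still agree closely on their long-run mean
# ("climate"). The exact invariant-measure mean for this map is 1/2.

def trajectory(x, steps):
    out = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.2, 100000)
b = trajectory(0.2 + 1e-10, 100000)

# "Weather": the value on one particular day -- unrelated between the runs.
print("day 50000:", a[50000], b[50000])

# "Climate": the long-run average -- nearly identical, and close to 1/2.
avg_a = sum(a) / len(a)
avg_b = sum(b) / len(b)
print("averages:", avg_a, avg_b)
```

This is the Lyapunov-time point in miniature: the pointwise values decorrelate after a few dozen steps, but the averages converge toward the same number as the window grows.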
The actual study is available here. Apparently the state of the art in climate and weather forecasting is to run a variety of different models on the same initial data; if they end up giving similar results, you can be fairly confident in the forecast, while if they vary widely, you know the forecast isn't so good. This is an experimental way of determining how sensitive the forecast is to the assumptions of the models. I'm not a meteorologist, but it would be interesting to see this applied to everyday weather forecasts. I'm not sure how useful it would be for temperature; would I take a forecast high of "94, plus or minus 3" any differently than a forecast high of just "94"? Probably not. But for, say, snowfall estimates it could be incredibly useful. I don't care so much whether they say there will be "two inches of snow". What I really want to know is whether there's a chance of some amount of snow that will seriously inconvenience me (say, over six inches). But I doubt you'll hear this on the TV news, because "we don't really know what the weather is going to be" kills the ratings -- even though "everyone knows" that the TV weather people don't really know what the weather is going to be. I've been known to use something like this "ensemble forecasting" myself -- I go to a bunch of different weather forecasts and see what they say. I'm not sure it actually helps, but it makes me feel better, usually because checking multiple weather web sites means I'm procrastinating.
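A toy version of the ensemble idea can be sketched in a few lines. Everything here is invented for illustration (the "model" is just a chaotic logistic-map run, and the mapping of its output to inches of snow is made up): perturb the initial condition many times, run the model on each member, and report the spread and the probability of the inconvenient event instead of a single number.

```python
# A toy ensemble forecast: many runs from slightly perturbed initial
# conditions, summarized as a mean and a threshold probability.
# The "model" and the snowfall mapping are invented for illustration.

import random

def model(x, steps=200):
    # stand-in forecast model: a chaotic logistic-map run
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

def ensemble_forecast(x0, n=500, noise=1e-4):
    random.seed(0)  # fixed seed so the sketch is reproducible
    outcomes = []
    for _ in range(n):
        # perturb the initial condition (clamped to the map's domain)
        x = min(max(x0 + random.gauss(0.0, noise), 1e-6), 1.0 - 1e-6)
        snow_inches = 12.0 * model(x)  # fake mapping of output to snowfall
        outcomes.append(snow_inches)
    mean = sum(outcomes) / n
    p_over_6 = sum(1 for s in outcomes if s > 6.0) / n
    return mean, p_over_6

mean, p = ensemble_forecast(0.3)
print(f"ensemble mean: {mean:.1f} inches, P(> 6 inches) = {p:.0%}")
```

The point of the exercise: because the model is chaotic, the tiny perturbations fan out into a wide distribution of outcomes, and the honest forecast is that distribution -- "a 40% chance of more than six inches" -- rather than any single member of the ensemble.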