Can Climate Science Live Up to Expectations?

There’s a huge demand for local and regional climate projections. Policy-makers, planners and everyday people all over the world are looking to scientists to provide “data” on the future climate of their region, their town, their coast, their water supply, in order to better inform long-term decisions. This demand is being met by the fast-spreading institution of “climate services”. In the past few years, there have been many new national and international initiatives and forums, like the recent Pacific Islands Climate Services Forum, with the aim of getting scientists, government agencies and the private sector to “supply” these services.

Pacific Islands Climate Services Forum (Jan 2013)

The initiatives face some real-world obstacles. For one, whether better “data” can ensure, or even contribute much to, good decision-making is itself an open question. Before we can even think about the decisions themselves, we need to deal with the expectations surrounding the “data”.

There is a gap between what science can deliver and what people expect science to deliver, as Mike Hulme discusses in Why We Disagree About Climate Change.

The recent RealClimate post on regional climate modelling illustrates the size of that gap. It describes a couple of recent publications, summarized in Science (Kerr, 2013), which question the effectiveness of regional models based on comparisons of model output with climate observations. The RealClimate post rightly takes the articles to task, reminding everyone that no climate model, regional or global, should be expected to recreate the exact year-to-year variation in the weather. The system is too chaotic and too sensitive to the initial conditions in the model. So models can describe the frequency and magnitude of climate variability, but “these fluctuations are not synchronized with the real world.”
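
To make that point concrete, here is a minimal sketch in Python. The numbers are invented for illustration, not output from any real model or observational record: two series share the same warming trend and the same variability statistics, yet their year-to-year fluctuations are uncorrelated, which is exactly what we should expect when comparing a free-running climate model to observations.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2011)

# Toy example: a shared long-term warming trend plus independent
# year-to-year "weather" noise for the observations and for a model run.
trend = 0.02 * (years - years[0])           # illustrative trend, degrees per year
obs = trend + rng.normal(0, 0.15, years.size)
model = trend + rng.normal(0, 0.15, years.size)

# The two series share the same long-term statistics...
print("mean, std (obs):  ", obs.mean().round(2), obs.std().round(2))
print("mean, std (model):", model.mean().round(2), model.std().round(2))

# ...but their year-to-year fluctuations are not synchronized, so the
# correlation of the detrended values is low by construction.
corr = np.corrcoef(obs - trend, model - trend)[0, 1]
print("correlation of detrended fluctuations:", round(float(corr), 2))
```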

The gap is common in climate change science, but hardly unique to it. Think of going to the doctor with a sprained ankle. You hope for a clear diagnosis and a timetable for recovery. Instead, you receive a vague answer on a simple three-point scale about the severity of the sprain, and a range in weeks for the likely recovery time.

The expected recovery time from the sprain is the medical equivalent of a multi-model ensemble prediction: we can’t tell you exactly when the ankle will heal or how much the climate will change, but we can tell you that, given the input data, it “should” fall in this range. It is, statistically speaking, possible that it will fall outside that range, because there is a chance that the data on which the range was based did not capture the full range of possible outcomes.
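
As a rough sketch of what such an ensemble range is, the snippet below takes a handful of hypothetical projected warming values (invented for illustration, not drawn from any real ensemble) and reports the interval in which the outcome “should” fall. The interval only reflects the spread of the models in hand, so reality can still land outside it.

```python
import numpy as np

# Hypothetical multi-model ensemble of projected warming by 2050 (degrees C).
# These values are made up for illustration only.
projections = np.array([1.4, 1.7, 1.9, 2.1, 2.3, 2.6, 1.6, 2.0])

# The 5th-95th percentile range across the ensemble members.
low, high = np.percentile(projections, [5, 95])
print(f"Projected warming 'should' fall between {low:.1f} and {high:.1f} degrees C")

# The range reflects only the models we happen to have. If the ensemble does
# not sample every possible outcome (it never does), the real-world value can
# still fall outside this interval.
```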

As a patient worried about being able to walk, you are quite likely to want the “expert” to do more definitive tests to improve the answer. However, the high-technology test, be it an MRI or a new climate model, is not guaranteed to radically improve the diagnosis of what’s happened or the prognosis. There are many things we can’t know with 100% confidence.

Many of the potential users of climate change projections, not educated on the fine technical points of climate modelling or statistics, often expect answers with a precision that scientists and our models will never be able to provide. This unreasonable expectation is embedded in the very language that is used. At the Pacific Islands Climate Services Forum in January, I was told many times about the need for “data”, a word I’ve intentionally placed in quotes in this post, for decision-making. The word “data” implies a precise measurement. Yet what scientists can provide is a “prediction”, which comes with uncertainty, itself a combination of known and unknown elements.

It is clearly important to develop and properly evaluate methods for regional climate prediction. Even with the uncertainty in future predictions, some of which is irreducible, the information can still be of use in decision-making. We are, after all, able to decide whether it is safe to start running again after an ankle sprain, despite imperfect knowledge of the exact state of the ligaments, muscles and tendons.

Scientists, however, need to recognize that the core challenge is not just improving models, but improving understanding of what can be modelled. Otherwise, scientists and decision-makers will be at cross purposes.

If you’re interested in more of these ideas, I recommend the third chapter – “Performance of Science” – of Hulme’s book. I assign it to my undergraduate students every year.

Reposted with minor edits from the author’s blog, Maribo

Comments:

  1. I like this as a starting point for a discussion, but let me start with a quibble instead:

    Yet what scientists can provide is a “prediction”, which comes with uncertainty, itself a combination of known and unknown elements.

    This is a reasonable thing to say if you don't follow the debates closely. In fact, people have parsed things further, distinguishing between a "prediction" and a "projection". In this language, climate science does not provide predictions at all, not even fuzzy ones, but rather projections: conditional predictions, conditioned on emission scenarios.

    After all, it is not the role of science to predict collective behavior. More sensibly, it should be the role of science to act as the sensors for society, to provide information to modify collective behavior.

    Scenarios, in other words, are a pretty important part of any climate prognosis.

    Now, if you're unaware of that particular linguistic tussle, it's perfectly reasonable to say what you said. But given the fallback to "projection" in discussing the scenario-dependence of the prognosis, missing that distinction tends to confuse the point.

    This is political thinking (words gradually attain quite narrow, secular, culturally specific meanings and even agendas), not scientific thinking (where words are carefully defined in one context and can be redefined for another purpose in another context, the important fact being the conveying of the meaning, not the choice of symbols that do the work).

    Again, this is nothing against your point, which indeed I agree with. But it's also a bit unnerving that it's the first comment that struck me.

  2. The relationship between science and democracy is indeed fraught. And we are not supposed to say anything about it, else we jeopardize our position as scientists and become "mere" political actors. At least that seems to be the upshot of Roger Pielke Jr's career.

    He takes a fairly extreme version of that position this week, calling any specific warnings a "power grab".

  3. The first comment is even more telling than you realize. I use "projection" in all scientific work involving climate futures, and am extra, one might argue annoyingly, vigilant about the proper terminology with students and in papers I review. Here I chose to use "prediction", not because I think it is the correct term, but because it is in my experience the term used by people asking for future "data", the people to whom this post was originally aimed, and I wanted to avoid the linguistic explanation you gave. But you are right, and rather than talking about the use of the word "data", I could, and probably should, have framed the whole example around the common use of the word "prediction" rather than the more accurate word "projection". It is another linguistic example of the gap between what we provide (projections) and what people want us to provide (predictions).

  4. Good stuff. It's a difficult problem. Public bodies, as you say, clamour for this information as a starting point prior to considering strategy. Model-makers do not always push back firmly enough with their explanations of the limitations of what they're doing.

    This sort of worries me, in that a lot of money is being thrown at dubious modelling ideas claiming too much (maybe I'm guilty of this...), but I'm also aware that you don't get successful new solutions without about 99% of attempts failing. Funding bodies are, not unreasonably, highly unlikely to want to sell that idea to the public.

    But the specific case of regional projects seems relatively clear: they won't work. Someone should be telling users very clearly! Our problem is uncertainty: we need to change the whole responsive structure of governance to deal with that, not get stuck in thinking our current institutions can stick to this dreadful approach of "academic makes policy-relevant statement; public body considers; public body perhaps begins to use modelling approach in planning..."

    None of this is new. Nice quote from my dept's famous prof, Stan Openshaw:

    Without any formal guidance many planners who use models have developed a view of modelling which is the most convenient to their purpose. When judged against academic standards, the results are often misleading, sometimes fraudulent, and occasionally criminal. However, many academic models and perspectives of modelling when assessed against planning realities are often irrelevant. Many of these problems result from widespread, fundamental misunderstandings as to how models are used and should be used in planning. (Openshaw 1978 p.14)

  5. I think it is important in this context to distinguish between seat-of-the-pants models and physically well-constrained computational models, which I'd call "simulations".

    Weather prediction codes nowadays do a pretty good job a week or so out and better than chance two weeks out; before the codes existed (even after the math had been worked out) nobody could do much better than a couple of days.

    The relationship between climate models and weather models is not easy to summarize briefly, but there is a basis of highly tested, well-constrained and successful calculation underlying the climate calculations. (Many other fields do not have a comparable tool.)

  6. Regional projects can and do "work", in that they provide an envelope of likely future climates for the region. In mountainous regions or on islands, regional modelling or even statistical downscaling can clearly improve the model representation of rainfall. The issue is what people expect from such projects. Scientists need to improve how they communicate the process, and hence the uncertainty, and many users need to calibrate their expectations.

