The initiatives face some real-world obstacles. For one, whether better “data” can ensure, or even contribute much to, good decision-making is itself an open question. Before we can even think about the decisions themselves, we need to deal with the expectations surrounding the “data”.
There is a gap between what science can deliver and what people expect science to deliver, as Mike Hulme discusses in Why We Disagree About Climate Change.
The recent RealClimate post on regional climate modelling illustrates the size of that gap. It describes a couple of recent publications, summarized in Science (Kerr, 2013), which question the effectiveness of regional models based on comparisons of model output with climate observations. The RealClimate post rightly takes the articles to task, reminding everyone that no climate model, regional or global, should be expected to recreate the exact year-to-year variation in the weather. The system is too chaotic and too sensitive to the initial conditions in the model. Models can describe the frequency and magnitude of climate variability, but “these fluctuations are not synchronized with the real world.”
This gap is common in climate change science, but hardly unique to it. Think of going to the doctor with a sprained ankle. You hope for a clear diagnosis and a timetable for recovery. Instead, you receive a vague answer on a simple three-point scale about the severity of the sprain, and a range in weeks for the likely recovery time.
The expected recovery time from the sprain is the medical equivalent of a multi-model ensemble prediction: we can’t tell you exactly when the ankle will heal or how much the climate will change, but we can tell you that, given the input data, it “should” occur in this range. It is, statistically speaking, possible that it will not occur in that range, because there is a chance that the data on which that range was based did not capture 100% of the range of possible experiences.
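To make the analogy concrete, here is a minimal sketch of what an ensemble-style answer looks like in practice. The numbers are purely illustrative placeholders, not output from any real climate model or medical study; the point is that the honest product is a range, not a single value.

```python
import statistics

# Hypothetical projections (e.g., degrees of warming by some future date)
# from five members of an ensemble. Illustrative values only.
projections = [2.1, 2.8, 3.4, 2.5, 3.0]

ensemble_mean = statistics.mean(projections)
low, high = min(projections), max(projections)

# The honest answer is the spread, not a single "exact" number.
print(f"Best estimate: {ensemble_mean:.2f} (range {low:.1f} to {high:.1f})")
```

Even this toy example shows why a decision-maker asking “which number is right?” is asking the wrong question: the range itself is the information, and a real-world outcome slightly outside it remains possible.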
As a patient worried about being able to walk, you are quite likely to want the “expert” to do more definitive tests to improve the answer. However, the high-technology test, be it an MRI or a new climate model, is not guaranteed to radically improve the diagnosis of what’s happened nor the prognosis. There are many things we can’t know with 100% confidence.
Many of the potential users of climate change projections, not educated on the fine technical points of climate modelling or statistics, often expect answers with a precision that scientists and our models will never be able to provide. This unreasonable expectation is embedded in the very language that is used. At the Pacific Islands Climate Services Forum in January, I was told many times about the need for “data”, a word I’ve intentionally placed in quotes in this post, for decision-making. The word “data” implies a precise measurement. Yet what scientists can provide is a “prediction”, which comes with uncertainty, itself a combination of known and unknown elements.
It is clearly important to develop and properly evaluate methods for regional climate prediction. Even with the uncertainty in future predictions, some of which is irreducible, the information can still be of use in decision-making. We are, after all, able to decide whether it is safe to start running again after an ankle sprain, despite imperfect knowledge of the exact state of the ligaments, muscles and tendons.
Scientists, however, need to recognize that the core challenge is not just improving models, but improving understanding of what can be modelled. Otherwise, scientists and decision-makers will be at cross purposes.
If you’re interested in more of these ideas, I recommend the third chapter – “Performance of Science” – of Hulme’s book. I assign it to my undergraduate students every year.