Is Climate Risk Systematically Understated?

Oreskes thinks so.

When applied to evaluating environmental hazards, the fear of gullibility can lead us to understate threats. It places the burden of proof on the victim rather than, for example, on the manufacturer of a harmful product. The consequence is that we may fail to protect people who are really getting hurt.

And what if we aren’t dumb? What if we have evidence to support a cause-and-effect relationship? Let’s say you know how a particular chemical is harmful; for example, that it has been shown to interfere with cell function in laboratory mice. Then it might be reasonable to accept a lower statistical threshold when examining effects in people, because you already have reason to believe that the observed effect is not just chance.

This is what the United States government argued in the case of secondhand smoke. Since bystanders inhaled the same chemicals as smokers, and those chemicals were known to be carcinogenic, it stood to reason that secondhand smoke would be carcinogenic, too. That is why the Environmental Protection Agency accepted a (slightly) lower burden of proof: 90 percent instead of 95 percent.
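What that move from 95 percent to 90 percent amounts to in practice can be sketched in a few lines (my own illustration, not from the EPA's analysis): the critical z value an observed effect must clear to be called "significant" drops, so less extreme evidence suffices.

```python
# Sketch: how the critical z value changes when the confidence
# level is relaxed from 95% to 90%.  Stdlib-only illustration.
from statistics import NormalDist

def critical_z(confidence, two_sided=True):
    """Smallest |z| declared significant at the given confidence level."""
    alpha = 1.0 - confidence
    tail = alpha / 2 if two_sided else alpha
    return NormalDist().inv_cdf(1.0 - tail)

print(round(critical_z(0.95), 2))  # 1.96
print(round(critical_z(0.90), 2))  # 1.64
```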

I think that the underlying point is valid but the statistical argument is a bit muddled. Statistical significance of what?

Science often reaches conclusions where statistics is uninformative. Suppose a guy shows up in the emergency room with various symptoms. The hospital makes its best guess as to what is going on. They have only a single sample. No statistics are possible. But the treatment is still based in science.

Comments:

  1. Pingback: Greg Craven’s viral climate ‘decision grid’ video – Stoat

  2. I thought the whole thing could charitably be called muddled. Her explanation of Type I and Type II errors (which she says are familiar to her readers: an improbable claim) is confusing, and of no particular relevance to the subject of climate science. She writes:

    "It also means that if there’s more than even a scant 5 percent possibility that an event occurred by chance, scientists will reject the causal claim."

    Which is wrong: the scientist will fail to reject the null hypothesis, which is subtly different from rejecting the causal claim. The causal claim is not rejected as false, it is just that there is insufficient basis to believe it, based on a statistical level of confidence a researcher decided to use during his or her design of the experiment. I know Oreskes knows her stuff: but this is the kind of basic error that may crop up when an inexpert like her (and possibly the NYT editor helping with the piece) decides to take up the difficult burden of explaining sciency-stuff to the public.

    The article was supposed to be a criticism of how scientists express their confidence in various theorems of climate science. Not one solitary example is given by Oreskes in that domain. She had the space to do so, but instead hypothesized that science (and presumably climate science) bases its approach to statistical testing in the long shadow of its ancient historical ties to religion, which is something she may well be able to offer an opinion about, as a historian, but which has minimal relevance to policy makers or the interested public in interpreting scientific claims as found, say, in the IPCC reports. That section of the article has a post-modern vibe that makes me gag, and surely makes the more educated deniers chuckle to themselves: "Is that the best argument the alarmists writing in their liberal totem can come up with?"

    Climate scientists should be able to speak up for themselves, rather than play the game through ardent, yet inexpert, public intellectuals like Oreskes.
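    The fail-to-reject point above can be made concrete with a small simulation (my own illustration, not from the thread): when a real effect is small relative to the sample size, a test at the 5% level will come up "not significant" most of the time, which is plainly not the same as showing the effect is absent.

```python
# Sketch: "failing to reject the null" is not "the effect is absent".
# We simulate experiments where a real effect exists but is small
# relative to the sample size, and count how often a two-sided z-test
# at the 5% level comes up "not significant" anyway.  Stdlib only.
import math
import random

def p_value(sample_a, sample_b):
    """Two-sided p-value from a z-test on the difference in means
    (both samples are assumed to have unit variance)."""
    n = len(sample_a)
    diff = sum(sample_b) / n - sum(sample_a) / n
    se = math.sqrt(2.0 / n)
    z = diff / se
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

random.seed(0)
n, true_effect, runs = 20, 0.3, 2000
not_significant = 0
for _ in range(runs):
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_effect, 1.0) for _ in range(n)]
    if p_value(a, b) > 0.05:
        not_significant += 1

frac = not_significant / runs
print(f"real effect, yet 'not significant' in {frac:.0%} of experiments")
```

    With these (assumed) numbers the test is underpowered, so the large majority of runs "fail to reject" even though the effect is real by construction.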

  3. Well, there is a whole lengthy argument to be had, and this 5% threshold does come in.

    Deniers certainly have been encouraging the over-valuation of "statistical significance" as well as its natural misinterpretation. See for example Michael Crichton's famous rant.

    Once you prove to 95% that the globe is warming, deniers start to focus on regional trends. Once you prove to 95% that species ranges are retreating poleward, they ask for proof about specific species. Prove, prove, prove.

    It is absolutely the wrong standard. And sometimes it is literally a misapplication of the 95% threshold that the deniers fall back upon. We are in a position where "balance of evidence" should be more than enough, but politicians are happy to support the vested interests by demanding absurd standards of "proof".

    Far too much energy has been spent on applying statistics to problems where statistics don't really matter. The reason is only secondarily scientific reticence. But it's also true that scientists go with the herd unless they have very strong evidence, or very strong personalities, or both. And the herd started where everybody else did, not eager to believe that we could be in as serious difficulty as in fact we are.

  4. When a scientist says something "could" happen, such as explosive releases of methane in the Arctic Ocean could happen in the future, what is the degree of certainty on that? 30%? 50%? 75%? 90%? I am curious about that.

  5. The whole null hypothesis significance testing approach to risks of this sort is silly and misleading. It's not as though statistical significance marks some kind of magical threshold above which secondhand smoke causes disease and below which it's entirely safe.

    Indeed, as Andrew Gelman has emphatically pointed out many times, the difference between statistically significant and statistically insignificant is not, itself, statistically significant.
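    The stock numerical example Gelman uses to make this point (numbers of that sort, not taken from this thread) is worth spelling out: one study estimates an effect of 25 +/- 10, another 10 +/- 10; the first is "significant" and the second is not, yet the difference between the two studies is nowhere near significant.

```python
# Numeric illustration of the Gelman point: "significant" vs "not
# significant" studies whose difference is itself far from significant.
import math

def z_score(estimate, std_err):
    return estimate / std_err

z1 = z_score(25, 10)   # 2.5 -> p < 0.05, "significant"
z2 = z_score(10, 10)   # 1.0 -> p > 0.05, "not significant"

# Standard error of the difference of two independent estimates.
se_diff = math.sqrt(10 ** 2 + 10 ** 2)
z_diff = z_score(25 - 10, se_diff)   # ~1.06, nowhere near 1.96

print(z1, z2, round(z_diff, 2))  # 2.5 1.0 1.06
```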

    A better approach, it seems to me, would be this: instead of describing risks or causal relationships as Boolean yes/no things with some arbitrarily selected threshold for significance (5%, 10%, whatever...), why not work in a Bayesian framework and present the posterior distribution for the size of the causal effect, or for the magnitude and frequency of the risk?

    Thus, the risk to smokers could serve as the prior and new empirical work as the evidence, and the two together would establish a posterior distribution of risk.

    The same kind of thing could work well for attribution of warming: Here are our priors, here's the data, here is the posterior about what fraction of the 20th century warming is due to anthropogenic factors.

    A Bayesian approach would mean that instead of silly "burden of proof" arguments over Type I vs. Type II errors, we could be talking about measuring the size of the effect. And since the issue with risks is not just whether they exist or not, but how severe they are, it seems much more useful and practical to talk quantitatively about magnitudes and probabilities.

    Ziliak and McCloskey's "The Cult of Statistical Significance" is highly recommended reading for those who want more on this topic.

  6. I think climate risk is greatly understated, but that has nothing to do with what p-value you select, which after all has zero to do with impact. My own work has been in nonlinear systems, particularly in a medical context, and I often found that statistically significant differences had little material significance, while statistical "tendencies" (p ~0.7-0.9) often were correlated with massive material significance.

    The key to understanding this lies in control theory. If an adaptive system is getting pushed hard enough, its defense systems can switch from negative to positive runaway feedback. For instance, the immune system fights infection but with enough stresses on the body, can cause general inflammation that kills cells, leading to greater immune response and eventually systemic failure. In fact, one of my colleagues believes that hospitals kill a lot of people through taking too many diagnostic tests and thus aggravating immune response.

    In our experiments, we gathered evidence for this hypothesis but found that there was a critical point at which all subjects were quite sick: if they could make it through that point they lived, while if they remained sick a day or so longer they almost always died of sepsis. This very sensitive -- probably even chaotic -- outcome depended almost exclusively on the response of the whole body and thus was not statistically significant until the last few moments.

    On the other hand, most statistically significant outcomes often have to do with single angles that produce some behavior but the system is adaptive and can thus get around the difference in one way or another.

    I think the climate system is exactly the same, and by applying enormous forcings we're going outside the "design parameters" of the control system, with unknown consequences. However, the hallmark of all nonlinear systems pushed out of their stability is volatility, so even though we might not know what will happen in the medium term (long term we should hit a steady state based on total forcings), we can know it will be very, very difficult to respond to.

    Thus, my issue isn't with statistical certainty, but with the inability to use frequentist approaches in the first place. It's not that climate scientists are too conservative because they waited to confirm the system was changing; it is that they are too conservative because most of the literature is still driven by steady-state modelling and communications about calculable risk, whereas I would say the risk is inherently incalculable after BAU for another decade or so.

    From this perspective, I do think that climate change is unable to be addressed by the scientific method, because we will have complete inability to predict behavior and volatility will be so high that trying to rationally adapt will fail.

    Put another way, in the early days of Fukushima I thought it would get into partial or full meltdown, because I knew that Alvin Weinberg (one of the co-inventors of the LWBR) said it was fundamentally unstable at utility scale with cooling removed. I guessed all those "backup safety systems" were likely to fail in interesting and unpredictable ways, perhaps exacerbating the issue. Needless to say, hydrogen explosions fall into that category.

    Someone I know who is a climate change skeptic asked why I had no faith in the models that showed Fukushima would be OK but I did have faith in global warming models. I replied that it was simple: the nuclear plant models were about control whereas the global warming models merely had to convince me we'd get into an uncontrollable state. If people tried to use them for control (e.g. sulfate geoengineering) then I'd greatly object.

    So I'd like to see the establishment say that we must not, under any circumstance, get over 1.5C. Close isn't good enough, and 3C isn't just twice as bad, it is literally unimaginably worse.

    P.S. I should note that I don't mean the planet will literally die; merely jump to another stable state quite rapidly, which appears to be roughly +6-8 degrees (but won't immediately hit there, it could oscillate +/- 2 degrees) and lead to social + much ecological system collapse. Eventually the volatility would lower and things would move on from there.
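    The feedback-sign-flip idea above can be sketched with a toy iteration of my own devising (no claim that it models climate or immune dynamics, only the qualitative point): a saturating negative feedback keeps small disturbances bounded, but a large enough forcing overwhelms it and a weak positive feedback takes over, producing runaway growth.

```python
# Toy model: a regulator whose restoring force saturates, so a large
# enough push flips the net feedback from stabilising to runaway.
# Purely illustrative; no physical system is being modelled here.
import math

def step(x, forcing):
    """One time step: saturating negative feedback plus external forcing."""
    restoring = -math.tanh(x)   # negative feedback, saturates at +/- 1
    amplifying = 0.3 * x        # weak positive feedback
    return x + restoring + amplifying + forcing

def run(forcing, steps=200):
    x = 0.0
    for _ in range(steps):
        x = step(x, forcing)
    return x

small = run(0.1)   # settles near a modest equilibrium
large = run(2.0)   # forcing exceeds what tanh can absorb: runaway growth
print(small, large)
```

    With the small forcing the state converges; with the large one the saturating feedback can never balance the push, and the weak positive term compounds without bound.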

  7. MT,
    Do you really think that your final example of someone showing up in an Emergency Room is correct? It's true that in that instance there is only a single person with a single set of symptoms. However, there will have been other similar cases, and so the decision that is made is still based on what those symptoms most likely suggest, not on guesswork from that single sample.

    However, I do agree with your point in the above comment. There is a huge tendency to try to determine the statistical significance of the smallest possible sample and, when it doesn't pass the statistical test, to argue that we can't say whether or not there is a trend, or whether or not there is an AGW contribution. In fact, it's often worse than that: some will phrase things to make it sound like not passing the statistical test allows us to say that there is no trend, rather than that we don't know if there's a trend.

    Of course, you would expect people to consider the overall picture, rather than focusing on the smallest possible sample and ignoring everything else. Additionally, a Bayesian approach would probably give a different answer. How likely is it, given the presence of increased anthropogenic forcings, that certain apparent trends are unrelated to AGW? Of course, there are probably reasons why some would rather not approach it from this perspective.

  8. In the case of the earth, there is also an immense amount of evidence (from theory, from paleoclimate, even from observations of other planets) to bring to bear. But on the treatment strategy for the presenting symptoms, there is a single instance.

    In each case there is no doubt that statistical reasoning played a part in developing the expertise being brought to bear on the problem. But the problem itself cannot be construed as statistical. That doesn't make it unscientific.

    If all science is statistics, do we throw away Galileo, Newton, Maxwell, Einstein? It really wasn't until after Einstein that statistics entered into physics, and not in the way that frequentists like to assert in any case.

    Frequentism is entirely appropriate for clinical trials. The idea that it is the basis of all science is just BS.

    The obsession with "the attribution question" has been driven by (political) denialism using (statistical) frequentism as a weapon. Reason is fundamentally Bayesian, and frequentism should be considered just a weird corner of Bayesian thought.

    If you want to consider medical treatment as a sort of informal Bayesian process you will get no argument from me.

    But realistically I doubt that individual cases are treated with any attention to a formal Bayesian calculus. Perhaps that will change some day. It doesn't strike me as an unreasonable idea. But in any case there's no place in an individual case for T-tests and statistical significance, which is the point I am making.

  9. In fact the word "Bayesian", implicitly applied as a sort of "alternative" to "conventional" frequentism, is one of the most pernicious pieces of pseudoscientific bullshit around, now that I think about it.

    We are cornered into saying "Bayesian" when all we really mean is "this isn't a multi-trial experiment where we are trying to filter out an effect of interest when other effects add noise to the individual measurements". Which is an important case, to be sure, but to pretend that anything else is some novel alternative is beyond ludicrous.

  10. The quantification of the answer to that sort of question is sadly informal, even if you're more specific about "in the future" and "explosive". There is a consensus process at IPCC aimed at getting actionable estimates of this sort of likelihood, but it is certainly not above criticism.

    To be precise about such matters is not within the realistic capacities of science.

    That said, I feel compelled to note that the following is fair: IF something takes civilization out, the likelihood that the main cause will be an explosive release of methane is negligible.

  11. Thanks. We're probably largely in agreement, then. I wasn't arguing for statistics above all else (in fact, I'd normally argue the reverse).

    I think this point you make is interesting, and I've wondered something similar myself:
    "The obsession with 'the attribution question' has been driven by (political) denialism using (statistical) frequentism as a weapon."

    "We are cornered into saying 'Bayesian' when all we really mean is 'this isn't a multi-trial experiment where we are trying to filter out an effect of interest when other effects add noise to the individual measurements'. Which is an important case, to be sure, but to pretend that anything else is some novel alternative is beyond ludicrous."
    Also, I did wonder what you meant by the earlier part of this comment. Yes, I agree.

  13. Michael, I agree with you that the risk of a major abrupt methane release is probably negligible, but I'd be lying if I said the recent findings in the Kara Sea, ESAS, and Yamal Peninsula didn't make me a bit uneasy. That being said, I disagree with Mikkel that climate risk is "greatly" understated. It is true that some parts are understated, such as projected temperature rise and ice melting, but it can be overstated, such as in being linked to extreme weather events (there was a recent article on that, I believe) and the timeline of an ice-free summer Arctic Ocean. In general, the risks are understated, but to automatically conclude that they are greatly so is missing the bigger picture. The bottom line is that scientists put out these estimates of risk for the most part because they believe them to be reasonable. They adjust these estimates as time goes by, such as the excellent work in modelling done by Gavin Schmidt, to provide as accurate a picture as possible.

  14. "If all science is statistics, do we throw away Galileo, Newton, Maxwell, Einstein? It really wasn't until after Einstein that statistics entered into physics ..."

    I have to protest. There's a nice chapter on the history of the Gauss distribution in E. T. Jaynes' "Probability Theory: The Logic of Science". The name "Gauss distribution" is due to his 1809 astronomy book "Theoria Motus Corporum Coelestium", which methinks nailed Newtonian gravitation. Another derivation is by the astronomer Herschel (1850), who got the Gaussian from simple assumptions about measurement error. It is also known as Maxwell's distribution in thermodynamics. And finally it was Einstein's theory of Brownian motion that nailed atomism.

    The first serious use of probability in astronomy is due to Laplace (1787). There's an eerie parallel to today: he proved that no doom would come from the "great inequality of Jupiter and Saturn", which was noted by Halley in 1676.

    Jaynes: "... this situation was of more than ordinary interest, and to more people than astronomers. Its resolution called forth some of the greatest mathematical efforts ... either to confirm the coming end; or preferably to show how the Newtonian laws would ... save us."

  15. Pingback: Under confident | …and Then There's Physics

  16. MT:

    "Looking glass" version of Oreskes here:

    Good grief. Those pseudo-skeptics are so convinced climate science is driven by some sort of official anti-capitalist policy agenda. You and ATTP both showed commendable restraint, but it's hard to believe you could overcome the motivated reasoning on display. Presumably none of them attended AGU14. If they had, their confidence might have been seriously challenged.

  17. "Suppose a guy shows up in the emergency room with various symptoms. The hospital makes its best guess as to what is going on. They have only a single sample. No statistics are possible."

    But they do not have only one sample. They have had many people with the same symptoms passing through the hospital and others, over many years. Using the statistics from those patients, it is easy to diagnose the most likely cause of his symptoms, and the second most likely cause.

    From paleoclimate we know that when atmospheric CO2 increases, the climate in the Northern Hemisphere responds abruptly, e.g. at entry into the B-O and Holocene interstadials. The most likely response of the climate to the anthropogenic increase in CO2 is thus an abrupt warming, not the monotonic rise in temperature, tracking the linear rise in CO2, that everyone expects. Could it be that the current hiatus is just the calm before the storm?

  18. There is no hiatus. This is stupid statistical nonsense, if not utter bullshit. (El Nino et al is possibly not modellable in principle beyond predicting statistics of ocean weather: Stochastic resonance.)

    Essential HOMEWORK: Have a look at surface temp since 1970, draw in the trend line, and study the wiggles around the trend line. What do you see? Anyone too lazy for this homework should look up at least one of Tamino's illustrated mathematical exorcisms of this zombie crock.

    The abruptness of climate change at the end of a glaciation is mostly due to "quick" dissolution of ice sheets. We don't have much of those ice sheets anymore. What is left will lead to abrupt sea level rise, but won't accelerate the temperature change that much. That is one difference with the current, most abrupt, warming.
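    That homework can be done in a few lines. The data below are synthetic (a steady assumed trend of 0.017 C/yr plus noise standing in for ENSO-ish wiggles), not real temperatures; the point is only that short windows can look flat even when the underlying trend never changes.

```python
# Fit a linear trend by ordinary least squares and compare the slope
# over the full record with the slope over a short recent window.
# Synthetic data with an assumed trend; not real temperatures.
import random

random.seed(42)
years = list(range(1970, 2015))
true_trend = 0.017  # degrees C per year, an assumed value
temps = [true_trend * (y - 1970) + random.gauss(0.0, 0.1) for y in years]

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

full = ols_slope(years, temps)                      # close to the true trend
last_decade = ols_slope(years[-10:], temps[-10:])   # noisy; can look flat
print(f"45-yr slope {full:.3f} C/yr, last-10-yr slope {last_decade:.3f} C/yr")
```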

  19. Global temperature rise has not stopped, but it has not kept up with expectations either.

    http://www.nature.com/nclimate/journal/v3/n9/full/nclimate1972.html?WT.ec_id=NCLIMATE-201309

    I think climate change (other than mean temperature) has exceeded expectations nevertheless, in several ways. But that's a complicated case to make for the most part.

