[*This article originally appeared at Open Mind, author retains copyright*]

One of the analogies sometimes used to explain the impact of global warming on the weather is that we’re “loading the dice.” Perhaps a better description is that we’re *changing* the dice.

An ordinary die has six faces, with a single spot on one face, two spots on another, etc. etc. up to six spots. When you roll the die, you get an essentially random result between 1 and 6. It’s not uncommon in many games (craps, for instance) to roll *two* dice and add their numbers to compute the result. We could even roll three, or four, or as many as the game requires (Yahtzee, anyone?).

In any case, let’s call the result you get *weather*. The way the faces of the dice are numbered, with the six faces having the numbers 1 through 6, let’s call that *climate*.

Climate (the labels on the dice) determines what you *expect* to get. It also determines how much you can expect your result to *vary*. This fits the definition of actual climate, which is the mean and variability of weather over long time spans over large areas. Although climate determines the average result and how much variation we can expect, it doesn’t by itself determine the actual result. That’s a random result of rolling the dice … or if you prefer, a random result of natural variations in weather.

Suppose we roll three standard dice. The result will be some number between 3 and 18, and here’s the probability of each possibility:
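For anyone who wants to check these figures, the 216 equally likely outcomes are few enough to enumerate outright (a quick Python sketch, using only the standard library):

```python
from itertools import product
from fractions import Fraction
from collections import Counter

# Enumerate all 6**3 = 216 equally likely outcomes of three standard dice.
rolls = [sum(r) for r in product(range(1, 7), repeat=3)]
counts = Counter(rolls)

# Exact probability of each possible sum, 3 through 18.
probs = {s: Fraction(counts[s], 216) for s in sorted(counts)}

# Mean and standard deviation of the sum.
mu = sum(rolls) / len(rolls)                                  # 10.5
sd = (sum((r - mu) ** 2 for r in rolls) / len(rolls)) ** 0.5  # about 2.958
print(mu, round(sd, 3))
```

The sum 10 (like 11) can be made 27 different ways, the most of any total, while 3 and 18 can each be made only one way — which is why the distribution peaks in the middle.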

The mean value is 10.5 with a standard deviation of 2.958. Now let’s change just one of the dice so that instead of having faces with 1, 2, 3, 4, 5, and 6 spots, its faces have 1, 3, 4, 5, 7, and 8 spots. Now the result of rolling all three will be some number between 3 and 20. Here’s the probability with the changed dice (in red) compared to that with the standard dice (in black):

The mean result with the changed dice is 11.667, and its standard deviation is 3.375. Notice that we haven’t just increased the mean, we’ve also increased the expected variation.
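Both numbers are easy to verify by brute force, swapping in the relabeled die for one of the three (a Python sketch):

```python
from itertools import product
from collections import Counter

standard = range(1, 7)
changed = (1, 3, 4, 5, 7, 8)   # the relabeled die

# All 216 outcomes: two standard dice plus the changed die.
rolls = [a + b + c for a, b, c in product(standard, standard, changed)]
counts = Counter(rolls)

mu = sum(rolls) / 216
sd = (sum((r - mu) ** 2 for r in rolls) / 216) ** 0.5
print(round(mu, 3), round(sd, 3))   # 11.667 3.375
```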

Note also that this change doesn’t alter the chance of getting the lowest possible result. The chance that all three dice will turn up “1” is still 1/216. But we’ve reduced the probability in the middle of the distribution, and for most of the low numbers.

It’s on the high end that we see the greatest change, relatively speaking. The chance of rolling an “18” is now five times what it was before (5/216 instead of 1/216). We even have a chance (albeit a small one) of results so high they used to be impossible — we could get a 19 or a 20.
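Counting outcomes makes the change at the top end concrete (a Python sketch; `std` and `new` tally the 216 outcomes of each set of dice):

```python
from itertools import product
from collections import Counter

# Sum counts for three standard dice, and for two standard plus the changed die.
std = Counter(a + b + c for a, b, c in product(range(1, 7), repeat=3))
new = Counter(a + b + c
              for a, b, c in product(range(1, 7), range(1, 7), (1, 3, 4, 5, 7, 8)))

# 18 goes from 1 way to 5 ways; 19 and 20 become possible at all.
print(std[18], new[18], new[19], new[20])   # 1 5 3 1
```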

When we change either the mean value or the variance of a distribution, then relatively speaking the most profound changes in the probability are likely to occur in the *tails* of the distribution, i.e., for the extreme events. Let’s take a look at how this might affect a different probability function, the *normal distribution* (the familiar “bell curve”).

Here’s the “standard normal” distribution (in black) compared to the same distribution shifted to a higher mean value (0.3) in red:

Here’s the *ratio* of the probability for a given result with the shifted distribution, to that with the standard distribution:

Note that for almost all results above zero the probability is higher but for all results below zero the probability is lower. For really extreme results the ratio becomes sizeable. The chance of a one-in-a-million result (roughly 4.75 standard deviations) is about 4 times greater.
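The one-in-a-million figure can be checked with nothing but the normal tail probability. In this sketch, `tail` is a helper I’ve defined (not from the article) using the standard library’s complementary error function; shifting the mean up by 0.3 is equivalent to lowering the threshold by 0.3:

```python
import math

def tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

z = 4.75                           # roughly a one-in-a-million event
ratio = tail(z - 0.3) / tail(z)    # same threshold, mean shifted up by 0.3
print(round(ratio, 1))             # roughly 4
```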

While increasing the mean definitely increases the likelihood of high extremes and lowers that of low extremes, the greatest danger of more extreme events arises when we increase the variance of our normal distribution. Here’s the standard normal again (in black) compared to a normal distribution with an increased mean value of 0.3 and an only slightly increased standard deviation (from 1 to 1.1):

Here’s the probability ratio between the two distributions:

Note that for extreme highs, the probability is *much* increased. The chance of a 4-sigma event is about 10 times greater. We can perhaps get a clearer picture by plotting the same thing with a logarithmic y-axis:

This makes clear an interesting phenomenon: we have also increased the probability of extreme *low* events! The chance of a one-in-a-million low is twice as great as before, while the chance of a one-in-a-million high is just about 20 times greater.
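These ratios, too, can be reproduced directly by comparing the two probability densities (a Python sketch; `pdf` is my own helper, and the three points are a 4-sigma high, a one-in-a-million high, and a one-in-a-million low):

```python
import math

def pdf(x, mu=0.0, sigma=1.0):
    """Normal probability density with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Ratio of the N(0.3, 1.1) density to the standard N(0, 1) density.
ratios = {x: pdf(x, 0.3, 1.1) / pdf(x) for x in (4.0, 4.75, -4.75)}
for x, r in sorted(ratios.items()):
    print(x, round(r, 1))
```

The ratios come out near 10 at 4 sigma, near 20 at +4.75 sigma, and near 2 at −4.75 sigma — the "about 10 times", "20 times", and "twice as great" quoted above.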

And this illustrates one of the greatest potential dangers of global warming. If we increase the mean temperature (and we already have), of course we increase the likelihood of extreme heat waves (and we already have). But if, in addition, global warming increases the *variance* of regional temperatures, then we increase the likelihood of extreme heat waves by *a lot*. A helluva lot. The effect was profound when we only increased the standard deviation by a factor of 1.1 — what if it increases by a factor of 1.2 or even more? The increased likelihood of extreme heat would be astounding. What’s more, we would also increase the likelihood of extreme cold spells!

In fact this applies to all weather phenomena, not just temperature. Climate change is likely to change the mean value of each, and increased variance will dramatically increase the likelihood of extremes, bringing more heat waves *and* cold spells, more flood *and* drought, etc. We may, in fact, already be witnessing exactly this phenomenon. Welcome to the rest of our lives.
