The Fallacies of Risk

We humans are not very good at estimating and weighing risks. Looking at the definition of risk, this is not so strange:

Risk = Probability × Effect

Most of us have some understanding of both elements, ‘Probability’ and ‘Effect’, but the combination of the two is rather abstract. In judging risks, we tend to focus on one of the two elements and more or less neglect the other. The figure below shows the difference between our perception of certain risks and their actual magnitude.
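
As an aside, here is a minimal sketch in Python (with made-up numbers, purely for illustration) of how two very different hazards can end up with the same nominal risk in this probability-times-effect sense:

    # Hypothetical numbers, for illustration only.
    def risk(probability, effect):
        """Expected risk: probability of occurrence times magnitude of effect."""
        return probability * effect

    # A frequent hazard with small consequences...
    frequent_minor = risk(probability=0.1, effect=10)
    # ...and a rare hazard with large consequences.
    rare_major = risk(probability=0.0001, effect=10_000)

    print(frequent_minor, rare_major)  # both come out at 1.0

Both hazards carry the same calculated risk, yet most of us would judge them very differently. That gap between the calculated number and our intuition is where the trouble starts.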

Risk perception and actual hazards

The definition of risk might suggest that it is always possible to calculate it, or at least to give a quantitative estimate. Often, it’s not that easy. Sometimes it is difficult to define the exact “Effect” and the parameter that could quantify it. Effects can range from economic or financial costs to damage to nature, and from a small decrease in well-being to large numbers of casualties. Our judgment of risks depends a lot on the type of effect.

To make things even more complicated, the debate on risk often takes place at the border of science and politics, of logical reasoning and subjective judgment. Whatever we do to find objective parameters and criteria for assessing and weighing risks, decisions about what we do and do not find acceptable depend to some extent on value judgments. There are no 100% objective criteria for these kinds of decisions.

It’s obvious that many fallacies can come up in this minefield for logic. Last month, Judith Curry blogged on the article “Fallacies of risk” by Sven Ove Hansson, trying to identify these fallacies in the climate debate. In his article, Hansson mainly focuses on the risks of (new) technologies, especially those with a low probability and large effects. Applying the same fallacies to the climate debate is not as straightforward as it might seem, and Curry stumbles in places: she ends up making quite a few comments on Hansson that totally miss the point. Here’s my attempt to improve on hers.

1. The sheer size fallacy

X is accepted.
Y is a smaller risk than X.
Y should be accepted.

This is all about context. Whether or not we accept risks does not only depend on their size, but also on the benefits of a technology or activity, and sometimes on the possibilities and impossibilities for prevention. Motorized traffic, for instance, is a major cause of unnatural deaths in many countries. A huge majority of people accept this risk, because society would come to a standstill without traffic, while many other activities with a similar death toll would be considered unacceptable. Another example: some types of work involve unavoidable risks that would be unacceptable in other professions. However strong the prevention measures, on a construction site there is always a risk that someone gets hit by a falling object. That’s why construction workers are required to wear helmets. But we would never tell an office worker to just wear a hard hat if there’s something wrong with the ceiling of his office and some debris might come down. This type of risk is not associated with office work, so we will not accept it.

Curry suggests, without any explanation, that this fallacy is at the heart of the precautionary principle when applied to a complex problem. I think she’s totally wrong. I’ll come back to the precautionary principle later.

I don’t think this fallacy is very prominent in the climate debate. Most people seem to realize that the risks of climate change are very hard to compare to other risks: not necessarily because of their size, but because of the many different effects associated with them and the scale of the problem.

In more detailed discussions, for example on one single effect of climate change, the fallacy shows up regularly. A recent example: on the Dutch blog it was claimed that climate change is not the main cause of flooding on earth right now. Which is true. But it is not a valid argument for accepting increasing risks due to climate change in the future. Sometimes, the fallacy is used in its most extreme form by opponents of clean energy and clean technologies: they’re not willing to accept any risk associated with things they don’t like.

2. The converse sheer size fallacy

X is not accepted.
Y is a larger risk than X.
Y should not be accepted.

Some of you may have noticed that one of the examples I mentioned in the previous paragraph is of the converse sheer size type. This is what Hansson says: “Several of the fallacies to be treated below also have a converse form, but in what follows I will only state one of the two forms (namely the one that gives an invalid argument for acceptance, rather than non-acceptance, of a risk).” That’s what I will do as well.

3. The fallacy of naturalness

X is natural.
X should be accepted.

There are still self-proclaimed skeptics who think that the warming of the past century is mainly natural, and who will therefore say that mainstream science overestimates the risk of climate change. But that is not what Hansson means by the fallacy of naturalness. Hansson means that we don’t have to accept risks simply because they are natural. We don’t see this fallacy too often in climate discussions, I think. Maybe the “CO2 is plant food” meme comes close.

The converse version could play a role in how we think e.g. about geoengineering. Of course, there are many real risks involved in human interventions in the climate system, but the unnaturalness might make people even more reluctant. Or am I the only one?

4. The ostrich’s fallacy

X does not give rise to any detectable risk.
X does not give rise to any unacceptable risk.

Often it is claimed that because there is no (statistically significant) evidence for a phenomenon, it does not exist, and what does not exist cannot be dangerous. However, insufficient evidence for a hypothesis does not prove the null hypothesis. Absence of evidence is not evidence of absence.

Much to my surprise, Curry did not think this fallacy was relevant to the climate debate, whereas I would argue that it probably is one of the most common fallacies in that respect. For instance in discussions on extreme weather and other phenomena that might already be influenced by climate change, but for which the evidence to date may be inconclusive. We only have one climate system. It simply is not possible to quantitatively distinguish a ‘human’ and a ‘natural’ component in all the processes and events that are happening in this system. It is very hard to estimate the human influence on individual events, especially the ones that are rare. Even when there are good physical reasons to assume that there is such a human factor.

There are reasons to expect that climate change will cause more suffering and damage from tropical storms: there is more energy in the oceans to fuel these storms, and sea level rise will worsen storm surges. But it is very hard to quantify how much these factors have contributed to the effects of individual storms over the past years.

Ignoring factors that cannot (yet) be detected may result in underestimating the risk.
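
To make this concrete, here is a toy simulation (hypothetical numbers, not a claim about any particular climate variable): a genuine but modest effect, tested with little data, will usually fail to reach statistical significance.

    # Toy simulation: a real effect goes undetected most of the time when the
    # sample is small (low statistical power). All numbers are hypothetical.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    true_shift = 0.3          # the effect really exists
    n_per_group = 20          # but the sample is small
    trials, detections = 1000, 0

    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        exposed = rng.normal(true_shift, 1.0, n_per_group)
        _, p_value = ttest_ind(exposed, control)
        if p_value < 0.05:
            detections += 1

    print(f"Effect detected in {detections / trials:.0%} of trials")

In runs like this the effect reaches significance well under half the time. Concluding from those non-detections that the effect does not exist, and therefore poses no risk, is the ostrich’s fallacy at work.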

5. The proof-seeking fallacy

There is no scientific proof that X is dangerous.
No action should be taken against X.

If one fallacy from this list is near the heart of the precautionary principle, it is this one. Hansson explains that scientific standards are different from those in risk management. And they should be.

We can borrow terminology from statistics, and distinguish between two types of errors in scientific practice. The first of these consists in concluding that there is a phenomenon or an effect when there is in fact none (type I error, false positive). The second consists in missing an existing phenomenon or effect (type II error, false negative). In science, errors of type I are in general regarded as much more problematic than those of type II. (Levi 1962, pp. 62–63) In risk management, type II errors – such as believing a highly toxic substance to be harmless – are often the more serious ones. This is the reason why we must be prepared to accept more type I errors in order to avoid type II errors, i.e. to act in the absence of full proof of harmfulness.

So, there we are: the precautionary principle, aka ‘better safe than sorry.’
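
A rough numerical illustration of Hansson’s trade-off (the numbers and thresholds below are arbitrary, chosen only to show the mechanism): lowering the evidence threshold produces more false alarms, but misses far fewer real hazards.

    # Hypothetical screening example: a strict evidence threshold minimizes
    # type I errors (false alarms) at the cost of more type II errors
    # (missed hazards); a lenient threshold does the opposite.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    truly_harmful = rng.random(n) < 0.1            # assume 10% are harmful
    # Noisy 'evidence score': harmful cases score higher on average.
    evidence = rng.normal(0.0, 1.0, n) + np.where(truly_harmful, 1.5, 0.0)

    for threshold, label in [(2.0, "strict threshold"),
                             (0.5, "lenient threshold")]:
        flagged = evidence > threshold
        type_1 = np.mean(flagged & ~truly_harmful)   # false alarms
        type_2 = np.mean(~flagged & truly_harmful)   # missed hazards
        print(f"{label}: type I {type_1:.1%}, type II {type_2:.1%}")

Which threshold is appropriate depends on the context: in science we guard primarily against type I errors, while in risk management the type II errors are often the more serious ones.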

Where a scientist – and climate scientists are no exception, no matter how much some people want to believe differently – is cautious about drawing premature conclusions, we don’t want to see the same reticence in a risk assessment context, where it could mean losing sight of potential hazards. This might explain why the precautionary principle is counter-intuitive to some scientists.

One might even say that scientific uncertainty on climate change adds to the risk. The better you know what is coming, the easier it is to prepare. Preparation can be very effective in reducing risks, even though we usually prefer prevention.

Asking for 100% proof is asking for the impossible; in Popper’s philosophy there is no such thing as proof. This applies to all sciences, but it is even more obvious in the earth sciences. We only have one earth, so double-blind controlled experiments are not an option. All we can do is weigh the full body of evidence. This has an implication for the ‘converse ostrich’: on the one hand, absence of evidence for hazards does not mean something is safe; on the other hand, safety can never be proven with absolute certainty.

6. The delay fallacy

If we wait we will know more about X.
No decision about X should be made now.

This one is obvious. If you want to avoid all risks that are not yet completely understood, you should never get out of bed in the morning. Well, as a matter of fact, even that wouldn’t help.

Even if there were as much scientific uncertainty about the human impact on climate as some so-called skeptics believe, that would not necessarily be a reason not to take action. Greenhouse gas concentrations keep growing while we wait, and so do the risks. To deny any risk at all, you would have to deny a whole lot of widely accepted science.

Hansson states that it is not always possible to resolve scientific uncertainty in the short or medium term. This certainly applies to climate change. There is only one way to find out with a high level of certainty what a doubling of greenhouse gas concentrations will do to the climate: just let it happen and then wait several thousand years.

And even then some uncertainties would remain. It’s an iron law in science: new results bring new uncertainties with them. New uncertainties could be seen as new reasons for delay. Hansson sees this fallacy as one of the most dangerous fallacies of risk, from the viewpoint of risk reduction.

7. The technocratic fallacy

It is a scientific issue how dangerous X is.
Scientists should decide whether or not X is acceptable.

No matter how important scientific knowledge is in our society, it should not rule the world. Scientists can determine the nature and magnitude of risk, but society should decide whether or not certain risks are accepted. Acceptance of risk is not just a matter of objective numbers, but also of value judgments. Different groups of people can have different opinions on this. Somewhat simplified: proponents of stringent climate policies tend to focus on environmental risks, opponents on economic risk.

According to Hansson, there is “a fairly general tendency to describe issues of risk as ‘more scientific’ than they really are”. I think he’s right. This is just one of the ways in which politicians try to evade their responsibility for difficult decisions, and it could be one of the reasons our society is developing towards a technocracy. There is only one cure: emphasizing again and again that science and politics each have their own responsibilities and tasks.

8. The consensus fallacy

We must ask the experts about X.
We must ask the experts for a consensus opinion about X.

Skeptics might think they like this one, but I have some disappointing news for them. Hansson mentions the IPCC as a positive example:

The Intergovernmental Panel on Climate Change (IPCC) does this in an interesting way: it systematically distinguishes between “what we know with certainty; what we are able to calculate with confidence and what the certainty of such analyses might be; what we predict with current models; and what our judgement will be, based on available data.” (Bolin 1993)

Curry claims that Bolin’s ideas lost out to John Houghton’s push for consensus. However, Bolin’s ideas as cited above are very recognizable in the most recent IPCC reports.

According to Hansson, the search for consensus has many virtues, but it shouldn’t be an end in itself. A forced 100% consensus could either ignore minority opinions and thus underplay uncertainties, or end up as a watered down compromise: “Therefore, it is wrong to believe that the report of a scientific or technical advisory committee is necessarily more useful if it is a consensus report.”

The converse consensus fallacy would be the argument that every scientific minority opinion should be taken seriously, regardless of the evidence.

9. The fallacy of pricing

We have to weigh the risks of X against its benefits.
We must put a price on the risks of X.

It is simply not possible to put a monetary value on everything. Coral reefs, for instance, do represent some economic value – as a tourist attraction, and as an ecosystem essential to the survival of many marine species – but many people will argue the true value is way beyond that. Non-material value cannot always be expressed as an objective figure. It is the subjectivity of these values that can make the weighing of risks so complex.

Suggesting objectivity, for instance by putting a price tag on risks, does not do justice to this complexity, because it takes no account of the sincere objections people may have. To make a proper judgment, it is necessary to recognize people’s moral objections and dilemmas.

10. The infallibility fallacy

Experts and the public do not have the same attitude to X.
The public is wrong about X.

Experts can be wrong, no doubt about that. It is important to remain critical and not automatically assume that the public is wrong when they have a different opinion than the experts. On the other hand, “It isn’t necessarily fallacious to consider that thousands of climate scientists writing in peer reviewed journals might know more than you do about such a complex subject.” (Skeptico)

There is no natural law stating experts are always on the right side of a discussion. This also means that we cannot blame science for not being infallible and omniscient. Science has proven its huge value to our society, even though it is done by ordinary human beings.

Avoiding the fallacies

A polarized and emotional debate will never be free of fallacies; we have to accept this as a fact of life. Hansson advises academics to take part in these discussions, acting as independent intellectuals. I’m not so sure this is realistic. In a debate like the one on climate, independence and objectivity are usually not recognized in the same way by the different parties. However, you don’t have to be independent to identify fallacies and confront the people who use them. If we really want to move the debate forward, that’s what we should do.

The author retains copyright. H/t Bart Verheggen. Original Dutch post on the blog “klimaatverandering”.

 

Comments:

  1. Pingback: The Fallacies Of Risk | Transterrestrial Musings

  2. Pingback: Logical fallacies in assessing risks from climate change | My view on climate change

  3. Thank you for the excellent article on critical thinking in general.

    One fallacy I didn't see addressed in the article is the one where advocates focus on only one particular risk of a certain activity - say, fatalities when it comes to safety - and then use that criterion as the sole test of whether we should proceed or not. They might argue that if fewer people die from pursuing course of action x than y, then we should pursue course of action x. This ignores other risks of pursuing course of action x which do not directly result in fatalities - when action x has many more significant health or economic costs to society beyond fatalities.

    Another common example of this fallacy occurs when an advocate of one course of action considers only the risks of contributing to climate instability and/or global temperature rise, ignoring the many other impacts - whether on our health or on our clean air and clean water.

    • It's good you mention this fallacy, Mark, because it is a mistake that is easy to make. I think it's not among Hansson's fallacies of risk because it does not apply only to risk assessment or management; it's a more general fallacy, the incomplete comparison. Still, it's a very important one to be aware of whenever you're evaluating risks.

      There are two sides to this. On the one hand, it is important to consider all the relevant risk factors. On the other hand, it can be very hard to identify them all, and virtually impossible to calculate or estimate them, when dealing with a complex subject like climate. Some people may reject risk assessments that are not perfect, which would mean rejecting them all. That's not a good idea: imperfect risk assessments can be useful tools, as long as we are aware of their limitations.

  4. That's a good piece, with a minor quibble about one statement: " society would come to a standstill without traffic. "

    "The Value of Traffic"
    -- by Horatio Algeranon

    Society would come to a standstill
    Unless we had the traffic
    Nothing would come of the landfill
    Unless we had the plastic

  5. CapitalistRoader,

    I think "heat" refers to the risks associated with anomalously hot weather conditions. A good example may be the European heatwave of the Summer of 2003:
    http://www.sciencedirect.com/science/article/pii/S1631069107003770

  6. "The Precautionary Principle"
    -- by Horatio Algeranon

    The risk of doing nothing
    Is greatest when unsure.
    Don't take a chance on frothing
    If you don't have the cure.

