Economists are always weighing in on whether it makes economic sense to cook the planet. (*) They generally find it to be a close call. The press likes this, of course, because it builds suspense. Politicians and businessmen like it as well, being very much in favor of business as usual, and opposed to disrupting any constituency, particularly thriving and economically active ones.
We need to keep this context in mind in contemplating the fiasco of the comment by Gail Whiteman et al. in Nature entitled “Vast Costs of Arctic Change”.
For the purposes of this essay, please take as a given that the trivial amount of physical science claiming “a 50 Gigatonne reservoir of methane … is likely to be emitted … either steadily over 50 years or suddenly” is wrong.
Even Andy Revkin, usually an exemplar of the split-the-difference, make-no-calls school of thought, calls it an “Arctic Methane Credibility Bomb”. I believe Skeptical Science is working on a detailed analysis which I expect will amount to a rebuttal, and I’ve had contact with a well-known professional journalist also working on a feature. Meanwhile, if you have any confidence in my ability to know when I know what I’m talking about, take my word for it. I have some knowledge in this field. Wadhams’ and Shakhova’s claims are incoherent and implausible; for practical purposes there remains no evidence of the purported massive emission.
These claims exist, essentially, as a weapon in the war among conventional economists. Of course, most people with any appreciation of nature (the phenomenon, not the publication) are aware of the many insults and disruptions the environment was already subject to before the climate started to go all wonky. Given that the disruption is only just getting started, any reasonably ethical person will want to minimize it, so as not to leave a horribly depleted planet to our successors.
This puts the person who is ethical and unalienated in a very difficult position if they happen to be a mainstream economist.
Economics is a drastically weaker discipline than climate science, because it is not based in observed, immutable, mathematically precise laws of nature, but instead on generalizations from which conceptual models are constructed. Among the observations that go into the generalizations is that economies “grow” and that the more they “grow” the more likely it is that people will be connected to “jobs” which they can use to allocate resources to be comfortable. They then weigh policies based on which ones are most likely to promote “growth”.
To evaluate climate policies, the models must be quite complex. They contain agriculture, manufacturing, energy, healthcare, etc.
In order to determine climate effects, baseline rates of how well each sector performs in ‘warm’ or ‘cool’ years are extrapolated to a crude economic sensitivity. Costs incurred in the future are discounted in the same way that a banker discounts any obligation in the distant future – both growth and uncertainty reduce the evaluation of the future costs and benefits.
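To make the discounting arithmetic concrete, here is a minimal sketch, entirely my own illustration and not code from PAGE09 or any other actual IAM, of how much a century-out cost shrinks depending on the rate chosen:

```python
# Minimal sketch of exponential discounting (my illustration, not from any IAM).
# A cost incurred t years from now is divided by (1 + r)^t.

def present_value(future_cost, discount_rate, years):
    """Present value of a cost incurred `years` from now."""
    return future_cost / (1.0 + discount_rate) ** years

# A $1 trillion damage a century out, at a banker's rate versus the
# low rates favored in climate economics:
for r in (0.05, 0.001):
    pv = present_value(1e12, r, 100)
    print(f"rate {r:5.3f}: ~${pv / 1e9:,.0f} billion today")
# rate 0.050: ~$8 billion today
# rate 0.001: ~$905 billion today
```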
The trouble is that climate change is only beginning, and it’s quite noisy. Effects are largely nonlocal, and economic impacts can be far from the location of a particular disruption. For an obvious example, consider how crop failures in South America and Russia recently influenced unrest in Egypt and Tunisia. Then there’s the nonlinearity problem, where particularly odd climate events don’t appear in trends and are misattributed by averaging processes. And there’s just the sad, ubiquitous, and probably somehow debilitating nonfinancial cost of living in a deteriorating natural environment, which is captured, if at all, as a perturbation on the tourist industry. All in all, the whole approach could hardly be better designed to be blind to the enormity of quite plausible future climate impacts.
Let me put it this way. Is the cost of a 5 meter sea level rise really just 1000 times the cost of a 5 millimeter sea level rise?
(UPDATE – I don’t claim this is literally in the model; actually I don’t know whether it is or not. My point is that this is the flavor of error that conventional economic models make, because economics is essentially a game of optimizing small signals in a small section of the problem domain. Such models, insofar as I understand them, really have no serious concept of nonlinear system dynamics at all.)
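To illustrate the flavor of the complaint, here is a toy comparison, my own construction with made-up numbers, between a damage estimate extrapolated linearly from a small-signal calibration and a hypothetical convex damage curve that agrees with it at small amplitudes:

```python
# Toy illustration (mine, not from any IAM): calibrating on small perturbations
# can wildly underestimate large ones if the true damages are nonlinear.

def linear_damage(slr_m, cost_per_mm=1.0):
    # Damage extrapolated linearly from small-signal calibration:
    # each millimeter of sea level rise costs `cost_per_mm` units.
    return slr_m * 1000.0 * cost_per_mm

def convex_damage(slr_m, cost_per_mm=1.0, exponent=3.0):
    # A hypothetical convex damage curve, calibrated to agree with the
    # linear one at 5 mm but rising much faster thereafter.
    small = 0.005  # 5 mm, in meters
    return linear_damage(small, cost_per_mm) * (slr_m / small) ** exponent

for slr in (0.005, 0.5, 5.0):
    print(f"{slr * 1000:7.0f} mm: linear {linear_damage(slr):12.0f}, "
          f"convex {convex_damage(slr):15.0f}")
# At 5 mm the two agree exactly; at 5 m the convex curve exceeds the linear
# extrapolation by a factor of a million (1000^3 versus 1000^1).
```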
Mainstream economics assures us that these are just the tools for the job, though; that we should just tell them about climate impacts and let their models tell us whether we need to do something about it. And when this happens, it often comes up “no big deal”. Because even though the climate has the potential to kill many or even most of us, the small signal linear trends fed into the models are tiny.
In real sciences, when the number comes out wrong (the mass of precipitate in the beaker was 11 billion tons…) the scientist goes back to look at the model, which probably was wrong in some way. Some economists do this, but most seem instead to either believe the implausible result, or try to tweak it somehow.
The most famous case of this was the Stern Review, commissioned by the UK government, which reached rather daunting conclusions but was widely criticized for selecting parameters to make the situation look as bad as possible (whatever the opposite of cherry-picking is). Predictably, Stern was mocked by Richard Tol for his economics, and perhaps less predictably, by James Annan for his climatology.
“If a student of mine were to hand in this report [the Stern Review] as a Masters thesis, perhaps if I were in a good mood I would give him a ‘D’ for diligence; but more likely I would give him an ‘F’ for fail (Cox and Vadon, 2007). There is a whole range of very basic economics mistakes that somebody who claims to be a Professor of Economics simply should not make. […] Stern consistently picks the most pessimistic for every choice that one can make. He overestimates through cherry-picking, he double counts particularly the risks and he underestimates what development and adaptation will do to impacts.” Tol has referred to the Stern Review as “populist science.” In a paper published in 2008, Tol showed that the Stern Review’s estimate of the social cost of carbon (SCC) along a “business-as-usual” emissions pathway was an outlier in the economics literature.
On top of the high climate sensitivity range, Stern uses the rather extreme A2 scenario (and essentially describes it as “business as usual”) for his projections, even though it is already clear even 5 years on that we are falling behind this emissions pathway. I really think it’s time the economists got their act together on this. And then he adds some feedbacks on top, based on results like those of the Hadley Centre model which has an extreme Amazon dieback due to having way too little rainfall in this region even before any global warming is considered. If the Japanese model had this behaviour everyone would just say it’s a crap model but because it is HADCM3 it is supposed to be alarming 🙂 … Anyway, my main beef is with the probabilistic estimation, because that’s what I understand best. It seems crystal clear that the methods are intrinsically faulty – indeed the errors seem rather elementary once they are stated clearly – and it is long past the time that people should have been prepared to accept this and talk about it openly. Nature’s comment that our criticisms “apply more generally to a widespread methodological approach” is hardly a valid defence of the science! Stern’s results appear to be heavily dependent on the small probability of extremely bad consequences, so these problems may substantially weaken the value of his report. OTOH, it might be the case that even with a climate sensitivity of 2.5C and assuming a more moderate “business as usual” emissions growth, mitigation is still amply justified (personally I think action is justifiable on a number of grounds irrespective of the supposed “climate catastrophe”).
The point here is that it is likely that the reasons for us to act on climate change are real, but are not captured by economics models. This seems to be incomprehensible to mainstream economists.
After all, the “tourism” sector, the only place natural beauty is exchanged for dollars, and hence the place wherein all of our love for Nature (and our implicit dependence on it) gets filed, is rather small. Nevertheless, there are enough degrees of freedom in those models that a study which supports climate policy can be constructed, if one wants it to come out “right”.
It’s in this context that we need to view the aforementioned Whiteman, Hope and Wadhams comment in Nature. The work is very much based on Stern’s methodology, and adds in the Shakhova methane bomb. The advantage of treating an unlikely or impossible disaster as “likely” is that you can make the numbers come out scary.
So let’s, for a moment, consider what we are talking about. An abrupt release of 50 GT of CH4 would rival the entire trillion-ton CO2 limit that we need to observe to avoid increases in global temperature greater than 2 C, moving us into a wholly unprecedented climate. While the pulse would be short-lived, it would also be unaccompanied by aerosols. Suddenly, if only for a few decades, we’re in a drastically disrupted climate, far worse than what is imagined for the end of the century otherwise. In short, what many of us fear would follow is chaos, starvation, anarchy, warfare, disease, and totalitarianism.
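As a rough check on that scale claim, here is my own back-of-envelope CO2-equivalence, using round global warming potential values of the sort in circulation (the choice of GWP horizon is itself a judgment call, as the update below discusses):

```latex
% My back-of-envelope arithmetic; the GWP values are round numbers.
50\,\mathrm{Gt\,CH_4} \times \mathrm{GWP}_{100} (\approx 25) \approx 1250\,\mathrm{Gt\,CO_2e},
\qquad
50\,\mathrm{Gt\,CH_4} \times \mathrm{GWP}_{20} (\approx 72) \approx 3600\,\mathrm{Gt\,CO_2e}.
```

On either horizon the result is of the same order as the trillion-ton budget, which is the sense in which the pulse rivals the limit.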
The economic models value this at $60 trillion, with suitable discounts, over all time. This sounds like a very big number, but it isn’t. Divided among ten billion people it is $6000 per capita. With a century to pay it off, your payment on principal is $5 per month. (Actually the “interest”, the back-calculated impact of future loss of wealth, is much higher – say $20 per month per capita.)
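For the record, the division in the previous paragraph, spelled out:

```python
# The per-capita arithmetic from the paragraph above, spelled out.
total_cost = 60e12   # $60 trillion, the headline number
population = 10e9    # ten billion people
years = 100

per_capita = total_cost / population      # $6,000 per person
per_month = per_capita / (years * 12)     # payment on "principal"
print(f"${per_capita:,.0f} per capita; ${per_month:.2f} per month over a century")
# -> $6,000 per capita; $5.00 per month over a century
```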
I don’t see any way that this, the worst catastrophic climate result anybody has imagined, which most experts give absolutely zero credence to, can amount to five dollars a month. I’m sorry, but if I believed this I’d lose interest in the climate question altogether.
So what we have here is two drastic errors: an economic model which has to be pushed very hard to take any note of climate whatsoever, and a climate scenario that has essentially no scientific support, coming together to threaten us with an expense comparable to a Netflix subscription.
In normal science, when somebody says something worthless, it just gets ignored, and vanishes. But in postnormal science, the more spectacular the error, the more it gains zombie status, the more influential it is in the debate. The bad guys are having a lot of fun with this, of course, mocking the earth science. Even Lubos is readable and passably amusing when he has the facts more or less on his side. (UPDATE – I apologize for the self-indulgence of linking to Lubos Motl, from whom I have received similar treatment on many occasions. See more extensive update below.) But nobody is taking on the economics!
It’s clear enough to me that $60T amortized over a century is not a reasonable number for this spectacular scenario. By arguing over the imaginary methane crisis we are giving yet another pass to bad economics. The fact that it – barely – comes up with the right answer is little consolation. We cannot cede thinking about the future to a field as badly in disarray as economics. The fact that they need to draw on bad geophysics to get anything reasonably “right” is a symptom not of the state of geophysics but of economics.
They have used two wrongs to make a half-right. The half-right is that we need to “do something” about the climate. The trouble, the reason it’s only half-right, is that once we convince everybody to do something, we will have to figure out what. If our conviction is based on false beliefs, it seems unlikely that we’ll pick good directions.
It’s not enough to get people worked up. We have to get people concerned about what is actually real.
(*) Use of the word “cook” is my own rhetorical device, not attributable to any economist in this context to my knowledge. This is in response to a request for clarification here.
I have been in touch with Chris Hope, one of the authors of the study. We had been corresponding via Twitter, which is fun but very low bandwidth. Now I have a letter from him.
Unsurprisingly he is unhappy with this article, and he has some points of rebuttal that it would be unfair of me to ignore.
I will separate them into ones I consider important and ones I consider secondary.
“In order to determine climate effects, baseline rates of how well each sector performs in ‘warm’ or ‘cool’ years are extrapolated to a crude economic sensitivity.”
This is only one way in which the effects are estimated. There are also cross-sectional studies and detailed scenario analyses of altered climate states.
Not sure I fully understand that, but it seems secondary. The point is that we only have small-signal data to calibrate a large signal model.
“Costs incurred in the future are discounted in the same way that a banker discounts any obligation in the distant future”.
The range of discount rates in the default PAGE09 model is much lower than those a banker would use. The range for the pure time preference rate is 0.001 to 0.02 per year.
Sloppy on my part. It was clear to me that using low discount rates in the tradition of Stern is what they are doing in order to get a big number. I am not objecting on this score: I fully agree that low discount rates are needed if we’re denominating in dollars at all.
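To put that range in perspective, treating the pure time preference rate as a continuous discount and setting aside the growth-related term (my arithmetic, not Hope's), the weight given to a cost a century out spans nearly a factor of seven across the PAGE09 default range:

```latex
% Weight on a cost incurred 100 years from now, from pure time preference alone:
e^{-0.001 \times 100} \approx 0.90, \qquad e^{-0.02 \times 100} \approx 0.14.
```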
“both growth and uncertainty reduce the evaluation of the future costs and benefits.”
Uncertainty is explicitly accounted for in the PAGE09 model. It has been shown to increase the evaluation of impacts compared to models that do not include uncertainty.
I should have separated these concerns.
Regarding uncertainty, this is the key to my whole complaint. I see many systematic underestimates of uncertainty. This is not particular to economics – climate modelers are prone to it as well. The key is to separate “aleatory” vs “epistemic” uncertainty, i.e., uncertainty in the optimal parameters of the model vs errors in the structure of the model. What I’m hoping to get from Hope is some idea of the epistemic uncertainty, which is actually rather small in climate physics on century time scales despite what some would have you think.
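A toy contrast, entirely my own and not drawn from PAGE09, may make the distinction vivid: sample a parameter within one model structure, then compare two rival structures sharing that parameter.

```python
# Toy contrast between parametric ("aleatory") and structural ("epistemic")
# uncertainty in a damage estimate. My construction, not from PAGE09.
import random

random.seed(0)

def damage_linear(dT, k):
    return k * dT        # one model structure

def damage_cubic(dT, k):
    return k * dT ** 3   # a rival structure, same parameter

dT = 4.0  # warming in degrees C

# Parametric uncertainty: sample the coefficient within ONE structure.
samples = [damage_linear(dT, random.uniform(0.5, 1.5)) for _ in range(10_000)]
print(f"parametric spread: {min(samples):.1f} to {max(samples):.1f}")  # ~2 to 6

# Structural uncertainty: the same central coefficient, different equations.
print(f"structural spread: {damage_linear(dT, 1.0)} vs {damage_cubic(dT, 1.0)}")
# The gap between structures (4.0 vs 64.0) dwarfs the spread within either,
# and no amount of parameter sampling inside one structure will reveal it.
```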
Regarding “growth”: it is generally the goal in economic studies. Those of us who are unconvinced that it is a reasonable or even realistic goal are left out in the cold from the beginning. But this means that some of us think even epistemic uncertainty is a minor issue. The first thing I need to be convinced of is that we are solving the right problem!
“there’s the nonlinearity problem, where particularly odd climate events don’t appear in trends and are misattributed by averaging processes.”
Nearly all the impact functions in the default PAGE09 model are highly non-linear.
Maybe so; it still remains (from my point of view) to be seen how they are obtained, and how they couple and cascade. It is my suspicion that, given the difficulty of attribution of severe events, much of the damage we have already seen will be very hard to capture and extrapolate.
In your blog post you linked to a blog post that called us ‘retarded’ and the ‘three imbeciles’, and called it ‘readable and passably amusing’. Well, I won’t make any further comment on that.
Lubos is a bit of a clown; we go back a long way, and I get this treatment from him all the time. On the whole, academic decorum is a bit relaxed on the internet; developing a thick skin is in order. However, I suppose I ought to apologize for indulging myself by sharing this nugget of amusement publicly.
“there’s just the sad, ubiquitous, and probably somehow debilitating nonfinancial cost of living in a deteriorating natural environment, which is captured, if at all, as a perturbation on the tourist industry.”
The PAGE09 model has a separate impact category called ‘non-economic impacts’ devoted to capturing this type of impact. It is in no way just included as a perturbation on the tourist industry.
It’s peculiar to dollar-denominate it, but that said I am pleased that this is included in some form. I withdraw this point.
“Such models, insofar as I understand them, really have no serious concept of nonlinear system dynamics at all.”
Nearly all the impact functions in the default PAGE09 model are highly non-linear. Positive feedback mechanisms from temperature change to the carbon cycle are included.
I don’t really know how you’d capture cascading system failures in an Excel spreadsheet, but pending looking at it in detail, I have to reduce this to a suspicion.
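For what it's worth, here is the sort of thing I mean: a deliberately crude toy of my own devising, in which sectors fail at thresholds and drag down the sectors that depend on them, producing damage no smooth per-sector impact function would predict.

```python
# Crude toy of cascading, threshold-driven failure (my own construction).
# A sector fails outright when the stress on it reaches a threshold; in this
# toy, one failed input is enough to take down any sector that depends on it.
depends_on = {
    "agriculture": [],
    "energy": [],
    "manufacturing": ["energy"],
    "healthcare": ["energy", "agriculture"],
}
THRESHOLD = 1.0

def cascade(initial_stress):
    failed = set()
    changed = True
    while changed:
        changed = False
        for sector, deps in depends_on.items():
            extra = sum(1.0 for d in deps if d in failed)
            if sector not in failed and initial_stress.get(sector, 0.0) + extra >= THRESHOLD:
                failed.add(sector)
                changed = True
    return failed

# A direct climate shock to energy alone takes two other sectors with it:
print(sorted(cascade({"energy": 1.0})))
# -> ['energy', 'healthcare', 'manufacturing']
```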
Much discussion of how big the methane pulse is:
In composing the article I used the century-scale GWP to get a ballpark; arguably I should triple it and use the decadal-scale GWP.
On twitter, Chris Hope asked me to estimate the forcing in watts. I made a stupid error and slipped a decimal point, so I was off by exactly tenfold. Chris Colose’s number made me revisit my calculations and I found my mistake. This doesn’t affect the initial estimate that the impact of an abrupt release of 50 GT of methane would be enormous to the point of being destabilizing.
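For those who want to check me, here is the ballpark as I now compute it, using the simplified CH4 forcing expression of Myhre et al. (1998) with round numbers of my own choosing. Note that pushing that small-perturbation formula to a tenfold concentration jump is itself an out-of-range abuse, which rather underlines the point of the next paragraph.

```python
import math

# Ballpark forcing from an abrupt 50 Gt CH4 release, via the simplified
# expression of Myhre et al. (1998): dF = a * (sqrt(M) - sqrt(M0)), M in ppb,
# a = 0.036 W m^-2 ppb^-0.5, N2O overlap ignored. The expression is calibrated
# for modest perturbations, so treat the answer as order-of-magnitude only.

M_AIR = 5.15e18   # mass of the atmosphere, kg
release = 50e12   # kg of CH4

# Convert the mass release to an added mole fraction of the atmosphere
# (molar masses: CH4 = 0.016 kg/mol, dry air = 0.02897 kg/mol).
added_ppb = (release / 0.016) / (M_AIR / 0.02897) * 1e9

M0 = 1800.0       # approximate CH4 concentration circa 2013, ppb
dF = 0.036 * (math.sqrt(M0 + added_ppb) - math.sqrt(M0))
print(f"added ~{added_ppb:,.0f} ppb; dF ~ {dF:.1f} W/m^2")
# Prints roughly: added ~17,600 ppb; dF ~ 3.5 W/m^2, which is larger than
# the entire CO2 forcing accumulated to date.
```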
My whole point, though, is that models of this sort do not really indicate when they are being pushed out of range. I tried to explain this in detail in my email to Dr. Hope, but I still see no signs of grappling with the issue of whether the model is applicable at all.
No discussion as yet on the plausibility of the Wadhams scenario.
I accepted it for the purpose of argument and let’s leave it that way. I am thoroughly convinced that it is not in the realm of the plausible, never mind the “likely”. But if it happens, IAMs notwithstanding, I believe we are toast. At some level of disaster, economics does not apply, at least not dollar-denominated economics. In asking economists to bound the regime of their models, I get blank looks. So far this is a case in point.
On Twitter, Dr. Hope offers the following resources for further investigation:
Q: Q’s for people publishing on “integrated assessment models”: 1) What are the data 2) what are the constituent equations;
A: See papers 4, 5, 8, 9 here http://ow.ly/nNTj7 All since published but easier to find here.
Q: On IAM’s 2/3 @cwhope 3) How do you bound the estimate? Are orders of magnitude error excludable, on either side of your estimate?
A: @mtobis All papers show error bounds. In Nature paper 90%CI is $10 – 225 tn, mean is $60 tn
Q: @cwhope on IAMs 3/3; 4) any IAMs in open source? On what platform and/or in what language are they typically coded?