Ten Things I Learned in the Climate Lab

[This article originally appeared on ClimateSight, author retains copyright]

  1. Scientists do not blindly trust their own models of global warming. In fact, nobody is more aware of a model’s specific weaknesses than the developers themselves. Most of our time is spent comparing model output to observations, searching for discrepancies, and hunting down bugs.
  2. If 1.5 C global warming above preindustrial temperatures really does represent the threshold for “dangerous climate change” (rather than 2 C, as some have argued), then we’re in trouble. Stabilizing global temperatures at this level isn’t just climatically difficult, it’s also mathematically difficult. Given current global temperatures, and their current rate of change, it’s nearly impossible to smoothly extend the curve to stabilize at 1.5 C without overshooting.
  3. Sometimes computers do weird things. Some bugs appear for the most illogical reasons (last week, the act of declaring a variable altered every single metric of the model output). Other bugs show up once, then disappear before you can track down the source, and you’re never able to reproduce them. It’s not uncommon to fix a problem without ever understanding why the problem occurred in the first place.
  4. For anyone working with climate model output, one of the best tools to have in your arsenal is the combination of IDL and NetCDF. Hardly an hour of work goes by in which I don’t use one or both of these tools in some way.
  5. Developing model code for the first time is a lot like moving to a new city. At first you wander around aimlessly, clutching your map and hesitantly asking for directions. Then you begin to recognize street names and orient yourself around landmarks. Eventually you’re considered a resident of the city, as your little house is there on the map with your name on it. You feel inordinately proud of the fact that you managed to build that house without burning the entire city down in the process.
  6. The RCP 8.5 scenario is really, really scary. Looking at the output from that experiment is enough to give me a stomachache. Let’s just not let that scenario happen, okay?
  7. It’s entirely possible to get up in the morning and just decide to be enthusiastic about your work. You don’t have to pretend, or lie to yourself – all you do is consciously choose to revel in the interesting discoveries, and to view your setbacks as challenges rather than chores. It works really well, and everything is easier and more fun as a result.
  8. Climate models are fabulous experimental subjects. If you run the UVic model twice with the same code, data, options, and initial state, you get exactly the same results. (I’m not sure if this holds for more complex GCMs which include elements of random weather variation.) For this reason, if you change one factor, you can be sure that the model is reacting only to that factor. Control runs are completely free of external influences, and deconstructing confounding variables is only a matter of CPU time. Most experimental scientists don’t have this element of perfection in their subjects – it makes me feel very lucky.
  9. The permafrost is in big trouble, and scientists are remarkably calm about it.
  10. Tasks that seem impossible at first glance are often second nature by the end of the day. No bug lasts forever, and no problem goes unsolved if you exert enough effort.


  1. 8) I don't think modern GCMs (at least CESM, the model I work with most often) include any sort of stochastic parameterizations for "weather"; they should be entirely deterministic. That is, if I branch a control simulation from the same point in time and integrate it forward twice with identical forcings and model configuration - including having the model compiled by the same compiler - I presume I should get bit-for-bit exact matching answers. This probably won't be true if I run the same model simulation at NCAR versus on our home-grown cluster.

    That said, it is interesting to talk about stochastic parameterizations for things in climate models, especially for sub-grid scale stuff. For instance, there's interesting work by Graf and Nober which attempts to build a "convective cloud field model" - a parameterization of deep convection which uses the notion of predator-prey models to generate a distribution of convective plumes. If I recall correctly, they compute this distribution stochastically, so it might not necessarily give the same result twice for the same initial conditions.

    4) Have you considered abandoning IDL and embracing another analysis environment? I'm a huge advocate for Python (we have a support group to help people embrace the language! http://pyaos.johnny-lin.com/); it's a free and open language, and there are myriad third-party libraries that are often extremely easy to use and fold into your code. On top of that, you don't really have to sacrifice computational speed - third-party mathematical libraries like NumPy are already very fast, and you can easily drop optimized C/C++ or Fortran code into any software you write. There are also powerful visualization tools which rival IDL - Matplotlib+Basemap is good, and there is a Python wrapper of NCL called PyNGL. Python is also good at handling NetCDF output.

  2. Interesting to hear that CESM is also bit-for-bit reproducible. I seem to remember Ben Santer mentioning something about stochastic processes (particularly in the ocean, to simulate ENSO) but perhaps I am mistaken.

    I really wanted to like Python, and was excited to learn it, but unfortunately am not a fan. The indentation requirements drive me crazy because they're slightly different than the form of indentation I like to use when I code.

    Out of the 4 or 5 languages I know, IDL is definitely my favourite. However, I agree that its proprietary nature is a huge disadvantage. I've heard that GNU is working on a version of IDL - I hope they keep that up.

  3. There is growing interest in stochastic sub-grid scale parameterizations. I've just been to a workshop on this, still ongoing. Tim Palmer is a noted advocate (e.g. here, and this whole special issue). Two schemes in current use are stochastic kinetic energy backscatter (SKEBS; also see spectral stochastic backscatter, SPBS) and stochastically perturbed parameterization tendencies (SPPT; much more sophisticated versions are now available but I don't have good intro references handy). These schemes are being used for operational monthly-to-seasonal forecasting at ECMWF. These slides are useful.

  4. Since it's generally accepted that there are random elements in the weather system, the stochastic treatment of these areas shouldn't be a problem for modellers.
    Some questions and my guessed answers:
    How does the albedo of cloudy areas change with respect to turbidity and convection in the local atmosphere? Some sort of parameterization according to cloud type could give the model more accurate albedo and evaporation values for the area, yielding more accurate rainfall predictions in regional climate models.

    What are the approximate parameters of the open-cell to closed-cell marine stratocumulus transition? I'd imagine, for example, that closed-cell clouds are lower when they form, though they contain more droplets for rain over the continent should they wander over land.

    It's a complex subject, no doubt.

  5. Johnny Lin is right (Hi Johnny) -- IDL is an expensive kludge, OK as a netCDF browser and for making graphs, but very limited as a programming language. The combination of Python with Ngl or MatPlotLib graphics is not only free, it's far more powerful -- and arguably easier to learn than IDL.

    Python is so versatile we use it for everything - not just analysis, but actually running climate models. Sometimes this is done by wrapping climate model commands in Python and gluing them together, and sometimes by using Python calls to initiate execution of entire atmospheric models (followed by analysis and even asynchronous coupling to other climate components, like glacier models or carbon cycle models). Python Rules!

    I'm writing a primer on scientific computing in Python for Princeton University Press. Look for it in about a year.
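
The bit-for-bit reproducibility discussed above (comment 1 and the article's item 8) is easy to check mechanically: hash each run's output file and compare digests. A minimal stdlib-only sketch, with throwaway files standing in for two model runs (the file names are hypothetical):

```python
import hashlib
import os
import tempfile

def file_digest(path, chunk=1 << 20):
    """SHA-256 of a file, read in chunks so large NetCDF output is fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def bit_for_bit(path_a, path_b):
    """True iff the two runs produced byte-identical output."""
    return file_digest(path_a) == file_digest(path_b)

# Demo with two placeholder "runs" that wrote identical bytes:
with tempfile.TemporaryDirectory() as d:
    run1, run2 = os.path.join(d, "run1.nc"), os.path.join(d, "run2.nc")
    for p in (run1, run2):
        with open(p, "wb") as f:
            f.write(b"identical model output")
    print(bit_for_bit(run1, run2))  # True for identical bytes
```

Note that comparing raw bytes is stricter than comparing the numerical fields: two NetCDF files can hold identical data yet differ in metadata such as timestamps or history attributes, so in practice one often compares variables rather than whole files.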
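
As a taste of the kind of post-processing the Python advocates above have in mind: a global-mean surface temperature from gridded model output must weight each latitude band by its area, which goes as cos(latitude). A stdlib-only sketch with a tiny made-up field (real work would use NumPy and a NetCDF reader, as the comments suggest):

```python
import math

def area_weighted_mean(field, lats):
    """Global mean of field[ilat][ilon], weighting rows by cos(latitude)."""
    total = 0.0
    wsum = 0.0
    for row, lat in zip(field, lats):
        w = math.cos(math.radians(lat))
        total += w * sum(row) / len(row)   # zonal mean, then area weight
        wsum += w
    return total / wsum

# Made-up 3 x 4 grid: warm tropics, cold high latitudes.
lats = [-60.0, 0.0, 60.0]
field = [[-10.0] * 4, [25.0] * 4, [-10.0] * 4]

mean = area_weighted_mean(field, lats)
print(round(mean, 2))  # 7.5 - the tropics dominate (unweighted mean: 1.67)
```

The weighting matters: treating every grid row equally would overweight the poles, where grid cells cover far less area.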
