P.S. There is an upcoming article about downscaling in the Oxford Research Encyclopedia.


28 Responses to "Do regional climate models add value compared to global models?"

  1. Gavin, interesting topic. I can think of processes where storm types and tracks can have dramatic differences across short distances. For instance, for a mountain range, the direction the winds are blowing during the event can have a huge influence on whether the east or west slopes get the most precipitation. And this can differ depending upon the type of storm: orographic, or driven by convective instability. For the former, usually the upslope side gets the lion’s share, but for convective precipitation (summer thunderstorms), usually the mountain serves as a seed for convection and the downwind side gets the rain after the storms mature downwind. For many watersheds, it would be valuable to be able to predict how the precipitation would change on each side of the range.

  2. When the global models do not work, there’s little point even trying long term regional models.

    However, when we get to the stage of focussing on what is important, short-range regional models will become important for forecasts on the week-to-month scale.

  3. @ 2. Comment by Mike Haseler (Scottish Sceptic) — 22 May 2016 @ 11:58 AM

    “When the global models do not work, there’s little point even trying long term regional models.”

    Which logical fallacy is this?

  4. MH@2
    “Global models do not work”? Really? I’ve seen plenty of graphs of what can be considered calibration runs of models that simulate past observed climate. The agreement is amazing. With such good agreement one should therefore be confident of projections into the future made by these same models.

    Unfortunately the models cannot know the degree of stupidity that humans will display in coming decades. We (collectively) could be amazingly stupid and continue with business as usual. We could be very stupid and make only minor changes. Or we could be just ordinarily stupid and make more but still insufficient changes. How are the people running the models supposed to know? That’s why scenarios were invented.

    The global models do work; it’s just that we don’t know how stupid humankind is going to be.

  5. Rasmus, I don’t think I’m the only person who gets confused by the words used in climate science. I often see people taken to task for saying GCMs failed to predict xyz, and told that the GCM projections (especially in the IPCC) were not predictions in the first place. The glossaries in the IPCC reports do define these terms, but they are still confusing for the lay person imo, because they chop and change words in their definitions and are not clear about how or when they apply.

    For example, the glossary has entries such as “Climate forecast: see Climate prediction”, “Climate projection”, “Climate scenario”, “Predictability”, and “Prediction quality/skill”, and states that “A projection is a potential future evolution of a quantity or set of quantities … Unlike predictions, projections are conditional on assumptions …”
    https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_AnnexIII_FINAL.pdf
    and that “Climate models are applied as a research tool to study and simulate the climate, and for operational purposes, including monthly, seasonal, and interannual climate predictions.”
    https://www.ipcc.ch/pdf/assessment-report/ar5/wg2/WGIIAR5-AnnexII_FINAL.pdf

    Rasmus, in your article you say “or that it is futile to predict local climate conditions”, “Statistics is often predictable and climate can be regarded as weather statistics”, and “most RCMs predict the area average for 100 km2”. You don’t use any of the other words listed.

    Now this may seem pedantic or irrelevant to some, but I think it’s critical if climate scientists want lay people and politicians (and even deniers) to actually understand what you are trying to get across every time with clarity.

    So Rasmus, do or will RCMs provide climate predictions or not? And/or does it not really make a difference anymore? Because in the past I thought these words really mattered. Thx

    http://www.etymonline.com/index.php?term=prediction
    http://www.etymonline.com/index.php?term=predict
    http://www.oxforddictionaries.com/definition/english/prediction

  6. Warning: http://www.cordex.org wants you to install a new version of Flash Player. Since Flash Player wants you to install a new version every time you encounter it, I do not trust Flash Player. When are they going to get a stable version? Or is it a virus installer?

  7. #4 Digby Scorgie is right. Local officials won’t consider the idea that their whole city should be moved 100 miles inland or abandoned. They want you to tell them how to fix the town where it is now, as it is now, and for no money. And they want definitive answers to political questions. They can’t handle the idea that probability is involved at all.

    If you can’t deliver a perfectly accurate 100 year weather forecast, they think that you are no good.

  8. Making models work on past observed climate is not hard. It is curve fitting. What is amazing about that?

  9. @rasmus: ‘climate can be regarded as weather statistics’

    Just a quibble, but: is there any other way to regard climate? Wikipedia at least seems unequivocal: ‘Climate is the statistics (usually, mean or variability) of weather'[1].

    [1]: https://en.wikipedia.org/wiki/Climate
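
    To make the “statistics of weather” reading concrete, here is a minimal sketch (synthetic data, purely illustrative numbers) that reduces a daily temperature series to a monthly climatology of mean and variability:

```python
# Minimal sketch of "climate = statistics of weather": reduce a daily
# weather series to its climatological mean and variability per month.
# Synthetic data; the numbers are purely illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("1981-01-01", "2010-12-31", freq="D")
doy = days.dayofyear.to_numpy()
# Fake daily temperatures: seasonal cycle plus weather noise
temps = 10 + 12 * np.sin(2 * np.pi * (doy - 80) / 365.25) + rng.normal(0, 3, len(days))
weather = pd.Series(temps, index=days, name="t2m")

# "Climate" over 1981-2010: mean and variability for each calendar month
climatology = weather.groupby(weather.index.month).agg(["mean", "std"])
print(climatology.round(1))
```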

  10. Another student question, regarding the following pair of statements:

    @rasmus: ‘Downscaling may be done through empirical-statistical downscaling (ESD) or regional climate models (RCMs)’

    but

    @rasmus: ‘There are many strategies for deriving local (high-resolution/detailed) climate information in addition to RCM and ESD.’

    What are the other strategies for *deriving* high-resolution climate information? One presumably only seeks to derive or model information when it does not already exist; in this case, when one does not have sufficiently reliable observations of sufficiently fine resolution over the desired spatiotemporal domain. Given that, IIUC, there are only two ways to downscale: either deterministically (i.e., RCM) or statistically (i.e., ESD) … or am I missing something? Pointers to documentation especially appreciated.
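
    For what the statistical branch looks like in practice, here is a minimal empirical-statistical downscaling sketch: calibrate a linear relation between a large-scale predictor and a local predictand on (synthetic) observations, then apply it to (synthetic) model output. The data and coefficients are made up for illustration, and this is not any particular operational ESD scheme:

```python
# Minimal ESD sketch: learn a linear relation between a coarse, large-scale
# predictor (e.g. a GCM grid-box temperature) and a local predictand (e.g. a
# station temperature), then apply it to model output. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_years = 50
grid_box_t = rng.normal(15.0, 1.0, n_years)                        # large-scale predictor
station_t = 0.8 * grid_box_t + 3.0 + rng.normal(0, 0.3, n_years)   # local predictand

esd = LinearRegression().fit(grid_box_t.reshape(-1, 1), station_t)

# Apply the calibrated relation to (synthetic) GCM projections of the same
# large-scale quantity to obtain local-scale estimates.
gcm_future = rng.normal(17.0, 1.0, n_years)
local_future = esd.predict(gcm_future.reshape(-1, 1))
print(local_future.mean())
```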

  11. Many are familiar with time-domain spectral concepts like aliasing and band limiting. These apply to spatial sampling as well, with frequency represented essentially as wavenumber. The same bandwidth-product uncertainty relations apply in space just as in time.

    The conversation about informing policy ought not to be one-way. Loss functions over space and time are very useful when constructing recommendations for leadership, not to mention for inference.
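
    A small numerical illustration of that spatial-aliasing point, with arbitrary made-up numbers: a wave whose wavenumber exceeds the Nyquist wavenumber of a coarse grid shows up at a lower, aliased wavenumber when sampled on that grid.

```python
# Spatial aliasing demo: sample a fine-scale wave on a coarse grid and see it
# reappear at a lower (aliased) wavenumber. Numbers are arbitrary.
import numpy as np

L = 1000.0                       # domain length, km
coarse_dx = 100.0                # coarse grid spacing, km
k_true = 7 / L                   # true wavenumber, cycles/km (wavelength ~143 km)

x_coarse = np.arange(0, L, coarse_dx)
signal = np.sin(2 * np.pi * k_true * x_coarse)

# Wavenumber of the strongest Fourier component seen on the coarse grid
spec = np.abs(np.fft.rfft(signal))
k_seen = np.fft.rfftfreq(len(x_coarse), d=coarse_dx)[spec.argmax()]

k_nyquist = 1 / (2 * coarse_dx)
print(f"true k = {k_true:.4f}, Nyquist = {k_nyquist:.4f}, apparent k = {k_seen:.4f}")
```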

  12. DC@4
    “Unfortunately the models cannot know the degree of stupidity that humans will display in coming decades.”

    Can the models deal with infinity? :-)

    As Einstein said: “There are two things that are infinite, the universe and human stupidity, and I’m not sure about the former.”

    I am very skeptical that on a global population level, humans will bother to do anything other than lip service when it comes to addressing climate change and the changes to our lifestyle required to significantly reduce our emissions, until it is far too late. I would love to be proved wrong, but (my limited) observations so far suggest I’m not far out.

  13. Mike Haseler (Scottish Sceptic), @2: Read the next post, on the AMOC and the specified CM2.6 coupled climate model, please. It’s quite accessible. When you have finished, I’ll suggest a second step, and I’ll name your fallacy. Thanks for your attention.

  14. @3 Alf: “Which logical fallacy is this?”

    I believe that would be the argument from false premises.

  15. Denier Mike Haseler wrote: “the global models do not work”

    Moderators, please Bore Hole this troll.

  16. the three laws of climate models

    1. All climate models are wrong.
    2. Earth climate is chaotic and unpredictable.
    3. Climate models can be useful.

    that is my current position. I think this is more nuanced and accurate than “climate models are shite” or “climate models don’t work”.

    Mike

  17. I’m curious, what role do recent advances in deep learning have in helping infer local effects from larger-scale climate model results? While this probably should be considered a statistical approach, it does seem to be of a different kind than the common usage of that phrase. Some searching indicates this has at least been tried – is there any effort to make it more common or operational? Would a well-trained network be fast enough to generate useful sub-grid-scale info during a model run? The more common use seems to be pattern detection after the fact, which in itself could be quite beneficial.
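
    As a rough sketch of what such a network could look like (the architecture and sizes below are made up for illustration, not an operational downscaling system), a super-resolution-style model simply maps a coarse 2D field to a finer one:

```python
# Toy "statistical" downscaling with a neural network: map a coarse 2D field
# to a finer one, super-resolution style. Architecture and sizes are made up.
import torch
import torch.nn as nn

class ToyDownscaler(nn.Module):
    def __init__(self, upscale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="bilinear", align_corners=False),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, coarse_field):
        return self.net(coarse_field)

# One coarse field of 25 x 25 grid boxes -> 100 x 100 output
model = ToyDownscaler()
coarse = torch.randn(1, 1, 25, 25)   # batch, channel, lat, lon
fine = model(coarse)
print(fine.shape)                    # torch.Size([1, 1, 100, 100])
```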

  18. On the reliability of climate models, the InsideClimateNews report on Exxon’s climate research efforts of the late 1970s and early 1980s makes for interesting reading:

    “Over the past several years a clear scientific consensus has emerged,” Cohen wrote in September 1982, reporting on Exxon’s own analysis of climate models. It was that a doubling of the carbon dioxide blanket in the atmosphere would produce average global warming of 3 degrees Celsius, plus or minus 1.5 degrees C (equal to 5 degrees Fahrenheit plus or minus 1.7 degrees F).

    http://insideclimatenews.org/news/15092015/Exxons-own-research-confirmed-fossil-fuels-role-in-global-warming

    Those estimates have been refined since then, but the projections are within range of how things are turning out. Other successful projections include modeling the atmospheric response to the Pinatubo eruption. All in all, confidence is pretty good, by any rational measure.

    On global vs. local, how about the global model prediction of a deepening and widening of the tropical atmospheric circulation, which leads to the Hadley cell expansion and the projection of the dry zones expanding polewards. This general prediction seems ominous, but what does it mean for California, India, Spain, etc.? Will southern Europe end up looking like North Africa? Central California like Baja California? Will El Nino years be the only years with anything like 20th century ‘normal’ rainfall levels across the southwestern United States, as we move into a permanent drought regime? Can these regional models answer these questions with much certainty, over the next 50 years, say?

    If politicians and media and businesses can trust these projections, then it has implications for infrastructure planning (perhaps we can all live underground, like termites in the desert with those nifty air conditioning systems their tunnels provide?).

    This is where bad policy choices and human fallibility come into play. As others have noted, Katrina the Hurricane didn’t have to give rise to Katrina the Human Disaster; scientists and engineers had given much advance warning about the need for new levees and better infrastructure. Given that warming over the next 50 years seems inevitable, some serious long-term planning is needed – but financial centers seem to have a hard time looking beyond next quarter’s results, and the politicians don’t seem to look much farther than the next election cycle. How do we move to more long-term thinking?

    Similar issues apply to helping out farmers in the developing world – for example, while shiploads full of grain might seem like a good response to regional drought, in practice that may be the worst option as it destroys local markets for poor farmers who then can’t afford to buy seed and fertilizer for the next growing season. Solar- or wind-powered water pumps that allow such farmers to tap into aquifers in dry spells are a much better kind of aid.

  19. At #8, Dan:
    Making models work on past climate is not curve fitting; you should do more research on climate models.

    In a GCM hindcast, the model is forced with the known changes in insolation, volcanism, human sulfate aerosols, and GHG emissions, and one checks whether the model follows the known trajectory in temperature, precipitation, etc. This has been done multiple times, and with a model it is possible to remove or add one of the influences and then see how important that influence has been to the total change. You should look at Chapter 9 of the latest IPCC report, downloadable from IPCC.ch.
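
    A toy zero-dimensional energy-balance sketch of this hindcast-and-attribution idea (forcing numbers and parameters are made-up illustrative values, not from any GCM): integrate a simple heat-balance equation under a prescribed forcing, then repeat with one forcing component removed and compare.

```python
# Toy energy-balance hindcast: run with all forcings, then natural-only,
# and compare the simulated warming. All numbers are illustrative.
import numpy as np

years = np.arange(1900, 2001)
ghg_forcing = 0.02 * (years - 1900)            # slowly growing forcing, W/m^2
volcanic = np.zeros_like(years, dtype=float)
volcanic[years == 1963] = -2.0                 # Agung-like spike
volcanic[years == 1991] = -3.0                 # Pinatubo-like spike

def run_ebm(forcing, lam=1.2, C=8.0):
    """Integrate C dT/dt = F - lam*T with a 1-year step (toy parameter values)."""
    T = np.zeros_like(forcing)
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + (forcing[i] - lam * T[i - 1]) / C
    return T

t_all = run_ebm(ghg_forcing + volcanic)        # "all forcings" run
t_nat = run_ebm(volcanic)                      # natural-only run
print(f"warming by 2000: all forcings {t_all[-1]:.2f} K, natural only {t_nat[-1]:.2f} K")
```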

  20. “Making models work on past observed climate is not hard. It is curve fitting.”

    No, it isn’t. ‘Curve fitting’ is a statistical procedure; GCMs numerically simulate the actual physics. An entirely different thing.

  21. Dan wouldn’t know a computer model if it smacked him in the face. Curve-fitting my arse.

  22. Very good new book on climate models: “Demystifying Climate Models” by Andrew Gettelman and Richard B. Rood. It is open access and can be downloaded as an ebook or PDF for free. The print book is around $50. It is well written for the layman. I highly recommend it, especially for those who believe “climate models don’t work.” No excuse for denier ignorance, since it can be obtained absolutely free.

  23. #20, #21: OK, it is not curve fitting. If you adjust variables to fit history and the process helps future models, then it is useful. Otherwise it is a tiny bit like curve fitting. My question is: how do you know when you are making useful improvements?

  24. techish-optimist-today asked “I’m curious, what role do recent advances in deep learning have in helping infer local effects from larger-scale climate model results? While this probably should be considered a statistical approach, it does seem to be of a different kind than the common usage of that phrase. Some searching indicates this has at least been tried – is there any effort to make it more common or operational? Would a well-trained network be fast enough to generate useful sub-grid-scale info during a model run? The more common use seems to be pattern detection after the fact, which in itself could be quite beneficial.”

    I’ve experimented quite a bit with machine learning on climate data. For example, one series of experiments found that QBO is likely forced by seasonally-aliased monthly tidal cycles. This should be used as input to the larger models, as I don’t know if anyone has realized the tidal connection before. Lindzen hinted at it but he failed to find any connection (and is now retired, so that’s that).

    Doing the same kind of machine learning with ENSO is a tougher nut to crack, but from what I have learned with QBO, one can also find related forcings with ENSO. For example, the biennial component in the ENSO forcing is very strong. Again, simpler models are needed to “prime the pump” for the larger GCMs, as it seems almost impossible to generate the deterministic output necessary to simulate QBO and ENSO behavior. These climate behaviors are better described as non-autonomous systems and so require the correct forcing. From the looks of the way the GCMs are set up, they appear to be formulated as autonomous systems, expected to spontaneously oscillate, which I think is not physically correct. Think in terms of ocean tides; these are not spontaneous oscillations but are always forced by lunisolar cycles. QBO and ENSO are closer to that than I think anyone realizes, or is willing to admit (check out NASA JPL memos for some contrarian views).

    And as far as “curve fitting” is concerned, name a physics model that does not involve a curve of some type! It could be a 2D curve or a 3D surface or some other manifold, but everything in physics is described by curves. The act of curve fitting can be used to extract parameters, and doesn’t have to be statistical. In fact, something like ENSO is not statistical at all — it is a single behavior described by a single standing-wave that covers a large expanse of the equatorial Pacific. There is not a set of ENSO behaviors to draw from, as if it was a statistical phenomenon. So the curve to be fit in the case of ENSO is a complicated standing-wave oscillation — likely more complex than a tidal gauge time series, but potentially doable. Why can’t GCMs model this behavior in terms of a curve fit for long stretches of time? I think climatologists punt on this task, believing it hopeless and following Tsonis’ suggestion that it is likely chaotic (Tsonis is the guy who just joined the GWPF as committee member alongside Lindzen this month, ugh).

    This may all sound provocative, but you never know what you will find until you try it. To get back to the original question, machine learning, deep learning, and data mining are well suited to these kinds of analyses because you can let a computer waste its time looking down dead ends and you don’t have to do that yourself. Only a few climate science groups are looking at this approach.

    My analysis is at ContextEarth.com, with more threaded discussions at AzimuthProject.org under the ENSO and QBO topic headings. It’s a good place for an extended discussion on deep-learning topics, since comments are not moderated and equation markup, graphs, charts, and CURVES TOO! are easy to post.
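
    A generic parameter-extraction sketch with made-up numbers (not the actual QBO/ENSO analysis described above): fit a simple forced-oscillation form, an annual plus a biennial sinusoid, to a synthetic index with scipy, and read off the fitted amplitudes and phases.

```python
# "Curve fitting to extract parameters": fit two sinusoids (annual + biennial)
# to a noisy synthetic index and recover their amplitudes and phases.
import numpy as np
from scipy.optimize import curve_fit

def forced(t, a1, phi1, a2, phi2):
    # annual (1 cycle/yr) plus biennial (0.5 cycle/yr) components
    return (a1 * np.sin(2 * np.pi * 1.0 * t + phi1)
            + a2 * np.sin(2 * np.pi * 0.5 * t + phi2))

rng = np.random.default_rng(2)
t = np.arange(0, 60, 1 / 12)                   # 60 years, monthly resolution
truth = forced(t, 1.0, 0.3, 0.6, -1.1)
index = truth + rng.normal(0, 0.2, t.size)     # add "weather" noise

params, _ = curve_fit(forced, t, index, p0=[1, 0, 1, 0])
print(np.round(params, 2))                     # fitted amplitudes and phases
```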

  25. We recently had a longish “pause” in global temperatures where it seems the excess heat sequestered itself in the oceans, only to emerge quite spectacularly in the last couple of years. I imagine this sort of thing would be even more of a problem with regional models.

    For example, since the mid-1970s, winter rainfall in Perth, Western Australia has plummeted. Is this a genuine regional feature of climate change, or has this rain temporarily gone elsewhere, and will it come back as quickly as it left?

    I’m quite happy with efforts to predict the big scale climate – global warming, polar amplification, minimum temps rising faster than maximums, etc. But trying to figure out how some small particular part of a chaotic system responds to gradual heating seems a bit ambitious.

  26. The answer from our research activities to this question can be found, for example, in these papers:

    Pielke Sr., R.A., and R.L. Wilby, 2012: Regional climate downscaling – what’s the point? Eos Forum, 93, No. 5, 52-53, doi:10.1029/2012EO050008. http://pielkeclimatesci.files.wordpress.com/2012/02/r-361.pdf

    Rockel, B., C.L. Castro, R.A. Pielke Sr., H. von Storch, and G. Leoncini, 2008: Dynamical downscaling: Assessment of model system dependent retained and added variability for two different regional climate models. J. Geophys. Res., 113, D21107, doi:10.1029/2007JD009461. http://pielkeclimatesci.wordpress.com/files/2009/11/r-325.pdf

    Lo, J.C.-F., Z.-L. Yang, and R.A. Pielke Sr., 2008: Assessment of three dynamical climate downscaling methods using the Weather Research and Forecasting (WRF) Model. J. Geophys. Res., 113, D09112, doi:10.1029/2007JD009216. http://pielkeclimatesci.wordpress.com/files/2009/10/r-332.pdf

    Pielke Sr., R.A. 2013: Comments on “The North American Regional Climate Change Assessment Program: Overview of Phase I Results.” Bull. Amer. Meteor. Soc., 94, 1075-1077, doi: 10.1175/BAMS-D-12-00205.1. http://pielkeclimatesci.files.wordpress.com/2013/07/r-372.pdf

    In the past, Gavin agreed with me on the very limited value of downscaling multidecadal climate predictions. I posted his comment in a reply to a tweet by Larry Kummer.