Posts Tagged ‘storms’

Thunderstorms in the IPCC AR5

January 28, 2014

It’s been a while since I blogged; I hope you didn’t think I’d forgotten you! My workload has “shifted” recently and I’m doing a bit more teaching/supervision/management these days. Blogging has taken a bit of a backseat. So I’m a bit late on this one but thought that it was still interesting. Anyway, enough of the excuses…

I’ve often thought it was odd that potential changes in the frequency and/or intensity of small-scale severe storms and thunderstorms – one of my areas of research – were absent from the IPCC TAR, AR4 and SREX.

This has been put right in the IPCC AR5, which was published in late 2013, but, if anything, the way the topic is handled highlights some of the problems with the slow and rigidly structured IPCC process.

So here’re a few sentences from IPCC AR5 that deal with severe thunderstorms:

The large-scale environments in which [severe thunderstorms] occur are characterized by large Convective Available Potential Energy (CAPE) and deep tropospheric wind shear (Brooks et al., 2003; Brooks, 2009). Del Genio et al. (2007), Trapp et al. (2007; 2009), and Van Klooster and Roebber (2009) found a general increase in the energy and decrease in the shear terms from the late 20th century to the late 21st century over the United States using a variety of regional model simulations embedded in global-model SRES scenario simulations. The relative change between these two competing factors would tend to favour more environments that would support severe thunderstorms, providing storms are initiated.

Overall, for all parts of the world studied, the results are suggestive of a trend toward environments favouring more severe thunderstorms, but the small number of analyses precludes any likelihood estimate of this change.

It’s a pretty good, concise summary of work in this area up to 2012/13. (I’ve not included some of the text on examples and the few studies outside of the US; you can find the full text here, towards the end of section 12.4.5.5 Extreme Events in the Water Cycle. There’s another bit in 2.6.2.4 Severe Local Weather Events as well.)
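To make the interplay of those two ingredients a little more concrete, here is a minimal sketch of how an environment-based proxy for severe thunderstorms is often built from gridded model output: count the days on which CAPE and deep-layer shear are jointly large. The fields below are random placeholders and the threshold is a round number picked purely for illustration – the published discriminants (e.g. Brooks et al., 2003) are far more carefully calibrated.

```python
import numpy as np

# Placeholder daily fields on a (time, lat, lon) grid: CAPE in J/kg and
# 0-6 km bulk wind shear in m/s. In a real analysis these would come from
# reanalysis or climate model output rather than random numbers.
rng = np.random.default_rng(0)
cape = rng.gamma(shape=1.5, scale=400.0, size=(365, 40, 60))   # J/kg
shear = rng.gamma(shape=2.0, scale=8.0, size=(365, 40, 60))    # m/s

# A simple product-based proxy: flag days where CAPE * shear exceeds an
# illustrative threshold (units: J/kg * m/s).
THRESHOLD = 10_000.0

severe_env = (cape * shear) >= THRESHOLD    # boolean, shape (time, lat, lon)
days_per_year = severe_env.sum(axis=0)      # severe-environment days at each grid point

print("Domain-mean severe-environment days per year:", days_per_year.mean())
```

Comparing a count like this between late-20th-century and late-21st-century model output is, in essence, what the studies quoted above are doing.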

However, whilst the IPCC report was being published, this paper came out:

Diffenbaugh, N. S., Scherer, M. and Trapp, R. J. (in press) “Robust increases in severe thunderstorm environments in response to greenhouse forcing” PNAS, doi: 10.1073/pnas.1307758110

They say:

We use an ensemble of global climate model experiments to probe the severe thunderstorm response. We find that this ensemble exhibits robust increases in the occurrence of severe thunderstorm environments over the eastern United States. In addition, the simulated changes in the atmospheric environment indicate an increase in the number of days supportive of the spectrum of convective hazards, with the suggestion of a possible increase in the number of days supportive of tornadic storms.

It’s a much more up-to-date and robust analysis of the problem and even uses the CMIP5 climate projections that form the backbone of the IPCC AR5. (I’ve been working on something similar for the Northern Hemisphere but haven’t quite finished it yet!) I guess this paper must have been accepted for publication after the deadline for the IPCC process, so it isn’t mentioned. It’s a shame, as a citation to it would have added something to the argument.

And this seems to be a problem with the IPCC. Climate science research is a much bigger area now than when the IPCC process started in the late 1980s/early 1990s. So a whole area of research (e.g. severe thunderstorms in a changing climate) becomes a couple of sentences with the most up-to-date paper missing.

As good as the IPCC has been over the years, perhaps it’s time to move on. The SREX example seems to be a good one: a multi-disciplinary, timely analysis of an important area. I think that a series of special reports like SREX would be a better use of valuable time than an AR6.

Climate change and extreme events on Nature Soapbox Science

January 10, 2012

I wrote a post for the Nature Soapbox Science community blog on climate change and extreme events. If you want to take a look, it’s here. UPDATE (11/1/2012): well, I may as well just put the post here as well…

As I type, I have a massive chapter for the next full Assessment Report (due to be published in 2014) sitting on my desk to review and a couple of analysis routines churning their way through terabytes of climate model data. There’ll be hundreds of other people around the world focussed on similar things. The aim is to produce the 5th series of Assessment Reports since the IPCC was formed in 1988 to help decision makers, well, make decisions.

But the IPCC has been up to other things recently as well. In November 2011 it published a Special Report Summary for Policymakers on “Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation” (or SREX; the full report will be published in February 2012). Understanding how extreme events might change in the future is really important because it’s these things that will really impact people: heat waves, flash floods, hurricanes, droughts and inundation related to sea level rise. This is far more useful to know than the rather abstract concept of global mean temperature change. This report looks like an advance in the IPCC procedure as it involved a far more integrated approach than is usual for IPCC outputs, with authors from climate science, impacts and adaptation backgrounds as well as disaster risk management experts.

Although it sounds obvious, one of the key conclusions of SREX was that the impact of extreme climatic events is greatest where vulnerability is highest. On the ground, this has manifested itself as higher fatality levels in developing nations and higher economic losses in developed countries. There’s a lot to think about here in terms of how developing nations move forward and how developed nations approach things sustainably to reduce exposure. That’s not really my area though.

From a scientific point of view, they also point out that analysing extremes is relatively difficult because they are rare and data from around the world are not always up to the job. That said, this depends a lot on the particular “extreme” being investigated – something that has always struck me as slightly odd about the climate extremes community: the only common theme is the statistics, not the science behind the phenomena.

Looking to projections, the IPCC SREX assigns its highest confidence assessment (“virtually certain”) to increases in temperature extremes by 2100. This is because temperature extremes are pretty much a direct response to the radiative changes forced by atmospheric greenhouse gas emissions. Everything else is a slightly messier consequence of the temperature changes, and these other fields vary much more amongst the 12 different models used in this analysis, making their projections more uncertain. However, it also looks likely that heavy precipitation events will increase in certain regions and that the maximum winds associated with tropical cyclones will increase whilst their total number will likely decrease.

Oddly enough, the emissions pathway that we take in the future (the IPCC analyses different sets of projections based on different socioeconomic and technological development assumptions) has little impact on extreme events in the next 30 years or so – the different pathways don’t appear to diverge until the latter half of the 21st century, when inter-model variability masks most of the climate signal anyway. This highlights how making projections of extreme events is a difficult game. In that spirit, here are two of the key problems as I see them, relating to my area of research on severe storms in Europe:

Loading the dice or getting new dice?

If we assume that climatic quantities have a normal distribution (which isn’t always the case, especially with precipitation) then you can view the extremes as the tails at either end of the distribution, e.g. hot or cold. So climate change could be viewed as loading the dice – you start rolling more sixes (or getting more hot days). However, when the climate regime changes, this analogy breaks down: instead of just rolling more sixes, you start needing to roll sevens as climate records are broken (see the figure below). This poses a problem for climate models because, just as a six-sided die isn’t designed to roll a seven, climate models haven’t been designed for (or at least haven’t been verified against) conditions that have never been observed.

The green curve represents the distribution of Swiss summer temperatures from 1864 to 2002. Clearly, 2003 does not align well with that distribution and is an example of an extreme breaking a previous record. This figure has been taken from the IPCC AR4; for more details, click the image.
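To put some toy numbers on the loading-the-dice idea (these are illustrative values loosely in the spirit of the figure above, not actual Swiss data), even a small shift in the mean of a normal distribution changes the odds of beating a fixed record quite dramatically:

```python
from scipy.stats import norm

# Toy numbers only: a 'summer temperature' distribution with mean 17 C and
# standard deviation 1 C, and a historical record of 20 C (3 sigma above the mean).
mu_past, sigma, record = 17.0, 1.0, 20.0

# Warm the mean by 1 C without changing the spread ("loading the dice").
mu_future = mu_past + 1.0

p_past = norm.sf(record, loc=mu_past, scale=sigma)      # P(T > record), past climate
p_future = norm.sf(record, loc=mu_future, scale=sigma)  # P(T > record), shifted climate

print(f"P(beat record), past:    {p_past:.5f}")    # ~0.00135
print(f"P(beat record), shifted: {p_future:.5f}")  # ~0.02275
print(f"Ratio: {p_future / p_past:.1f}x")          # roughly 17x more likely
```

Of course, if the distribution also changes shape (more variance or more skew, as often happens with precipitation), the record-breaking tail behaves even less like a simple loaded die.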

We’re gonna need a smaller box.

The second problem is that some important things – like severe storms, tornadoes and regional and local changes such as precipitation over a river catchment – are too small for climate models to represent or resolve. The reason for this is that these computer models split the atmosphere (and oceans) into a 3D array of boxes. The important equations are solved in each box and information is then passed to neighbouring boxes as appropriate at each model time step. These boxes usually have horizontal dimensions of around 100-400 km to keep the computation time manageable. However, storms and tornadoes work on scales of significantly less than 100 km, so there’s no way that the models can tell us anything about them directly. This problem is particularly acute in relation to the IPCC SREX as this analysis used a suite of climate model data from a project called CMIP3, which was completed in 2006 for the last IPCC assessment and therefore does not include the most up-to-date, highest-resolution model data. (The data being prepared for the next IPCC Assessment Report, from a project called CMIP5, are not yet complete, so perhaps this criticism is a bit unfair.)
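As a rough back-of-the-envelope illustration of why those box sizes are chosen (the spacings below are just the typical values quoted above, not any particular model’s grid):

```python
import math

EARTH_SURFACE_KM2 = 4 * math.pi * 6371.0 ** 2   # ~5.1e8 km^2

def n_grid_boxes(dx_km: float) -> float:
    """Approximate number of horizontal grid boxes covering the globe at spacing dx_km."""
    return EARTH_SURFACE_KM2 / dx_km ** 2

for dx_km in (400, 200, 100, 10):
    print(f"{dx_km:>4} km grid: ~{n_grid_boxes(dx_km):>12,.0f} boxes per model level")
```

Going from a 200 km grid to the roughly 10 km spacing needed to even begin to see individual storms multiplies the number of boxes by a factor of 400 per level (and the time step has to shrink too), which is why global models of the CMIP3 era simply could not afford it.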

Is this good enough?

So does this mean that analyses using these model data are not useful or reliable? When faced with this question I struggle to get past the fact that, however much they can improve in the future, these models are still the best and only tool we have for making climate projections. Beyond that, we can take comfort in the fact that the very basic physics of climate science is really well understood – even very simple energy balance models can tell us useful things about the effects of increasing atmospheric greenhouse gas concentrations. What we’re talking about here are the details, albeit very important details, and in that respect our current analyses are consistent with the things that we’re pretty sure of.

Storms and climate change

March 25, 2010

I’ve been pretty distracted recently with the Institute of Physics issue. I’ll hopefully draw that chapter to a close in the next couple of weeks (it looks like the IoP are going to stick their heads in the sand and wait for it all to blow over), but right now I’m wrapping up my current project, so I thought I’d look at how that work has gone and where it’ll go in the future.

The work I’ve been doing at Manchester over the last few years is looking at how storms form – the kind of storms that lead to floods like the one seen in Boscastle in 2004.

These types of storms are pretty difficult to forecast because they are often smaller than the grid boxes that forecast models use to chunk the atmosphere up into a more manageable mathematical problem. I spoke about this in a recent edition of the NERC Planet Earth podcast.

I’ve been part of a couple of big projects that try and observe these storms as they form. We do this by getting big teams of scientists, around 50 or so, to go to a certain location for a few months and then use surface stations, aeroplanes, weather balloons, radar, lidar and satellite data to get a really good picture of what’s going on.

The video below shows the Chilbolton radar scanning convective clouds as they develop whilst someone launches weather balloons in the bottom left hand corner – this is pretty typical of what goes on at several locations during these campaigns, although this is the biggest dish we’ve used.

I’ve only been looking at a small part of the storm problem: how does air that descends from the stratosphere influence these storms?

I’ve shown that this upper-level air can sometimes make storms less likely by introducing thin layers into the atmosphere that can cap storms before they get going (Russell et al., 2008) and sometimes make storms more likely by changing the temperature structure of the atmosphere to make the convection more powerful (Russell et al., 2009).

That doesn’t sound much like progress, does it? Well, now that we know a lot more about these two types of effect, we can start to generalise them and use that knowledge to help forecasters improve their predictions and computer models. This is what we are working on now.

So how does climate change fit in to all this?

The next big step in this work is to try to define a “storm environment”, including what we now know about these upper-level features, and apply it to the kind of projections that climate models produce. These projections use even bigger grid boxes than weather models, so this step will not be easy, but if we can show that the method works using the data available now (and hope for better resolution soon) then we can start to think about the likely changes in this storm environment. These big storms can really affect people’s lives, so this type of work would influence how we start to prepare for the future.
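Purely as an illustration of what “defining a storm environment” could look like once it’s reduced to code (the variable names and thresholds here are placeholders, not values from our actual analysis), the end product might be a simple per-grid-point test:

```python
def storm_supporting(cape, cin, shear_06km):
    """Toy test for a 'storm-supporting' environment at one model grid point.

    cape       : convective available potential energy (J/kg)
    cin        : convective inhibition, e.g. from a capping dry layer (J/kg, >= 0)
    shear_06km : 0-6 km bulk wind shear (m/s)

    All thresholds are illustrative placeholders, not calibrated values.
    """
    enough_energy = cape >= 500.0
    not_capped = cin <= 50.0        # a strong cap (cf. Russell et al., 2008) suppresses storms
    organised = shear_06km >= 10.0  # shear helps storms organise and persist
    return enough_energy and not_capped and organised


# Example: plenty of energy but strongly capped -> not counted as a storm environment
print(storm_supporting(cape=1200.0, cin=150.0, shear_06km=15.0))  # False
```

Applying a test like this to every grid point and time step of a climate projection, and then counting how often it comes up true, is the basic shape of the analysis – the hard part is choosing ingredients and thresholds that actually capture the storms we care about at the resolutions the models offer.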

References:

Russell, A., Vaughan, G., Norton, E., Morcrette, C., Browning, K., & Blyth, A. (2008). Convective inhibition beneath an upper-level PV anomaly. Quarterly Journal of the Royal Meteorological Society, 134 (631), 371–383. DOI: 10.1002/qj.214

Russell, A., Vaughan, G., Norton, E., Ricketts, H., Morcrette, C., Hewison, T., Browning, K., & Blyth, A. (2009). Convection forced by a descending dry layer and low-level moist convergence. Tellus A, 61 (2), 250–263. DOI: 10.1111/j.1600-0870.2008.00382.x