Posts Tagged ‘IPCC’

Thunderstorms in the IPCC AR5

January 28, 2014

It’s been a while since I blogged; I hope you didn’t think I’d forgotten you! My workload has “shifted” recently and I’m doing a bit more teaching/supervision/management these days. Blogging has taken a bit of a backseat. So I’m a bit late on this one but thought that it was still interesting. Anyway, enough of the excuses…

I’ve often thought it was odd that the potential changes in frequency and/or intensity of small scale severe storms/thunderstorms – one of my areas of research – were absent from the IPCC TAR, AR4 and SREX.

This has been put right in the IPCC AR5, which was published in late 2013, but, if anything, it highlights some of the problems with the slow and rigidly structured IPCC process.

So here’re a few sentences from IPCC AR5 that deal with severe thunderstorms:

The large-scale environments in which [severe thunderstorms] occur are characterized by large Convective Available Potential Energy (CAPE) and deep tropospheric wind shear (Brooks et al., 2003; Brooks, 2009). Del Genio et al. (2007), Trapp et al. (2007; 2009), and Van Klooster and Roebber (2009) found a general increase in the energy and decrease in the shear terms from the late 20th century to the late 21st century over the United States using a variety of regional model simulations embedded in global-model SRES scenario simulations. The relative change between these two competing factors would tend to favour more environments that would support severe thunderstorms, providing storms are initiated.

Overall, for all parts of the world studied, the results are suggestive of a trend toward environments favouring more severe thunderstorms, but the small number of analyses precludes any likelihood estimate of this change.

It’s a pretty good, concise summary of work in this area up to 2012/13. (I’ve not included some of the text on examples and the few studies outside of the US; you can find the full text here, towards the end of section 12.4.5.5 Extreme Events in the Water Cycle. There’s another bit in 2.6.2.4 Severe Local Weather Events as well.)
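
To get a feel for the two competing ingredients in that assessment, here’s a toy Python sketch. The product-style test and every number in it are illustrative assumptions of mine, loosely in the spirit of the CAPE/shear discriminators used in papers like Brooks et al. (2003), not actual published parameters:

    # Toy illustration of the two competing ingredients: rising CAPE vs.
    # falling shear. The product test and every number below are my own
    # illustrative assumptions, not the published Brooks et al. values.

    def severe_environment(cape_j_per_kg, shear_m_per_s, threshold=20000.0):
        """Crude CAPE * deep-layer-shear product test (hypothetical threshold)."""
        return cape_j_per_kg * shear_m_per_s >= threshold

    # Hypothetical late-20th vs. late-21st century environments:
    environments = {
        "late 20th century": (1500.0, 15.0),  # CAPE (J/kg), shear (m/s)
        "late 21st century": (2200.0, 12.0),  # more CAPE, less shear
    }

    for label, (cape, shear) in environments.items():
        print(f"{label}: CAPE*shear = {cape * shear:.0f}, "
              f"severe-supportive: {severe_environment(cape, shear)}")

    # Here the CAPE increase outweighs the shear decrease, which is the
    # sense in which the "relative change between these two competing
    # factors" can favour more severe thunderstorm environments.

The point of the toy is just the arithmetic: if CAPE rises faster than shear falls, a combined CAPE-shear measure of storm-supportive environments can still increase.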

However, whilst the IPCC report was being published, this paper came out:

Diffenbaugh, N. S., Scherer, M. and Trapp, R. J. (in press) “Robust increases in severe thunderstorm environments in response to greenhouse forcing” PNAS, doi: 10.1073/pnas.1307758110

They say:

We use an ensemble of global climate model experiments to probe the severe thunderstorm response. We find that this ensemble exhibits robust increases in the occurrence of severe thunderstorm environments over the eastern United States. In addition, the simulated changes in the atmospheric environment indicate an increase in the number of days supportive of the spectrum of convective hazards, with the suggestion of a possible increase in the number of days supportive of tornadic storms.

It’s a much more up-to-date and robust analysis of the problem and even uses the CMIP5 climate projections that form the backbone of the IPCC AR5. (I’ve been working on something similar for the Northern Hemisphere but haven’t quite finished it yet!) I guess that this paper must have been accepted for publication after the deadline for the IPCC process, so it isn’t mentioned. It’s a shame, as a citation to this paper would have added something to the argument.

And this seems to be a problem with the IPCC. Climate science research is a much bigger area now than when the IPCC process started in the late 1980s/early 1990s. So a whole area of research (e.g. severe thunderstorms in a changing climate) becomes a couple of sentences with the most up-to-date paper missing.

As good as the IPCC has been over the years, perhaps it’s time to move on. The SREX example seems to be a good one: a multi-disciplinary, timely analysis of an important area. I think that a series of special reports like SREX would be a better use of valuable time than an AR6.

Papers based on CMIP5 data

March 14, 2012

I happened to ask over on Ed Hawkins’ blog whether he knew of any list of publications based on CMIP5 data (i.e. the suite of climate projections being produced for the IPCC AR5). He pointed me to the PCMDI page, which at the time was blank. So, here is my list of papers based on CMIP5, which I’ll try to keep up to date:

UPDATE: There are now papers being added to the official PCMDI list. It seems as though anyone can add papers to that list so there are lots of “Submitted” papers. The list below is of papers that have been published.

Individual papers

Ahlström et al. (2012) “Robustness and uncertainty in terrestrial ecosystem carbon response to CMIP5 climate change projections” Environ. Res. Lett., 7, 044008.

Andrews et al. (2012) “Forcing, Feedbacks and Climate Sensitivity in CMIP5 Coupled Atmosphere-Ocean Climate Models” Geophys. Res. Lett., doi:10.1029/2012GL051607

Arora et al. (2011) “Carbon emission limits required to satisfy future representative concentration pathways of greenhouse gases” Geophys. Res. Lett., 38, L05805.

Biasutti (2013) “Forced Sahel rainfall trends in the CMIP5 archive” J. Geophys. Res., DOI: 10.1002/jgrd.50206, in press.

Bellouin et al. (2011) “Aerosol forcing in the Climate Model Intercomparison Project (CMIP5) simulations by HadGEM2-ES and the role of ammonium nitrate” J. Geophys. Res., 116, D20206.

Branstator and Teng (2012) “Potential Impact of Initialization on Decadal Predictions as Assessed for CMIP5” Geophys. Res. Lett., doi:10.1029/2012GL051974, in press.

Cai et al. (2012) “More extreme swings of the South Pacific convergence zone due to greenhouse warming” Nature, 488, 365–369.

Chang et al. (2012) “CMIP5 multi-model ensemble projection of storm track change under global warming” J. Geophys. Res., doi:10.1029/2012JD018578, in press.

Christensen and Boberg (2012) “Temperature dependent climate projection deficiencies in CMIP5 models” Geophys. Res. Lett., 39, doi:10.1029/2012GL053650, in press.

Dai et al. (2012) “Increasing drought under global warming in observations and models” Nature Clim. Change, doi:10.1038/nclimate1633.

Dobrynin et al. (2012) “Evolution of the global wind wave climate in CMIP5 experiments” Geophys. Res. Lett., doi:10.1029/2012GL052843, in press.

Driscoll et al. (2012) “Coupled Model Intercomparison Project 5 (CMIP5) simulations of climate following volcanic eruptions”  J. Geophys. Res., doi:10.1029/2012JD017607, in press.

Dunn-Sigouin and Son (2013) “Northern Hemisphere blocking frequency and duration in the CMIP5 models” J. Geophys. Res., DOI: 10.1002/jgrd.50143, in press.

Gillett and Fyfe (2013) “Annular mode changes in the CMIP5 simulations” Geophys. Res. Lett., DOI: 10.1002/grl.50249, in press.

Good et al. (2011) “A step-response simple climate model to reconstruct and interpret AOGCM projections” Geophys. Res. Lett., 38, L01703.

Guilyardi et al. (2012) “A first look at ENSO in CMIP5” CLIVAR Exchanges, 17, 29-32.

Haywood et al. (2011) “The roles of aerosol, water vapor and cloud in future global dimming/brightening” J. Geophys. Res., 116, D20203.

Heuzé et al. (2013) “Southern Ocean bottom water characteristics in CMIP5 models” Geophys. Res. Lett., DOI: 10.1002/grl.50287.

Jiang et al. (2012) “Evaluation of Cloud and Water Vapor Simulations in CMIP5 Climate Models Using NASA “A-Train” Satellite Observations” J. Geophys. Res., doi:10.1029/2011JD017237. GFDL summary.

Jones et al. (2011) “The HadGEM2-ES implementation of CMIP5 centennial simulations” Geosci. Model Dev., 4, 543–570.

Kamae and Watanabe (2012) “On the robustness of tropospheric adjustment in CMIP5 models” Geophys. Res. Lett., doi:10.1029/2012GL054275, in press.

Kawase et al. (2011) “Future changes in tropospheric ozone under Representative Concentration Pathways (RCPs)” Geophys. Res. Lett., 38, L05801.

Kelley et al. (2012) “Mediterranean precipitation climatology, seasonal cycle, and trend as simulated by CMIP5” Geophys. Res. Lett., doi:10.1029/2012GL053416, in press.

Kim and Yu (2012) “The Two Types of ENSO in CMIP5 Models” Geophys. Res. Lett., doi:10.1029/2012GL052006. [paper pdf here]

Kim et al. (2012) “Evaluation of short-term climate change prediction in multi-model CMIP5 decadal hindcasts” Geophys. Res. Lett., doi:10.1029/2012GL051644.

Knutti and Sedlácek (2012) “Robustness and uncertainties in the new CMIP5 climate model projections” Nature Climate Change, doi:10.1038/nclimate1716, in press.

Knutti et al. (2013) “Climate model genealogy: Generation CMIP5 and how we got there” Geophys. Res. Lett., 40, doi:10.1002/grl.50256.

Kug et al. (2012) “Improved simulation of two types of El Niño in CMIP5 models” Environ. Res. Lett., 7, 034002, doi:10.1088/1748-9326/7/3/034002.

Lau et al. (2013) “A canonical response of precipitation characteristics to Global Warming from CMIP5 models” Geophys. Res. Lett., DOI: 10.1002/grl.50420

Liu et al. (2012) “Co-variation of temperature and precipitation in CMIP5 models and satellite observations” Geophys. Res. Lett., doi:10.1029/2012GL052093, in press.

Massonnet et al. (2012) “Constraining projections of summer Arctic sea ice” The Cryosphere Discuss., 6, 2931-2959.

Meijers et al. (2012) “Representation of the Antarctic Circumpolar Current in the CMIP5 climate models and future changes under warming scenarios” J. Geophys. Res., doi:10.1029/2012JC008412, in press.

Mizuta (2012) “Intensification of extratropical cyclones associated with the polar jet change in the CMIP5 global warming projections” Geophys. Res. Lett., doi:10.1029/2012GL053032, in press.

Monerie et al. (2012) “Expected future changes in the African monsoon between 2030 and 2070 using some CMIP3 and CMIP5 models under a medium-low RCP scenario” J. Geophys. Res., doi:10.1029/2012JD017510, in press.

Nam et al. (2012) “The ‘too few, too bright’ tropical low-cloud problem in CMIP5 models” Geophys. Res. Lett., doi:10.1029/2012GL053421, in press.

Oleson (2012) “Contrasts between Urban and Rural Climate in CCSM4 CMIP5 Climate Change Scenarios” J. Climate, 25, 1390–1412.

Osprey et al. (2013) “Stratospheric Variability in Twentieth-Century CMIP5 Simulations of the Met Office Climate Model: High Top versus Low Top” J. Climate, 26, 1595–1606.

Reichler et al. (2012) “A stratospheric connection to Atlantic climate variability” Nature Geoscience, doi:10.1038/ngeo1586.

Rotstayn et al. (2012) “Aerosol- and greenhouse gas-induced changes in summer rainfall and circulation in the Australasian region: a study using single-forcing climate simulations” Atmos. Chem. Phys., 12, 6377-6404.

Sabeerali et al. (2013) “Simulation of boreal summer intraseasonal oscillations in the latest CMIP5 coupled GCMs” J. Geophys. Res., doi:10.1002/jgrd.50403, in press.

Sarojini et al. (2012) “Fingerprints of changes in annual and seasonal precipitation from CMIP5 models over land and ocean” Geophys. Res. Lett., doi:10.1029/2012GL053373, in press.

Seager et al. (2012) “Projections of declining surface-water availability for the southwestern United States” Nature Climate Change, doi:10.1038/nclimate1787

Seth et al. (2013) “CMIP5 Projected Changes in the Annual Cycle of Precipitation in Monsoon Regions” J. Climate, doi:10.1175/JCLI-D-12-00726.1.

Sillmann et al. (2013) “Climate extremes indices in the CMIP5 multi-model ensemble. Part 1: Model evaluation in the present climate” J. Geophys. Res., doi: 10.1002/jgrd.50203, in press.

Stevenson et al. (2012) “Will There Be a Significant Change to El Niño in the Twenty-First Century?” J. Climate, 25, 2129–2145.

Stroeve et al. (2012) “Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations” Geophys. Res. Lett., 39, L16502, doi:10.1029/2012GL052676.

Su et al. (2012) “Diagnosis of regime-dependent cloud simulation errors in CMIP5 models using “A-Train” satellite observations and reanalysis data” J. Geophys. Res., doi:10.1029/2012JD018575, in press.

Taylor et al. (2011) “An Overview of CMIP5 and the Experiment Design” Bull. Am. Meteorol. Soc.

Tian et al. (2013) “Evaluating CMIP5 Models using AIRS Tropospheric Air Temperature and Specific Humidity Climatology” J. Geophys. Res., 118, in press.

Todd-Brown et al. (2012) “Causes of variation in soil carbon predictions from CMIP5 Earth system models and comparison with observations” Biogeosciences Discuss., 9, 14437-14473.

Turner et al. (2013) “An Initial Assessment of Antarctic Sea Ice Extent in the CMIP5 Models” J. Climate, 26, 1473–1484.

Villarini and Vecchi (2012) “Twenty-first-century projections of North Atlantic tropical storms from CMIP5 models” Nature Climate Change, doi:10.1038/nclimate1530.

Villarini and Vecchi (2013) “Projected Increases in North Atlantic Tropical Cyclone Intensity from CMIP5 Models” J. Climate, 26, 3231–3240.

Wang and Overland (2012) “A sea ice free summer Arctic within 30 years – an update from CMIP5 models” Geophys. Res. Lett., doi:10.1029/2012GL052868, in press.

Watanabe et al. (2011) “MIROC-ESM 2010: model description and basic results of CMIP5-20c3m experiments” Geosci. Model Dev., 4, 845-872.

Williams et al. (2012) “Diagnosing atmosphere–land feedbacks in CMIP5 climate models” Environ. Res. Lett., 7, 044003.

Xu and Powell Jr. (2012) “Intercomparison of temperature trends in IPCC CMIP5 simulations with observations, reanalyses and CMIP3 models” Geosci. Model Dev. Discuss., 5, 3621-3645.

Yang and Christensen (2012) “Arctic sea ice reduction and European cold winters in CMIP5 climate change experiments” Geophys. Res. Lett., doi:10.1029/2012GL053338, in press.

Yeh et al. (2012) “Changes in the Tropical Pacific SST Trend from CMIP3 to CMIP5 and Its Implication of ENSO” J. Climate, 25, 7764–7771.

Yin (2012) “Century to multi-century sea level rise projections from CMIP5 models” Geophys. Res. Lett., doi:10.1029/2012GL052947, in press.

Ying and Chong-Hai (2012) “Preliminary Assessment of Simulations of Climate Changes over China by CMIP5 Multi-Models” Atmospheric and Oceanic Science Letters, in press.

Zhang and Jin (2012) “Improvements in the CMIP5 simulations of ENSO-SSTA meridional width” Geophys. Res. Lett., doi:10.1029/2012GL053588, in press.

Zunz et al. (2012) “How does internal variability influence the ability of CMIP5 models to reproduce the recent trend in Southern Ocean sea ice extent?” The Cryosphere Discuss., 6, 3539-3573.

Special Issues/Collections (some of the above papers may be in these Special Issues)

Climatic Change special issue on the Representative Concentration Pathways

Climate Dynamics IPSL and CNRM CMIP5 Special Issue (mostly submitted papers as of 15/03/2012)

Geoscientific Model Development Special Issue on Community software to support the delivery of CMIP5 (no papers linked to it as of 15/03/2012)

Journal of Climate Special Collection on CCSM4 and CESM1

Journal of Climate Special Collection on C4MIP

Books

Dong et al. (2012) The Atlas of Climate Change — Based on SEAP-CMIP5. Springer. 200pp.

Climate change and extreme events on Nature Soapbox Science

January 10, 2012

I wrote a post for the Nature Soapbox Science community blog on climate change and extreme events. If you want to take a look, it’s here. UPDATE (11/1/2012): well, I may as well just put the post here as well…

As I type, I have a massive chapter for the next full Assessment Report (due to be published in 2014) sitting on my desk to review and a couple of analysis routines churning their way through terabytes of climate model data. There’ll be hundreds of other people around the world focussed on similar things. The aim is to produce the 5th series of Assessment Reports since the IPCC was formed in 1988 to help decision makers, well, make decisions.

But the IPCC has been up to other things recently as well. In November 2011 it published a Special Report Summary for Policymakers on “Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation” (or SREX; the full report will be published in February 2012). Understanding how extreme events might change in the future is really important as it’s these things that will really impact people: heat waves, flash floods, hurricanes, droughts and sea-level-rise-related inundation. This is far more useful to know than the quite abstract concept of global mean temperature change. This report looks like an advance in the IPCC procedure as it involved a far more integrated approach than usual IPCC outputs, having authors from climate science, impacts and adaptation backgrounds as well as disaster risk management experts.

Although it sounds obvious, one of the key conclusions of SREX was that the impact of extreme climatic events is greatest where vulnerability is highest. On the ground, this has manifested itself as higher fatality levels in developing nations and higher economic losses in developed countries. There’s a lot to think about here in terms of how developing nations move forward and how developed nations approach things sustainably to reduce exposure. That’s not really my area though.

From a scientific point of view, they also point out that analysing extremes is relatively difficult as they are rare and data from around the world are not always up to the job. That said, this depends a lot on the particular “extreme” being investigated – it has always struck me as slightly odd that the only common theme in the climate extremes community is the statistics, not the science behind the phenomena.

Looking to projections, the IPCC SREX assigns its highest confidence assessment (“virtually certain”) to increases in temperature extremes by 2100. This is because such increases are pretty much a direct response to the radiation changes forced by atmospheric greenhouse gas emissions. Everything else is a slightly messier consequence of the temperature changes, and these other fields vary much more amongst the 12 different models used in this analysis, making their projections uncertain. However, it also looks likely that heavy precipitation events will increase in certain regions and that the maximum winds associated with tropical cyclones will increase whilst their total number will likely decrease.

Oddly enough, the emissions pathway that we take in the future (the IPCC analyses different sets of projections based on different socioeconomic and technological development assumptions) has little impact on extreme events in the next 30 years or so – it doesn’t appear to have an impact until the latter half of the 21st Century, when inter-model variability masks most of the climate signal anyway. This highlights how making projections of extreme events is a difficult game. In that spirit, here are two of the key problems as I see them relating to my area of research on severe storms in Europe:

Loading the dice or getting new dice?

If we assume that climatic quantities have a normal distribution (which isn’t always the case, especially with precipitation) then you can view the extremes as the tails at either end of the distribution, e.g. hot or cold. So climate change could be viewed as like loading dice – you start rolling more sixes (or getting more hot days). However, when the climate regime changes this analogy breaks down as, instead of just rolling more sixes, you start needing to roll sevens as climate records are broken (see the figure below). This poses a problem for climate models as, just as a six-sided die isn’t designed to roll a seven, climate models haven’t been designed for (or at least haven’t been verified against) conditions that have never been observed.

The green curve represents the distribution of Swiss summer temperatures from 1864 to 2002. Clearly, 2003 does not align well with that distribution and is an example of an extreme breaking a previous record. The figure is taken from the IPCC AR4.
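
As a rough numerical sketch of the loading-the-dice idea, here’s a minimal Python example, assuming (simplistically) that summer temperature follows a normal distribution; all the numbers are made up for illustration:

    # Sketch of "loading the dice": a small shift in the mean of a normal
    # distribution greatly inflates the chance of beating a fixed record.
    # All numbers are invented for illustration.
    import math

    def exceed_prob(threshold, mean, sd):
        """P(X > threshold) for a normal distribution, via the error function."""
        z = (threshold - mean) / (sd * math.sqrt(2.0))
        return 0.5 * math.erfc(z)

    mean, sd = 17.0, 1.5          # hypothetical summer-mean temperature (degC)
    record = mean + 3.0 * sd      # an old record 3 standard deviations out

    for shift in [0.0, 0.5, 1.0, 1.5]:   # warming of the mean (degC)
        p = exceed_prob(record, mean + shift, sd)
        print(f"mean shift {shift:+.1f} degC -> P(beat old record) = {p:.4f}")

    # A 1.5 degC shift makes the old 3-sigma record roughly 17 times more
    # likely to be beaten, and values beyond anything previously observed
    # start to appear -- the "rolling sevens" problem for models that have
    # only ever been verified against the observed past.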

We’re gonna need a smaller box.

The second problem is that some important things – like severe storms, tornados and regional and local changes such as river catchment area precipitation changes – are too small for climate models to represent or resolve. The reason for this is that these computer models split the atmosphere (and oceans) into a 3D array of boxes. The important equations are solved in each box and then information is passed to neighbouring boxes as appropriate at each model time step. These boxes usually have horizontal dimensions of around 100-400 km to keep the computational time manageable. However, storms and tornados work on scales of significantly less than 100 km so there’s no way that the models can tell us anything about these things. This problem is particularly acute in relation to the IPCC SREX as this analysis used a suite of climate model data from a project called CMIP3, which was completed in 2006 for the last IPCC assessment and, therefore, does not use the most up-to-date and highest resolution model data. (The data currently being prepared for the next IPCC Assessment Report, called CMIP5, are not yet complete, so perhaps this criticism is a bit unfair.)
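
To see why resolution is so expensive, here’s a back-of-the-envelope sketch. The cubic cost scaling is the standard rough argument (refining both horizontal dimensions also forces a shorter time step); the numbers themselves are illustrative:

    # Back-of-the-envelope grid refinement cost. Halving the horizontal
    # grid spacing quadruples the boxes per level and (via the CFL limit
    # on the time step) roughly doubles the number of steps, so cost
    # grows roughly as (1/dx)**3. All numbers are illustrative.

    EARTH_SURFACE_KM2 = 5.1e8   # approximate surface area of the Earth

    for dx_km in [400, 200, 100, 25, 4]:        # horizontal grid spacing
        boxes_per_level = EARTH_SURFACE_KM2 / dx_km**2
        relative_cost = (400 / dx_km) ** 3      # vs. the 400 km grid
        print(f"{dx_km:4d} km grid: ~{boxes_per_level:12,.0f} boxes/level, "
              f"~{relative_cost:10,.0f}x the 400 km cost")

    # A severe storm (~10 km) or a tornado (~0.1 km) sits far below even
    # a 25 km grid, so these phenomena simply cannot appear in the models.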

Is this good enough?

So does this mean that analyses using these model data are not useful or reliable? When faced with this question I struggle to get past the fact that, however much they can improve in the future, these models are still the best and only tool we have for making climate projections. Beyond that, we can take comfort in the fact that the very basic physics of climate science is really well understood – even very simple energy balance models can tell us useful things about the effects of increasing atmospheric greenhouse gas concentrations. What we’re talking about here are the details, albeit very important details, and in that respect our current analyses are consistent with the things that we’re pretty sure of.
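
On that last point, here’s a minimal sketch of the sort of zero-dimensional energy balance model I mean. It’s a standard textbook-style formulation, not anyone’s research code, and the emissivity values are illustrative tuning:

    # Minimal zero-dimensional energy balance model: a textbook-style
    # sketch, not anyone's research code. One atmospheric layer with
    # longwave emissivity eps; the eps values are tuned for illustration.

    SIGMA = 5.67e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
    S0 = 1361.0        # solar constant (W m^-2)
    ALBEDO = 0.3       # planetary albedo

    def surface_temp(eps):
        """Equilibrium surface temperature (K) for atmospheric emissivity eps."""
        absorbed_solar = (S0 / 4.0) * (1.0 - ALBEDO)
        return (absorbed_solar / (SIGMA * (1.0 - eps / 2.0))) ** 0.25

    print(f"No greenhouse effect (eps=0.00): {surface_temp(0.00):.1f} K")  # ~255 K
    print(f"Roughly present-day (eps=0.78):  {surface_temp(0.78):.1f} K")  # ~288 K
    print(f"Slightly enhanced   (eps=0.80):  {surface_temp(0.80):.1f} K")

    # Even this crude balance captures the ~33 K greenhouse warming and
    # shows that making the atmosphere more opaque to longwave radiation
    # warms the surface -- the detail-free core of the "basic physics".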

The Independent Climate Change Email Review not so bad for CRU…

July 7, 2010

The final of the three UK reports that resulted from the CRU email leak/theft was published today. It all sounds pretty good for CRU.

The first of their three key findings is very positive:

“Climate science is a matter of such global importance, that the highest standards of honesty, rigour and openness are needed in its conduct. On the specific allegations made against the behaviour of CRU scientists, we find that their rigour and honesty as scientists are not in doubt.”

The second of the three key findings is also positive for CRU:

“In addition, we do not find that their behaviour has prejudiced the balance of advice given to policy makers. In particular, we did not find any evidence of behaviour that might undermine the conclusions of the IPCC assessments.”

As suspected, they do find that there are issues relating to openness. The third key point:

“But we do find that there has been a consistent pattern of failing to display the proper degree of openness, both on the part of the CRU scientists and on the part of the UEA, who failed to recognise not only the significance of statutory requirements but also the risk to the reputation of the University and, indeed, to the credibility of UK climate science.”

They also say later in the document:

“We find that CRU’s responses to reasonable requests for information were unhelpful and defensive.”

It looks like there are lots of other interesting things in there – the fact that the tree-ring proxy reconstructions and peer-review issues didn’t seem to worry the review panel much caught my eye.

Perhaps the biggest criticism relates to the infamous 1999 WMO report:

“…the figure supplied for the WMO Report was misleading. We do not find that it is misleading to curtail reconstructions at some point per se, or to splice data, but we believe that both of these procedures should have been made plain – ideally in the figure but certainly clearly described in either the caption or the text.”

I need to read through the report properly but at first glance it looks very supportive of the CRU scientists.

IPCC AR5 WG1 author shake up

June 29, 2010

So, after all the issues relating to the IPCC in recent months (e.g. the unimaginatively named “Climategate”, “Glaciergate”, the now retracted “Amazongate” and the not-so-heavily-covered-…-I-wonder-why? “SeaLevelGate”) the wheel keeps on turning and we’re looking at another IPCC report in 2013.

How have things changed with the IPCC? Have they made any effort to change after all the negative publicity?

Well, the IPCC issued the list of chapters and authors for the Fifth Assessment Report (AR5) and I thought I’d have a quick look at what’s new. I’ve only looked at WG1 because that’s what I know and what I find most interesting.

“Clouds and aerosols” get their own chapter and regional climate change is mentioned, which are key areas that need addressing. Irreversibility is also now considered.

The new author list has lots of changes from the AR4. A very quick analysis shows that less than 20% of the Coordinating Lead Authors or Lead Authors from AR4 are Coordinating Lead Authors or Lead Authors in AR5. Notable absences include Phil Jones, Keith Briffa and Michael Mann (although Mann was not an AR4 author either) – whether this is a consequence of “Climategate” is unknown but I expect it will make some people happy.

More nations are now represented in the list of Coordinating Lead Authors or Lead Authors (up to 45 from 34) but American authors now make up a slightly greater proportion (26% vs. 21%).

From this very quick look, it would seem difficult to criticise the AR5 IPCC for being the same old faces, so congratulations to them on that count.

Caveats: I’ve not looked at how these changes compare to the author turnover from the Third to the Fourth ARs, and old Coordinating Lead Authors or Lead Authors could still turn up as Contributing Authors.

The number crunching for this post was done by Meghan Hughes. Thanks!

The “Hockey Stick” evolution

June 15, 2010

This is a post that aims to go through the evolution of the “Hockey Stick” from 1990 to the present day.  It naturally misses out parts of the story, which deserve far more analysis, simply to keep the post short.  Comments that expand on the bits I’ve omitted are welcome!

What is the “Hockey Stick” and who cares?

One of the key areas of controversy relating to climate change and the body that synthesises all the science – the Intergovernmental Panel on Climate Change (IPCC) – is the so-called “Hockey Stick” graph that first appeared in the IPCC in 2001.

The graph is important because it tries to reconstruct large-scale temperatures for the past 1000 years or so to put the current warming in context.

Lots of people have spent many hours trying to assess or discredit the graph and the science behind it:

  • There have been several official (and controversial) inquiries and reports on the science and the scientists.  Two of the most well-known are the NRC Report and the Wegman Report.
  • The Climate Audit blog, and its many followers, have been picking at the science, the raw data and the method that produced the “Hockey Stick” for a long time.
  • The Bishop Hill blog has many posts on the “Hockey Stick” and the man behind that blog has even written a book about the graph (I’ve not read the book but I’d like to review it for this blog soon).

So if it’s so important, how did the “Hockey Stick” get here and where did it go?  Let’s have a look…

IPCC First Assessment Report (FAR) – 1990

The temperature reconstruction of the last 1000 years or so in the FAR was little more than a best guess.  The figure shown below from the report (you can find it on page 202 of the FAR (big pdf)) was even labelled as a “schematic” diagram and had no scale on the temperature axis:

It’s a composite overview of the evidence available in 1990 from ice cores, tree rings, historical records and other so-called “proxy” measures of temperature.  This field of research was in its infancy so the schematic wasn’t highlighted much in the report.

[UPDATE: Having looked through the IPCC FAR again, it doesn’t actually say how this schematic was constructed.  Looking at Jones et al. 2009 (“High-resolution palaeoclimatology of the last millennium: a review of current status and future prospects” The Holocene, 19, 3-49) it seems that it was compiled from a series of publications by H. H. Lamb and was only based on temperature records associated with Central England, so I doubt very much that any ice core data were used! The key message from Jones et al. that casts serious doubt over the schematic is: “At no place in any of the Lamb publications is there any discussion of an explicit calibration against instrumental data, just Lamb’s qualitative judgement and interpretation of what he refers to as the ‘evidence’”. The schematic also failed to make it into the 1992 IPCC Supplementary Report as it was decided that more data were required and it was not representative of a large area.]

Indeed, the reason for including this plot at all in the FAR is probably summed up by this quote:

“So it is important to recognise that natural variations of climate are appreciable and will modulate any future changes induced by man”

However, this plot is still referred to by a lot of people because the temperature at the “present day” end of the graph is not the highest value on the plot.

Given that it’s essentially a sketch, I’m surprised that people read much into this plot.

For example, the usually meticulous Science of Doom was, in my opinion, off the mark with his analysis of the development of the science here, skipping straight from the First to the Third Assessment Report version to imply that something underhand was going on.  This doesn’t represent the scientific progress properly.  So, we’ll look here at the parts of the story that SoD missed out.

[The SoD post also doesn’t show the IPCC FAR version of the plot (he uses one from a 1993 textbook that has a temperature scale) and he points to the Wegman report as the point of reference for analysis of the “Hockey Stick”, which is perhaps not the best source.  Indeed, the Stoat blog has recently examined Wegman’s analysis of this plot and the conclusions are not supportive.]

IPCC Second Assessment Report (SAR) – 1995

So where did the science go between the FAR and SAR?

It seems that it went backwards; the SAR reconstruction only goes back 600 years and not 1000 years like the FAR.

Here’s the relevant plot from page 175 of the IPCC SAR (another big pdf):

Why does it only go back 600 years?  Well, here’s a quote from the SAR:

“Prior to 1400 data are insufficient to provide hemispheric temperature estimates.”

Ok, to be fair on the IPCC, there is now a temperature scale, which is a big improvement.  Also, the IPCC recognised that they did not know enough about the climate prior to 1400 AD and removed that part of the plot.  I suppose you could read this as the start of a conspiracy to “cover up” the Medieval Warm Period but there is no evidence for that. [For example, here’s a recent example of interpreting a decent paper poorly to reach the conclusion you want regarding the MWP.]

The report also says:

“A recent analysis, using tree-ring density data, has attempted to reproduce more of the century time-scale temperature variability in this region (Briffa et al, 1995). This shows that the 20th century was clearly the warmest in the last 1000 years in this region, though shorter warmer periods occurred, for example, in the 13th and 14th centuries.”

So the state-of-the-art science in 1995 was not particularly clear but it did give an indication of where things were going…

IPCC Third Assessment Report (TAR) – 2001

Here’s where the story really takes off…

This plot is a composite of all the “best” proxy climate data available at the time of writing the TAR – you can find the plot and lots of background information here on pages 130-136 of the TAR.

It is most strongly linked to Michael Mann of Penn State University and it was this version that was dubbed the “Hockey Stick” (because it looks like an Ice Hockey Stick, I thought that was worth mentioning in case UK readers are wondering why it doesn’t curl around at the end!)

As hinted at in the SAR, there was a lot of new work looking at these reconstructions between the SAR and the TAR so it goes back further and includes regions of uncertainty.

But this was still quite new science.  If it could be trusted then it would be an important addition to the TAR.  [But it wouldn’t be the only or most important part of the report and not a fundamental result that supports the rest of the science, that’s not how science works.  To steal an analogy, science is more like a jigsaw than a house of cards.]

The IPCC clearly came to the conclusion that this plot was trustworthy and delivered this verdict:

“Taking into account [the] substantial uncertainties, Mann et al. (1999) concluded that the 1990s were likely to have been the warmest decade, and 1998 the warmest year, of the past millennium for at least the Northern Hemisphere.”

This did not go down well with some people (and the language was toned down in the subsequent IPCC report).

The controversy is quite well documented, Wikipedia is as good a starting place as any.  More recently, some of the discussion between the IPCC scientists that appeared in the incomplete email record that was taken from the University of East Anglia’s Climatic Research Unit in 2009 has further fueled this controversy. [The “hide the decline” email being the most obvious and relevant example, although this has been misinterpreted and blown out of all proportion – RealClimate give a good account of the context.]

The raw data, the methodology and the statistical tools used to produce the graph have all been examined in great depth.  Everyone from bloggers to the US Senate have been interested in it.  It must be one of the most intensely scrutinised graphs ever produced.

But it’s only a graph, so why all the fuss?

In my opinion, using relatively new science to advise policy makers on issues that affect the whole population’s way of life is bound to throw up problems, especially if you don’t like what the message is.  But that was the situation that the IPCC was in and they were probably right to stress the importance of this graph – it was new science but none of the investigations into it have landed a killer blow.

Indeed, maybe all the scrutiny would help the science develop faster and become more reliable.

So did it?

IPCC Fourth Assessment Report (AR4) – 2007

Here is the most recent IPCC “Hockey Stick”-type plot.  Its background information can be found here in the AR4.

Despite all the attacks on the 2001 “Hockey Stick”, an improved version was included in the 2007 IPCC report.  The report discussed the peer-reviewed criticisms of the methodology, which the IPCC deserve credit for.

Questions still remain over the statistical methods used in this graph, which does not make for a very accessible discussion, but here is a very quick attempt.

The main conclusion from several of the relevant inquiries was that the statistical methods were not ideal but they did not change the result (i.e. the shape of the graph) in any significant way.  To give a specific example, Lord Oxburgh’s review of the science concluded that the climate scientists should collaborate more with statisticians.  This point is hard to disagree with.
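
For the curious, the most-discussed technical point was the “short-centred” principal component step in the original reconstruction. Here’s a toy Python sketch of the difference, using synthetic red-noise data with no climate signal at all; it’s a deliberately simplified caricature of the argument, not the actual method or data:

    # Toy contrast between conventional PCA (centre each proxy series on
    # its full-period mean) and "short-centred" PCA (centre on a recent
    # calibration period only). Synthetic data, no climate signal; a
    # caricature of the criticism, not the actual reconstruction method.
    import numpy as np

    rng = np.random.default_rng(42)
    n_years, n_proxies, calib = 600, 50, 100    # last 100 "years" calibrate

    # Persistent red-noise "proxies" containing no common signal:
    proxies = np.cumsum(rng.standard_normal((n_years, n_proxies)), axis=0) * 0.1

    def leading_pc(data, centre_slice):
        """First principal component after centring on the given period."""
        centred = data - data[centre_slice].mean(axis=0)
        u, s, _ = np.linalg.svd(centred, full_matrices=False)
        return u[:, 0] * s[0]

    pc_full = leading_pc(proxies, slice(None))           # full-period centring
    pc_short = leading_pc(proxies, slice(-calib, None))  # short centring

    # Short centring rewards series whose recent mean differs from their
    # long-term level, so even pure noise tends to yield a PC1 whose
    # calibration period is offset from the rest -- a hockey-stick shape:
    for name, pc in [("full centring", pc_full), ("short centring", pc_short)]:
        offset = abs(pc[-calib:].mean() - pc.mean()) / pc.std()
        print(f"{name}: |recent mean - overall mean| / std = {offset:.2f}")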

Some people, however, still believe that the statistical methods are a terminal issue for the “Hockey Stick”.  Here’s Bishop Hill in 2008 on the stats (from a very “sceptical” point of view).  He also reported on a couple of papers that aimed to refute the major criticisms of the methods and their tortured journeys to get into the IPCC AR4 (although, having published in both GRL and CC myself, I don’t find this story as remarkable as Bishop Hill spins it!)  And here’s a defence of the “Hockey Stick” methods from RealClimate in 2005.

This argument is going to continue and the science will continue to improve.

The thing that strikes me as odd, though, is that most of the criticism aimed at the “Hockey Stick” is still aimed at the 2001 version.  I suppose that there is more ammunition to attack this version with – the data was very new, the methods were new, it was high-profile and the CRU emails give new opportunities to quote mine.  I get the feeling that some of the “sceptic” community have put nearly all their eggs in this basket and therefore need to continue attacking the 2001 version.

In the meantime, the science has moved on and improved.  Indeed, the latest Mann et al. version of the work in PNAS has been questioned in the same journal and the response by Mann et al. suggested that some of those criticisms were quite strange.

IPCC Fifth Assessment Report (AR5) – 2013

So what’s going to happen next?

I assume that the work of some of the key players may have been slowed a little because of all the inquiries they’ve had to deal with.

However, as the field has developed other groups will have taken on the challenge and there are now more groups than ever (with new ideas and perspectives) working on these issues.  Indeed, the IPCC has increased the prominence of this type of work for their next report – the outline for AR5 includes a whole chapter (Chapter 5) on palaeoclimatology – it was a chapter sub-section in AR4 and TAR.

This is good news and hopefully we’re getting closer to truth, which is what the scientists wanted all along.

Reference:
Mann ME, Zhang Z, Hughes MK, Bradley RS, Miller SK, Rutherford S, & Ni F (2008). “Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia” Proceedings of the National Academy of Sciences of the United States of America, 105 (36), 13252-13257. PMID: 18765811


Dear Institute of Physics…

March 3, 2010

After publishing my post on the CRU and S&TC yesterday, I looked into the IoP’s evidence submission in a bit more detail and was quite surprised. Here is an open letter to the IoP about that evidence:

Dear Institute of Physics

As a member of the IoP I am very concerned about the recent memorandum submitted by the IoP to the House of Commons Science and Technology Committee.

In my view, it is unfair to criticise the CRU on the basis that they did not comply with data sharing standards that, at present, don’t exist. There is clearly a need for rules regarding openness in relation to data and methods but it is foolish to retrospectively admonish people for not following them! Do the journals currently published by the IoP employ the data policies suggested in your statement?

There also seems to be some misunderstanding in the statement of the particular issues relating to the data that the CRU use in their research and how some of the issues discussed in the private and incomplete email records were resolved in the peer reviewed literature and other open arenas. In particular, point 5 of your evidence is incorrect and irresponsible given that it casts unwarranted doubt over the findings of the IPCC.

Furthermore, your statement gives no recognition to the efforts of the scientists in question to engage with their critics before they were subjected to unfounded attacks on their work and integrity and orchestrated FoI request campaigns.

Finally, I am confused as to why the Energy group was tasked with preparing the statement and not the Environmental Physics group, who would have been more aware of the particular issues in this case.

I realise that a small clarification has been issued but if the IoP continues to stand by this statement then I will have no other option but to reconsider my membership of your organisation.

Yours faithfully,

Andy Russell MInstP

Some similar posts:

Stoat: The IoP fiasco
Some beans: A letter to the Institute of Physics

CORRECTION: The statement was written by the “Energy Sub-group of the IoP’s Science Board” not the “IoP’s Energy Group” as I previously thought. Either way, the IoP has still not been open about who wrote the statement.

UPDATE 5/3/2010: In response to my letter I was sent an email from the IoP standing by the submission and an anonymous statement from a member of the Science Board about the process of writing the evidence submission. Here is that statement:

“The IOP contribution has been widely understood and welcomed – not universally, apparently, but then there is a debate going on. Scientists are sometimes criticised for not engaging, and I hope we can look forward to hearing from professional bodies representing other branches of science.

“The Institute should feel equally relaxed about the process by which it generated what is, anyway, a statement of the obvious. The standard process for policy submissions by IOP – it makes dozens per year – was followed. Typically a call for evidence is spotted or received by IOP HQ. Usually the timescale is tight. A first draft is put together at Portland Place – working scientists just don’t have time to do this. The draft is then emailed round to all members of IOP advisory bodies that might want to contribute, which is where working scientists come in. Science Board is one of several such bodies. When we do respond – my personal strike rate is definitely under 50 per cent – we copy round our responses, so other committee members get multiple opportunities to comment if they wish to. During this phase people can and do say “Sorry, I disagree”. Remember that there is a tight deadline, and by the time it is reached, everyone who seriously wants to comment will have done so. Collective responsibility then kicks in, which is why I am not revealing my identity.”

UPDATE 6/3/2010: I got an email today in response to my perfectly reasonable assessment of the IoP’s evidence submission, the sender apparently not realising that it kind of proves my point about the sort of behaviour Phil Jones has had to deal with for years:

For a physicist you are a bit of a cretin.

However your views on the release of scientific data for verification are positively neolithic.

What kind of blind, bigoted AGW Alarmist tosser are you?

Pull your head out of your arse and smell the coffee!

Have a nice day!

The Climatic Research Unit and the Science and Technology Committee

March 2, 2010

The House of Commons Science and Technology Committee met yesterday for a one off evidence session looking at the disclosure of climate data from the Climatic Research Unit at the University of East Anglia. This blog post is a quick summary of what I thought were the key issues. [Apologies for the use of some jargon that crops up because of the nature of the CRU emails.]

Lord Lawson and Dr Benny Peiser were first up. They represent the Global Warming Policy Foundation who, amusingly, failed to plot 8 temperature values correctly in their logo – I’m not sure that this gives them the authority to question 25 years of academic research on climate data but let’s see what they had to say…

Lawson’s main point was about the fundamental importance of transparency in science (not that Lawson or Peiser have ever been scientists). However, he would not answer the question put to him about who funds his organisation – this was a bit of a cheap shot but it helped make the point that transparency is only important to them in other organisations.

Evan Harris MP did excellent work in setting Lawson up for a fall in his questions about the “…hide the decline…” emails. Lawson was claiming that the details of dendroclimatology divergence were not discussed in any of the subsequent key papers on tree ring based climate reconstructions. Harris then got Lawson to agree that if the CRU scientists could show that they did discuss this matter in their publications then this was not an issue. This comes up again later.

Ian Stewart MP also asked some questions about work that the GWPF plan to do that highlighted their lack of scientific credentials or ambition. Lawson also brought up an incorrect criticism of satellite measurements, which Prof. Julia Slingo (MetOffice) would subsequently correct, and claimed that the “hockey stick” graph was “fraudulent” and periods of it were based on only one tree, claims which he has no evidence to back up.

Next to be questioned was Richard Thomas CBE, UK Information Commissioner (2002-2009), who provided some quite technical details on Freedom of Information – I’m not too sure what he added to the session. I suspect I do not understand enough about FoI laws but my interpretation of his evidence was that CRU may or may not have done anything wrong and that methodology, if documented, has to be distributed under FoI requests but there is no requirement to document it.

I felt that the most important witness, Prof. Phil Jones (accompanied by UEA vice-chancellor Prof. Edward Acton), did not look particularly well and spoke a bit shakily. He went over quite a bit of the background to CRU’s work and data policies and dealt with most of the issues. However, he could have done much better when asked about the reproducibility of CRU’s gridded surface temperature products by others: all he needed to say was that as long as someone spent the time collecting the data from meteorological organisations and read some scientific papers then they could, with a bit of work, reproduce the CRU temperature product!

This was typical of his statements – it seemed like he missed the point of many of the questions – which was quite a contrast to Lawson, who was obviously more comfortable with the rhetoric required to successfully get through these sessions. In particular, Jones’ statement that he’d sent some “awful emails” was probably meant as a joke but it didn’t get any laughs.

Evan Harris MP completed his manoeuvre of highlighting Lord Lawson’s misunderstanding of the divergence issue – Phil Jones described that the “trick” was discussed in a Nature paper, where he suspects they were the first group to use the term “divergence”, and that they were explicit in subsequent papers about this issue. I suspect that this will be a key point in the committee’s report.

Harris appeared to be the only member of the committee that understood the background enough to have devised a consistent line of questioning to the witnesses. Indeed, some of the questions from other committee members made it clear their understanding of peer review and research methods was not great.

My live streaming of the event cut out as Sir Muir Russell (Head of the Independent Climate Change E-Mails Review) took the hot seat so I missed his statement and those of Prof. John Beddington (Government Chief Scientific Adviser), Prof. Julia Slingo OBE (Chief Scientist, Met Office) and Prof. Bob Watson (Chief Scientist, Defra) but, reviewing the Guardian live blog of the session, there don’t seem to have been any more bombshells. The most important development was that the quite negative Institute of Physics evidence submission came up – the final group of witnesses felt that it pre-judged the outcome of the enquiry.

My overall impression was that the committee, as well as the GWPF, didn’t seem to understand enough about the scientific process to make progress in this case: papers don’t have a right to be published – they have to be good enough; scientific methods are discussed in papers but no-one publishes computer code of how the analyses were performed – this should probably change though. Phil Jones was also not well prepared to answer general questions from a non-specialist panel and would clearly prefer to deal with arguments in the pages of peer-reviewed journals.

On “the real holes in climate science”

February 10, 2010

[This post is based on a question I got in response to a previous post but thought it deserved a short post on its own as there’s a few interesting points.]

There’s been a lot of bad press recently for climate science but a lot of it has focused on very minor issues. For example, most of the coverage on the UEA CRU email leak/theft/hack (so-called climategate) has focused on what some of the “skeptic” community wished was in the emails rather than what was really there. The Guardian has gone over some of the issues from the leak in depth in a recent series of articles, although this seems like a lot of focus on old issues. As Prof. Phil Jones himself said in a recent interview in The Sunday Times: “I wish people would read my scientific papers rather than my emails”. Glaciergate was equally blown out of all proportion given that the original claim only appeared in one sentence of a 3000 page report.

In the midst of all this, Nature printed a nice feature looking at the real big gaps in climate science (Schiermeier 2010), but it is behind a paywall, which is a shame because it’s a good piece. So, I thought I’d provide a very quick summary here.

Regional climate prediction

We still don’t have sufficient computing power to run models at high enough resolution to make projections on the scale that would be useful to policy makers. This is clearly required to make big infrastructure decisions.

Precipitation

Projections of precipitation patterns are really hard to make as they depend on temperature changes, circulation changes, radiative balance changes and pollution (and, therefore, cloud condensation nuclei) changes. Yet precipitation changes will probably have the biggest impact on society.

Aerosols

The effect of aerosols (i.e. small solid particles or liquid droplets suspended in the atmosphere) is a big unknown. Different types do different things and it’s not really certain whether they have a generally cooling effect – by reflecting away solar radiation – or a warming effect – by promoting more cloud growth and trapping more terrestrial heat. That said, any cooling effect would be very unlikely to reverse the warming impact of greenhouse gases.

The tree-ring controversy

This relates mostly to the “hockey stick” graph and the reliability of the palaeoclimate data we use to put our current climate into perspective. It’s important that we learn from past climate changes as we only have one atmosphere and can’t do experiments with it. But it is not easy to get palaeoclimate data (tree rings, ice cores, sediment cores) or to interpret them properly.

So what is the “consensus”?

In a certain sense, when people talk about the “scientific consensus about climate change” they really mean little more than our understanding of the greenhouse effect, our impact on it and that things are very likely to get messy in the future. All the details are still very much under investigation.

Reference:

Schiermeier, Q. (2010). “The real holes in climate science” Nature, 463 (7279), 284-287. DOI: 10.1038/463284a

Glaciergate in perspective

January 18, 2010

The story is about a claim in the 2007 IPCC report that the Himalayan glaciers would melt by 2035.  It turns out that the evidence for this claim was from a speculative comment made by a not-very-prominent glaciologist in New Scientist in 1999.  The Times and The Express have gone to town with this story.  So, what does it really mean?

A little bit of background…

To understand the significance of Glaciergate, we first need to understand how the IPCC works.  So, the Intergovernmental Panel on Climate Change (IPCC) is split into 3 Working Groups:

  • WGI: The Physical Science Basis
  • WGII: Impacts, Adaptation and Vulnerability
  • WGIII: Mitigation of Climate Change


Each group produced a separate report in 2007.  They were each about 1000 pages long.  This was the fourth IPCC report round; the others were in 1990, 1995 and 2001.

WGI reviews and synthesises all the work on the physics and chemistry of the Earth system and tries to make projections of how things like temperature, rainfall and atmospheric circulation will change in the future.  I refer to this report a lot in my work as a meteorologist/climatologist.

I know a little about Working Group II – it is written by hydrologists, glaciologists, economists, social scientists and medical scientists – but I have very little idea about what goes on in WGIII.  I also confess that I’ve never looked at the WGIII report.  WGs II and III rely on a certain degree of speculation; it is their business to ask what the world would be like if certain things happen based on the projections from WGI.

Was the Himalayan meltdown a “central claim” in the IPCC report?

The 2035 date relating to the Himalayas appears in one sentence in Chapter 10 of the Working Group II report.  So this is one sentence in nearly 3000 pages. As far as I can see (please correct me if I’m wrong) the 2035 claim was not repeated in the WGII Summary for Policymakers or the overall Synthesis Report.  This was not a central claim.

Given that WGII is speculative by nature then Glaciergate appears to be a reviewing error rather than an attempt to distort the science.  Why the claim was given an implied “very likely” (90% certain) tag is worrying but then this is the first questioning of anything in the report that I can remember since it was published in 2007 – that says a lot for the skill and thoroughness of the report reviewers.

Most importantly, though, the WGII glacier claim changes absolutely nothing about the fundamental science behind climate change that appears in WGI.  This is like saying you won’t trust anything in the economics section of The Times because they once printed a football result wrong.  The WGI science is all robust and, if anything, quite conservative in its claims and projections.

Dr Rajendra Pachauri (IPCC Chair) is a former railway engineer with a PhD in economics and no formal climate science qualifications…

Today’s Express also makes this statement as if it undermines the whole of the IPCC.  If anything, it just shows that the reporter has very little idea what the IPCC actually does.  Pachauri has worked in several different scientific disciplines and has headed a large organisation before.  In my mind, that more than qualifies him to head the IPCC.

Anyway, if you’re looking for people with in depth knowledge of specific fields, then there are the WG Chairs.  For example, WGI was chaired by Susan Solomon, who stands a pretty good chance of being awarded a Nobel prize for her work in the 1980s on the ozone “hole”.  Beneath the WG Chairs, each chapter has at least 1 co-ordinating author and 1 lead author.  Beneath them, each chapter also has many contributing authors, all experts in their field.

This attack on Pachauri doesn’t hold up very well.

The revelation is the latest crack to appear in the scientific consensus over climate change…

This claim was made in the Times yesterday, with the other cited cracks being the CRU email theft and something about sea level rise estimates.  This claim seems to assume that “consensus” means that no new work is going on in the climate sciences or at least demonstrates a complete ignorance of how science works.

Things will change in the science, which is exactly why the plans for the next IPCC report (due in 2014) are already well under way!  These are exciting (and, if I’m honest, a little depressing) times for climate science so it’s disappointing that many people outside the research community don’t want to know about it.