Global warming does not slow down

9 04 2013

Julien Emile-Geay

Thanks to the miracles of credit-card reward programs, I have been gainfully receiving, for some time now, a weekly copy of the distinguished magazine The Economist. It is usually a fine read, especially since its editors distanced themselves from the laughable flat-Earthing of the Wall Street Journal a long time ago. It was therefore a bit of a shock to read last week (on the cover, no less): “global warming slows down” (handling the quotation marks with gloves). Had they been bought by Rupert Murdoch, or were they privy to new data of which the isolated climate scientist that I am had remained woefully ignorant? Well, neither, it seems.

Opening its pages with a mix of curiosity and skepticism (yes, we climate scientists are skeptics too – skepticism is not the privilege of deniers but the hallmark of healthy minds), I read the piece “Global Warming: apocalypse perhaps a little later”. It argued that since some recent estimates of climate sensitivity have been revised downward, the world might not scorch as early as we once thought. Still, they wisely argued that this was no excuse for inaction, and that the extra time should be used to devise plans to mitigate, and adapt to, man-made climate change.

Though I  agree with the consequent, I wholeheartedly disagree with the premise.

Digging a little deeper, it appears that they devoted a whole piece to climate sensitivity (“Climate Science: a sensitive matter”). James Annan was right to point out its quality – it is a nuanced piece of science writing for a lay audience, something all of my colleagues and I know the difficulty of achieving. It wasn’t the science journalist’s job to pick winners, but as a climate scientist I can tell you that not all the evidence presented therein carried equal weight. My issue has to do with the emphasis (here and elsewhere) on the idea that because surface temperature has been practically flat for the past decade (i.e. the transient climate response (TCR) may not be as high as one would have concluded from the 1970–2000 warming), the equilibrium climate sensitivity (ECS) must also be lower.

Now, the “sensitive matter” piece does take care to distinguish the two concepts (transient vs. equilibrium warming), but the message did not reach the editors, who conflate them with flying colors on the front page (“global warming slows down”). So while said editors can apparently hire good science writers, it would be even better if they read their writing carefully.

My personal take is that TCR is an ill-suited measure of climate response, because it considers only surface temperature. When energy is added to the climate system (e.g. by increasing the concentration of heat-trapping substances like carbon dioxide), it can do any of three things:

  1. raise surface temperature
  2. help water change phase (from liquid to vapor, or from ice to liquid)
  3. raise temperature somewhere else (e.g. the deep ocean).

Tracking surface warming is certainly of paramount importance, but it’s clearly not the whole story. Focusing exclusively on it misses the other two outcomes. Do we have evidence that this might be a problem? Well, most glaciers in the world are losing mass, and a recent article by Balmaseda et al. [2013] shows very clearly that the heat is reaching the deep ocean (see figure below). In fact, the rise in ocean heat content is more intense at depth than it is at the surface, for reasons that are not fully understood. To be fair, nothing involving the three-dimensional ocean circulation is particularly straightforward when viewed through the lens of a few decades of observations, but ocean heat content is quite a robust variable to track, so even if the causes of this unequal warming are nebulous, the unequal warming itself isn’t.


The Balmaseda et al study concludes that, when considering the total rise in ocean heat content, global warming has not slowed down at all: it has in fact accelerated. I’m not quite sure I see it this way: warming is still decidedly happening, but it does look like the rate of increase has slowed somewhat (as may be expected of internal climate variability). Therefore, to meet the Economist halfway, I am willing to embrace conservatism on this issue: the climate system is still warming, and let’s agree that the warming has neither slowed down nor accelerated.

In fairness, the Economist’s “sensitive matter” piece did quote the Balmaseda et al. study as part of its rather comprehensive review; in my opinion, however, this is not just one data point on a complex issue (as their piece implies), but a real game-changer. It confirms the findings of Meehl et al. [2011], who predicted exactly that: greenhouse energy surpluses need not materialize instantly as surface warming; in many instances, their model produced decades of relative “pause” in surface temperature amidst a century-long warming. It did so because the ocean circulation is quite variable, and sometimes kidnaps some of the heat to great depths, so it takes time before the whole of the climate system feels it. This is one reason for the essential distinction between ECS and TCR: the inherent variability of the climate system means that it may take a long time to reach equilibrium. That’s what’s really bothersome about ECS: it’s not observable on any time scale we care about, so it is of limited relevance in discussing reality (it is important for understanding climate models, however). Perhaps TCR should be based on some measure of ocean heat content? This may already have been done, but I am not aware of it. Actually, sea level might be our best proxy for integrated ocean heat content plus melted ice, and unsurprisingly it is still going up.
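The Meehl et al. mechanism can be sketched with a toy two-box energy-balance model. Everything below is illustrative, not observationally tuned: the feedback parameter, the two heat capacities, and the sinusoidally varying ocean heat-uptake efficiency are invented values chosen only to make the behavior visible.

```python
import numpy as np

# Toy two-box energy-balance model (surface box + deep-ocean box).
# All parameter values are illustrative, not tuned to observations.
dt = 1.0                                   # time step [yr]
years = np.arange(0.0, 100.0, dt)
F = 0.5 + 0.04 * years                     # step-plus-ramp radiative forcing [W/m^2]
lam = 1.2                                  # climate feedback [W/m^2/K]
Cs, Cd = 8.0, 100.0                        # heat capacities [W yr/m^2/K]
# Ocean heat-uptake efficiency varies in time, mimicking internal variability:
gamma = 0.7 + 0.5 * np.sin(2 * np.pi * years / 60.0)

Ts = np.zeros_like(years)                  # surface temperature anomaly [K]
Td = np.zeros_like(years)                  # deep-ocean temperature anomaly [K]
for i in range(1, len(years)):
    # Heat "kidnapped" from the surface box to the deep ocean:
    flux_down = gamma[i - 1] * (Ts[i - 1] - Td[i - 1])
    Ts[i] = Ts[i - 1] + dt / Cs * (F[i - 1] - lam * Ts[i - 1] - flux_down)
    Td[i] = Td[i - 1] + dt / Cd * flux_down

ohc = Cs * Ts + Cd * Td                    # total heat content [W yr/m^2]
# Surface warming can flatten while total heat content keeps rising:
print("surface trend, yr 60-70:", (Ts[70] - Ts[60]) / 10)
print("heat-content trend, yr 60-70:", (ohc[70] - ohc[60]) / 10)
```

In this sketch the total heat content rises monotonically even in decades when the surface trend flattens, which is precisely the sense in which the warming continues despite a surface “pause”.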

So despite the jargon, the basic idea is quite simple: more CO2 means a warmer climate system as a whole, and sooner rather than later.  So it is becoming increasingly urgent to do something about it, as they point out.  Now, what would it take to convince the Wall Street Journal of that?

UPDATE (April 16, 3:30 PST): further accounts of The Economist’s unduly optimistic perspective are given here and here.  Another paper published this week in Nature Climate Change (nicely summarized here)  also emphasizes the important role of the ocean in mediating surface warming.

Tree rings and Drought Indices

21 11 2012

Kevin Anchukaitis

On November 14, Nature published a paper by Justin Sheffield and colleagues with the title ‘Little change in global drought over the past 60 years’. The paper describes the consequences of a bias in an index of drought, the Palmer Drought Severity Index (PDSI), which incorporates precipitation, temperature, and soil moisture storage into a single measure of drought severity. The index is designed to indicate ‘normal’ (average) conditions at zero, with negative values indicating drought and positive values indicating pluvial (wetter) conditions. Sheffield et al. note that the bias they identify arises from ‘a simplified model of potential evaporation [Thornthwaite model] that responds only to changes in temperature’. Indeed, this is something that scientists in the drought community have been aware of. For example, Aiguo Dai published a paper last year in the Journal of Geophysical Research in which he noted that

Another major complaint about the PDSI is that the [potential evaporation] calculated using the Thornthwaite equation (Thornthwaite, 1948) in the original Palmer model could lead to errors in energy‐limited regions [Hobbins et al., 2008], as the Thornthwaite PE (PE_th) is based only on temperature, latitude, and month.
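To make the quoted point concrete, here is a sketch of the Thornthwaite formula. The daylength/month-length correction is omitted for brevity (equivalent to assuming 12-hour days and 30-day months), so the numbers are indicative only, and the input temperatures are invented.

```python
# Sketch of the Thornthwaite (1948) potential-evapotranspiration formula,
# illustrating why PE_th responds only to temperature. The daylength
# correction is omitted (12-hour days and 30-day months assumed).

def thornthwaite_pe(monthly_temps_c):
    """Monthly PE [mm] from 12 mean monthly temperatures [deg C]."""
    # Annual heat index computed from the months above freezing:
    I = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return [16.0 * (10.0 * t / I) ** a if t > 0 else 0.0
            for t in monthly_temps_c]

# Warm every month by 2 C and annual PE increases -- radiation, humidity,
# and wind never enter the calculation, which is the bias being discussed:
temps = [-2, 0, 4, 9, 15, 20, 23, 22, 17, 11, 5, 0]
pe_now = thornthwaite_pe(temps)
pe_warm = thornthwaite_pe([t + 2 for t in temps])
print(round(sum(pe_now)), round(sum(pe_warm)))
```

The Penman-Monteith alternative mentioned below adds the energy and aerodynamic terms (radiation, humidity, wind) that this formula has no way of seeing.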

Similar to the Sheffield et al. paper, Dai looked at variants of the PDSI, including one calculated using the Penman-Monteith equation, which incorporates radiation, humidity, and wind speed as well as precipitation and temperature. As their headline result, Sheffield et al. write that:

More realistic calculations, based on the underlying physical principles that take into account changes in available energy, humidity and wind speed, suggest that there has been little change in drought over the past 60 years.

While Dai came to a different conclusion:

The use of the Penman-Monteith PE and self-calibrating PDSI only slightly reduces the drying trend seen in the original PDSI.

This is one of those times to be wary of what Andrew Revkin has called ‘single study syndrome’. As I am quoted saying in a ScienceNews article, I think the jury’s still out on the reasons for the differences between the two groups and their implications. Certainly, as Sheffield et al. note, some of the difference is likely due in part to the treatment of uncertainties in the data for radiation, humidity, and wind speed that go into the Penman-Monteith equation. Another solid resource on this issue is this Carbon Brief article by Freya Roberts. In a larger sense, too, this is about what we actually mean by the term drought and how we choose to define and measure it. John Fleck has this aspect extremely well covered here, here, here, and here (also here, here, and here).

I wanted to address here, however, a question that arose in a Twitter string between journalists John Fleck and Keith Kloor, and climate scientists Jonathan Overpeck, Simon Donner, Ben Cook, Richard Betts, and myself — namely, since many tree-ring drought reconstructions are reconstructions of the PDSI, what does this mean for our understanding of past megadroughts?

Despite some of its weaknesses, the PDSI (even with the Thornthwaite model) still has some desirable characteristics. First, it attempts to capture the influence of temperature on evapotranspiration, and so reflects more than just precipitation. Second, since it describes dimensionless anomalies, it is theoretically comparable over large regions. Third, since it simulates soil moisture storage, it can capture the importance of a previous season’s rain (or snow) on subsequent moisture availability. Finally, the Thornthwaite model can be calculated using precipitation and temperature, climate data that are more readily available back in time compared to the radiation, humidity, and wind speed data needed for Penman-Monteith. I should note that not all the assumptions implicit or explicit in the above hold true at all times or in all places. Still, one is tempted to paraphrase Churchill on democracy. Or was that Mark Twain?
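The third characteristic, storage, is easy to illustrate with a toy single-bucket water balance. The capacity, starting moisture, and monthly values below are invented for illustration; the PDSI's actual two-layer accounting is more elaborate, but the carry-over behavior is the same in spirit.

```python
# Toy single-bucket water balance, illustrating the storage characteristic:
# moisture from a wet season carries forward into later months, something
# precipitation alone cannot capture. All quantities are illustrative [mm].

CAPACITY = 100.0   # soil water-holding capacity

def bucket(precip, pe, w0=50.0):
    """Soil moisture after each month, given precipitation and PE."""
    w, out = w0, []
    for p, e in zip(precip, pe):
        w = min(max(w + p - e, 0.0), CAPACITY)   # recharge minus demand
        out.append(w)
    return out

# A wet winter followed by a dry summer: stored water keeps the soil moist
# well after the rain stops.
precip = [120, 100, 30, 10, 5, 0]
pe     = [ 10,  15, 40, 70, 90, 95]
print(bucket(precip, pe))   # -> [100.0, 100.0, 90.0, 30.0, 0.0, 0.0]
```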

So does the fact that many tree-ring reconstructions of drought — particularly the North American and Monsoon Asia Drought Atlases — use the Palmer Drought Severity Index as their ‘target’ (predictand) field mean that we have to re-evaluate our ideas about past megadroughts?  Not yet.  The first thing to keep in mind is that all the information we have about past drought comes from proxy measurements like tree-ring width (and lake sediments and speleothems and ice cores, to name a few) and not from observations of PDSI.  To put it another way, it is the sustained narrowness of growth rings in many trees over many decades and at many sites that leads us to infer something about the timing, extent, and relative magnitude of past droughts.  When we reconstruct the PDSI (or precipitation or temperature), we are in essence taking the part of the instrumental PDSI record that overlaps with the tree-ring record and using that period of overlap to calibrate and validate a statistical model that translates tree-ring width into an estimate of PDSI.  We are taking the tree-ring measurement data and putting them into units of climate.
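In its simplest form, the calibrate-and-validate step just described is a split-period regression. Here is a minimal sketch with synthetic stand-in data (not a real chronology); the noise levels and split point are arbitrary choices for illustration.

```python
import numpy as np

# Sketch of split-period calibration/verification: regress instrumental PDSI
# on ring width over part of the overlap, verify on the rest; the same model
# would then be applied to the pre-instrumental rings. Data are synthetic.
rng = np.random.default_rng(0)
n = 100                                            # years of overlap
pdsi = rng.normal(0.0, 2.0, n)                     # "instrumental" PDSI
ring_width = 0.4 * pdsi + rng.normal(0.0, 0.3, n)  # proxy = signal + noise

calib, verif = slice(0, 70), slice(70, n)
slope, intercept = np.polyfit(ring_width[calib], pdsi[calib], 1)
recon = slope * ring_width[verif] + intercept      # rings -> "units of climate"

# Reduction of Error (RE): skill relative to the calibration-period mean.
clim = pdsi[calib].mean()
re = 1 - np.sum((pdsi[verif] - recon) ** 2) / np.sum((pdsi[verif] - clim) ** 2)
print("verification RE:", round(re, 2))
```

The point of the bias discussion above is that if the PDSI target itself drifts (e.g. from the Thornthwaite temperature sensitivity), this regression inherits the drift even when the rings are faithfully recording moisture.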

Sheffield et al. talk briefly about the implications for paleoclimate reconstruction of drought and get this part right:

… the tree-ring data, which reflect real variations in climatic and non-climatic factors …

Or to put it another way, tree-ring width reflects climate, but PDSI might not accurately reflect drought severity because of the way temperature is used to calculate potential evapotranspiration. As an example of this potential problem in the modern record, Sheffield et al. cite two papers that they say show ring width and PDSI ‘diverge from the instrumental-based PDSI_Th in recent decades’ — unfortunately, this is where things start to go a bit wrong:

Fang, K. Y. et al. Drought variations in the eastern part of northwest China over the past two centuries: evidence from tree rings. Clim. Res. 38, 129–135 (2009).

deGrandpre, L. et al. Seasonal shift in the climate responses of Pinus sibirica, Pinus sylvestris, and Larix sibirica trees from semi-arid, north-central Mongolia. Can. J. For. Res. 41, 1242–1255 (2011)

Both papers describe climate and tree growth relationships in semi-arid regions of China and Mongolia.  The second, by deGrandpre and coauthors (which, in the interest of disclosure, includes some of my own coauthors on related projects), doesn’t actually discuss any divergence between PDSI and tree-ring width.  The first, by Keyan Fang and coauthors, does show a period of separation between ring width and PDSI, but only between 1997 and 2003, the end of their record:

Fang et al. Figure 5

The authors of Fang et al. discuss this feature of their study, saying:

The abnormally dry conditions from 1997 to 2003 (Fig. 5) might have been caused by the significant drying trend and a poor PDSI model fit (Liang et al. 2007). That is, the current PDSI model for this region might have overestimated the effects of the warming trend on the local moisture conditions since 1997, resulting in abnormally dry PDSI values.

So they are essentially (if briefly) worrying about the same phenomenon that Sheffield et al. describe — that higher temperatures are biasing the calculation of the PDSI here. I’m hesitant to read any more into a single study of a single species in a single location, but it is interesting to note that, if Fang et al. are correct and the lower PDSI values after 1997 do indeed reflect ‘overestimated … effects of the warming’ on the PDSI, their tree rings didn’t ‘fall for it’ — they don’t follow the PDSI into biased territory. This should give us more confidence that the trees are reflecting moisture conditions, but less confidence in the PDSI.

Strangely, Sheffield and coauthors also briefly speculate about the Divergence Problem in formerly temperature-sensitive trees at some northern treeline sites.  Unfortunately, they get this part rather wrong — the mechanism they describe for the influence of temperature on tree-ring width isn’t correct and PDSI is completely unrelated to the divergence problem in these trees.

In case you’re still concerned about the existence of megadroughts identified in tree-ring chronologies (and other proxies) from places like the western United States, it is important to point out that much of the evidence for these events doesn’t even involve reconstructions of the PDSI. For instance, Scott Stine’s important 1994 megadrought study is based on the dates of now-drowned Medieval trees in the Sierra Nevada of California (indicating sustained and dramatically lower lake levels at times prior to the 14th century). Henri Grissino-Mayer’s amazing 2000+ year reconstruction of drought from El Malpais in New Mexico is of water-year precipitation, not PDSI. And Dave Meko, Connie Woodhouse, and others have reconstructed streamflow, not PDSI, in the Colorado River basin. Evidence for megadroughts comes from the proxies themselves, not from modern instrumental data like the PDSI.

So why worry about the problems in the PDSI if you’re a paleoclimatologist like myself? It is possible that we’re on the cusp of this becoming a problem for the calibration of our statistical reconstruction models, although nothing like the ‘divergence’ seen in Fang et al. has thus far emerged at most tree-ring sites that I’m aware of. Also, we’d like to be able to compare past droughts from the paleoclimate record with modern droughts from the instrumental data and future droughts from climate models. But to do so, we need a metric that will reliably tell us the same thing across those epochs. Evidence has been accumulating for years that PDSI presented a problem for this goal of integrating data, models, and the past into understanding the future. Our group has been looking into ways to deal with this for several years, on issues including probabilistic drought forecasting and comparisons between models and paleoclimate data. Hoerling and colleagues recently published a paper in the Journal of Climate looking at the PDSI and specifically predictions of imminent drying in the U.S. Great Plains. They write that:

PDSI is shown to be an excellent proxy indicator for Great Plains soil moisture in the 20th Century; however, its suitability breaks down in the 21st Century with the PDSI severely overstating surface water imbalances and implied agricultural stresses. Several lines of evidence and physical considerations indicate that simplifying assumptions regarding temperature effects on water balances especially concerning evapotranspiration in Palmer’s formulation compromise its suitability as drought indicator in a warming climate.

PDSI tracks soil moisture and precipitation well through the 20th century, but after that time becomes a biased indicator of drought conditions.

Hoerling et al. 2012 Figure 3

Richard Seager talked to John Fleck about this issue as well, pointing to this paper by Burke et al.

So what’s the solution for paleoclimatologists, if we’d still like to keep some of the attractive elements of the PDSI but enable better comparisons between past, present, and future droughts?  We might have to trade some of the desirable traits of the PDSI for potentially less-biased — but more uncertain or incomplete — drought metrics like modeled soil moisture. When comparing past climates and GCM simulations, we might utilize forward models to transform simulated climate into simulated tree-ring widths or other proxy measures, as opposed to the normal approach of transforming proxy data into climate variables.
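A forward model of the kind mentioned here can be sketched minimally, in the spirit of models like VS-Lite: simulated ring width is set by the more limiting of a temperature response and a moisture response (a law of the minimum). The thresholds and input values below are invented for illustration, not calibrated to any species or site.

```python
import numpy as np

# Minimal forward growth model sketch: growth each year is limited by the
# scarcer of two resources, temperature and soil moisture. Illustrative only.

def ramp(x, lo, hi):
    """Piecewise-linear growth response: 0 below lo, 1 above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def forward_ring_width(temp_c, soil_moisture):
    gT = ramp(temp_c, 2.0, 15.0)           # temperature response
    gM = ramp(soil_moisture, 0.05, 0.3)    # soil-moisture response
    return np.minimum(gT, gM)              # law-of-the-minimum growth

# In a model/data comparison, one would feed simulated temperature and soil
# moisture through this and compare the output with measured ring widths,
# rather than converting the rings into climate units first:
temp = np.array([10.0, 12.0, 14.0, 11.0])
moisture = np.array([0.25, 0.08, 0.20, 0.28])
width = forward_ring_width(temp, moisture)
print(width)   # the dry second year produces the narrowest simulated ring
```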

In summary, the Sheffield et al. result doesn’t really cast any doubt on our knowledge of the existence, timing, duration, and relative magnitude of past megadroughts. What it does do — along with the other papers discussed above, particularly those by Hoerling and Dai — is make us (dendroclimatologists) think about other drought metrics we might want to reconstruct to enable the most accurate comparisons across timescales and between paleoclimate data, instrumental observations, and climate model simulations. Stay tuned.

UPDATE (via Skeptical Science): John Nielsen-Gammon, professor, atmospheric scientist, and Texas State Climatologist looks at the Sheffield paper and compares it with the earlier Dai article.  His summary:

So what’s the take-home message from all this?  It does indeed matter how you estimate evaporation, and better estimates do indeed correspond to smaller global drought trends over time, but those trends are probably not as small as Sheffield et al. calculates.  Drought is still getting worse on a globally-averaged basis.

New England Hurricanes, the forecast every time

12 11 2012

Kevin Anchukaitis

Let me start my first post here at Strange Weather by thanking Julien for the opportunity to join him here at his blog. I’ve been studiously preparing by listening to lots of Tom Waits albums, and although I hadn’t intended for my first post to be about hurricanes in the northeastern United States, some strange weather intervened.

I’m a recent arrival to the Massachusetts coast and now a scientist at the Woods Hole Oceanographic Institution, after spending the previous several years in New York City. While Superstorm Sandy, née Hurricane Sandy, was still several days from landfall in New Jersey, though, the region’s history of deadly hurricanes was already in the front of my mind. In August of 1991 Hurricane Bob scored a direct hit on Falmouth on Cape Cod in Massachusetts — my new home. At barbeques and gatherings this summer, my immediate neighbors were telling their stories of Bob’s “furious” landfall, the wind, the snapping trees, the storm surge, the flooding — and the long aftermath without power. So as Sandy lined up for her run at New England at the end of October, I admit I was somewhat nervously eyeing the tall but spindly locust trees near my house.

New England is not without reason to keep one eye on the tropics in the late summer and autumn. Besides Bob, there were the New England Hurricane of 1938, the Great Atlantic Hurricane of 1944, Hurricane Carol in 1954, and Hurricane Donna in 1960 (Wikipedia has a list of New England hurricanes). In 1821, a hurricane passed directly over New York City, producing 13 feet of storm surge and causing the East River to flow across lower Manhattan south of Canal Street. Yet another reason to be wary of hurricanes in New England lies in the mud and sand of the coastal marshes up and down the New England coast, several of which are disconcertingly within walking distance of my new home. These environments preserve a long record of storm activity in coastal New England going back hundreds or thousands of years.

Jeff Donnelly is a colleague of mine at Woods Hole Oceanographic Institution and one of the world’s experts in paleotempestology (Andrew Alden has a nice write-up on this science, here) — the study of past major storm activity from geological or biological evidence. Jeff uses sediments in coastal environments like marshes to identify and date past storms, but others have also used stable isotopes in tree rings, corals, and cave deposits, as well as historical records.

Marsh sediments from across New England tell a story of past hurricane strikes in the region, some clearly quite large. These environments record the passage of strong storms in overwash deposits: sand that floods over the barrier between marsh and sea during high waves and storm surges. Shore Drive in Falmouth is just such a barrier, and Sandy demonstrated quite clearly what an overwash deposit on its way to a backbarrier marsh looks like.

In a 2001 paper in the journal Geology, Jeff Donnelly and colleagues used multiple sediment cores extracted from a backbarrier marsh at Whale Beach, New Jersey (between Ocean City and Sea Isle City, just south of Atlantic City, and close to where Hurricane Sandy made landfall) to reconstruct a history of beach overwash. They found deposits of sand associated with a 1962 nor’easter and another strong storm, which they believed was the 1821 hurricane. They dated a third deposit, thicker than the 1821 sand layer and probably related to an intense hurricane, to between 1278 and 1438 CE. In their article they note that the Whale Beach record suggests an annual landfall probability of 0.3%.
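One plausible back-of-the-envelope reading of that 0.3% figure is roughly two intense-hurricane deposits in about 700 years of record; the count and window here are rounded for illustration, not taken verbatim from the paper. Small annual probabilities still add up over long exposures:

```python
# Back-of-the-envelope reading of the Whale Beach number: two intense-hurricane
# deposits in roughly 700 years of record gives an annual landfall probability
# near 0.3%. (The event count and window are rounded, purely for illustration.)
events, span = 2, 700
p_annual = events / span
print(f"annual probability: {p_annual:.2%}")

# Even a small annual probability accumulates over long exposure times:
p_30yr = 1 - (1 - p_annual) ** 30
print(f"chance of at least one such landfall in any 30-yr window: {p_30yr:.1%}")
```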

In a 2004 paper in Marine Geology, Donnelly and his team again looked at overwash deposits in New Jersey, this time from Brigantine, just to the north of Atlantic City. Here again they identified a layer of sand in the backbarrier marsh likely corresponding to the 1821 hurricane. They also dated large sand layers to the period between 550 – 1400 CE, which might correspond with the 13th or 14th century event identified at Whale Beach.

Donnelly et al. 2004, Marine Geology

Original caption from Donnelly et al. 2004, Figure 7: Cross-section of Transect 2 at Brigantine. ( p ) Location of radiocarbon-dated samples (see Table 1). Horizontal axis begins at the barrier/marsh boundary. The vertical datum is the elevation of the barrier/marsh interface (approximately mean highest high water).

Further to the east, in another 2001 paper Donnelly and his team used sediment cores from Succotash Marsh (near the fabulous Matunuck Oyster Bar near Point Judith in Rhode Island) to date hurricane strikes to known events in 1938 and 1954, as well as 1815 and the 1630s. Two other overwash deposits were dated to 1295-1407 and 1404-1446 CE. Donnelly and coauthors concluded that “at least seven hurricanes of intensity sufficient to produce storm surge capable of overtopping the barrier beach at Succotash Marsh have made landfall in southern New England in the past 700 yr”.

One more, closer to home: In 2009, Anni Madsen and her coauthors (including Donnelly) published dates on hurricane deposits in Little Sippewissett Marsh in Falmouth, Cape Cod, Massachusetts, using optically stimulated luminescence (OSL) dating. This particular core shows a number of overwash deposits over the last 600 years, probably including Hurricane Bob and the 1976 Groundhog Day storm, but it is also indicative of some of the difficulties and uncertainties in using backbarrier marshes to reconstruct hurricane strikes: Little Sippewissett Marsh doesn’t have sand layers that obviously date to recorded storms in 1938, 1944, 1954, 1815, and 1635, which include some of the largest to hit this region. Uncertainties arise because, among other things, a single core may not record every storm at a site; storms themselves alter the height of the barrier and inlet channels; the dating of events carries analytical and depositional uncertainty; and in New England a strong storm could be either a hurricane or a nor’easter.

Madsen et al. 2009, Geomorphology

Location of Little Sippewissett Marsh, showing 19th and 20th century storm tracks across the region, from Madsen et al., A chronology of hurricane landfalls at Little Sippewissett Marsh, Massachusetts, USA, using optical dating, Geomorphology 109 (2009) 36–45

On Dot Earth, Andy Revkin has pointed toward his articles on Donnelly’s Caribbean research, as well as a 2002 paper by Anders Noren on millennium-scale storminess in the northeastern United States.

Bringing us back to Sandy, what does the history and geology of New England hurricanes tell us? There is evidence from all along the coast that powerful storms do occasionally make landfall in the region. The evidence from Whale Beach in New Jersey, near to where Sandy came ashore, records the very strong 1821 hurricane as well as another likely event in the 13th or 14th century. Other strong storms have hit the New England coast at other times in the past millennium. A 2002 article from the Woods Hole Oceanographic Institution quotes Donnelly:

“Most people have short memories,” says Donnelly. In fact, it is estimated that three-quarters of the population of the northeastern US has never experienced a hurricane. Donnelly’s research provides evidence to be heeded. “The geologic record shows that these great events do occur,” he says. “We need to make people aware that it can happen again. We’ve got to have better evacuation plans and we need to equip people to react to a big storm.”

I’m so far agnostic on the precise influence of human-caused climate change on the track and characteristics of Sandy. The process of sorting out the influence of natural variability from the human influence on this particular storm has just begun. As Justin Gillis notes in the New York Times Green Blog:

Some [climate scientists] are already offering preliminary speculations, true, but a detailed understanding of the anatomy and causes of the storm will take months, at least. In past major climate events, like the Russian heat wave and Pakistani floods of 2010, thorough analysis has taken years — and still failed to produce unanimity about the causes.

The influence of rising sea levels, particularly along the east coast of North America, no doubt has to be factored into understanding current and future storm surges. But what the geological and historical record indicates is that even in the absence of a human influence on the strength, track, or magnitude of tropical storms, we would still need to be prepared for destructive coastal storms striking areas of high population and considerable infrastructure. Paleoclimatology — in this case, paleotempestology — nearly always provides us with evidence of an even greater range and diversity of behavior of the climate system than we’ve witnessed over the relatively short period of instrumental observations, and gives an idea of some of the events — droughts, floods, and storms — that we need to keep in mind when figuring out how to build resilient communities.

McShane and Wyner

12 11 2010

An influential new paper on paleoclimate reconstructions of the past millennium has been doing the rounds since August.

Authored by Blake McShane and Abraham Wyner, two professional statisticians with business-school affiliations, the work is remarkable in more than one way:

  1. It is performed by professional statisticians on an up-to-date network of proxy data (that of Mann et al., PNAS 2008), a refreshing change from the studies of armchair skeptics who cherry-pick their proxies so they can get a huge Medieval Warm Period.
  2. It uses modern statistical methods, unlike the antiquated analyses that Climate Audit and other statistical conservatives have wanted us to use for a few centuries now (whether these methods lead to correct conclusions will be investigated shortly).
  3. It makes the very strong claim that climate proxies are essentially useless.

It is the last claim that has justifiably generated a great uproar in my community. For, if it were correct, it would imply that my colleagues and I have been wasting our time all along – we had better go to Penn State or Kellogg and beg for food, as the two geniuses have just put us out of a job. Is that really so?

Before I start criticizing it, here are a few points I really liked:

  • I agree with their conclusion that calibration/verification scores are a poor metric of out-of-sample performance, which is indeed the only way we can judge reliability (“models that perform similarly at predicting the instrumental temperature series [..] tell very different stories about the past”). In my opinion, a much more reliable assessment of performance comes from realistic pseudoproxy experiments: the Oracle of paleoclimate reconstructions. Although there are problems with how realistic they are at present, the community is moving fast to improve this.
  • Operational climate reconstruction methods should indeed outperform sophisticated nulls, but caution is required here: estimating key parameters from the data to specify the AR models, for instance,  reduces the degree of  “independence” between the null models and the climate signal.
  • Any climate reconstruction worth its salt should give credible intervals (or confidence intervals if you insist on being a frequentist… However, whenever you ask a non-statistician what a confidence interval is, their answer invariably corresponds to the definition of a credible interval).
  • The authors really are spot on the big question: what is the probability that the current temperature (say averaged over a decade) is unprecedented in the past 2 millennia? And more importantly, what is the probability that the current rate of warming is unprecedented in the past 2 millennia? Bayesian methods give a real advantage here, as their output readily gives you an estimate for such probabilities, instead of wasting time with frequentist tests, which are almost always ill-posed.
  • Finally, I agree with their main conclusion:

“the long flat handle of the hockey stick is best understood to be a feature of regression and less a reflection of our knowledge of the truth. Nevertheless, the temperatures of the last few decades have been relatively warm compared to many of the thousand year temperature curves sampled from the posterior distribution of our model.”  That being said, it is my postulate that when climate reconstruction methods incorporate the latest advances in the field of statistics, it will indeed be found that the current warming and its rate are unprecedented… the case for it just isn’t completely airtight now.
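On the point about outperforming sophisticated nulls, a minimal version of that benchmark looks like the following. All the data are synthetic and the AR(1) parameters are illustrative; the caveat above applies — if the null's parameters are estimated from the proxies themselves, the nulls are not fully independent of the climate signal.

```python
import numpy as np

# Benchmarking a proxy's verification skill against a "sophisticated null":
# random AR(1) series share the proxy's persistence but carry no climate
# signal, so a genuinely informative proxy should beat nearly all of them.
rng = np.random.default_rng(42)

def ar1(n, phi, rng):
    """A simple AR(1) series with unit innovations."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

n = 120
target = ar1(n, 0.5, rng)                  # stand-in "temperature" record
proxy = target + rng.normal(0.0, 1.0, n)   # noisy but genuinely informative

def verification_corr(pred, obs):
    """Correlation over a held-out verification half of the record."""
    return np.corrcoef(pred[n // 2:], obs[n // 2:])[0, 1]

skill = verification_corr(proxy, target)
null_skills = np.array([verification_corr(ar1(n, 0.5, rng), target)
                        for _ in range(500)])
frac_beaten = np.mean(skill > null_skills)
print(f"proxy skill {skill:.2f} beats {frac_beaten:.0%} of the AR(1) nulls")
```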

Now for some points of divergence:

  • Pseudoproxies are understood to mean timeseries derived from the output of long integrations of general circulation models, corrupted by noise (Gaussian white or AR(1)) to mimic the imperfect correlation between climate proxies and the climate field they purport to record. The idea, as in much of climate science, is to use these numerical models as a virtual laboratory in which controlled experiments can be run. The climate output gives an “oracle” against which one can quantitatively assess the performance of a method meant to extract a climate signal from a set of noisy timeseries (the pseudoproxies), for varying qualities of the proxy network (as measured by its signal-to-noise ratio or sparsity). McShane and Wyner show a complete lack of familiarity with the literature (always a bad idea when you’re about to make bold claims about someone else’s field) by mistaking pseudoproxies for “nonsense predictors” (random timeseries generated independently of any climate information). That whole section of their paper is therefore of little relevance to climate problems.
  • Use of the lasso to estimate a regression model.  As someone who has spent the better part of the past 2 years working on improving climate reconstruction methodologies (a new paper will be posted here in a few weeks), I am always delighted to see modern statistical methods applied to my field. Yet every good statistician will tell you that the method must be guided by the structure of the problem, and not the other way around. What is the lasso? It is an L1-penalized method designed to select very few predictors out of many. That is extremely useful when you face a problem with many variables but expect only a few of them to actually matter for the quantity you want to predict. Is that the case in paleoclimatology? Nope. Climate proxies are all noisy, and one never expects a small subset of them to dominate the rest of the pack; rather, it is their collective expression that has predictive power. Using the lasso to decimate the set of proxy predictors, then concluding that the few survivors fail to tell you anything, is like treating a whooping cough with chemotherapy, diagnosing that the patient is now much more ill than before, and ordering a coffin. For a more rigorous explanation of this methodological flaw, please see the excellent reviews by Martin Tingley and by Peter Craigmile & Bala Rajaratnam.
  • Useless proxies? Given the above, the statement “the proxies do not predict temperature significantly better than random series generated independently of temperature” is positively ridonkulous. Further, many studies have shown that proxies do in fact beat nonsense predictors in most cases (they had better!). Michael Mann’s recent work (2008, 2009) has systematically reported RE scores against benchmarks obtained from nonsense predictors. I have found the same in the case of ENSO reconstructions.
  • Arrogance.  When entering a new field and finding results at odds with the vast majority of its investigators, two conclusions are possible: (1) you are a genius and everyone else is blinded by the force of habit; (2) you have the humility to recognize that you might have missed an important point, which may warrant further reading and possibly interactions with people in said field. Spooky how McShane & Wyner jumped on (1) without considering (2). I just returned from a very pleasant visit to the statistics department at Carnegie Mellon, where I was happy to see that top-notch statisticians have quite a different attitude towards applications: they start by discussing with colleagues in an applied field before they unroll (or develop) the appropriate machinery to solve the problem. Since temperature reconstructions are now in the statistical spotlight, I can only hope that other statisticians interested in this topic will first seek to make of climate scientists friends rather than foes.
  • Bayesianism: there is no doubt in my mind that the climate reconstruction problem is ideally suited to the Bayesian framework. In fact, Martin Tingley and colleagues wrote a very nice (if somewhat dense) paper explaining how all attempts made to date can be subsumed under the formalism of Bayesian inference. When Bayesians ask me why I am not part of their cult, I can only reply that I wish I were smart enough to understand and use their methods. So the authors’ attempt at a Bayesian multiproxy reconstruction is most welcome because, as said earlier, it enables one to answer climate questions in probabilistic terms, thereby giving a measure of uncertainty about the result. However, I was stunned to see them cite Martin Tingley’s pioneering work on the topic and then proceed to ignore it. The reason why he (and Li et al 2007) “do not use their model to produce temperature reconstructions from actual proxy observations” is that they are a tad more careful than the authors, and prefer to validate their methods on known problems (e.g. pseudoproxies) before applying them to real proxy data and leaping to fanciful conclusions. However, given that climate scientists have generally been quite reckless themselves in their use of statistical methods (without always subjecting them to extensive testing beforehand), I’ll call it a draw. Now that we’re even, let’s put down the gloves and be friends. Two wrongs have never made a right, and I am convinced that the way forward is for statisticians (Bayesians or otherwise) to collaborate with climate scientists to come up with a sensible method, test it on pseudoproxies, and then (and only then) apply it to proxy data. That’s what I will publish in the next few months, anyway… stay tuned 😉
  • 10 PCs: after spending so much time indicting proxies, it is a little surprising to see how tersely the methodology is described. I could not find a justification for their choice of the number of PCs used to compress the design matrices, and I wonder how consequential that choice is. Hopefully this will be clarified in the responses to the flurry of comments that have flooded the AOAS website since the article’s publication.
  • RegEM is misunderstood here once again.  Both ridge regression and truncated total least squares (TTLS) are regularization schemes used within this errors-in-variables framework: they tame an ill-conditioned or rank-deficient sample covariance matrix to obtain what is essentially a penalized maximum likelihood estimate of the underlying covariance matrix of the { proxy + temperature } matrix. The authors erroneously state that Mann et al (PNAS, 2008) use ridge regression, while in fact they use TTLS regularization. The error is of little consequence to their results, however.
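To make the pseudoproxy idea concrete, here is a minimal sketch of how such a synthetic proxy is typically built. The trend-plus-cycle “truth” series standing in for GCM output, the signal-to-noise ratio of 0.5, and the AR(1) coefficient are all illustrative choices, not values from any particular study:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pseudoproxy(truth, snr=0.5, phi=0.3, rng=rng):
    """Corrupt a model temperature series with AR(1) noise.

    snr: signal-to-noise ratio (std of signal / std of noise);
    phi: lag-1 autocorrelation of the noise. Illustrative values.
    """
    n = len(truth)
    sigma = truth.std() / snr          # noise std implied by the SNR
    innov_sd = sigma * np.sqrt(1 - phi**2)
    noise = np.empty(n)
    noise[0] = rng.normal(0, sigma)
    for t in range(1, n):
        noise[t] = phi * noise[t - 1] + rng.normal(0, innov_sd)
    return truth + noise

# "Oracle" signal standing in for a GCM temperature series:
# a slow trend plus a 60-year oscillation, purely for illustration
years = np.arange(1000)
truth = 0.001 * years + np.sin(2 * np.pi * years / 60)
proxy = make_pseudoproxy(truth, snr=0.5)
```

A reconstruction method is then run on a network of such proxies, and its output compared against `truth` — the controlled experiment that nonsense predictors, by construction, cannot provide.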
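My point about the lasso can be seen in a toy experiment. In the hypothetical setup below (all numbers are illustrative), fifty proxies each carry a weak piece of the same temperature signal; an L1 penalty discards most of them, while an L2 (ridge) penalty — one of the regularization options mentioned above — keeps every proxy with a small weight:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 years of "temperature" driven weakly but collectively by 50 noisy
# proxies (the paleoclimate situation): no single proxy dominates, yet
# together they carry the signal. Purely synthetic, for illustration.
n, p = 200, 50
temp = rng.normal(size=n)
proxies = 0.3 * temp[:, None] + rng.normal(size=(n, p))

def lasso_cd(X, y, lam, n_iter=100):
    """Plain coordinate-descent lasso (soft-thresholding updates)."""
    beta = np.zeros(X.shape[1])
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

def ridge(X, y, lam):
    """Closed-form ridge estimate."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

b_lasso = lasso_cd(proxies, temp, lam=40.0)
b_ridge = ridge(proxies, temp, lam=40.0)

print("proxies kept by lasso:", np.sum(b_lasso != 0), "of", p)
print("proxies kept by ridge:", np.sum(np.abs(b_ridge) > 1e-8), "of", p)
```

The lasso zeroes out a large share of the proxies even though every one of them is informative; concluding from the discarded ones that proxies are useless is exactly the inferential leap criticized above.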

In summary, the article ushers in a new era in research on the climate of the past 2000 years: we now officially have the attention of professional statisticians. I believe this is a good thing overall, because it means that smart people will start working with us on a topic where their expertise is much needed. But I urge all those who believe they know everything to please consult with a climate scientist before treating us like idiots! Just as it is my responsibility to make sure that another ClimateGate does not happen, may all statisticians who read this feel inspired to give their field a better name than McShane and Wyner have!

PS: Many other commentaries have been offered thus far. For a climate perspective, the reader is referred to a recent refutation by Gavin Schmidt and Michael Mann.

Climate Mammoth

10 04 2010

Former French science minister Claude Allègre is perhaps the most prominent global warming skeptic in my homeland. He is one of the few to have scientific credentials – but unfortunately, not in the right kind of science. Allègre is a specialist in high-temperature geochemistry, noted (and decorated) for his celebrated work on the age of the Earth, for instance. No doubt Allègre knows his stuff, as attested by his publication record and numerous medals. Unfortunately, his climate credentials are a little thinner, which is a problem when you start publishing several books essentially calling the entire climate science community a bunch of idiots, or worse – mobsters. His latest outcry (L’imposture Climatique, “Climate Fraud”) has upset so many of my colleagues that a petition was doing the e-rounds this week, in which the French climate science community asks current minister Valérie Pécresse to hold an objective and fair debate at the Académie des Sciences. The full story is here, and the debate looks like it will indeed happen soon.

Why is such a debate necessary? Well, Allègre is known to be a bully, and became famous as a minister for calling the French educational system a “mammoth” that needed to lose some fat. Needless to say, this phraseology and a legendary lack of tact (his temper literally got him defenestrated at a political rally in 1968, which old-timer French professors always liked to joke about) did little to garner support for his policies, no matter how necessary they might have been. Still, I’m not one to cast the first stone when it comes to dealing untactfully with opponents, so why should I even mention this?

The problem is that Allègre and his long-time colleague Vincent Courtillot have used their clout at the Académie (of which they are bona fide members) to organize fake debates on the issue, failing to invite people who know anything about climate, or censoring their responses. So the people who do know about climate understandably felt left out, and would like a seat at the table. This would be just a petty academic dispute, were it not for the fact that the French media seem very hungry for Allègre’s presence. This is apparently as much because of his aggressive communication style (which always makes for a heated debate, hence a healthy amount of prime-time drama during the news hour) as for his current position in this fake debate. I say current because, apparently, he wrote 20 years ago in a book (Clés pour la géologie): “By burning fossil fuels, man increased the concentration of carbon dioxide in the atmosphere which, for example, has raised the global mean temperature by half a degree in the last century”. Now the tone has changed drastically: we hear the familiar refrain that warming is barely discernible or, when it is (for example, in the melting of the snowcap of Mt Kilimanjaro), that it is simply due to “natural causes”. A strange feeling of déjà vu?

As usual with climate skepticism, we have to go beyond personal motivations and analyze the arguments in their own right. This was done masterfully by the inimitable Ray Pierrehumbert in a 2-part blog post on RealClimate, which ranks among my favorite posts of all time. Part 1 is here, Part 2 there. You would think that the Flat Earth Knights (that is what they are now called in the climate community, in reference to their omission of the Earth’s rotundity in elementary radiation budget calculations) would have stopped embarrassing themselves after top-notch climate scientist Edouard Bard patiently debunked all of their arguments. Alas, far from being the end, it seems only to have ushered in an era of growing media attention for Allègre and Courtillot, who tell the skeptics just what they want to hear under a varnish of scientific credibility that few care to question. In Courtillot’s case, scientific misconduct is beyond doubt. Allègre seems to be more subtle, but the fact that he is using his prominence and weight in the media to hijack the debate is troubling. The reason why I blur the line between the two is that they are clearly tag-teaming, with Allègre handling the (non-peer-reviewed) book and television PR campaign, while Courtillot is very active in the peer-reviewed literature, with the success that we know. To his credit, Courtillot recently made the news for his association with two publications in the Journal of Atmospheric and Solar-Terrestrial Physics, which aimed at establishing a statistically significant correlation between solar activity and the recent warming. The papers are here and here. A well-argued critique of their methodology, led by statistical climatologist Pascal Yiou, can be found here.

While the peer-reviewed literature is indeed a venue of choice for scientists to debate arguments, it may not be the most transparent to the general public. As a scientist and signatory of the aforementioned online petition, I look forward to a free, open and impartially moderated debate at the Académie des Sciences, where I trust that the considerable knowledge, integrity and  intelligence of some of the most noted French climate scientists will give real scientific arguments a fair chance of being heard. Then, let the people decide what to believe, but at least on the basis of sound arguments.

If, as seems unavoidable, our mammoth ends up losing a few tusks in the battle, I hope he will (this time) respect scientific ethics when intervening in a scientific debate where (so far) ignorance and arrogance are his only medals.


PS: The quixotic claims from Allègre’s latest book are debunked here (in French); honestly, they are so pathetic that I won’t waste my Saturday afternoon on an English translation!

Advice to Climate Scientists on how to Avoid being Swift-boated and how to become Public Intellectuals

1 03 2010

My friend Kevin pointed me to this great post by Juan Cole, which is well worth its weight in wisdom. The recent vilification of climate scientists in the public sphere underscores the need for my profession to urgently become media-savvy. Though it seems that experience will be the best teacher, I found the following advice very useful. It is always nice to hear you are not alone in a dark room. Happy reading. El Niño.

Climate scientists continue to see persuasive evidence of global warming and climate change when they speak at academic conferences, even though, as Andrew Sullivan rightly put it, the science is being ‘swift-boated before our eyes.’ (See also Bill McKibben on climate change’s OJ Simpson moment.)

This article includes some hand-wringing from scientists who say that they should have responded to the attacks earlier and more forcefully in public last fall, or who worry that scientists are not charismatic t.v. personalities who can be persuasive in that medium.

Let me just give my scientific colleagues some advice, since as a Middle East expert I’ve seen all sorts of falsehoods about the region successfully purveyed by the US mass media and print press, in such a way as to shape public opinion and to affect policy-making in Washington:

1. Every single serious climate scientist should be running a blog. There is enormous thirst among the public for this information, and publishing only in technical refereed journals is guaranteed to quarantine the information away from the general public. A blog allows scientists to summarize new findings in clear language for a wide audience. It makes the scientist and the scientific research ‘legible’ to the wider society. Educated lay persons will run with interesting new findings and cause them to go viral. You will also find that you give courage to other colleagues who are specialists to speak out in public. You cannot depend on journalists to do this work. You have to do it yourselves.

2. It is not your fault. The falsehoods in the media are not there because you haven’t spoken out forcefully or are not good on t.v. They are there for the following reasons:

a. Very, very wealthy and powerful interests are lobbying the big media companies behind the scenes to push climate change skepticism, or in some cases (as with Rupert Murdoch’s Newscorp/ Fox Cable News) the powerful and wealthy interests actually own the media.

b. Powerful politicians linked to those wealthy interests are shilling for them, and elected politicians clearly backed by economic elites are given respect in the US corporate media. Big Oil executives, for example, have an excellent Rolodex of CEOs, producers, the bookers for the talk shows, etc. in the corporate media. They also fund, behind the scenes, “think tanks” such as the American Enterprise Institute to produce phony science. Since the AEI generates talking points that aim at helping Republicans get elected and pass right-wing legislation, it is paid attention to by the corporate media.

c. Media thrives on controversy, which produces ratings and advertising revenue. As a result, it is structured into an ‘on the one hand, on the other hand’ binary argument. Any broadcast that pits a climate change skeptic against a serious climate scientist is automatically a win for the skeptic, since a false position is being given equal time and legitimacy. It was the same in the old days when the cigarette manufacturers would pay a ‘scientist’ to go deny that smoking causes lung cancer. And of course we saw all the instant Middle East experts who knew no Arabic and had never lived in the Arab world or sometimes even been there who were paraded as knowledgeable sources of what would happen if the United States invaded Iraq and occupied it.

d. Journalists for the most part have to do as they are told. Their editors and the owners of the corporate media decide which stories get air time and how they are pitched. Most journalists privately admit that they hate their often venal and ignorant bosses. But what alternative do most of them have?

e. Journalists for the most part do not know how to find academic experts. An enterprising one might call a university and be directed to a particular faculty member, which is far too random a way to proceed. If I were looking for an academic expert, I’d check a citation index of refereed articles, but most people don’t even know how to find the relevant database. Moreover, it is not all the journalists’ fault. Journalism works on short deadlines, and academics are often teaching or in committee and away from email. Many academics refuse (shame on them) to make time for media interviews.

f. Many journalists are generalists and do not themselves have the specialized training or background for deciding what the truth is in technical controversies. Some of them are therefore fairly easily fooled on issues that require technical or specialist knowledge. Even a veteran journalist like Judy Miller fell for an allegation that Iraq’s importation of thin aluminum tubes in 2002 was for nuclear enrichment centrifuges, even though the tubes were not substantial enough for that purpose. Many journalists (and even Colin Powell) reported with a straight face the Neocon lie that Iraq had ‘mobile biological weapons labs,’ as though they were something you could put in a Winnebago and bounce around on Iraq’s pitted roads. No biological weapons lab could possibly be set up without a clean room, which can hardly be mobile. Back in the Iran-Iraq War, I can remember an American wire service story that took seriously Iraq’s claim that large numbers of Iranian troops were killed trying to cross a large body of water by fallen electrical wires; that could happen in a puddle but not in a river. They were killed by Iraqi poison gas, of course.

The good journalists are aware of their limitations and develop proxies for figuring out who is credible. But the social climbers and time servers are happy just to host a shouting match that maybe produces ‘compelling’ television, which is how they get ahead in life.

3. If you just keep plugging away at it, with blogging and print, radio and television interviews, you can have an impact on public discourse over time. I could not quantify it, but I am sure that I have. It is a lifetime commitment and a lot of work, and it interferes with academic life to some extent. Going public also makes it likely that you will be personally smeared and horrible lies purveyed about you in public (they don’t play fair – they make up quotes and falsely attribute them to you; it isn’t a debate, it is a hatchet job). I certainly have been calumniated, e.g. by powerful voices such as John Fund at the Wall Street Journal or Michael Rubin at the American Enterprise Institute. But if an issue is important to you and the fate of your children and grandchildren, surely having an impact is well worth any price you pay.

Juan Cole, Feb 28 2010 blogpost

What does clean coal buy us?

26 02 2009

Some pretty cheap laughs, but not much else. It is well known that there is no such thing as “clean” coal – the term itself is a fallacy.

Yet our friends from the coal industry would rather have us believe otherwise, and go to ridiculous lengths to convince the less discerning among us of the undying virtues of Carboniferous deposits in our energy infrastructure. As done here:

Fortunately, our friends from This Is Reality are not sitting on their hands, and we owe them a pretty hilarious counter-offensive, crafted by no less than my favorite movie directors, the Coen Brothers:

Enjoy without moderation, and don’t breathe the black stuff if you can avoid it (you can in the US – just call up your representative if they try to shove it down your throat).