Bayesian Immunity

29 04 2020

Probability is the only satisfactory way to reason in an uncertain world.

Dennis Lindley

An elementary notion in probability is Bayes’ rule, named after the Reverend Thomas Bayes, who probably did not discover it. Bayes’ rule is about flipping conditionals: if I know the probability of event A given that event B has occurred, what is the probability of B given that A has occurred? The answer, it turns out, fits on a big neon sign:

A neon sign showing the simple statement of Bayes’ theorem at the offices of HP Autonomy. Credit: Matt Buck.

On the face of it, Bayes’ rule is a tautology, basically a restatement of the chain rule. But it is in fact fundamental to all kinds of inference, from robots learning about their environment from sensors, to recommender systems for movies, to a lot of machine learning, criminal justice, and medical diagnostics.

It is a depressing fact of modern medicine that most medical professionals incorrectly apply the results of diagnostic tests. Imagine that you have a test that is 99% sensitive (meaning it correctly identifies 99% of people who have a condition) and 99% specific (meaning that it incorrectly identifies people who do not have that condition as having it only 1% of the time). Say you take the test and the result comes back positive: what is the probability that you actually have the disease? Most people (including a frightening number of doctors) will say 99%. But that is missing a key part of the equation: the prior odds. That is, if the condition is extremely rare (say 0.1% of the population), then the majority of people who test positive (10 out of 11) do not actually have that condition. In this scenario, somebody who tests positive only has a 1-in-11 chance (about 9%) of having the disease. Pretty different from 99%, wouldn’t you say? If you are curious, this video gives a good illustration of why the maths works out that way.
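The arithmetic behind that 1-in-11 figure takes only a few lines of Python (a minimal sketch; the function and variable names are mine, for illustration only):

```python
# Bayes' rule for the screening example: a 99% sensitive, 99% specific test,
# applied to a condition with 0.1% prevalence.
def posterior(prior, sens=0.99, spec=0.99):
    false_pos = 1.0 - spec                                 # P(T+ | no condition)
    evidence = sens * prior + false_pos * (1.0 - prior)    # P(T+)
    return sens * prior / evidence                         # P(condition | T+)

print(posterior(0.001))  # ~0.09, i.e. about 1 in 11 -- not 99%
```

Run the same function with a 10% prevalence and the posterior jumps to about 92%, which is exactly the point: the test hasn't changed, only the prior has.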

Now, Bayes applied to medical diagnoses is nothing new, and indeed the subject of almost any tutorial on Bayes’ rule. The reason why I am bringing it up today is that there is increasing hope placed on antibody testing to help end the COVID-19 pandemic. Some observers are even imagining fairly dystopian scenarios in which the world would be segregated into a two-tier society: those immune to the virus, and those still at risk. They (and others) even imagine extreme scenarios in which people might be tempted to get infected voluntarily to develop immunity (assuming, of course, that they don’t die).

Those extreme scenarios aside, I thought the topic was a good one for my “Lies, Damned lies and statistics” class, where I basically restate Bayes’ theorem a hundred different ways until students get it. I thought I would share the problem here, since Bayes’ theorem, basic though it is, remains widely underappreciated in decision-making, even when lives are literally at stake.

Let’s say you manage workers who might be exposed to SARS-CoV-2, and you would like to make sure that only workers who are immune to it are sent into those high-risk environments. A worker shows up with an “immunity passport”: the result of a positive antibody test (T+) that has 99% sensitivity and 95% specificity (that’s optimistic given current tests, but again, this is for argument’s sake). What is the probability that they actually are immune to the virus?

Let’s translate this into math. “99% sensitivity” means that the conditional probability of testing positive given that you are immune is P(T^+|I) = 0.99. Similarly, “95% specificity” means that the probability of testing positive given that you are not immune is P(T^+|\bar{I}) = 0.05. Bayes’ rule (rephrased from the neon sign above) is:

P(I|T^+) = \frac{P(T^+|I) P(I)}{P(T^+|I) P(I) +P(T^+|\bar{I}) P(\bar{I}) }

The probability we seek (being immune given that the test is positive) is a combination of four numbers: P(T^+|I) and P(T^+|\bar{I}), which characterize the test’s accuracy, and, crucially, P(I), the probability of being immune in the first place, as well as P(\bar{I}), the probability of not being immune. Now, obviously, we have the simple relation P(\bar{I}) = 1 - P(I), so the last piece of the puzzle is knowing P(I). In the case of a brand new disease like COVID-19 it is an unknown variable, which I am going to denote p, and for which another name might be herd immunity. Now, we can rewrite Bayes’ rule as:

P(I|T^+) = \frac{0.99 p}{0.99 p + 0.05 (1-p)}

which is a closed-form function of the herd immunity parameter p. So, just as before, it’s not just about how accurate the test is; it’s also, crucially, about how prevalent the condition is. To see this, let’s plot this relationship:
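For those who want to reproduce the curve, here is a minimal Python sketch of that closed-form function (the names are mine):

```python
def p_immune_given_pos(p, sens=0.99, spec=0.95):
    """P(I | T+) as a function of the herd immunity parameter p."""
    return sens * p / (sens * p + (1.0 - spec) * (1.0 - p))

for p in (0.05, 0.20, 0.50):
    print(f"p = {p:.2f}  ->  P(I|T+) = {p_immune_given_pos(p):.2f}")
# p = 0.05  ->  P(I|T+) = 0.51
# p = 0.20  ->  P(I|T+) = 0.83
# p = 0.50  ->  P(I|T+) = 0.95
```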

Probability of being immune given a positive test, as a function of herd immunity parameter, p. The gray dashed line shows the state of knowledge in the absence of observations.

We see that for low herd immunity (say, 5%), even a reliable test doesn’t give you a very good guarantee that an immunity marker is synonymous with immunity: the posterior probability is only about 51%, scarcely better than a coin flip. Still, 51% is about ten times better than 5%, which shows you how much information the data added to your mental picture: you started from the prior (gray dashed line), and the evidence (here, the antibody test) brought you up to the blue line, which is considerably closer to 1, though perhaps nowhere near as close as you were hoping.

Indeed, you might be loath to make potentially life-or-death decisions based on a 49% probability. What would it take to bring that probability to 95%, say? Two ways:

  • With this kind of test accuracy, once herd immunity passes 50%, the posterior is above 95%. 
  • If you can’t wait until then, the only way to boost the posterior is to make the test radically more accurate. 

Below is a case where the rate of false positives has been brought down from 5% to 1% (that is, the test is now 99% specific):

Same as before, but the red curve explores a scenario where the false positive rate is 1%, versus 5% in the blue curve seen above.

We see that limiting erroneous diagnoses (false positives) really brings up the posterior probability, particularly at low herd immunity. Yet for herd immunity below about 16%, that is still not quite enough to reach 95% certainty.
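Numerically, the improvement looks like this (again my own sketch, evaluating the same formula at the two false-positive rates):

```python
def p_immune_given_pos(p, sens=0.99, spec=0.95):
    return sens * p / (sens * p + (1.0 - spec) * (1.0 - p))

for p in (0.05, 0.20):
    blue = p_immune_given_pos(p, spec=0.95)  # 5% false positives
    red = p_immune_given_pos(p, spec=0.99)   # 1% false positives
    print(f"p = {p:.2f}:  {blue:.2f} -> {red:.2f}")
# p = 0.05:  0.51 -> 0.84
# p = 0.20:  0.83 -> 0.96
```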

So, like all medical diagnostics, and more broadly speaking all detection problems, it’s not only about the accuracy of the test, but also about the prevalence of the condition you are trying to diagnose. For well-established diseases, the incidence rate is often sufficiently well characterized that it’s not a limiting factor in decision-making. But in this case, it is: we have to know both how good the test is (e.g. from laboratory experiments) and how prevalent herd immunity is at the time the test is made. We cannot do that without independent information about who has the virus or not (from independent tests, such as PCR-based ones).

So I cannot imagine that any rational agent will rely on antibody tests to make these kinds of decisions anytime soon — not until we have much more information about herd immunity. That being said, the Trump administration is notorious for making irrational decisions about nearly everything, so we might very soon be using antibody tests as if they told us everything we want to know. At least, thanks to Bayes, we’ll have the intellectual satisfaction of knowing exactly how dumb that was. In an age of radical anti-intellectualism, that seems to be the only satisfaction left to intellectuals.

Volcanism & climate, part 2

27 03 2020

A lot of volcanic updates this week (see: part 1)! The academics among you know how slowly academia’s creaky wheels can move at times. What follows is a lesson in patience and persistence.

The story starts in 2005, when my PhD advisor Mark Cane suggested that I use his famed model of El Niño-Southern Oscillation (ENSO) to understand the behavior of ENSO over the Holocene. Out of this came two papers: one focusing on the potential role of solar variability in explaining millennial-scale climate oscillations; the other on the response of ENSO to volcanic forcing. Both were inspired by work done by Michael Mann, Mark, and a few others shortly before then.

That experience gave me a taste for two things: (1) it was a lot more fun to work with simple models, which can be understood and run quite fast, compared with big berthas like General Circulation Models — I am forever grateful that I don’t have to run those myself; (2) fun though it is to play with models, I was far more interested in seeing if these experiments in silico could explain any aspect of the natural world.

That presupposed that one knew, in fact, what had happened in the real world over the past few thousand years. Given that we don’t even have adequate data coverage for the 1877 El Niño, this was a tall order. That got me crawling down the rabbit hole of collating information from various proxies about the state of ENSO over the past 10,000 years. I have never left that rabbit hole, and I’m now writing from there, with attendant benefits for social distancing.

Paleo ENSO

Both of the papers mentioned above (Emile-Geay et al 2007; Emile-Geay et al 2008) attempted the somewhat foolhardy exercise of manually compiling cherry-picked ENSO proxies and comparing them to model output. It quickly became obvious that it was as dumb as it was tedious: for instance, while we found some support in certain proxies for something like an El Niño event in 1258/59 (corresponding to the Samalas eruption, and in line with the predictions of the Cane-Zebiak model I was running), that could have been the result of chance alone. Indeed, in any given year, there’s about a 30% chance of experiencing El Niño conditions, so getting a match isn’t exactly cause for celebration.

I was thus hopelessly excited when the great Kim Cobb contacted me to work on better constraining the state of ENSO over the past millennium, thanks to a NOAA grant she had just received. Kim had risen to rockstar-level fame for her 2003 paper that presented the first evidence of sea-surface temperature variations from the heart of the tropical Pacific over the past millennium. In it, she had analyzed the isotopic composition of oxygen in coral skeletons left on the beautiful beaches of the Line Islands: Palmyra, Christmas and Fanning. This composition revealed a highly variable El Niño over the past millennium, in several discrete windows corresponding to the periods the corals had lived through (black curve below). What was especially impressive is that the data were monthly, giving insight into the full ENSO cycle for hundreds of years. To be sure, there had been many studies with monthly coral isotopic data before, but none from the heart of the tropical Pacific or covering quite as much of the past millennium.

Fig 5 from Cobb et al 2003. The monthly Palmyra coral d18O records are in (b) as a thin black line, shown with a 10-yr running average (thick yellow line).

I ended up going to Georgia Tech to work with Kim for my postdoc (2006-2008), and after trials and tribulations, we published a set of updated reconstructions of the NINO3.4 index (a key metric of ENSO) over the last 850 years (part 1, part 2). Yes, you’re not dreaming, it did take 5 years, for reasons I don’t want to get into. One reason is that compiling paleoclimate data by hand is one of the most thankless tasks known to science, and the state of cyberinfrastructure at the time made this especially unproductive. That’s what got me motivated to change this. Another reason is that I had to teach myself some serious statistics, which took a while. Finally, I had to learn to make do with imperfect results, which was probably the hardest part. (To this day, I am still extremely dissatisfied with these reconstructions, which is why I am so keen on Feng’s newest work using paleoclimate data assimilation to update these estimates. The years have made me marginally wiser, though: I have ceased to expect that this will be perfect.)

Back to Kim’s project: part of the grant was to use new data from Palmyra to fill in the holes of the record shown above. Kim’s lab delivered some exquisitely high-resolution data, which provide new ways to probe the past behavior of ENSO.

Volcanoes and ENSO

Once we had such a reconstruction in hand, it was natural to look for relationships with external forcing. Indeed, the extent to which changes in Earth’s energy budget (resulting from changes in the Sun’s luminosity, in volcanic aerosols obscuring the Sun, or from human emissions of greenhouse gases) may affect ENSO has been a longstanding question in the field.

On the volcano side, the story goes back to 1984, when Paul Handler reported “possible associations” between volcanic aerosols and ENSO. At the time, very little was understood about ENSO, but it quickly became clear (partly through the work of Cane and Zebiak, mentioned above) that since ENSO could be predicted up to 12 months ahead without any knowledge of stratospheric conditions, those could not be a major factor. Subsequent data did not appear to support Handler’s hypothesis, to put it mildly (e.g. Self et al 1997), but then again: there were so few volcanic eruptions in the short historical record that it could not settle the question conclusively one way or the other. Another issue is that Handler had proposed no plausible mechanism for the connection, which therefore looked like a fluke.

Yet, in 2003, Adams et al reported “Proxy evidence for an El Niño-like response to volcanic forcing”, using some of Mike Mann’s first multiproxy reconstructions of ENSO (the NINO3 index, to be precise) and a simple statistical methodology called Superposed Epoch Analysis (SEA). That was the impetus for Mann’s 2005 study cited above, which used the relatively recent thermostat mechanism to justify the El Niño-like response (counter-intuitive, because volcanoes cool climate, so why would they warm the tropical Pacific?).

In my 2008 study, cited above, I looked at this in more detail, using fairly large ensembles of simulations to examine how volcanoes can load the dice in favor of ENSO events; we concluded that their effect was negligible except for eruptions larger than Krakatau in 1883, which explains why one wouldn’t see an effect over the twentieth century, which was devoid of such large eruptions. Yet all of this was largely theoretical until we could say with confidence what had happened to ENSO over the past millennium. Kim’s original 2003 record was a game-changer, but unfortunately did not sample some of the most interesting times for volcano fans: it did not cover Samalas, Kuwae, or the early 19th-century eruptions, for instance.

So I was really keen on testing this in my new reconstructions. However, I was also clear-eyed about dating accuracy. In the Earth sciences, dating precision is always key. I knew our 2013 reconstructions had sufficient temporal precision to explore solar connections at decadal and longer scales (which we did), but I was skeptical that we could meaningfully evaluate the volcanic response, where being off by even one year could really screw things up. There was evidence of misalignment in some series, which could throw the whole game off.

That did not prevent McGregor et al (2010) from repeating the analysis of Adams et al (2003) on their “Unified ENSO Proxy” (a haphazard assemblage of existing ENSO reconstructions, some using the same underlying data). I was not convinced that this was sound, and I doubted that even my own reconstructions were good enough for it.

New splice on the block

Enter Sylvia Dee, who joined my lab in 2010. As part of a project for my Data Analysis class, I encouraged her to look at connections between volcanic forcing and the latest and greatest Palmyra data, pictured below:

Coral oxygen isotopes from Palmyra island. Shown are monthly resolved fossil coral oxygen isotopes measured in the newest multisegment Palmyra corals. See Dee et al, 2020, for details. This version of their Fig 1 showcases the absolute (U/Th) dates used to constrain the events’ timing.

We did not find much at the time, but there were a few snags: (1) the dating was provisional, and Kim would later revise it; (2) the forcing wasn’t that good (we saw that in part 1), and (3) there also weren’t many climate simulations to compare this with. What a difference a decade makes! In the intervening years, Kim refined the chronology, Sigl et al came up with vastly improved estimates of the forcing (2015; see part 1), and the PMIP3 past1000 simulations (2013) provided a trove of model results to compare to.

A couple of years ago, Sylvia (then a postdoc at Brown University) decided to dust off this old code and apply it to this new trove of data using a more thorough detection algorithm. Together with Kim Cobb, Toby Ault, Chris Charles, Hai Cheng, Larry Edwards, and yours truly, she was able to test the link between volcanism and ENSO with unprecedented accuracy.

The results are presented in this article, published today in Science. What we found was that this dataset — the longest, best replicated, highest-resolution, and most proximal record to the center of ENSO variability currently available — reveals no consistent relationship between explosive volcanism and ENSO. This stands in sharp contrast with what some climate models are currently saying: see the colored curves on the figure below.

Response to explosive volcanism in Palmyra coral d18O data and PMIP3 model simulations. Source: Dee et al, 2020


In the years since the Adams et al (2003) study, the El Niño-like response to volcanism has somehow become enshrined in the paleoclimate canon. It is not entirely obvious why that happened, and I think it deserves a commentary. When I first started presenting the results of my 2008 paper, many climate scientists looked at me like I had two heads (or, perhaps, none at all). One of the common responses I got from modelers was (paraphrasing) “my GCM doesn’t do that, so it can’t possibly be true”. Then some models (GFDL’s CM2.1, for instance) started producing the El Niño-like response in the year following an eruption; sometimes one could even show that the thermostat mechanism invoked by Mann et al (2005) was at play in those GCMs.

Then, one day (I forgot exactly when), the mighty Community Earth System Model started producing this behavior. The IPSL model also started doing it, though for completely different reasons. And now, this El Niño-like response to volcanism seems accepted as a fundamental law of Nature by some modelers.

This came into sharp relief last summer, while I was co-writing a chapter on paleoclimate evidence for changes in ENSO in an upcoming AGU book, with Kim Cobb, Julia Cole, and Mary Elliot. We concluded that there was no robust influence of strong volcanic eruptions on ENSO, but a chapter dedicated to precisely that question (and, it is noteworthy, written exclusively by modelers) came down on the other side of it. How can the same book conclude two different things?

I think I know the source of this confusion: the vast majority of the reconstructions reviewed in that chapter used tree-ring records. While those are reported to have annual precision, analyses such as SEA require extremely accurate dating, and it is known that multiproxy reconstructions are vulnerable to even minute dating uncertainties [Comboul et al., 2014]. To wit, Sigl et al. [2015] showed that even cross-dated tree-ring chronologies, often considered the gold standard of chronological accuracy in the paleo realm, were offset by 1-2 years in the case of some eruptions. Such offsets get compounded in a compositing methodology such as SEA. A second reason is that SEA is rarely benchmarked against an appropriate null hypothesis [Haurwitz and Brier, 1981; Rao et al., 2019], if at all. A third reason is that many of the reconstructions involve extra-tropical tree-rings, whose primary ENSO imprint is through hydrological perturbations. Using simulations with the CESM model, Stevenson et al. [2016] showed that, to an extratropical tree (many of which have been used in ENSO reconstructions), nothing looks more like a volcanic eruption than an El Niño event. If this mechanism applies in nature, it would make such records rather problematic for disentangling the link between volcanism and ENSO.

In Sylvia’s latest opus, we took great care to address these three concerns:

  • the coral data come from overlapping cores, each with several high-precision dates. That leaves almost no wiggle-room for age uncertainties, a point made at length in the paper’s supplement.
  • we use a rigorous block bootstrap test to determine the significance of the excursions, against a null distribution obtained by resampling ENSO states in non-volcanic years (again, see the supplement. That’s where all the actual science lives).
  • we use data from the heart of the tropical Pacific, which responds directly to sea-surface temperature, not its distant atmospheric teleconnections.
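To make the second point concrete, here is a toy superposed epoch analysis with a resampling-based null, on synthetic data. This is my own illustrative sketch, not the code of Dee et al. (2020); their block bootstrap is more careful about serial dependence, and the eruption years below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual "ENSO index" and an invented list of eruption years.
years = np.arange(1000, 1400)
index = rng.standard_normal(years.size)
eruption_years = np.array([1108, 1171, 1230, 1258, 1286])

def composite(index, years, events, window=(-2, 5)):
    """Average the index over aligned windows around each event year (SEA)."""
    offsets = np.arange(window[0], window[1] + 1)
    segments = [index[np.searchsorted(years, y) + offsets] for y in events]
    return offsets, np.mean(segments, axis=0)

offsets, obs = composite(index, years, eruption_years)

# Null distribution: composites keyed to randomly drawn non-volcanic years.
candidates = np.setdiff1d(years[5:-6], eruption_years)
null = np.array([
    composite(index, years,
              rng.choice(candidates, eruption_years.size, replace=False))[1]
    for _ in range(1000)
])
lo, hi = np.percentile(null, [2.5, 97.5], axis=0)  # pointwise 95% bounds

for k, o in enumerate(offsets):
    flag = "*" if (obs[k] < lo[k] or obs[k] > hi[k]) else " "
    print(f"year {o:+d}: composite = {obs[k]:+.2f} {flag}")
```

The point of the sketch is simply that a composite over eruption years means nothing on its own: it only acquires meaning relative to composites keyed to randomly chosen non-volcanic years.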

Is this the last word on the topic? Of course not; nothing ever is. But I think we can say with confidence that it will stimulate more research. I am eagerly awaiting conversations with modelers telling me that these observations must be wrong because they contradict their model, and (once they’ve reluctantly assimilated the information), papers that claim that the models were simulating this all along. To their credit, apart from CESM, none of the PMIP3 models seem to show a consistent ENSO response to volcanism. Perhaps they are on the right track? As in part 1, I am curious what PMIP4 models will show about this.

But until a new record comes along, better replicated, dated and proximal to ENSO state than this one, the burden of proof will fall on the shoulders of those who assert a link between volcanism and ENSO.

Volcanism & climate, part 1

26 03 2020

Volcanoes are among the most iconic of Earth’s structures, active portals conveying the Hadean depths to the heavens, and all the ecosystems in between.  When the topic of volcanoes comes up, most people think of Kilauea’s lava flows, Eyjafjallajökull’s ash cloud grounding air traffic, or lovers caught in a last embrace at Pompeii.  Volcanoes also turn out to be an important forcing of Earth’s climate, and not just because they spew greenhouse gases (at this point, they release carbon dioxide about 100 times more slowly than our civilization does). The most recent climatically significant eruption was that of Mount Pinatubo in 1991, immortalized in this picture.

The driver of a pick-up truck desperately tries to outrun a cloud of ash spewing from the volcanic eruption of Mt. Pinatubo in 1991. Credit: ALBERTO GARCIA


Terrifying though that moving mountain of ash and burning gas may be, it does not, in fact, have much effect on climate: that ash is short-lived and shallow,  easily scavenged out of the lower atmosphere by rain. What does matter for climate are the sulfur emissions (specifically, SO2), which coagulate into fine droplets that are rather effective at reflecting sunlight.  Explosive eruptions produce buoyant plumes that can catapult such reflectors into the stratosphere (the region of the atmosphere just over the cruising altitude of commercial jetliners), where they are isolated from the stormy turmoil of the troposphere (the lower layer that we breathe).  In the stable heights of the stratosphere, volcanic aerosols persist for several years, during which they have ample time to circumnavigate a good chunk of the globe, thanks to the stratosphere’s vigorous circulation.  The typical climate impacts begin shortly after an eruption, usually peaking within a year, with a gradual return to “normal” (if there is such a thing for a chaotic system like Earth’s climate) within 5 years. 

At least, that’s the picture painted from the painfully short instrumental record, which has seen only a handful of climatically-significant eruptions — Pinatubo and El Chichon (1982) being the only ones that were recorded by satellites.  For this reason, the way stratospheric aerosol forcing is implemented in climate models mimics what happened during Pinatubo, because that’s the largest one we have observed with any global coverage.

However, we also have ample geologic and documentary evidence for much larger eruptions prior to that: for instance, the “year without a summer” due to the Tambora eruption of 1816, or the Samalas eruption of 1257, the largest of the Common Era, and probably the third largest of the last 10,000 years, according to ice cores. Yes, what goes into the stratosphere eventually falls out onto the ice caps, where it can be recorded within the annual layers of fast-accumulating ice deposits (for more on how we reconstruct volcanic forcing from ice cores, see this article). So paleoclimatologists, volcanologists, archeologists, geographers, historians and climate modelers have long been interested in how volcanoes influence climate and society.

Another reason to study the climate response to volcanic aerosols: they serve as our best analog for a form of geoengineering called “solar radiation management”, which is — to the immense dismay of reasonable people — now considered a viable policy proposal to mitigate the worst impacts of climate change (while creating a host of gruesome side effects that such proposals tend to ignore).

All of this to say that volcanoes offer incredible natural experiments with which to probe Earth’s climate system, and how it is represented by climate models (the computer codes used to project climate’s evolution over the next century, for instance). The last assessment report of the Intergovernmental Panel on Climate Change (IPCC), in particular, summarized the peer-reviewed literature as of 2013, and highlighted the conundrum illustrated here: 

Fig 1: Superposed epoch analysis (SEA) of simulated and reconstructed temperature responses to the 12 strongest volcanic eruptions since 1400 AD, reproduced from IPCC AR5 (Masson-Delmotte et al., 2013), Fig. 5.8b

The figure shows that, on average, climate model simulations of northern hemisphere temperature tended to produce a response to volcanic cooling that was stronger, peaked earlier, and recovered more rapidly than temperature reconstructed from paleoclimate indicators (tree-rings, corals, ice cores, sediment cores, etc). On the face of it, such a discrepancy could call into question the fidelity of climate simulations, the accuracy of paleoclimate reconstructions, the quality of the forcing, or perhaps all three. Like many IPCC figures, it has stimulated many of us to think deeply about what is going on.

In 2017, my group was lucky enough to receive funding from the National Oceanic and Atmospheric Administration to study this and other climate impacts of volcanic eruptions through the new prism of paleoclimate data assimilation, particularly the Last Millennium Reanalysis. One of the goals of the project is to understand (and hopefully correct) the discrepancy shown above. After some very careful work (and a lot of Python code), USC graduate student Feng Zhu reports in a recent paper (soon to be published in Geophysical Research Letters) that the discrepancies can indeed be reconciled, with a few asterisks.

At the risk of spoiling the suspense, here’s the same picture, after accounting for known effects due to (1) the location of paleoclimate records (mostly, trees); (2) the seasonality of these proxies (extratropical trees only grow during the summer); and (3) focusing on the observation types that are least affected by a process called biological memory (maximum latewood density, or MXD).

Fig 2: (a) Same as Fig 1, but applying the LMR after resolving differences in the model and proxy domains associated with seasonality, spatial distribution, and biological memory. (b) Same as (a) but using the NTREND network of MXD records (Wilson et al, 2016; Anchukaitis et al, 2017)

This figure shows that, for well-characterized eruptions and a careful, like-to-like comparison of climate models and MXD records, the discrepancy is resolved to a shocking extent. Remarkably, this holds true for two very different sets of paleoclimate observations, a handpicked selection like NTREND, or a more permissive selection like PAGES 2k. For details, the reader is invited to read the paper (in press at Geophysical Research Letters).

Now, before you take this to the bank, there are a few asterisks. The first thing to mention is that we are not the first to point out that these effects matter; indeed our study was motivated by decades of research by many careful scientists. Standing on the shoulders of giants, as always. Our main contribution is to weave all these factors together to enable a systematic comparison of models and reconstructions over multiple eruptions. Yet a big caveat is that the comparison could only be done on 6 eruptions, highlighted in this figure:

Fig 3: Comparison between the volcanic forcing of Gao et al., 2008 (used in many PMIP3 models, as well as the iCESM simulation of Stevenson et al., 2019; Brady et al., 2019) and the eVolv2k version 3 Volcanic Stratospheric Sulfur Injection (VSSI) compilation (Toohey & Sigl, 2017). The triangles denote the selected 6 large events between 1400 and 1850 CE.

While the promise of paleoclimatology is to sample many more and larger events than captured over the instrumental record, we found our sample still limited, for three reasons:

  1. Forcing uncertainties: many of the model simulations used the Gao et al 2008 forcing, which has now been considerably refined by the incredibly careful work of Michael Sigl and co-authors. As a result, many of the eruptions were just not comparable, or we had to manually adjust the timing of eruptions to really compare like to like. The comparison should be much richer when PMIP4 simulations, which use the evolv2k dataset, come online.
  2. Data coverage: the “pull of the recent” is as real in paleoclimatology as it is in paleobiology. What this means is that the most recent eruptions are the ones best captured by our proxy network; the older ones, not so much. This remains a problem for the 1257 (Samalas) and 1450s (Kuwae + others?) eruptions, as we discuss at length in the paper. We therefore had to focus on the last 4 centuries — not the full Common Era as we had hoped.
  3. Cluster effects: some eruptions tend to happen in clusters, making it hard to separate their effects. To keep the comparison clean, Feng chose to focus on isolated eruptions, which further reduces the number of eruptions.

So while our careful analysis of the data enables a spectacular resolution of discrepancies between models and reconstructions, that is only the case for this small set of 6 eruptions. Therefore, our conclusion (that the physics of the climate models is not the main culprit here; rather, the way the comparison was previously done) presently applies only to these 6 eruptions. At least for now: there is obviously large potential for expanding the set of eruptions for this comparison in PMIP4, and as more MXD chronologies are incorporated in data syntheses like PAGES 2k or N-TREND.

One of the more interesting questions is whether our conclusion would hold for much larger eruptions like Samalas, mentioned earlier. As pointed out before, models and reconstructions disagree fiercely on that one. In 2015, Markus Stoffel and his group did some exciting work showing that you could reduce much of the difference by looking at carefully screened long MXD chronologies, considering seasonality (as we did), and improving how stratospheric sulfur aerosols are represented in models. These “self-limiting” feedbacks have long been known to become important at very high sulfate loadings, yet many climate models did not incorporate them. Using the IPSL model with a particular formulation of this feedback, Stoffel et al concluded that the improved stratospheric physics was key to bringing models and reconstructions in line. Does this contradict our result?

We don’t know yet. It remains to be seen how their comparison would work for a broader set of models and observations. The good news is that Feng made all his open-source (Python) code freely available to reproduce and extend the results of the paper. I, for one, would love to have more simulations to play with, incorporating such improved physics and seeing whether the reconstructions can help discriminate between competing parameterizations. If you have ideas for how you’d like to extend this study, please reach out! This type of work lends itself well to long quarantines.

Reflections on a pandemic, part 2

25 03 2020

A radial survey of topics on my mind in the midst of the COVID-19 pandemic, continued from Part 1.

Risk Perceptions

I’ve come to appreciate that most of our fights are fundamentally about risk perceptions, even when they are couched in different words: are we better off with a strong social safety net or a strong military? Can we balance environmental health and economic growth? (That’s a false choice, but one that is still repeatedly offered.) Can society allow gay couples to marry without threatening the institution of marriage? (Again a false choice, but a common argument.) Where you land on these questions depends on how you weight the risk of one action or inaction vs the other.

COVID-19 has been an interesting mirror to calibrate how (ir)rational we are in our perceptions of risk. Certainly the death toll in places like Italy (a case fatality rate of nearly 10%, according to these data) is extremely worrisome. Yet, it needs to be put in the context of other morbidity factors (see table below for USC data): it’s not like heart disease went into confinement during the COVID-19 pandemic. In fact, given that it’s now much more difficult to exercise and that (in my observation) junk food was the first to be snatched from supermarket shelves, I would imagine that deaths from heart disease will only get amplified by COVID-19, though we may not see that right away.

Leading causes of death in the United States in 2017. Source:

Missing from a lot of these tables is also the breakdown by age or the existence of pre-existing health conditions. So how is one to make a rational assessment about the right level of concern? Amidst all these extremes, I was glad to hear the level-headed appraisal of Amesh Adalja, which I heartily recommend.

This made me think more broadly about why COVID-19 is eliciting a panic response for much of the world, whereas climate change (arguably, an even more formidable threat) has failed to do so. Much has been written on this, including this excellent piece co-written by my colleague Eric Galbraith.

The timescales are critical here: our ability to perceive risk is the result of our evolution, which has rewarded the ability to avoid or thwart immediate, tangible threats like this coronavirus, but has yet to select us on whether we correctly assess long-term threats like climate change. To paraphrase the terminology of Daniel Kahneman, our instinctive “System 1” is peculiarly bad at assessing abstract, long-term threats, which it discounts entirely in favor of more immediate concerns (however mundane). System 2 involves careful thinking, and our society has evolved that in the form of scientific institutions and (in some countries, anyway) governments that actually listen to those scientists. What should be clear from the work of many cognitive psychologists, beginning with Kahneman and Tversky, is that we are not nearly as smart as we think we are, and that we should place our trust in the hands of experts. Experts are fallible too, but, as Naomi Oreskes reminds us, the thought collective they form, and its process of critical interrogation, guarantees that their message gets ever closer to the truth.

That is why I listen to epidemiologists when it comes to COVID-19, and that is why everyone should listen to the quasi-absolute consensus on man-made climate change.

The lesson so far is that we seem to be quite bad at learning our lesson: only when smacked in the face with consequences do most of us finally overcome our reluctance to enact the recommendations of experts. Inaction on COVID-19 should not be surprising: even when our own health is clearly at stake, many of us procrastinate on the advice given by our primary care physician until a strong warning sign comes, usually in the form of illness. It would be wise to view COVID-19 as a glaring illustration of the current inability of our society to take prompt and necessary action on crises even when we have seen them coming for a while. As should be clear, we can’t afford to procrastinate on major crises any longer, so perhaps we should take that subtle cue to re-assess our societal priorities, and start by engineering economic systems that serve the planet and its people, not the other way around.

COVID and Climate

COVID-19 is forcing some societal changes that many people once argued were impossible: “the American way of life is not up for negotiation. Period”, G.H.W. Bush once said. One submicron-size viral pandemic later, that way of life got upended pretty quickly, without the need for any negotiation.

To me this is a vibrant demonstration of how fundamentally changeable our society is. For decades now, climate scientists like myself have rung the alarm bells about the devastating costs of unmitigated carbon levels in the atmosphere, requiring nothing short of a war-time mobilization effort to rewire our economies in carbon-neutral ways. For the most part, the body politic chose to do nothing, for many reasons. One argument you often hear is that it would be too difficult to re-organize our society to meet that challenge. Clearly it is not. In fact, as Western societies rethink how to generate income from the comfort of their own homes, we are seeing the potential for what a radically more local existence could mean.

On one level, COVID-19 is forcing us to enact many of the same changes that a low-carbon life would require. But the parallel goes deeper: as this brilliant visualization makes clear, there are two major reasons COVID-19 is so disruptive to our way of life and our economy:

  1. We’re late to the fight. In an exponential game, the early bird catches the worm. If drastic international travel restrictions had been enacted as soon as the virus was detected in, well, 2019, much of the world wouldn’t be in confinement right now, and this economic crisis could have been minimized. We are now witnessing the true cost of procrastination.
  2. We’re not working together: there could have been a whole lot more international cooperation to stem this crisis. Despite calls by the WHO, there was not. Be that as it may, there is a lot that local governance can do here, at the level of countries, states, and cities. But within these entities, we all need to play along. Social responsibility means following the advice of public health experts and officials; supporting each other through these tough times; and, at the very least, not falling into a narrative of scarcity that leads some to stockpile toilet paper or other necessities, leaving others in the cold. We’re all in this together, and if too many people buck the trend, there might as well be no measures in place. Unfortunately, the Faux News demographic is not helping here.

In policy parlance, this is called a “collective action problem”, of which climate change is the poster child: there is a cost to action, which no one wants to incur unless everybody does. Even though there is a much, MUCH higher cost to inaction, it is well known in game theory that this asymmetry can result in paralysis. COVID-19 and climate change both show that unless all of us are aware of the risk and committed to addressing it, the needle won’t move, or it will move too little, too late. COVID-19 will help us see this one way or the other: it has already shown the willingness of some societies to drastically reshape themselves to protect their most vulnerable members. In ours, it will either show collaboration to be a success, or it will show the lack of collaboration to be a terrible failure. Either way, we should heed the lesson.

That is, if we see things for what they are. In the Age of Misinformation, that is far from a guarantee. Here again, COVID-19, climate change, and other societal risks share a common enemy: science denialism.


David Michaels makes the apt point that the US dilly-dallying on this crisis shares many of the epistemic roots that have plagued the accurate appraisal of climate risk. Namely, that when science (any science) is seen as threatening to an economic system, no amount of corporate malfeasance will be spared to undermine that science. This was abundantly shown in Merchants of Doubt. Because science denialism has been perfected by many industries (starting with the astonishing PR innovations of the tobacco industry), it is a well-oiled machine that extends its tendrils to the heart of right-wing media conglomerates, from Fox News to talk radio, who benefit from and amplify its message. And because their audiences have now been primed for decades to deny or minimize the warnings of experts, even educated Republicans in demographics at high risk of severe harm from COVID-19 continue to put their heads in the sand about the issue.

This is no accident. This is the result of a decades-long war on science that has pitted every scientific issue that conflicts with the bottom line of entrenched industries against the momentous disinformation apparatus of modern right-wing media, from Fox News to Breitbart. If I were a social Darwinist, I’d say that we should let those who refuse to hear the evidence die of the consequences of this refusal. Unfortunately, we’re all crewmates on Spaceship Earth: my actions affect them, and their actions (or lack thereof) affect all of us. So there is no choice but for everyone to get on board and accept the evidence, preferably before it is too late.

Beyond COVID-19, the concerning pandemic here is the denial of reality, exemplified by Trump’s wishful thinking about the issue. My only hope is that this crisis underscores the true costs of denying reality, and rehabilitates critical thinking at all levels of our society. For that to happen, the tech giants that profit from the spread of misinformation need to change, and some seem to be setting aside their supposed “neutrality” in the public interest. Great! Can we do the same for climate change, racial equality, vaccines, gun control, and other major societal issues, please?

Viral Conspiracies

For a brief time early in this millennium, “going viral” acquired a sheen synonymous with celebrity, fame and fortune (as well as, occasionally, disrepute, opprobrium, and ridicule). The SARS-CoV-2 virus abruptly reminded us of the biological (and sometimes sinister) meaning of viral.

The underlying mathematics are the same, however, and they may in large part be understood through graph theory. I first became acquainted with it through Duncan Watts’ “Six Degrees”, which lifted the curtain on the astonishing variety of natural and social phenomena that can be understood through graph theory (including paleoclimate reconstructions, if you can believe it).

I empathize with the many people throughout the world who feel abandoned by such institutions as crises multiply around them. While it is not the only problem, I do think that scientists being too distant from the public they serve is one reason why we see conspiracy theories multiply (Russian trolls being another, non-negligible one). This blog is one attempt at communicating science a bit more directly than through the stilted prism of peer-reviewed technical journals.

There is a perilous parallel in how similarly infections and misinformation spread, and the small-world property has a lot to do with that. Of particular interest to me today are conspiracy theories, like “climate scientists are only in it for the money” — no doubt referring to those fat suitcases full of unmarked $100 bills that the money-hungry solar industry regularly hands me, which is why I am currently sitting in a fortress in the middle of the ocean from which I pull strings with a cat on my lap. There are many other examples of such conspiratorial ideation, from the myth that the 1969 moon landing was a hoax, to anti-vaccine paranoia, to the idea that a cure for cancer is being withheld by vested interests, to Flat Earthers contending that NASA has been brainwashing us into believing that the Earth is round when in fact it is flat. Common to many (all?) of these conspiracy theories is a strong disillusionment with institutions, not only government but also scientific bodies.

In a refreshingly clear and concise article, David Robert Grimes proposed a simple mathematical model to quantify the limited viability of conspiratorial beliefs; it’s basically viral contagion in reverse. Imagine that you are part of a conspiracy trying to rule the world, and (naturally) you don’t want people to find out. Despite your best efforts to keep it a tight-lipped secret, information always gets out, sooner or later: you might be talking in your sleep, blurting out secrets to your spouse without even knowing; or there might be a whistleblower in your organization, worried about your plot to take over the world for solar panels. Either way, your network of conspirators is leaky, and sooner or later someone is going to call The Guardian with some damning revelations.

I’m going to skip over the math (it looks like Grimes had gone a little fast himself, and some aspects needed a correction), but one way to look at this is to ask: even in the best of circumstances, how long can a global conspiracy endure before someone spills the beans, voluntarily or not? The answer obviously depends on how many people are “in on it”, how carefully they communicate, etc. To give just one example (one that hits close to home), here is the probability of spilling the beans on that climate change conspiracy as a function of time:

Failure curves for the climate change “hoax”: the blue solid line depicts failure probability over time if all scientific bodies endorsing the scientific consensus are involved; the red dashed line shows the curve if only active climate researchers were involved.

Assuming a fixed probability of an intrinsic leak (i.e. how likely is each conspirator to spill the beans?), the main variable is the number of conspirators. With just those evil, blood-thirsty climate scientists doing the conspiring (about 30,000 of us, by Grimes’ estimate), it would take about 27 years for the climate “hoax” to have a 95% probability of being exposed (red dashed curve). Add to that the entirety of NASA, the American Geophysical Union, and other gatherings of shape-shifting vampires (about 400,000 total), and you get the blue curve, reaching the same 95% probability in just 4 years. In other words, if man-made global warming truly were a hoax, the beans would very likely have been spilled many times over.
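For the curious, the gist of this calculation fits in a few lines of Python. This is a sketch in the spirit of Grimes’ model, not a reproduction of his exact analysis: the per-person annual leak probability is an assumed placeholder of 4 in a million, roughly the order of magnitude he derived from conspiracies that actually were exposed.

```python
import math

def time_to_exposure(n_conspirators, p_leak=4e-6, target=0.95):
    """Years until a conspiracy of n people has probability `target`
    of being exposed, in the spirit of Grimes' model.
    p_leak is the assumed per-person, per-year leak probability."""
    # Per-year probability that at least one of n conspirators leaks
    phi = 1.0 - (1.0 - p_leak) ** n_conspirators
    # Cumulative exposure probability after t years: L(t) = 1 - exp(-phi * t)
    # Solve 1 - exp(-phi * t) = target for t
    return -math.log(1.0 - target) / phi

print(time_to_exposure(30_000))   # active climate researchers only
print(time_to_exposure(405_000))  # add NASA, AGU, and other scientific bodies
```

With these assumed inputs, the first call lands in the mid-20s of years and the second below 4 years, consistent with the curves above: the more conspirators, the faster the secret dies.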

You can argue about the exact numbers picked by Grimes for the probability of an intrinsic leak, but there is no going around the fact that keeping secrets between large numbers of people is tough business. That’s why Dr Evil only works with a select cadre of co-conspirators, not 400,000 of them. Also, he kills a sizable fraction of them when Mr Bigglesworth gets upset, which keeps the numbers low. Grimes’ article is considerably more interesting, though it involves fewer murderous cats. It is well worth a read.

May it become one more argument against the ludicrous theories peddled by science denialists of all stripes.

Virus as Medicine

While SARS-CoV-2 was not my idea of a good time, it has changed our lives practically overnight, and is forcing a fundamental rethinking of pretty much every aspect of modern existence, from travel, work and schooling to artistic and political expression. Politico ran a nice piece over the weekend, with over 30 thinkers sharing their views of how COVID-19 was reshaping our world, in many cases, I’d argue, for the better. But no one said it quite as beautifully as Lynn Ungar:

Pandemic by Lynn Ungar

What if you thought of it
as the Jews consider the Sabbath—
the most sacred of times?
Cease from travel.
Cease from buying and selling.
Give up, just for now,
on trying to make the world
different than it is.
Sing. Pray. Touch only those
to whom you commit your life.
Center down.

And when your body has become still,
reach out with your heart.
Know that we are connected
in ways that are terrifying and beautiful.
(You could hardly deny it now.)
Know that our lives
are in one another’s hands.
(Surely, that has come clear.)
Do not reach out your hands.
Reach out your heart.
Reach out your words.
Reach out all the tendrils
of compassion that move, invisibly,
where we cannot touch.

Promise this world your love—
for better or for worse,
in sickness and in health,
so long as we all shall live.

Reflections on a pandemic, part 1

17 03 2020

Oh, you thought coming to this blog would spare you from more coverage of the COVID-19 pandemic? Think again. As a scientist, writer, and all-around human, it’s impossible not to observe and reflect on this defining moment, which affects everyone on this planet to varying degrees. How often do such black swans recur? We still speak of the Black Death, 7 centuries later. Though advances in the social and life sciences should ensure that the mortality rate is lower for COVID-19, I am fairly certain that we will still be talking about this in 7 centuries (perhaps in another civilization evolved from our own).

The sole purpose of this post series is to offer some personal reflections on the topic. It is not meant as a public service announcement — see your local authorities and epidemiologists for that. It is not meant to be alarming, reassuring, or anything in between. It is just meant to offer what I hope is a different perspective from what you have read everywhere else. Mostly, it’s a form of therapy for me, putting on paper the various swirling strands of thought going through my mind. I doubt any of you have time to read, but right now, I have time to write (hello, Spring Break staycation!), so here goes.


The first point that strikes me is what many writers and filmmakers have pondered for a while: in an increasingly interconnected world, there is no such thing as a local problem anymore (“Twelve Monkeys” comes to mind). Many, particularly in the US, have this idea that they can barricade themselves behind their wealth and ignore problems half a world away. 

Not anymore. We live in such an interdependent world that there is almost no way to contain major crises. This is a point I have appreciated since the Syrian crisis (incidentally, a crisis heavily influenced by climate change) brought terrorism to the heart of Paris or Orlando. It’s no longer good enough to have a sea between you and a crisis — terrorism has abolished that distance. 

As we see now with COVID-19, a virus that is sufficiently contagious (and in this case, sufficiently deadly) can shut the world down more efficiently than a war. Indeed, I have lived in the US since 2001. The country has been at war on multiple fronts every day since September 11 of that year, but there has been virtually nothing in my daily existence to attest to that fact. This virus, on the other hand, is forcing a complete re-arrangement of society, with social distancing being the norm in most places for the foreseeable future. How long-lasting will these impacts be? Will we ever return to a sense of normalcy about the large gatherings where so much cross-pollination happens, like music and arts festivals drawing tens of thousands of people? What about spiritual gatherings of tens of millions of people? A lot depends on how this virus evolves, and on what kind of defenses we evolve against it (particularly, an affordable vaccine).


I teach a general education class called “Lies, damn lies and statistics”, where I take the students through case studies in misinformation and teach them epistemic and statistical immune boosters to make them resilient to the spread of fake news of various ilks. One of those case studies, arguably the most spectacular, concerns the modern anti-vaccination movement in western countries, which threatens to negate nearly 2 centuries of progress in medicine. For a great account of it, see this story. (Actually, this is a public service announcement: vaccines work. Vaccinate your kids.) The really interesting part to me is why, despite all these anti-vaxxer myths having been repeatedly debunked, they still seduce so many people, including some among my friends. This is where cognitive biases come in; here is a thorough (and slightly hyperventilating) account of the relevant cognitive psychology in the context of vaccine rejection. The main point is this: vaccines have been a victim of their own success. Having nearly eradicated most childhood infectious diseases in the western world, they have left us with people (mostly in the Whole Foods parking lot) who now dread minuscule doses of naturally occurring compounds like formaldehyde more than they dread measles, mumps, rubella, or smallpox. Why? Because they’ve never seen any cases of these deadly diseases, but they have seen the damage done by additives that the food industry is still, somehow, authorized to sell (see Pink Slime, for but one example). So they prefer inaction in the face of an immense risk over an action that carries a minuscule or non-existent risk. That’s the negativity bias in action.

My hope is twofold: (1) that an efficacious, affordable vaccine against COVID-19 is found ASAP (we’re looking at 12-18 months in the best of cases, unless a miracle happens); and (2) that this vaccine radically changes the conversation about all vaccinations. I hope we will see a lot of parents who refused to vaccinate their children suddenly rush to get their at-risk parents (and themselves) vaccinated. If this crisis doesn’t change attitudes, I don’t know what could. And if we don’t learn from this crisis, then we are really, REALLY screwed. 

Understanding and Predictions

On the topic of “science works, yo”, the COVID-19 mayhem is a reminder of just how predictable this whole thing was, and how complacent nearly everyone in the western world has been in the face of such predictions. Bill Gates’ 2015 TED talk “The next outbreak? We’re not ready”, in which he echoes a lot of the alarm bells that epidemiologists have been ringing for some time, is justifiably enjoying a rebirth on YouTube. Or consider this hypothetical scenario, eerily similar to COVID-19, explored only last year by security experts. Even as the epidemic was spreading exponentially in China, the western world did precious little to prepare (more on this below). To me this is a reminder that yes, science works, and that expert predictions are not entirely garbage (see also: climate projections). All the modern technology we use every single day is built on that expert knowledge, so maybe, you know, trust your experts? Don’t blindly trust technology, however: right now there is no techno-fix that can substitute for the behavioral changes that experts say are necessary (and proven to work, not just in past epidemics, but in this one as well — see China).

On this note, I repost below a fascinating graph contrasting the spread of the epidemic in Italy and China (Hubei province). The black curve shows the expected growth, which follows an exponential form. The blue and red curves show the number of cases in Italy and in Hubei Province, China, which have comparable populations. The Hubei data have been time-shifted by 36 days to put them on an equal footing with the Italian data, since the outbreak started considerably earlier there.

Hubei/Italy comparison (origin unknown – please help me trace it)

In the early days of the outbreak, both curves were smack on top of the exponential one. After implementing lockdown measures, however, Hubei residents kicked themselves off of that deadly curve, and the infection rate slowed markedly. The infection seems on track to peak soon, and (barring another exogenous flare-up) decline thereafter. Now, the growth rate varies a bit from country to country (see recent data here), but again, those of Italy and China (as a whole, this time) appear to have been relatively similar in the early stages. It’s too early to tell if and how soon Italy’s lockdown will have similar benefits as China’s, but it shows how crucial it is to act early and aggressively.

More recent data compiled by Johns Hopkins and visualized by the Financial Times also illustrate this well:

You might notice that the y-axis is logarithmic on the plot above. In such a coordinate system, exponential growth looks like a straight line, which is what you see for most countries that have yet to enact measures. Now let’s geek out on the math for a second: why does this simple mathematical form fit the data so well? In other words, what makes N = N_o e^{rt} such a good model for (early) disease propagation? (N is the number of cases, t is time in days, and r is the per-case growth rate, per day.)

An exponential is the solution to the differential equation:

\displaystyle \frac{\mathrm{d}N}{\mathrm{d}t} = rN

which says (in English): the more people are infected, the faster the infection spreads. Simple as that. The simple reality of rates of change being proportional to the amount of something is the reason exponentials are so ubiquitous in nature. (Things get more complicated as other factors enter the fray, but by all accounts COVID-19 is still in a regime of exponential growth for most countries, certainly the US as of this writing.) There are, however, notable exceptions, like Singapore, Hong Kong and Japan. What’s different there? They have centralized governments that dealt with contagions like SARS quite recently. They were vigilant, prepared, and quick to implement measures to contain and mitigate the outbreak. Maybe government ain’t all bad? (see below)
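To convince yourself that the exponential really does solve that differential equation, a few lines of Python suffice. The numbers below are purely illustrative (not fitted to any country’s data): stepping dN/dt = rN forward in time recovers the closed-form N_o e^{rt}.

```python
import math

# Forward-Euler integration of dN/dt = r*N, compared with the
# closed-form solution N(t) = N0 * exp(r*t).
N0 = 100.0    # initial number of cases (illustrative)
r = 0.3       # growth rate per day (illustrative)
dt = 0.001    # small time step for the numerical solution
t_end = 10.0  # days

N = N0
for _ in range(int(t_end / dt)):
    N += r * N * dt  # new cases are proportional to current cases

exact = N0 * math.exp(r * t_end)
print(N, exact)  # the two agree to within the discretization error
```

Shrinking `dt` brings the numerical curve arbitrarily close to the exponential, which is exactly why the straight lines on the log-scale plot above are the signature of unchecked contagion.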

Another remark about the growth rate (r) is that, unlike the death rate, it does not discriminate on the basis of age or pre-existing respiratory conditions, which explains why we see the infection spread at similar rates in many countries, even ones with fairly different social structures. But existing data already make a strong case that aggressive action to cut the transmission rate can and does work.

Regardless of country or creed, when epidemiologists tell you to take this seriously, please do: we (collectively) have seen epidemics before (see SARS, MERS, the 1918 flu), we can model and predict them reasonably well, and there are many proven measures to mitigate an outbreak. Those work, as we can see in near real time. Of course, those measures are only effective if (1) there is an effective government in place, and (2) people actually listen to public guidelines. If the government fails to grasp the urgency of the crisis, the country/province wastes precious time, and we pay for that dilly-dallying in blood. And if people fail to follow the guidelines of their public health officials, things are nearly as bad.

This is brought home by this chart by Tomas Pueyo, which shows the impact of wasting just ONE day in implementing social distancing measures to curtail the spread of a similar epidemic. Obviously, the death toll depends on the true death rate of the virus (still unknown, but most likely in the range of 1-2%) and on the age pyramid (since the disease disproportionately affects the elderly), but either way, the fewer people infected, the better.

source: Tomas Pueyo

Here’s a thing about exponentials: they start out very flat, and quickly reach astronomical levels. The classic example of such geometric progressions is the legend of Radha playing chess with Krishna in disguise. At current rates, and *if we do nothing* (which we’re not, thankfully), there could be 100 million cases by May in the US.   Clearly, social distancing is necessary to flatten that curve. Not tomorrow, yesterday.
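The chessboard legend mentioned above is easy to check numerically: one grain on the first square, doubled on each subsequent square, already yields more grain by square 64 than the world could possibly supply.

```python
# Grains on a chessboard, doubling on each of the 64 squares: 1, 2, 4, 8, ...
total = sum(2**square for square in range(64))
print(total)  # 18446744073709551615, i.e. 2**64 - 1: about 1.8e19 grains
```

That is the essence of exponential growth: by the time the numbers look alarming, you are already many doublings deep, which is why every day of delayed action matters so much.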


I live in the United States, where the Trump administration has miserably failed to take adequate measures against this crisis. That too, was entirely predictable: they’ve failed to take adequate measures against pretty much any threat they have been faced with so far. Why? Because, like much of the GOP, they do not believe in government. As often pointed out*, one’s attitude to government is a self-fulfilling prophecy: if you believe government sucks and vote for a party that repeatedly questions the very idea of government, you end up with a sucky, incompetent government. This utter incompetence is on full display in this picture of Mike Pence’s coronavirus task force in prayer. You don’t need to pray, Mikey: all you had to do was heed the advice of your qualified scientists and public health officials. That is, if you believe in science, and if you don’t act like the lap dog to a delusional boss.

As someone who grew up in a country with a mostly functional government that people mostly trust (and on which they tend to excessively rely, arguably), I am always flabbergasted by the American mistrust of government, which is so deeply rooted in the country’s founding documents. Nonetheless, it pays to remember that in the 20th century, Democratic and Republican administrations alike have embodied “big government”. They just happen to prioritize different parts of government, with Democrats believing in its ability to solve social problems, and Republicans believing in its ability to regulate what women do with their bodies, to decide what pseudoscience gets taught in schools, and to take from the poor to give to the rich. (Both have been more or less equally generous to the military-industrial complex, but I digress.)

Whatever your views on economic policy, it would seem a no-brainer for every sane person to support a strong, responsive public health system. But to the twisted logic of the modern GOP, it is not: any kind of government service is seen as a slippery slope to Stalinism. Couple that with Trump’s maniacal tendency to dismantle any sensible work done by his predecessor, and you now have the makings of an entirely preventable public health crisis. 

If there is any silver lining about this, may it be this: once the dust settles and people have buried their dead, I pray that this crisis finally brings home to the American public that a strong, effective government staffed by competent, altruistic people is truly in their interest. Some observers even argue that this crisis is Trump’s undoing. Given that about 42% of the American public seems to support him no matter what, I have my doubts about that. However, the problem isn’t this administration per se, it is a general hostility and ignorance about government, and I am praying harder than Pence that this crisis might change that attitude. It literally is a question of life or death.


Not that Pence believes in evolution either, but this pandemic is living proof that yes, viruses are genetically labile, and their biological success depends on their ability to reproduce themselves. It’s evolution by natural selection in near-real time. If the coronavirus that’s now causing all this mayhem hadn’t evolved the ability to jump from bats to humans, we wouldn’t be in the middle of this pandemic (and: you wouldn’t have to get a different flu vaccine every year). Again, I very much doubt that this is the lesson people will retain from this crisis, but I put it here for the record: nothing in biology makes sense except in the light of evolution.


I’ll end on a personal note for today. Right now in Los Angeles, COVID-19 is still a hypothetical threat for most. While we are taking prophylactic measures to make sure it stays that way, several things have hit all of us already: all the fun things are gone. No entertainment, no bars, no restaurants, no gym, no yoga studios (also: no toilet paper in any store, at least for now. Did people get the memo that this virus has nothing to do with dysentery?). Schools are closed. Most people who can are working from home. Most economic activity has come to a grinding halt, threatening the livelihood of people who were already living at the edge.

For me this is not all bad. I’m spending some quality time with my family; my wife has been cooking up a storm with all the fresh produce that panic buyers left on the shelves (more for us!). I am, perhaps for the first time, really getting to see what my daughter learns in school. I’m also seizing every opportunity I can to breathe, relax, meditate. I went for a run in the rain; hadn’t run in 2 years. Before that, I had to locate a pair of pants I could run in. That forced me to re-arrange my closet for the first time in years.

In other words, even these comparatively mild side effects are profound. This coronavirus is forcing us to push the reset button on a profoundly dysfunctional, unsustainable way of life. Were we going to do anything about it otherwise? Recent history suggests not.

The combination of cleansing rain and lower vehicle traffic has resulted in the cleanest air I’ve ever breathed in LA. The expected reductions in pollution-related morbidity, and the effect of social distancing on diseases other than COVID-19 (like the flu), may offset quite a few COVID-19 deaths. In other words: while some paint this as a slow-moving apocalypse, it is not all bad. The net societal effect might even be positive, if we play our cards right.

As the saying goes: “If you can’t get out of it, get into it”. I am getting into it. It’s nice seeing people taking walks in the neighborhood, seeing lights at people’s windows, and getting a feel for what life would be like if we prioritized the local over the distant. It’s a good time to rethink what our society could be like.

This may be quite enough for one post. In the following one, I will tackle risk perceptions, virality, and volatility.

Climate models got their fundamentals right

15 04 2019

Everyone loves to hate climate models: climate deniers of course, but also the climate scientists who do not use these models — there are many, from observationalists to theoreticians. Even climate modelers love to put their models through harsh reality checks, and I confess to having indulged in that myself: about how models represent sea-surface temperature variations in the tropical Pacific on various time scales (here, and here), or their ability to simulate the temperature evolution of the past millennium.

There is nothing incongruous about that: climate models are the only physically-based tool to forecast the evolution of our planet over the next few centuries, our only crystal ball to peer into the future that awaits us if we do nothing — or too little — to curb greenhouse emissions. If we are going to trust them as our oracles, we had better make sure they get things right. That is especially true of climate variations prior to the instrumental record (around 1850), since they provide an independent test of the validity of the models’ physics and assumptions. 

In a new study in the Proceedings of the National Academy of Sciences, led by Feng Zhu, a third-year graduate student in my lab, we evaluated how models fared at reproducing the spectrum of climate variability (particularly its continuum) over scales of 1 year to 20,000 years. To our surprise, we found that models across a range of complexity fared remarkably well for the global-mean temperature — much better than I had anticipated. This is tantalizing evidence that models contain the correct physics to represent this aspect of climate, with a big caveat: they can only do so if given the proper information (“initial and boundary conditions”). While the importance of boundary conditions has long been recognized, our study shakes up old paradigms by suggesting that the long memory of the ocean makes initial conditions at least as important on these scales: you can’t feed your model a wrong starting point and assume that fast dynamics will make it forget all about it within a few years or decades. The inertia of the deep ocean means that you need to worry about this for hundreds — if not thousands — of years. This is mostly good news for our ability to predict the future (climate is predictable), but it means that we need to do a lot more work to estimate the state of the deep ocean in a way that won’t screw up predictions.

Why spectra?

Let us rewind a bit. The study originated 2 years ago in discussions with Nick McKay, Toby Ault, and Sylvia Dee, with the idea of doing a Model Intercomparison Project (a “MIP”, to be climate-hip) in the frequency domain. What is the frequency domain, you ask? You may think of it as a way to characterize the voices in the climate symphony. Your ear and your brain, in fact, do this all the time: you are accustomed to categorizing the sounds you hear in terms of pitch (“frequency”, in technical parlance) and have done it all your life without even thinking about it. You may even have interacted with a spectrum before: if you have ever tweaked the equalizer on your stereo or iTunes, you understand that there is a different energy (loudness) associated with different frequencies (bass, medium, treble), and it sometimes needs to be adjusted. The representation of this loudness as a function of frequency is a spectrum. Mathematically, it is a transformation of the waveform that measures how loud each frequency is, irrespective of when the sound happened (what scientists and engineers would call the phase).
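If code speaks to you more than stereos do, the same idea fits in a few lines of NumPy. The two-tone signal below is made up purely for illustration; the point is that the spectrum reports how loud each frequency is, with the phase discarded.

```python
import numpy as np

# A 4-second signal sampled at 100 Hz: a loud 2 Hz tone plus a quieter 10 Hz tone.
fs = 100
t = np.arange(0, 4, 1 / fs)
signal = 3.0 * np.sin(2 * np.pi * 2 * t) + 1.0 * np.sin(2 * np.pi * 10 * t)

# The periodogram: power ("loudness") at each frequency; the phase is discarded.
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

print(freqs[np.argmax(power)])  # the loudest frequency: 2.0 (Hz)
```

Both tones show up as sharp peaks in `power`; the louder one dominates, just as a bass-heavy track dominates the low bands of an equalizer.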


Fig 1: Time and frequency domain.



If you’re a more visual type, “spectrum” may evoke a rainbow, wherein natural light is decomposed into its elemental colors (the wavelengths of light waves). Wavelength and frequency are inversely related (by, it turns out, the speed of light).


Fig 2: A prism decomposing “white” light into its constituent colors, aka wavelengths. Credit: DKfindout

So, whether you think of it in terms of sound, sight, or any other signal, a spectrum is a mathematical decomposition of the periodic components of a  signal, and says how intense each harmonic/color is.

Are you still with us? Great, let’s move on to the continuum. 

Why the continuum?

A spectrum is made of various features, which all have their uses. First, the peaks, which are associated with pure harmonics, or tones. The sounds we hear are made of pure voices, with a very identifiable pitch; likewise, the colors we see are a blend of elemental colors.  In the climate realm, these tones and colors  would correspond to very regular oscillations, like the annual cycle, or orbital cycles. All of these show up as sharp peaks in a spectrum. Then there are voices that span many harmonics, like El Niño, with its irregular, 2-7yr recurrence time, showing up as a broad-band peak in a spectrum. Finally, there is all the rest: what we call the background, or the continuum. You can think of the continuum as joining the peaks together, and in many cases it is at least as important as the peaks.

Why do we care about the continuum? Well, many processes are not periodic. To give but one example, turbulence theories make predictions about the energy spectrum of velocity fluctuations in a fluid, characterized by “scaling” or “power law” behavior of the continuum. There are no peaks there, but the slope of the spectrum (the “spectral exponent”) is all the rage: its value is indicative of the types of energy cascades that are at play. Therefore, these slopes are very informative of the underlying dynamics. Hence the interest, developed by many previous studies, in whether models can reproduce the observed continuum of climate variability: if they can, it means they correctly capture energy transfers between scales, and that is what scientists call a “big fucking deal” (BFD).
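As a toy illustration (not the estimation method used in the paper), one can synthesize a series with a known power-law spectrum and recover its spectral exponent with a straight-line fit in log-log space:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthesize a series whose spectrum follows a power law S(f) ~ f**(-beta),
# with beta = 1 for this example.
beta, n = 1.0, 2048
freqs = np.fft.rfftfreq(n)
amps = np.zeros(freqs.size)
amps[1:] = freqs[1:] ** (-beta / 2)             # amplitude ~ f**(-beta/2)
phases = rng.uniform(0, 2 * np.pi, freqs.size)  # random phases
series = np.fft.irfft(amps * np.exp(1j * phases), n)

# Estimate the spectral exponent: a linear fit of log(PSD) vs log(f),
# excluding the DC and Nyquist bins.
psd = np.abs(np.fft.rfft(series)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:-1]), np.log(psd[1:-1]), 1)
print(round(slope, 1))  # -1.0: the fit recovers -beta
```

Real spectral estimation involves far more care (tapering, uncertainty quantification, unevenly sampled proxies), but the log-log slope is the essence of the “spectral exponent”.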

Our contribution


Fig 3: A spectral estimate of the global-average surface temperature variability using instrumental and paleoclimate datasets, as well as proxy-based reconstructions of surface temperature variability. See the paper for details.

One challenge in this is that we do not precisely know how climate varied in the past — we can only estimate it from the geologic record. Specifically, the data we rely on come from proxies, which are sparse, biased and noisy records of past climate variability. I was expecting to find all kinds of discrepancies between various proxies, and was surprised to find quite a bit of consistency in Fig 3.

In fact, because we focused on global scales, the signal that emerges is quite clean: there are two distinct scaling regimes, with a kink around 1000 years. This result extends the pioneering findings of Huybers & Curry [2006], which were an inspiration for our study, as were the follow-ups by Laepple & Huybers [2013a,b, 2014].
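For the curious, here is one hypothetical way to quantify such a two-regime spectrum: fit a separate slope on either side of the breakpoint. The breakpoint, slopes, and spectrum below are synthetic, chosen only to mimic a kink like the one described above.

```python
import numpy as np

def two_regime_slopes(freqs, psd, f_break):
    """Fit separate log-log slopes below and above a breakpoint frequency."""
    lo = freqs < f_break
    s_lo = np.polyfit(np.log(freqs[lo]), np.log(psd[lo]), 1)[0]
    s_hi = np.polyfit(np.log(freqs[~lo]), np.log(psd[~lo]), 1)[0]
    return s_lo, s_hi

# Synthetic spectrum with a kink at 1/1000 yr^-1: steeper (beta = 2) at long
# periods, shallower (beta = 1) at short periods; continuous at the break.
f = np.logspace(-4, 0, 500)                    # cycles/yr: 10,000 yr to 1 yr
psd = np.where(f < 1e-3, 1e-3 * f ** -2, f ** -1)
s_long, s_short = two_regime_slopes(f, psd, f_break=1e-3)
print(round(s_long, 1), round(s_short, 1))  # -2.0 -1.0
```

Detecting the breakpoint itself (rather than assuming it) is a harder statistical problem, which is part of why the 1000-year kink is such an interesting target.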

It also leads to more questions:

– Why do the regimes have the slopes that they do?

– Why is the transition around 1,000 years?

– Can this be explained by a simple theory, based on fundamental principles?

This we do not know yet. What we did investigate was our motivating question: can climate models, from simple to complex, reproduce this two-regime affair? Why or why not?

What we found is that even simple models (“Earth System Models of Intermediate Complexity”) or a coarse-resolution Global Climate Model (GCM) like the one that generated TraCE-21k capture this feature surprisingly well (Fig 4).


Fig 4: The power spectral density (PSD) of transient model simulations. In the upper panel, β is estimated over 2-500 yr. The inset plot compares the distributions of the scaling exponents (estimated over 2-500 yr) of GAST in the PAGES2k-based LMR vs the PMIP3 simulations, i.e. the β values of the model simulations (red, green, and blue dots) and of the observations (gray dots). The gray curves are identical to those in Fig 3. See Zhu et al [2019] for details.

The TraCE-21k experiments were designed to simulate the deglaciation (exit from the last Ice Age, 21,000 years ago), and they know nothing about what conventional wisdom says are the main drivers of climate variations over the past few millennia: explosive volcanism, variations in the Sun’s brightness, land-use changes, or, more recently, human activities, particularly greenhouse gas emissions. And yet, over the past millennium, these simple models capture a slope of -1, comparable to what we get from state-of-the-art climate reconstructions. Coincidence?

More advanced models from the PMIP3 project also get this, but for totally different reasons. Most PMIP3 models don’t appear to do anything over the past millennium, unless whacked by violent eruptions or ferocious emissions of greenhouse gases (a.k.a. “civilization”). If you leave these emissions out, in fact, the models show too weak a variability, and too flat a spectral slope.  How could that be, given that they are much more sophisticated models  than the ones used to simulate the deglaciation?

Does the ocean still remember the last Ice Age?

The answer comes down to what information was given to these models: you can have the most perfect model, but it will fail every time if you give it incorrect information. In the PMIP experimental protocol, the past1000 simulations are given a starting point that is in equilibrium with the conditions of 850 AD. That makes sense if you’re trying to isolate the impact of climate forcings (volcanoes, the Sun, humans) and how they perturb this equilibrium. But that implicitly assumes that there is such a thing as equilibrium. Arguably, there never is: the Earth’s orbital configuration is always changing, and the climate system is always trying to catch up with it. Only a system with goldfish memory of its forcing can be said to be in equilibrium at any given time. So how long is the climate system’s memory?
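A toy relaxation model (nothing like a real GCM, and with made-up numbers) shows why the memory timescale matters: two runs that differ only in their starting point stay apart for roughly as long as the relaxation time tau.

```python
import numpy as np

# Discretized relaxation toward equilibrium: dT/dt = -(T - T_eq)/tau, T_eq = 0.
def relax(T0, tau, years=2000):
    T = np.empty(years)
    T[0] = T0
    for k in range(1, years):
        T[k] = T[k - 1] * (1 - 1 / tau)
    return T

# How many years do two runs started 2 degrees apart remain distinguishable?
for tau in (10, 1000):                     # fast memory vs deep-ocean-like memory
    gap = np.abs(relax(1.0, tau) - relax(-1.0, tau))
    print(tau, int((gap > 0.1).sum()))     # prints "10 29", then "1000 2000"
```

With tau = 10 the two runs agree within three decades; with tau = 1000 they still have not converged after two millennia. That, in miniature, is why a past1000 simulation cares so much about its starting point.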

What TraCE and the simple EMICs show is that you can explain the slope of the continuum over the past millennium by climate’s slow adjustment to orbital forcing, or abrupt changes that happened over 10,000 years ago. This means that climate’s memory is in fact quite long. Where does it reside?  Three main contenders:

  1. The deep ocean’s thermal and mechanical inertia
  2. The marine carbon cycle
  3. Continental ice sheets

This is where we reach the limit of the experiments that were available to us: they did not have an interactive carbon cycle or ice sheets, so these effects had to be prescribed. However, even when all that is going on is the ocean and atmosphere adjusting to slowly-changing orbital conditions, we see a hint that the ocean carries that memory through the climate continuum, generating variability at timescales of decades to centuries, much shorter than the forcing itself. Though this current set of experiments cannot assess how that picture would change with interactive ice sheets and carbon cycle, it led to the following hypothesis (ECHOES): today’s climate variations at decadal to centennial scales are echoes of the last Ice Age; this variability arises because the climate system constantly seeks to adjust to ever-changing boundary conditions. 

If true, the ECHOES hypothesis would have a number of implications. The most obvious one is that global temperature is more predictable than has been supposed. The flipside is that this prediction requires giving models a rather accurate starting point. That is, model simulations starting in 1850 AD need to know about the state of the climate system at that time (particularly, the temperature distribution throughout the volume of the ocean), as this state encodes the accumulated memory of past forcings. Accurately estimating this state, even in the present day, is not trivial.

But first things first: is there any support for the ocean having such a long memory? Actually, there is: a recent paper by Gebbie & Huybers makes the point that today’s ocean is still adjusting to the Little Ice Age (around AD 1500-1800). Our results go a step further and suggest that the Little Ice Age, or the medieval warmth that preceded it, also lie on the continuum of climate variability. Though it is clear that they were influenced by short-term forcings, like the abrupt cooling due to large volcanic eruptions, part of these multi-century swings may simply be due to the ocean’s slow adjustment to orbital conditions. Fig 5 illustrates that idea. 


Fig 5: The impact of initial conditions on model simulations of the climate continuum. Models started with states in equilibrium with 850 AD show too flat a spectral slope unless whacked by the giant hammer of industrial greenhouse gas emissions. On the other hand, models initialized from a glacial state and adjusting to slowly-changing orbital conditions show a much steeper slope, comparable to reconstructions obtained from observations over the past 1000 years. This suggests that the continuum (some parts thereof, at least) is an echo of the deglaciation.


The ECHOES hypothesis is still only that: a hypothesis. It will take more work to test it in models and observations. But, if true, it would imply that even simple climate models correctly represent energy transfers between scales, i.e. that their physics are fundamentally sound. That’s a BFD if there ever was one in climate science. In the future, we would like to test this hypothesis with more complex models (with interactive carbon cycle and ice sheets), which will involve prodigious computational efforts. If supported by observations, the ECHOES hypothesis will imply that to issue multi-century climate forecasts, one needs to initialize models from an estimate of the ocean state prior to the instrumental era. This is a formidable challenge, but — trust us — we have ideas about that too.


Friendly Advice to Graduate School Applicants

4 01 2018

Happy New Year!

In much of the Northern Hemisphere, this time of the year usually brings winter storms and graduate school applications.  I’ve gotten enough of those over the years that I thought I would compile a list of do’s and don’ts.  I am hoping it will help save your application from a chilly reception, and assist prospective students (PS’s) in their search for the most suitable prospective advisor (PA).

  • Know your audience: First, know that the people reading your application are all the professors from the department to which you are applying. If you are trying to reach a PA in particular, definitely get in touch with them prior to applying. This has 2 major benefits:
    1. you can save yourself some valuable time and money by figuring out if there is mutual interest in your applying at all.
    2. the PA will know who you are, and thus read your application with more interest, if there has been prior communication.
  • Who are you? The second thing to know is that your PA is quite busy, and has limited time to look you up. Thus, it is most helpful when you write to them if you can provide an up-to-date CV, or even better, have a webpage with your CV and examples of your work. Basically: it should not take them any time to figure out who you are, what your skills are, and why you’re applying.
  • Why are you applying? It is essential to show the relevance of your query. If you have a Bachelor’s degree in petroleum engineering, wanting to study climate is not the most obvious career switch. It is a large enough academic jump that you might want to justify it, so as to reassure your PA that you’re not just sending blanket applications to anyone in the geosciences without any notion of what they do, and how you could contribute to their field. In the aforementioned case, you might want to say “After 2 years learning about how to ruin our planet, I’ve decided to save it”, or something along those lines. The case is even more dramatic if you’re switching between the humanities and the sciences, say: the connection will be anything but obvious to anyone but you, so make sure you include an explanation for this (academically) sharp departure. On a more meta level, you should definitely ask yourself why you are applying to grad school. Is it only because you want to buy time before choosing a career? Are you trying to get your feet wet with a master’s, or are you already committed to a PhD, and understand what that involves? (spoiler: several years of hard work, followed by more years of harder work). The PA has many roles, but figuring out your life goals is not one of them.
  • Are you applying to a department or a lab? Some graduate programs are centrally run, admitting the best students they can, then having PA’s compete for the students. In other departments, like mine, you apply to work with one or more PAs, and need to open lines of communication with them, as they ultimately will be making the admission decision. In the rest of this post, I will be treating the second case: applying to a particular lab, not a department as a whole.
  • Show that you understand what your PA does.  Though contact with the PA is essential, my biggest turn-off is to receive an email from a PS asking me what research I do. The first quality of a researcher is doing research, so if you can’t even look up your PA on Google Scholar, you’re not fit for the job. Some PAs have lovely web pages that you can even read!   So do your research on the PA’s research and ask them direct questions about their work. You’ll find them much more willing to engage that way.
  • What is the funding situation? In many places, PhD fellowships are strongly tied to the PA’s research funding. It is a good idea to ask your PA about their current projects, and ask if they have any student funding on some of them. Even better: apply for externally funded PhD fellowships (e.g. from NSF) and come with funding in hand; you won’t find many labs that will turn you down!
  • Can you write? Believe it or not, the single most important thing about being a scientist is whether you can communicate your science, and that usually involves writing in English. Therefore, anything that you have written in your own voice (preferably, but not necessarily, about the topic you wish to pursue in grad school) is a valuable datum for your PA. You need not have written a peer-reviewed paper before grad school (few people have!) but you surely have written something by that stage. If you have not (e.g., because you were educated in a language other than English), think of translating some of your academic writing into English to give PAs a chance to see how you think and write.
  • Programming: in my line of work (climate modeling and analysis), some programming knowledge is essential. That does not mean you need to be an expert programmer when you begin (I certainly wasn’t), but some classes help. What does not help are statements of the kind:

I have rich experience in  MATLAB programming and JAVA programming. Not to mention Microsoft Word, Microsoft Excel, Microsoft PowerPoint, and Photoshop software

MATLAB and JAVA are programming languages, but Microsoft Word? Excel?? Powerpoint??? Please! This sort of statement is akin to holding a giant placard over your head, screaming “I don’t have a clue about what programming is, but feel free to give me important tasks!”. It makes me want to “rm -rf”  your application altogether.

  • What kind of lab are you applying to? Is the PA running a boutique operation or a large enterprise? There are benefits to both. In a boutique shop you should get more 1-on-1 interactions, but there’s always a chance that funding might run out. Big labs are usually a sign of sustained funding (yay stability!) but often mean that it’s hard to get any face time with the PA, and you may wind up getting raised by their postdocs, more senior grad students, or – as happens frequently with famous PAs – being left to figure things out by yourself, “sink or swim” style. Figure out your learning style and whether your PA is a good fit for it.
  • Personal style: Ultimately, choosing a PA is not that different from choosing a life partner. It’s a decision that will affect you for the rest of your life, so think about it carefully. It’s not only about their prestige, or that of their institution. It’s also about whether you relate to them on a human level, because you’re about to spend a few years together. You might as well make those enjoyable.
  • Visit the department!  It’s not just about the PA, of course. How is the working environment? The fellow students? The campus? If you can, definitely visit the PA and spend as much time as you can with the students to figure out what life is like at their institutions, and in their research group. If you can track down some of the PA’s former students, they might have valuable insights too – and provide you with an idea of what your trajectory could look like.

Have I forgotten anything? Feel free to let me know below.



The Hockey Stick is Alive; long live the Hockey Stick

11 07 2017

Sometimes science progresses by overturning old dogmas, covering the minds behind the breakthrough in glory and celebrating individual genius. More often, however, it proceeds quietly, through the work of many scientists slowly accumulating evidence, much like sedimentary strata gradually deposited upon one another over the immensity of geologic time.

The recent PAGES 2k data compilation, which I had the privilege of helping carry across the finish line, falls squarely in the latter category.  To be sure, there is plenty of new science to be done with such an amazingly rich, well-curated, and consistently formatted dataset. Some of it is being done by the 2k collective, and published here. Some of it is carried out in my own research group. Most of it is yet to be imagined. But after slicing and dicing the dataset in quite a few ways, one thing is already quite clear: the Hockey Stick is alive and well.

For those who’ve been sleeping, the Hockey Stick is the famous graph, published in 1999 by Mann, Bradley and Hughes (MBH) and reproduced below. You only need a passing knowledge of climate science to know that it was pretty big news at the time, especially when, in 2001, it was featured in the summary for policy-makers of the Third Assessment Report of the IPCC. The graph was hailed as definitive proof of the human influence on climate by some, and disparaged as a “misguided and illegitimate investigation” by the ever-so-unbiased Joe Barton.


Because of its IPCC prominence, the graph, the science behind it, and especially its lead author (my colleague Michael E. Mann) became a lightning rod in the climate “debate” (I put the word in quotes to underline the fact that while a lot of the members of the public seem to be awfully divided about it, climate scientists aren’t, and very very few argue about the obvious reality that the Earth is warming, principally as a result of the human burning of fossil fuels).

Since 1998/1999, when the Hockey Stick work started to come out, the field has been very busy doing what science always does when a striking result first comes out: figuring out if it is robust, and if so, how to explain it. Mike Mann did a lot of that work himself, but science always works best when a lot of independent eyes stare at the same question. Two things were at issue: the statistical methods, and the data, used by MBH.

Both matter immensely, and I’ve done my fair share of work on reconstruction methods (e.g. here and here). Yet many in the paleoclimate community felt that a lot more could be done to bolster the datasets that go into reconstructions of past temperature, regardless of the methods, and that they had a unique role to play in that. The PAGES 2k Network was thus initiated in 2006, with the goal of compiling and analyzing a global array of regional climate reconstructions for the last 2000 years. Their first major publication came in 2013, and was recounted here. Its most famous outcome was seen as a vindication of the Hockey Stick, to the point that the Wikipedia page on the topic now shows the original “Hockey Stick graph” from MBH99 together with the PAGES2k global “composite” (it cannot be called a reconstruction, because it was not formally calibrated to temperature).

Like many others, I had my issues with the methods, but that was not the point: compared to MBH and subsequent efforts, the main value of this data collection was that it was crowdsourced. Many experts covering all major land areas had collaborated to gather and certify a dataset of temperature-sensitive paleoclimate proxies.

The collective, however, felt that many things could be improved: the regional groups had worked under slightly different assumptions, and produced reconstructions that differed wildly in methodology and quality. The coverage over the ocean was practically non-existent, which is a bit of a problem given that oceans cover about two thirds of the planet (and growing, thanks to sea-level rise). Finally, the data from the regional groups were archived in disparate formats, owing to the complete (and vexing) lack of a data standard in the field. So the group kept moving forward to address these problems. Late in 2014 I got an email from Darrell Kaufman, who asked if I would help synthesize the latest data collection to produce a successor to the 2013 PAGES 2k curve. Not having any idea what I was getting myself into (or rather, what my co-blogger and dear friend Kevin Anchukaitis had gotten me into), I accepted with the foolish enthusiasm that only ignorance and youth can afford.

I knew it would work out in the end because the data wrangler in chief was Nick McKay, who can do no wrong. What I sorely under-estimated was how long it would take us to get to the point where we could publish such a dataset, and even more so, how long until a proper set of temperature reconstructions based on this dataset would be published. Much could be said about the cat-herding, the certifications, the revisions, the quality-control, and the references (oh, the references! Just close your eyes and imagine 317 “paper” references and 477 data citations. Biblatex and the API saved our lives, but the process still sucked many months out of Nick’s, Darrell’s, and my time – a bit reminiscent of the machine in Princess Bride). Many thanks need to be given to many people, not least to then-graduate student Jianghao Wang, who developed an insane amount of code on short notice.

But that would take away from the main point: I promised you hockey sticks, and hockey sticks you shall get. Back in 2007, Steve McIntyre (of ClimateAudit fame) told me “all this principal component business is misguided; the only way you really get the central limit theorem to work out in your favor is to average all the proxies together”. This is variously known as “compositing” or “stacking”, but after raising a toddler I prefer the term “smushing” (those of you who have ever given a soft fruit to a toddler will know just what I mean).

Now, it may not be apparent, but I don’t think much of smushing. On the other hand, the goal of this new PAGES 2k activity was to publish the dataset itself in a data journal, which entailed keeping all interpretations out of the “data descriptor” (the sexy name for such a paper). Many PAGES 2k collaborators wanted to publish proper reconstructions using this dataset, and this descriptor could not go into comparing the relative merits of several statistical methods (a paper led by Raphi Neukom, which we hope to wrap up soon, is set to do just that). Yet a key value proposition of the data descriptor was to synthesize the 692 records we had from 648 locales, seeing if any large-scale signal emerged out of the inevitable noise of climate proxies. So I overcame my reticence and went on a veritable smushing fest, on par with what a horde of preschoolers would do to a crate of over-ripe peaches.
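For concreteness, here is a toy version of “smushing” (synthetic records and made-up noise levels, not the actual PAGES 2k processing): standardize each record, then average them, and let the central limit theorem beat down the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 noisy "proxies" share one common signal: a flat shaft with an upturned blade.
t = np.arange(200)
signal = np.where(t < 180, 0.0, (t - 180) * 0.15)    # hockey-stick shape
proxies = signal + rng.normal(0, 1.0, (50, t.size))  # heavy proxy noise

# "Smushing": standardize each record (z-scores), then average across records.
z = (proxies - proxies.mean(axis=1, keepdims=True)) / proxies.std(axis=1, keepdims=True)
composite = z.mean(axis=0)

# The composite tracks the signal far better than any single record does.
corr_single = np.corrcoef(proxies[0], signal)[0, 1]
corr_composite = np.corrcoef(composite, signal)[0, 1]
print(corr_composite > corr_single)  # True
```

Averaging N records shrinks independent noise by roughly 1/sqrt(N), which is exactly the central limit theorem argument quoted above; of course, real proxy noise is neither independent nor well-behaved, which is why smushing is a blunt instrument.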

The result? Peach purée. Hockey sticks! HOCKEY STICKS GALORE!!!


At first I did not think much of it. Sure, you could treat the data naïvely and get a hockey stick, and what did that prove? Nothing. So I started slicing and dicing the data set in all the ways I could think of.

By archive type? Mostly, that made other hockey sticks (except for marine sediments, which end too early, or have too much bioturbation, to show the blade). By start date? Hockey sticks.


Depending on how to screen the data? Nope, still a bunch of hockey sticks.


By latitude? Mostly, hockey sticks again, except over Antarctica (look forward to an article by the Ant2k regional group explaining why).



As a scientist, you have to go where the evidence takes you. You can only be smacked in the face by evidence so many times and not see some kind of pattern (you will never guess: a HOCKEY STICK!).

It’s been nearly 20 years since the landmark hockey stick study. Few things about it were perfect, and I’ve had more than a few friendly disagreements with Mike Mann about it and other research questions.  But what this latest PAGES 2k compilation shows, is that you get a hockey stick no matter what you do to the data.

The hockey stick is alive and well. There is now so much data supporting this observation that it will take nothing short of a revolution of how we understand all paleoclimate proxies to overturn this pattern. So let me make this prediction: the hockey stick is here to stay. In the coming years and decades, the scientific community will flesh out many more details about the climate of the past 2,000 years, the interactions between temperature and drought, their regional & local expressions, their physical causes, their impact on human civilizations, and many other fascinating research questions.  But one thing won’t change: the twentieth century will stick out like a sore thumb. The present rate of warming, and very likely the temperature levels, are exceptional in the past 2,000 years, perhaps even longer.

The hockey stick is alive; long live the hockey stick. Climate denialists will have to find another excuse behind which to hide.


PS: stay tuned for some exciting, smush-free results using this dataset and the framework of the Last Millennium Reanalysis.







Sweet nothings and fat cats

15 09 2016

Much has been written this week about the sugar industry’s dirty tactics to shift blame to fat in order to preserve (even expand) their business model and fatten their own bottom line. My news feed has been ablaze with expressions of surprise and anger at the sugar industry for deceiving the American public about what is now a public health crisis (obesity).

What’s surprising to me is that people are still surprised. Indeed, this is exactly the sort of thing many industries have done, and will keep doing until they get caught.

History shows that when you are in the business of selling an inherently harmful product, and under the inexorably expansionist logic of free-market capitalism, the profit motive will have you stop at nothing to keep selling more of it.  For a riveting (and appalling) account of this history, I highly recommend “Merchants of Doubt” by Oreskes & Conway (now a motion picture).


The basic tenet is simple: carefully manipulate public perception of your product so that it sounds like either we don’t know that it’s dangerous, or the harm is caused by something else. Once the public is confused or thoroughly misled, it’s impossible to get political consensus on any kind of regulation, and you can keep doing business as you see fit.

The tobacco industry is the most famous of such doubt-mongers, for having been caught in the act. But as “Merchants of Doubt” recounts, there are many other such examples (pesticides, mattress flame retardants, acid rain, the ozone hole, and most consequentially for our civilization, climate change). The industries behind such campaigns knew full well that they were selling a harmful product, and they just kept doing it until they got caught.

Some of them are still at it, and will continue until they are forced to throw in the towel. For instance, the fossil fuel industry is still funding free-market think tanks to keep confusing the public about the causes and risks of climate change, and stave off meaningful regulation to prevent dangerous interference with the climate system. The fracking lobby is still pretending it’s not polluting groundwater, emitting tons of natural gas at its wellheads, or inducing earthquakes in areas that previously experienced very few of them.

The sugar industry is merely the latest cat to have used the tobacco strategy to pay off unscrupulous scientists and shift blame, preserving its business model and getting fatter. Those of us still surprised by such examples of corporate malfeasance are encouraged to read/watch this admirable piece of work.

Global warming does not slow down

9 04 2013

Julien Emile-Geay

Thanks to the miracles of credit card reward programs, I have been gainfully receiving, for some time now, a weekly copy of the distinguished magazine The Economist. It is usually a fine read, especially since its editors distanced themselves from the laughable flat-Earthing of the Wall Street Journal a long time ago. It was therefore a bit of a shock to read last week (on the cover, no less): “global warming slows down” (holding the quotation marks with gloves). Had they been bought by Rupert Murdoch, or were they privy to new data of which the isolated climate scientist that I am had remained woefully ignorant? Well, neither, it seems.

Opening its pages with a mix of curiosity and skepticism (yes, we climate scientists are skeptics too – skepticism is not the privilege of deniers but the hallmark of healthy minds), I read the piece “Global Warming: apocalypse perhaps a little later”. It argued that since some recent estimates of climate sensitivity have been revised downward, the world might not scorch as early as we once thought. Still, they wisely argued that this was no excuse for inaction, and that the extra time should be used to devise plans to mitigate, and adapt to, man-made climate change.

Though I agree with the consequent, I wholeheartedly disagree with the premise.

Digging a little deeper, it appears that they devoted a whole piece to climate sensitivity (“Climate Science: a sensitive matter“). James Annan was right in pointing out its quality – it is a nuanced piece of science writing for a lay audience, something all of my colleagues and I know the difficulty of achieving. It wasn’t the science journalist’s job to pick winners, but as a climate scientist I can tell you that not all the evidence presented therein carried equal weight. My issue is with the emphasis (here and elsewhere) on the inference that because surface temperature has been practically flat for the past decade (i.e. the transient climate response (TCR) may not be as high as one would have concluded from the 1970-2000 warming), the equilibrium climate sensitivity (ECS) must also be lower.

Now, the “sensitive matter” piece does take care to distinguish the two concepts (transient vs equilibrium warming), but the message did not reach the editors, who conflate them with flying colors on the front page (“global warming slows down”). So while said editors can apparently hire good science writers, it would be even better if they read their writings carefully.

My personal take is that TCR is an ill-suited measure of climate response, because it only considers surface temperature. When energy is added to the climate system (e.g. by increasing the concentration of heat-trapping substances like carbon dioxide), it can do any of three things:

  1. raise surface temperature
  2. help water change phase (from liquid to vapor, or from ice to liquid)
  3. raise temperature somewhere else (e.g. the deep ocean).
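To get a sense of the relative sizes of these reservoirs, here is a back-of-envelope sketch in Python. All figures are standard round-number values, and the mass of the top 700 m of ocean is a rough estimate of my own:

```python
# Back-of-envelope comparison of where added energy can go.
# All values are approximate, round-number estimates.

CP_AIR = 1004.0        # J/(kg K), specific heat of air at constant pressure
CP_SEAWATER = 3985.0   # J/(kg K), specific heat of seawater
L_FUSION = 3.34e5      # J/kg, latent heat of melting ice

M_ATMOSPHERE = 5.1e18  # kg, total mass of the atmosphere
M_OCEAN_700M = 2.5e20  # kg, rough mass of the top 700 m of the ocean

# Energy needed to warm each reservoir by 1 K:
heat_atmosphere_1K = CP_AIR * M_ATMOSPHERE
heat_upper_ocean_1K = CP_SEAWATER * M_OCEAN_700M

ratio = heat_upper_ocean_1K / heat_atmosphere_1K
print(f"Warming the upper ocean by 1 K takes ~{ratio:.0f}x the energy "
      "of warming the entire atmosphere by 1 K")

# Equivalently: the same energy that warms the atmosphere by 1 K
# melts only a modest amount of ice relative to the ice sheets:
ice_melted = heat_atmosphere_1K / L_FUSION  # kg of ice
print(f"That energy would melt ~{ice_melted:.1e} kg of ice")
```

The punchline is that the upper ocean alone can absorb a couple hundred times more energy per degree than the whole atmosphere, which is why surface temperature by itself is such a partial bookkeeping of the planet’s energy budget.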

Tracking surface warming is certainly of paramount importance, but it’s clearly not the whole story. Focusing exclusively on it misses the other two outcomes. Do we have evidence that this might be a problem? Well, most glaciers in the world are losing mass, and a recent article by Balmaseda et al. [2013] shows very clearly that the heat is reaching the deep ocean (see figure below). In fact, the rise in ocean heat content is more intense at depth than it is at the surface, for reasons that are not fully understood. To be fair, nothing involving the three-dimensional ocean circulation is particularly straightforward when viewed through the lens of a few decades of observations, but ocean heat content is quite a robust variable to track, so even if the causes of this unequal warming are nebulous, the unequal warming itself isn’t.


The Balmaseda et al. study concludes that, when considering the total rise in ocean heat content, global warming has not slowed down at all: it has in fact accelerated. I’m not quite sure I see it this way: warming is still decidedly happening, but it does look like the rate of increase has slowed down somewhat (as may be expected of internal climate variability). Therefore, to meet The Economist halfway, I am willing to embrace conservatism on this issue: the climate system is still warming, and let’s agree that the warming has neither slowed down nor accelerated.

In fairness, The Economist’s “sensitive matter” piece did quote the Balmaseda et al. study as part of its rather comprehensive review; in my opinion, however, this is not just one data point on a complex issue (as their piece implies), but a real game-changer. It confirms the findings of Meehl et al. [2011], who predicted exactly that: greenhouse energy surpluses need not materialize instantly as surface warming; in many instances, their model produced decades of relative “pause” in surface temperature amidst a century-long warming. It did so because the ocean circulation is quite variable, and sometimes kidnaps some of the heat to great depths, so it takes time before the whole of the climate system feels it. This is one reason for the essential distinction between ECS and TCR: the inherent variability of the climate system means that it may take a long time to reach equilibrium. That’s what’s really bothersome about ECS: it’s not observable on any time scale we care about, so it is of limited relevance in discussing reality (it is important to understanding climate models, however). Perhaps TCR should be based on some measure of ocean heat content? This might already have been done, but I am not aware of it. Actually, sea level might be our best proxy for integrated ocean heat content plus melted ice, and unsurprisingly it is still going up.
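The mechanism is easy to illustrate with a toy two-box energy-balance model. This is my own illustrative sketch, not the model of Meehl et al., and every parameter value below is made up for the purpose: a surface box exchanges heat with a deep-ocean box, and when that exchange episodically strengthens (mimicking variable ocean circulation), surface warming stalls even though the total heat content of the system keeps rising.

```python
import numpy as np

# Toy two-box energy-balance model (illustrative only, with invented
# parameters). Forcing F rises steadily; the surface box loses heat
# radiatively (feedback lam) and exchanges heat with a deep box at a
# time-varying rate gamma.

C_s, C_d = 8.0, 100.0    # heat capacities (W yr m^-2 K^-1), surface and deep
lam = 1.2                # radiative feedback (W m^-2 K^-1)
years = np.arange(100)
F = 0.04 * years         # steadily increasing forcing (W m^-2)

# Deep-ocean uptake strengthens during years 61-79 (hypothetical episode):
gamma = np.where((years > 60) & (years < 80), 2.0, 0.7)  # W m^-2 K^-1

T_s, T_d = np.zeros(100), np.zeros(100)
for t in range(1, 100):
    flux_down = gamma[t] * (T_s[t-1] - T_d[t-1])  # heat "kidnapped" to depth
    T_s[t] = T_s[t-1] + (F[t] - lam * T_s[t-1] - flux_down) / C_s
    T_d[t] = T_d[t-1] + flux_down / C_d

heat_content = C_s * T_s + C_d * T_d  # total heat content (W yr m^-2)

before = T_s[60] - T_s[41]  # surface warming over the 19 yr before the episode
pause = T_s[79] - T_s[60]   # surface warming during the high-uptake episode
print(f"Surface warming, years 41-60: {before:.2f} K; years 61-79: {pause:.2f} K")
print("Total heat content rises every year:",
      bool(np.all(np.diff(heat_content) > 0)))
```

Running this, the surface series shows a clear multi-decadal “pause” while total heat content climbs monotonically throughout: a pause in surface temperature is perfectly compatible with an unabated planetary energy imbalance.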

So despite the jargon, the basic idea is quite simple: more CO2 means a warmer climate system as a whole, and sooner rather than later. It is thus becoming increasingly urgent to do something about it, as they point out. Now, what would it take to convince the Wall Street Journal of that?

UPDATE (April 16, 3:30 PST): further accounts of The Economist’s unduly optimistic perspective are given here and here.  Another paper published this week in Nature Climate Change (nicely summarized here)  also emphasizes the important role of the ocean in mediating surface warming.