U.S. heat over the past 13 months: a one in 1.6 million event [UPDATED]

From Jeff Masters:

Each of the 13 months from June 2011 through June 2012 ranked among the warmest third of their historical distribution for the first time in the 1895 – present record. According to NCDC, the odds of this occurring randomly during any particular month are 1 in 1,594,323. Thus, we should only see one more 13-month period so warm between now and 124,652 AD–assuming the climate is staying the same as it did during the past 118 years. These are ridiculously long odds, and it is highly unlikely that the extremity of the heat during the past 13 months could have occurred without a warming climate.

emphasis mine
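
For reference, the arithmetic behind the quoted figure is simply the independence assumption applied thirteen times. A minimal sketch in Python, illustrative only (it reproduces the naive calculation, not a defensible probability):

```python
# Naive calculation behind the 1-in-1.6-million figure: assume every month
# independently has a 1/3 chance of landing in the warmest third of its
# historical distribution, and multiply thirteen of those chances together.
p_month = 1 / 3
p_streak = p_month ** 13

print(f"1 in {1 / p_streak:,.0f}")  # 1 in 1,594,323, i.e. 3**13
```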

UPDATE: Please see this correction from Michael Tobis.


UPDATE 2: Tamino has examined the difficulties in estimating the probability and arrives at a rough estimate of about 1 in 458,000. He also notes that another reasonable approach (used by Lucia) produces a probability somewhere in the 1-in-a-million range. Finally, he concludes:

This much is clear: the odds of what we’ve seen having happened in an unchanging climate are pretty small. Jeff Masters’ original estimate wasn’t right, but it does appear to be within an order of magnitude.

UPDATE 3: Tamino has updated his post to note that Lucia has revised her calculation and arrived at a probability of about 1 in 134,381.
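
The revised estimates above all hinge on how much month-to-month correlation one assumes in the detrended record. As a rough illustration of the kind of Monte Carlo calculation involved (a minimal sketch only, not a reproduction of Tamino's or Lucia's analysis; the lag-1 autocorrelation of 0.16, taken from the comment discussion below, and the trial count are assumptions for illustration), one can simulate an AR(1) series of standardized monthly anomalies and count how often 13 consecutive values all fall in the warmest third:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

phi = 0.16            # assumed lag-1 (month-to-month) autocorrelation
n_trials = 2_000_000  # Monte Carlo sample size (illustrative)
run_length = 13

# Threshold for the "warmest third" of a standard normal distribution.
top_third = norm.ppf(2 / 3)

# Simulate n_trials independent AR(1) sequences of 13 standardized anomalies.
# The noise term is scaled so each sequence is stationary with unit variance,
# meaning each month individually has a 1/3 chance of being in the warmest third.
x = np.empty((n_trials, run_length))
x[:, 0] = rng.standard_normal(n_trials)
for t in range(1, run_length):
    # Each month partly carries over the previous month's anomaly.
    x[:, t] = phi * x[:, t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal(n_trials)

hits = np.count_nonzero((x > top_third).all(axis=1))
print(f"{hits} of {n_trials:,} simulated runs were all in the warmest third"
      f" (roughly 1 in {n_trials / max(hits, 1):,.0f})")
```

With only a handful of expected hits at this sample size the estimate is noisy, but it illustrates the direction of the effect: positive autocorrelation makes a 13-month streak appreciably less improbable than the independence calculation suggests, while still leaving it a very rare event.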

23 thoughts on “U.S. heat over the past 13 months: a one in 1.6 million event [UPDATED]”

  1. Actually that’s bad form from both Masters and NCDC.

    The 1.6 million to one (more precisely, 1 in 1,594,323) is just (1/3) raised to the thirteenth power, which overstates the case to the extent that successive monthly anomalies are correlated. (Also, the 1/3 is somewhat arbitrary and could be a cherry-pick, but leave that aside.) I don’t doubt that something very odd is going on, but the number rests on a common elementary statistical error and is in this case excessively alarmist.

  2. So what is the right answer? What are the odds of this happening? When can we expect this record to be broken again? And the one after that?

    Are these questions at all interesting, or is it more important to show that Masters has it wrong?

    1. Sorry, I’ve got to concur with NCDC and Dr. Masters. True, it might not technically be a 1 in 1.6 million event, but the probability is too low to be ascertained correctly in any case. It’s true that on a month-to-month basis there is some correlation between values. However, the amount of correlation over a 13-month period is probably very low, perhaps even LESS than what would be indicated by chance. This is because over such a long period of time, the main drivers of the US climate, such as ENSO and other oceanic oscillations, are typically non-static. In other words, you would expect conditions that favor warmth not to persist over such a long time frame. I’m not an atmospheric scientist, so I wouldn’t know how to quantify these values.

      I see climate skeptic Lucia Liljegren made an effort at determining the actual value. Originally, she indicated a 1 in 10 probability, but this is clearly wrong. Since records began in 1895, there have been 1,404 months, which gives 1,391 complete 13-month periods before the most recent one, and none of them has exhibited the behavior we observed in the most recent 13-month period. Moreover, even the most recent 13-month period would not have exhibited this behavior if the temperature effect attributable to the global warming trend were removed. So just on the basis of the observational evidence, it’s clear that this would be exceedingly rare. Indeed, it probably would not occur if the climate were not changing.

    2. I should add that Lucia’s 1 in 10 probability was based on the level of correlation for global temperatures, which is extremely high. The level of correlation for US temperature, even on a month-to-month basis, is quite low. Per her analysis, the correlation for global temperatures was about 0.93, but the correlation for US temperatures was only 0.16, which is not much greater than chance. With this value, she comes up with a more realistic 1 in 500,000 estimate. However, I don’t believe even this analysis adequately takes into consideration that over longer timescales, the external drivers that affect climate typically shift into different states (e.g. ENSO, NAO, PDO). So the correlation is probably even less than what would be implied merely by looking at the data on a month-to-month basis.

    3. It is, I think, necessary to spot and squelch bad statistics and in general to spot and squelch errors, to maintain a scientific worldview.

      It is also necessary to explain to people that sometimes no useful answer is forthcoming. In statistics of time series, you have a chicken and egg problem. If you don’t have any information about the time series other than the series itself, it is very difficult to draw conclusions from it. You have to make some assumptions about its character and then test whether those assumptions hold.

      The 1.6 million to 1 is a correct assessment of how rare the time series would be as a sample of uncorrelated white noise. But we know a priori that it is correlated. Lucia et al are trying to characterize the correlation in the absence of physical reasoning, but arguably the record is too short to do that.

      Statistics by itself is a very weak tool compared to statistics informed by theory. Informed by theory, we already know the world is warming, so the information added by the time series is small. Uninformed by theory, we have to make some claims about the autocorrelation on a thirteen month time scale and the “noise”, and then do a fairly complex calculation or as Lucia is doing, a simulation.

      David Fox’s concern about anti-correlations cutting in at 13 months does count the other way, for example. But how to handle the underlying trend in establishing correlations is a bit confusing. First you have to reduce the series to zero mean. You’re trying to test for a trend, but the bigger the trend, the more the ends of the series are correlated. And now you have to ask: if we are asking whether an upwardly trending signal trends upward, we are sort of wasting our time, aren’t we?

  3. Jeff Masters replies in email:

    I originally wrote in my post that “Each of the 13 months from June 2011 through June 2012 ranked among the warmest third of their historical distribution for the first time in the 1895 – present record. According to NCDC, the odds of this occurring randomly during any particular month are 1 in 1,594,323. Thus, we should only see one more 13-month period so warm between now and 124,652 AD–assuming the climate is staying the same as it did during the past 118 years.”

    It has been pointed out to me that the calculation of a 1 in 1.6 million chance of occurrence (based on taking the number 1/3 and raising it to the 13th power) would be true only if each month had no correlation to the next month. Since weather patterns tend to persist, they are not truly random from one month to the next. Thus, the odds of such an event occurring are greater than 1 in 1.6 million–but are still very rare.

    1. Michael, I have to say that I recall a time on In It where someone addressed you as Mr. Tobis. You stated that it could be MT or Michael but if an appellation were to be used, you’d earned the title “Dr. Tobis.” Dr. Briggs received his Ph.D. in statistics from Cornell in 2004.

      This should not be taken as an expression of support for his position on AGW but he has earned the right to be referred to as Dr. Briggs.

    2. Fair enough insofar as the title goes. I refer to Dr. Lindzen as such, for instance.

      Pretty shocking though, from where I’m sitting. It kind of diminishes the title. I’m seriously unimpressed.

    3. Huh. Well, I’ve argued with Dr. Briggs on his site, and certainly he has no academic basis to claim climate expertise (insofar as geophysics goes) but I’ve gotten a lot from his discussions of Bayes’ theorem, the relationships between propositions, data, probabilities, and logical inference, etc. And he may be as qualified to opine on climate as a climate scientist is qualified to opine on, say, economics.

      And, without a doubt (at least in my mind), his hobby of demolishing silly studies in sociology, pop psychology, etc. wherein large amounts of data points from systematically biased groups (e.g., U.S. college students between the ages of 18 and 22 who are willing to volunteer for – or be paid to participate in – studies) are mined for small p values which are certain to exist somewhere and then published as significant is entertaining and valuable.

      I’m not seeing where his joining the “community of scholars” diminishes that community. I’m not at all sure that he’s not a bona fide expert in statistics, and I would be interested in your basis for thinking that, or for thinking that his holding of the title diminishes it.

    4. All of which just makes it worse when he makes stuff up about climate science, Rob. It’s irresponsible behavior unbecoming of someone with his qualifications.

    5. Steve, I disagree. To the extent that that’s true, I submit it disqualifies many people from expressing the opinions that they do on their blogs and sites – including this site. He doesn’t discuss geophysics (at least that I’ve seen), he discusses probability, data interpretation, statistics, etc.

      I really don’t see any significant difference between him discussing climate, Michael discussing economics, or me discussing anything other than ultrasonics and welding.

    6. Rob, the difference is that Briggs clearly wishes his readers to infer that his views of climate science have some special value due to his expertise in statistics. No, they don’t, although of course he’s free to try to sucker people in that way.

      I wasted two minutes of my life reading the linked piece. The penultimate paragraph illustrates my point nicely:

      “Low probabilities are not proof of anything—except that certain propositions relative to certain premises are rare. If those certain premises are true, then so are the probabilities accurate. Whatever the probabilities work out to be is what they work out to be end of story. If the chance a ball hits my favorite blade of grass is tiny, this does not mean that therefore global warming is real. Who in the world would claim that it is? Yet why if relative to unrealistic premises about temperature buckets the probability of 13 out of 13 above-normal monthly temperature is tiny would anyone believe that therefore global warning is real? You might just as well say that the same rarity of 13 out of 13 meant therefore my dad was a master golfer. The two pieces of evidence are just as unrelated as were the rarity of the grass being hit and global warming true.”

      Kind of a blatant strawman argument, isn’t it?

      Then there’s the last paragraph:

      “If our interest is in different premises—such as the list of premises which specify “global warming”—then we should be calculating the probability of events relative to these premises, and relative to premises which rival the “global warming” theory. And we should stop speaking nonsense about probability.”

      Hmm, ‘premises which rival the “global warming” theory.’ Why, I do believe it’s a challenge to geophysics.

      For a less subtle example, and one that Briggs seems to have put forward as a summary of his views on the subject, see here. I think it speaks for itself.

  4. I don’t find it to be a straw man. The initial contention was that the 1 chance in 1.6 million ((1/3)^13) of a certain series of measurements made it extremely unlikely to be a random event and thus, implicitly, due to AGW. His criticism relates to the model, its relation to the observations, and the inferences to be drawn from there. And why should a Professor of Statistics not opine on the evaluation of models with respect to how strongly data support them? It is not as if there are no other models, whether or not you are in agreement with the physical analysis that supports them.

    In the linked piece, Briggs certainly expresses his views on AGW, but even there he frames it in terms of whether the correctness of a model can be inferred from various data sets. Do I think that he’s ignorant of a fair amount of evidence? I do. Do I think his conclusion with respect to AGW is accurate? I do not. But I do think that he points out deductions and inferences that are made and published that are not supported by the data given – this is true of the psych. and soc. papers he lampoons and, occasionally (imho) the statements and claims of the community in support of the climate “standard model” (to borrow from particle physics).

  5. Well, Rob, actually there’s more than one strawman there. The statistical one is the equal treatment of all blades of grass in his example. The other is imagining that others imagine that this event is by itself evidence for climate change, since it can only be that in the context of physical theory and many other threads of evidence.

    Briggs also glosses over the key point here, which is that regardless of statistical treatment this is a very, very, very rare event of a sort that physical theory projects to become more common. (Frankly I think that for this type of event any claim to have established its likelihood in even vague terms is invalid.)

    The article I linked to demonstrates that Briggs has an appetite for utter trash when it comes to climate change. IIRC that bias very much does creep over into his statistical writings, although I think he’s typically more subtle about it than the present example. His unrelated material may be fine for all I know, but he’s made it not worth my while to even find out.

    Michael, e.g., behaves rather differently when writing about fields not his own. Briggs, by contrast, has earned the disrespect he gets, which I believe is where we came in.

    1. Although Steve uses stronger and more ad hominem language than I’d like, he is on the money with his criticism: the blades of grass argument is simply not applicable. Yes, SOMEBODY had to win the lottery, but that is totally beside the point – nobody said “most megamillion winners will be from around Passaic”, nor did that happen. But if it HAD happened, “somebody had to win” would not be a useful explanation.

      It’s a shockingly weak counterargument.

      The whole kerfuffle shows how ill equipped people are to think about statistics. Eschenbach’s appeal to a Poisson process, for example, is ludicrous, and even Tamino’s takedown misses the point. This is nothing like a Poisson process, so fitting a Poisson distribution is complete nonsense. Period.

      Solving statistical problems on an exam is something few enough people know how to do, but applying statistical thinking correctly is even rarer. This is something those of us who have a glimmer of understanding have to cope with. Undergrad sophistication in calculus and physics can take people a long way, but most people don’t even have a semester of statistics, and of those who do, most were taught by somebody who wasn’t very good at it either.

      Even so, to find out that the “blade of grass” argument comes from someone with a PhD in the field is demoralizing.

    2. Just to say, Michael, that’s assuming it was intended as a real statistical argument rather than as propaganda from the outset. If the latter, and IMO it’s very much the latter, it’s PhD abuse of the very worst sort. Unfortunately, it’s probably very effective propaganda, given that Briggs’ typical readers don’t want the scientific truth insofar as that’s available, but rather an affirmation of their prior world-view. And what’s that proverb about every complex problem having at least one solution that is both simple and wrong?

  6. Briggs’ criticism of the one in 1.6 million figure was not about non-independence of successive events but about what I would describe as “after the fact” selection of a sample (though I am not sure he would agree with that characterization of his position).

    Assuming independence, the probability of a RANDOMLY CHOSEN sequence of 13 data values all being in the top third IS (1/3)^13, but that is NOT the same as the probability of there being such a sequence somewhere in a larger sample. If we wait long enough we will certainly see such a sequence eventually, and the month after we see 13 high months in a row is NOT a randomly chosen place to end a sample. Of course, the probability that within thirty years of starting to look for a trend the last 13 months of 100 years of data drawn from the same probability distribution every month will all be in the top third is still very low, but the chance of making that point (and explaining its implications properly) may well have been blown by now. The problem is not with those who will buy anything that fits with their political prejudices, but with the kind of people who are persuadable by reason but reluctant to be stampeded – and who now have good reason to suspect that climate scientists are cavalier about the use of probability and statistics.
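
To illustrate the distinction drawn in this last comment (a minimal sketch under stated assumptions, not anyone's published calculation: the 1,404-month record length is taken from the thread above, months are assumed independent, and the trial count is arbitrary), one can simulate many whole histories and ask how often a 13-month all-warmest-third run appears anywhere in the record, rather than at one pre-specified spot:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumptions for illustration only: independent months, each with a 1/3 chance
# of landing in the warmest third of its distribution, over a 1,404-month record.
n_months = 1404
run_length = 13
n_trials = 20_000
kernel = np.ones(run_length, dtype=int)

hits = 0
for _ in range(n_trials):
    # One simulated history: 1 marks a month in the warmest third.
    flags = (rng.random(n_months) < 1 / 3).astype(int)
    # Count of warmest-third months in every overlapping 13-month window.
    window_counts = np.convolve(flags, kernel, mode="valid")
    if window_counts.max() == run_length:
        hits += 1

print(f"an all-warmest-third 13-month run appeared somewhere in the record in "
      f"{hits} of {n_trials:,} simulated histories")
```

Under these assumptions such a run still turns up in only a small fraction of simulated histories, on the order of one in a couple of thousand: far more often than the 1-in-1.6-million per-window figure suggests, but, as the comment says, still very low.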
