Climategate 2.0: Bias in Scientific Research

November 23rd, 2011

Ever since the first Climategate e-mail release, the public has become increasingly aware that scientists are not unbiased. Of course, most scientists with a long enough history in their fields already knew this (I discussed the issue at length in my first book Climate Confusion), but it took the first round of Climategate e-mails to demonstrate it to the world.

The latest release (Climategate 2.0) not only reveals bias, but also some private doubts among the core scientist faithful about the scientific basis for the IPCC’s policy goals. Yet, the IPCC’s “cause” (Michael Mann’s term) appears to trump all else.

So, when the science doesn’t support The Cause, the faithful turn toward discussions of how to craft a story which minimizes doubt about the IPCC’s findings. After considerable reflection, I’m going to avoid using the term ‘conspiracy’ to describe this activity, and discuss it in terms of scientific bias.

It’s Impossible to Avoid Bias

We are all familiar with competing experts in a trial who have diametrically opposed opinions on some matter, even given the same evidence. This happens in science all the time.

Even if we have perfect measurements of Nature, scientists can still come to different conclusions about what those measurements mean in terms of cause and effect. So, biases on the part of scientists inevitably influence their opinions. The formation of a hypothesis of how nature works is always biased by the scientist’s worldview and limited amount of knowledge, as well as the limited availability of research funding from a government that has biased policy interests to preserve.

Admittedly, the existence of bias in scientific research (which is always present) does not mean the research is necessarily wrong. But as I often remind people, it's much easier to be wrong than right in science. This is because, while the physical world works in only one way, we can dream up a myriad of ways by which we think it works. And they can't all be correct.

So, bias ends up being the enemy of the search for scientific truth because it keeps us from entertaining alternative hypotheses for how the physical world works. It increases the likelihood that our conclusions are wrong.

The IPCC’s Bias

In the case of global warming research, the alternative (non-consensus) hypothesis that some or most of the climate change we have observed is natural is the one that the IPCC must avoid at all cost. This is why the Hockey Stick was so prized: it was hailed as evidence that humans, not Nature, rule over climate change.

The Climategate 2.0 e-mails show how entrenched this bias has become among the handful of scientists who have been the most willing participants in and supporters of The Cause. These scientists only rose to the top because they were willing to actively promote the IPCC's message through their particular fields of research.

Unfortunately, there is no way to "fix" the IPCC, and there never was. The reason is that it was formed over 20 years ago to support political and energy policy goals, not to search for scientific truth. I know this not only because one of the first IPCC directors told me so, but also because it is the way the IPCC leadership behaves. If you disagree with their interpretation of climate change, you are left out of the IPCC process. They ignore or fight against any evidence which does not support their policy-driven mission, even to the point of pressuring scientific journals not to publish papers which might hurt the IPCC's efforts.

I believe that most of the hundreds of scientists supporting the IPCC's efforts are just playing along, assured of continued funding. In my experience, they either: (1) are true believers in The Cause; (2) think we need to get away from using fossil fuels anyway; or (3) rationalize their involvement based upon the non-zero chance of catastrophic climate change.

My Biases

I am up front about my biases: I think market forces will take care of the fact that "fossil" fuels are (probably) a limited resource. Slowly increasing scarcity will lead to higher prices, which will make alternative energy research more attractive. This is more efficient than trying to legislate new forms of energy into existence.

I also think currently proposed energy policies will cause widespread death and suffering. The IPCC not only destroys scientific objectivity and scientific progress, it also destroys lives.

Therefore, I view it as my moral duty to support the “forgotten science” of natural climate change, a class of alternative hypotheses that have all but been ignored by the IPCC and government funding agencies.

I hope I am correct that most climate change we have experienced is natural. But I also know that “hoping” doesn’t make it so. If I had new scientific evidence that human-caused climate change really was a threat to life on Earth, I would publish it. It would sure be easier to publish than evidence against.

But from everything I've seen, I still think Nature probably rules, and that humans (as part of nature) also have some unknown level of influence on climate. We know that the existence of trees affects climate – why not the existence of humans?

Countering the Bias

Scientists are human, and so you will never remove the tendencies toward bias in scientific research. You can’t change human nature.

But you can level the playing field by supporting alternative biases.

For years John Christy and I have been advising Congress that some portion of the appropriated funds for federal agencies supporting climate change research should be mandated to support alternative hypotheses of climate change. It’s time for the pendulum to start swinging back the other way.

After all, scientists will go where the money is. If scientists are funded to find evidence of natural sources of climate change, believe me, they will find it.

If you build such a playing field, they will come.

But when only one hypothesis is allowed as the explanation for climate change (e.g. “the science is settled”), the bias becomes so thick and acrid that everyone can smell the stench. Everyone except the IPCC leadership, that is.

UAH Global Temperature Update for October 2011: +0.11 deg. C

November 3rd, 2011

The global average lower tropospheric temperature anomaly for October, 2011 dropped to +0.11 deg. C (click on the image for the full-size version):

The 3rd order polynomial fit to the data (courtesy of Excel) is for entertainment purposes only, and should not be construed as having any predictive value whatsoever.

Here are this year’s monthly stats:

YR MON GLOBAL NH SH TROPICS
2011 1 -0.010 -0.055 +0.036 -0.372
2011 2 -0.020 -0.042 +0.002 -0.348
2011 3 -0.101 -0.073 -0.128 -0.342
2011 4 +0.117 +0.195 +0.039 -0.229
2011 5 +0.133 +0.145 +0.121 -0.043
2011 6 +0.315 +0.379 +0.250 +0.233
2011 7 +0.374 +0.344 +0.404 +0.204
2011 8 +0.327 +0.321 +0.332 +0.155
2011 9 +0.289 +0.304 +0.274 +0.178
2011 10 +0.114 +0.169 +0.059 -0.056
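For anyone who wants to reproduce the kind of descriptive curve fit shown in the chart, here is a minimal sketch using the GLOBAL column from the table above (NumPy assumed; as noted, the fit is purely descriptive and has no predictive value):

```python
import numpy as np

# 2011 monthly global lower-tropospheric anomalies from the table above (deg. C)
anom = np.array([-0.010, -0.020, -0.101, 0.117, 0.133,
                 0.315, 0.374, 0.327, 0.289, 0.114])
months = np.arange(1, len(anom) + 1)

# Degree-3 polynomial fit, as in the chart ("for entertainment purposes only")
coeffs = np.polyfit(months, anom, deg=3)
fitted = np.polyval(coeffs, months)

# Residual scatter about the curve, versus the raw scatter of the data
resid_sd = float(np.std(anom - fitted))
raw_sd = float(np.std(anom))
print(f"residual sd {resid_sd:.3f} vs raw sd {raw_sd:.3f} deg. C")
```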

The Northern Hemisphere, Southern Hemisphere, and tropics have all cooled substantially, consistent with the onset of another La Nina, with the tropics now back below the 1981-2010 average.

[Since AMSR-E failed in early October, there will be no more sea surface temperature updates from that instrument.]

For those tracking the daily AMSU 5 data at the Discover website, the temperature free-fall continues, so I predict November will see another substantial drop in global temperatures (click for large version):

WHAT MIGHT THIS MEAN FOR CLIMATE CHANGE?
…taking a line from our IPCC brethren… While any single month’s drop in global temperatures cannot be blamed on climate change, it is still the kind of behavior we expect to see more often in a cooling world. 😉

Brrr…the Troposphere Is Ignoring Your SUV

October 30th, 2011

For those tracking the daily global temperature updates at the Discover website, you might have noticed the continuing drop this month in global temperatures. The mid-tropospheric AMSU channels are showing even cooler temperatures than we had at this date with the last (2008) La Nina. The following screen shot is for AMSU channel 6 (click for large version).

A check of the lower stratospheric channels (9, 10) suggests this is not a stratospheric effect bleeding over into the tropospheric channels.

With the current (and forecast to continue) stormy pattern over the U.S., I have to wonder whether the atmosphere is currently in a destabilized state. I doubt that surface temperature anomalies are as anomalously low as the mid-troposphere temperatures are running, which in combination with anomalously cold mid- and upper-tropospheric temperatures means there is extra energy available for storms. (Since AMSR-E failed in early October, our sea surface temperature plot is no longer showing current data, so I have no easy way to check surface temperatures.)

Of course, this too shall pass. I just thought it was an interesting curiosity during a time when some pundits are claiming global warming is “accelerating”. Apparently, they are still stuck in the last millennium.

Our GRL Response to Dessler Takes Shape, and the Evidence Keeps Mounting

October 12th, 2011

I will be revealing some of the evidence we will be submitting to Geophysical Research Letters (GRL) in response to Dessler’s paper claiming to refute our view of the forcing role of clouds in the climate system.

To whet your appetite, here is a draft version of one of the illustrations (click for the large version). It clearly shows the large discrepancy between the IPCC climate models and satellite observations in the way they show the Earth shedding excess radiant energy in response to warming. This is central to the question of how much warming can be expected from anthropogenic greenhouse gas emissions, because the less radiant energy the models shed per degree of warming, the more the models continue to warm.

The figure above represents 700 years of data (50 years each from all 14 models we have analyzed), and all 20 years of global Earth radiant energy budget data which exists from 2 satellite periods. Each point plotted represents an estimate of how much energy is lost (gained) by the Earth per degree of warming (cooling) during year-to-year climate variations in the individual decades.

Results for various averaging times are shown: monthly (used by Dessler), 3- and 12-monthly (used by Forster & Gregory, 2006 J. Climate, in their analysis of ERBE data, results of which are plotted as blue squares above), and 18-monthly (used only by us in our analysis of the CERES data). We decided that showing results for multiple averaging times is better than arguing with our critics over which averaging time is best. (If there are two options, A and B, and we chose A, our critics would claim there was an Exxon-funded conspiracy to exclude B.)
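Here is a toy version of the regression-based feedback diagnosis described above, showing how time-varying radiative forcing can bias the diagnosed slope low (the effect quantified in the volcanic-decade discussion that follows). This is a synthetic sketch only: the feedback parameter, heat capacity, and forcing persistence are assumed round numbers, not values from our paper:

```python
import numpy as np

LAM = 3.0        # "true" feedback parameter, W/m^2/K (assumed)
C = 25.0         # mixed-layer heat capacity, W yr m^-2 K^-1 (assumed)
DT = 1.0 / 12.0  # monthly time step, in years
N_MON = 120      # one decade of monthly data, as in the figure

def diagnosed_feedback(rng):
    # Persistent (AR1) radiative forcing, loosely mimicking clouds/volcanoes
    f = np.zeros(N_MON)
    for i in range(1, N_MON):
        f[i] = 0.9 * f[i - 1] + rng.normal(0.0, 0.3)
    # Simple mixed-layer energy budget: C dT/dt = F - LAM*T
    t = np.zeros(N_MON)
    for i in range(1, N_MON):
        t[i] = t[i - 1] + (f[i - 1] - LAM * t[i - 1]) * DT / C
    # Net radiative loss a satellite would measure: feedback minus forcing
    net = LAM * t - f
    # Regression slope of net flux on temperature = diagnosed feedback
    return float(np.polyfit(t, net, 1)[0])

rng = np.random.default_rng(42)
slopes = [diagnosed_feedback(rng) for _ in range(30)]
print(f"true feedback {LAM}, mean diagnosed {np.mean(slopes):.2f}")
```

In runs like this the mean diagnosed slope comes out below the true value of 3, for the reason the figure illustrates: when persistent forcing is present, temperature and forcing are correlated, which drags the regression slope down.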

Of course, this evidence also supports one of the main conclusions of our Remote Sensing paper published earlier this year: there is a large discrepancy between the IPCC climate models and observations. That’s the paper which led to the resignation of the journal’s Chief Editor, and an apology from that journal to Kevin Trenberth for even publishing our paper (never mind it was peer reviewed by researchers who also publish on the subject).

The Effect of Volcanoes in Models versus Observations

One new twist that emerges from the above figure comes from the blue triangles, representing the model decades involving large episodic radiative forcing events by volcanic aerosols, compared to decades without volcanic forcing (yellow triangles). These blue triangles clearly show that a low bias in the regression-diagnosed feedback parameter tends to occur when time-varying radiative forcing is present. (The volcanoes were Mt. Agung in the 1960s, El Chichon in the 1980s, and Mt. Pinatubo in the 1990s. Seven of the 14 models included strong, episodic volcanic forcing, as independently determined from data presented by Forster & Taylor, 2006 J. Climate.)

Furthermore, comparison of those blue triangles to the Pinatubo-influenced ERBE satellite data (blue squares, separately computed and previously published by IPCC-affiliated researchers) shows an even larger discrepancy than do the yellow (non-volcanic) triangles compared to the (orange) CERES data, which experienced no major volcanic events. While one might argue that the CERES satellite measurements (orange circles) are not totally inconsistent with the yellow model triangles, the same cannot be said about the ERBE Pinatubo-influenced observations (blue squares) versus the blue model triangles. This has become a common IPCC defense of the climate models ("…well, the observations aren't totally inconsistent with all of the models…"), as if this somehow constitutes validation of the climate models.

How Do the Results Jibe with Dessler (2010)?

Dessler (2010) in effect made a calculation representing the single orange circle on the far left. He interpreted it as evidence of positive cloud feedback (all of the IPCC models now exhibit positive cloud feedback), and indeed if I were to take that single circle, with its diagnosed net feedback parameter of only 1.2 W m-2 K-1, I might be inclined to agree that it does suggest positive cloud feedback.

But note how that single orange circle compares to the models (the triangles) when the exact same calculation is made from them. There is a significant discrepancy, which is seen to grow at the longer averaging times where the feedback signal is expected to more clearly emerge.

And the discrepancy appears to be the greatest in decades that experienced major volcanic eruptions.

Conclusion

The evidence keeps mounting that the Earth is more resistant to radiative forcing than are the climate models used by the IPCC to project future climate change. While it doesn’t actually prove the models are wrong in their projections of global warming, I don’t see how discrepancies this large can continue to be ignored.

If not for the public policy implications (which Dessler admits was the impetus for his 2011 paper criticizing our work), evidence as strong as that contained in the above illustration would be easily embraced by the climate research community. Maybe some day.

It will be interesting to see whether GRL rejects our paper out of hand. Maybe it would help if I joined the Union of Concerned Scientists. Hmmmm.

P.S….another tidbit for those following Dessler’s claim that clouds can’t cause climate change…
Dessler claims that changes in ocean temperature are way too large to be caused by clouds. Well, the year-to-year changes in Levitus global ocean heat content of the 0-700 m layer during the 2000-2010 satellite period of record yields a yearly standard deviation of 0.5 Watts per sq. meter for the energy required. In comparison, the yearly standard deviation of the global oceanic CERES satellite radiative fluxes is 0.3 Watts per sq. meter, which represents 60% of the energy required to cause the ocean temperature changes. Using any reasonable feedback parameter combined with the sea surface temperature variations yields only 0.1 Watts per sq. meter.
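A quick sanity check of that arithmetic (the seawater properties are standard textbook values I have assumed; the two standard deviations are simply the numbers quoted above):

```python
# Energy flux needed to change the temperature of the 0-700 m ocean layer
RHO = 1025.0           # seawater density, kg/m^3 (assumed textbook value)
CP = 3990.0            # seawater specific heat, J/(kg K) (assumed)
DEPTH = 700.0          # layer depth, m
SEC_PER_YEAR = 3.156e7

def flux_for_warming(dT_per_year):
    """W/m^2 required to warm the layer by dT_per_year (K) in one year."""
    return dT_per_year * RHO * CP * DEPTH / SEC_PER_YEAR

# Yearly standard deviations quoted in the text: ~0.5 W/m^2 required by the
# Levitus heat-content changes, ~0.3 W/m^2 observed in CERES radiative fluxes
required_sd, observed_sd = 0.5, 0.3
fraction = observed_sd / required_sd
print(f"CERES variability supplies {fraction:.0%} of the required energy")

# For scale: the warming rate of the 0-700 m layer that 0.5 W/m^2 sustains
print(f"{0.5 / (RHO * CP * DEPTH / SEC_PER_YEAR):.4f} K/yr")
```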

Thus, cloud variations (or maybe even natural water vapor variations?) can constitute an important natural forcing component of climate variability. And since it is our physical interpretation of observed climate variability that impacts our estimates of climate sensitivity, it also impacts our estimates of future global warming (aka climate change).

At this point, I suspect Dessler’s conclusions to the contrary are partly the result of a large amount of noise in temperature changes with time computed from short-term Levitus ocean heat content data.

I’ve Looked at Clouds from Both Sides Now – and Before

October 8th, 2011

…sometimes, the most powerful evidence is right in front of your face…..

I never dreamed that anyone would dispute the claim that cloud changes can cause "cloud radiative forcing" of the climate system, in addition to their role of responding to surface temperature changes ("cloud radiative feedback"). (NOTE: "Cloud radiative forcing" traditionally has multiple meanings. Caveat emptor.)

But that’s exactly what has happened. Andy Dessler’s 2010 and 2011 papers have claimed, both implicitly and explicitly, that in the context of climate, with very few exceptions, cloud changes must be the result of temperature change only.

Shortly after we became aware of Andy’s latest paper, which finally appeared in GRL on October 1, I realized the most obvious and most powerful evidence of the existence of cloud radiative forcing was staring us in the face. We had actually alluded to this in our previous papers, but there are so many ways to approach the issue that it’s easy to get sidetracked by details, and forget about the Big Picture.

Well, the following graph is the Big Picture. It shows the 3-month variations in CERES-measured global radiative energy balance (which Dessler agrees is made up of forcing and feedback), and it also shows an estimate of the radiative feedback alone using HadCRUT3 global temperature anomalies, assuming a feedback parameter (λ) of 2 Watts per sq. meter per deg (click for full-size version):

What this graph shows is very simple, but also very powerful: The radiative variations CERES measures look nothing like what the radiative feedback should look like. You can put in any feedback parameter you want (the IPCC models range from 0.91 to 1.87…I think it could be more like 3 to 6 in the real climate system), and you will come to the same conclusion.

And if CERES is measuring something very different from radiative feedback, it must — by definition — be radiative forcing (for the detail-oriented folks, forcing = Net + feedback…where Net is very close to the negative of [LW+SW]).

The above chart makes it clear that radiative feedback is only a small portion of what CERES measures. There is no way around this conclusion.
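To see why the choice of feedback parameter doesn't change the conclusion, note that the feedback term can only vary as strongly as λ times the temperature anomalies themselves. A minimal sketch with illustrative round numbers (a 3-month temperature anomaly standard deviation of 0.1 deg. C and measured radiative variations of 1 W/m² are my assumed stand-ins, not the actual HadCRUT3/CERES statistics):

```python
# Decomposition from the text: measured radiative variations = forcing + feedback,
# where the feedback component is (feedback parameter) x (temperature anomaly)
TEMP_SD = 0.1      # assumed sd of 3-month global temperature anomalies, deg. C
MEASURED_SD = 1.0  # assumed sd of measured radiative variations, W/m^2

# Try feedback parameters spanning the IPCC model range and beyond (W/m^2/K)
for lam in (0.91, 1.87, 3.0, 6.0):
    feedback_sd = lam * TEMP_SD
    print(f"lambda={lam}: feedback sd = {feedback_sd:.2f} W/m^2 "
          f"({feedback_sd / MEASURED_SD:.0%} of measured variability)")
```

Whatever λ you pick, the feedback term accounts for only a fraction of the measured variability; the remainder is, by the definition above, radiative forcing.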

Now, our 3 previous papers on this subject have dealt with trying to understand the extent to which this large radiative forcing signal (or whatever you want to call it) corrupts the diagnosis of feedback. That such radiative forcing exists seemed to me to be beyond dispute. Apparently, it wasn't. Dessler (2011) tries to make the case that the radiative variations measured by CERES do not involve enough energy to change the temperature of the ocean mixed layer… but that is a separate issue; the issue addressed by our previous 3 papers is the extent to which radiative forcing masks radiative feedback. [For those interested, over the same period of record (April 2000 through June 2010), the standard deviation of the Levitus-observed 3-month changes with time in the temperature of the upper 200 meters of the global oceans corresponds to 2.5 Watts per sq. meter.]

I just wanted to put this evidence out there for people to see and understand in advance. It will indeed be part of our response to Dessler 2011, but Danny Braswell and I have so many things to say about that paper that it's going to take time to address all of the ways in which (we think) Dessler is wrong, misused our model, and misrepresented our position.

UAH Global Temperature Update for September 2011: +0.29 deg. C

October 4th, 2011

The global average lower tropospheric temperature anomaly for September, 2011 retreated a little again, to +0.29 deg. C (click on the image for the full-size version):

The 3rd order polynomial fit to the data (courtesy of Excel) is for entertainment purposes only, and should not be construed as having any predictive value whatsoever.

Here are this year’s monthly stats:

YR MON GLOBAL NH SH TROPICS
2011 1 -0.010 -0.055 +0.036 -0.372
2011 2 -0.020 -0.042 +0.002 -0.348
2011 3 -0.101 -0.073 -0.128 -0.342
2011 4 +0.117 +0.195 +0.039 -0.229
2011 5 +0.133 +0.145 +0.121 -0.043
2011 6 +0.315 +0.379 +0.250 +0.233
2011 7 +0.374 +0.344 +0.404 +0.204
2011 8 +0.327 +0.321 +0.332 +0.155
2011 9 +0.289 +0.309 +0.270 +0.175

The global sea surface temperatures from AMSR-E through the end of AMSR-E’s useful life (October 3, 2011) are shown next. The trend line is, again, for entertainment purposes only:

On the subject of the drop-off in temperatures seen in the AMSR-E data in the last week, I have been getting questions about the daily AMSU tracking data at the Discover website, which show that Aqua AMSU channel 5 (the channel our monthly updates are computed from) is now entering record-low territory (for the date, anyway, and only since the Aqua record began in 2002). While I have always cautioned people against reading too much into week-to-week changes in global average temperature, this could portend a more significant drop in the next (October) temperature update, as the new La Nina approaches.

AMSR-E Ends 9+ Years of Global Observations

October 4th, 2011


UPDATE #1: See update at end.

The Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) was automatically spun down to its designed 4 rpm safe condition last night, after recent increases in the amount of power required to keep it spinning at its nominal 40 rpm began to cause noticeable jitter in NASA's Aqua satellite.

The instrument has over 480 pounds of spinning mass, and the lubricant in the bearing assembly gradually deteriorates over time. This deterioration has been monitored, and automatic shutdown procedures have been in place for years if the amount of torque required to keep AMSR-E spinning exceeded a certain threshold.

Starting about October 1, AMSR-E was causing yaw vibrations in the Aqua satellite attitude which were increasingly exceeding the +/- 25 arcsecond limits that are required by other instruments on the spacecraft. Last night, the 4.5 Newton-meter torque limit was apparently exceeded, and the instrument was automatically spun down to 4 rpm.

At this point it appears that this event likely ends the useful life of AMSR-E, which had been continuously gathering global data on a variety of parameters, from sea ice to precipitation to sea surface temperature. Its 9+ year lifetime exceeded its 6-year design life.

AMSR-E was provided to NASA by the Japan Aerospace Exploration Agency (JAXA), and was built by Mitsubishi Electric Company. It was launched aboard the Aqua satellite from Vandenberg AFB on May 4, 2002. It has been an extremely successful experiment, and has gathered a huge quantity of data that will be revealing secrets of weather and climate as scientific research with the archived data continues in the coming years.

As the U.S. Science Team Leader for AMSR-E, I would like to congratulate and thank all of those who made AMSR-E such a success: JAXA, MELCO, NASA, the University of Alabama in Huntsville, the National Snow and Ice Data Center (NSIDC) in Boulder, and the U.S. and Japanese Science Teams who developed the algorithms that turned the raw data collected by AMSR-E into so many useful products.

The good news is that AMSR2, a slightly modified and improved version of AMSR-E, will be launched early next year on Japan’s GCOM-W satellite, and will join Aqua and the other satellites in NASA’s A-Train constellation of Earth observation satellites in their twice-daily, 1:30 a.m./p.m. sun-synchronous polar orbit. It is my understanding that those data will be shared in near-real time with U.S. agencies.

We had hoped that AMSR-E would provide at least one year of data overlap with the new AMSR2 instrument. It remains to be determined – and is only speculation on my part – whether there might be an attempt to gather some additional data from AMSR-E later to help fulfill this cross-calibration activity with AMSR2. [The Aqua satellite can easily accommodate the extra torque imparted to the spacecraft, and last night's spin-down of AMSR-E was mostly to eliminate the very slight chance of sudden failure of the AMSR-E bearing assembly, which could have caused the Aqua satellite to go into an uncontrolled and unrecoverable tumble.]

Again, I want to thank and congratulate all of those who made AMSR-E such a huge success!

UPDATE #1: As of early this morning, the torque required to keep AMSR-E spinning at 4 rpm was too large for its own momentum compensation mechanism to handle, with excessive amounts of momentum being dumped to the spacecraft. As a result, the instrument has now been spun down to 0 rpm. The satellite has shed the excessive momentum, and is operating normally, as are the other instruments aboard the spacecraft (MODIS, CERES, and AIRS).

The Rest of the Cherries: 140 decades of Climate Models vs. Observations

September 22nd, 2011

Since one of the criticisms of our recent Remote Sensing paper was that we cherry-picked the climate models we compared the satellite observations of climate variations to, here are all 140 10-year periods from the 14 climate models' 20th Century runs we analyzed (click to see the full-res version):

As you can see, the observations of the Earth (in blue, CERES radiative energy budget versus HadCRUT3 surface temperature variations) are outside the range of climate model behavior, at least over the span of time lags we believe are most related to feedbacks, which in turn determine the sensitivity of the climate system to increasing greenhouse gas concentrations. (See Lindzen & Choi, 2011 for more about time lags).
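For readers unfamiliar with the lag-regression diagnostic behind this kind of figure, here is a toy sketch: regression slopes between a flux series and a temperature series at a range of leads and lags. The series, the 2-month lag, and the slope of 2 are all synthetic choices for illustration, not our actual data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120  # one decade of monthly data
temp = rng.normal(size=n)  # synthetic temperature anomalies
# Synthetic flux that responds to temperature two months later, plus noise
flux = 2.0 * np.roll(temp, 2) + rng.normal(scale=0.5, size=n)

def lag_regression(t, f, lag):
    """Slope of flux on temperature, with flux shifted `lag` months later."""
    if lag >= 0:
        tt, ff = t[:n - lag], f[lag:]
    else:
        tt, ff = t[-lag:], f[:n + lag]
    return float(np.polyfit(tt, ff, 1)[0])

coeffs = {lag: lag_regression(temp, flux, lag) for lag in range(-6, 7)}
for lag, slope in coeffs.items():
    print(f"lag {lag:+d} months: slope {slope:+.2f}")
```

The slope peaks at the lag where the built-in relationship lives (+2 months here); plotting such slopes across a span of lags, for both models and observations, is the essence of the comparison in the figure.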

Now, at ZERO time lag, there are a few decades from a few models (less than 10% of them) which exceed the satellite measurements. So, would you then say that the satellite measurements are “not inconsistent” with the models? I wouldn’t.

Especially since the IPCC's best estimate of future warming (about 3 deg. C) from a doubling of atmospheric CO2 is almost exactly the AVERAGE response of ALL of the climate models. Note that the average of all 140 model decades (dashed black line in the above graph) is pretty darn far from the satellite data.

So, even with all 140 cherries picked, we still see evidence that there is something wrong with the IPCC models in general. And I believe the problem is that they are too sensitive, and thus are predicting too much future global warming.

An Open Letter of Encouragement to Dr. Dessler

September 14th, 2011

Since I keep getting asked about the “latest” on the ongoing debate over clouds and feedback diagnosis between myself and Andy Dessler, I decided that this would be the best way to handle it under the current circumstances:


An Open Letter of Encouragement to Dr. Dessler

Dear Andy:

Thank you for the issues you have raised in your new paper, which I was only recently made aware of after it had already been peer reviewed and accepted for publication in Geophysical Research Letters.

Even though we disagree on the subject, I am pleased you have chosen to vigorously dispute the potential role of clouds in both confounding the diagnosis of the sensitivity of the climate system, as well as in contributing to climate variability and climate change.

I just wanted to encourage you to publish that paper as soon as you can, with or without the changes I suggested on my blog.

I am very sincere in my encouragement. I am anxious for the science to progress on this important issue, and so I eagerly await the official publication.

All the best,

-Roy W. Spencer

The Good, The Bad, and The Ugly: My Initial Comments on the New Dessler 2011 Study

September 7th, 2011

UPDATE: I have been contacted by Andy Dessler, who is now examining my calculations, and we are working to resolve a remaining difference there. Also, apparently his paper has not been officially published, and so he says he will change the galley proofs as a result of my blog post; here is his message:

“I’m happy to change the introductory paragraph of my paper when I get the galley proofs to better represent your views. My apologies for any misunderstanding. Also, I’ll be changing the sentence “over the decades or centuries relevant for long-term climate change, on the other hand, clouds can indeed cause significant warming” to make it clear that I’m talking about cloud feedbacks doing the action here, not cloud forcing.”

Update #2 (Sept. 8, 2011): I have made several updates as a result of correspondence with Dessler, which will appear underlined, below. I will leave it to the reader to decide whether it was our Remote Sensing paper that should not have passed peer review (as Trenberth has alleged), or Dessler’s paper meant to refute our paper.

NOTE: This post is important, so I’m going to sticky it at the top for quite a while.

While we have had only one day to examine Andy Dessler’s new paper in GRL, I do have some initial reaction and calculations to share. At this point, it looks quite likely we will be responding to it with our own journal submission… although I doubt we will get the fast-track, red-carpet treatment he got.

There are a few positive things in this new paper which make me feel like we are at least beginning to talk the same language in this debate (part of The Good). But, I believe I can already demonstrate some of The Bad, for example, showing Dessler is off by about a factor of 10 in one of his central calculations.

Finally, Dessler must be called out on The Ugly things he put in the paper (which he has now agreed to change).

1. THE GOOD

Estimating the Errors in Climate Feedback Diagnosis from Satellite Data

We are pleased that Dessler now accepts that there is at least the *potential* of a problem in diagnosing radiative feedbacks in the climate system *if* non-feedback cloud variations were to cause temperature variations. It looks like he understands the simple-forcing-feedback equation we used to address the issue (some quibbles over the equation terms aside), as well as the ratio we introduced to estimate the level of contamination of feedback estimates. This is indeed progress.

He adds a new way to estimate that ratio, and gets a number which — if accurate — would indeed suggest little contamination of feedback estimates from satellite data. This is very useful, because we can now talk about numbers and how good various estimates are, rather than responding to hand waving arguments over whether “clouds cause El Nino” or other red herrings.

I have what I believe to be good evidence that his calculation, though, is off by a factor of 10 or so. More on that under THE BAD, below.

Comparisons of Satellite Measurements to Climate Models

Figure 2 in his paper, we believe, helps make our point for us: there is a substantial difference between the satellite measurements and the climate models. He tries to minimize the discrepancy by putting 2-sigma error bounds on the plots and claiming the satellite data are not necessarily inconsistent with the models.

But this is NOT the same as saying the satellite data SUPPORT the models. After all, the IPCC's best estimate projection of future warming from a doubling of CO2 (3 deg. C) is almost exactly the average of all of the models' sensitivities! So, when the satellite observations do depart substantially from the average behavior of the models, this raises an obvious red flag.

Massive changes in the global economy based upon energy policy are not going to happen, if the best the modelers can do is claim that our observations of the climate system are not necessarily inconsistent with the models.

(BTW, a plot of all of the models, which so many people have been clamoring for, will be provided in The Ugly, below.)

2. THE BAD

The Energy Budget Estimate of How Much Clouds Cause Temperature Change

While I believe he gets a "bad" number, this is the most interesting and most useful part of Dessler's paper. He basically uses the terms in the forcing-feedback equation we use (which is based upon basic energy budget considerations) to claim that the energy required to cause the observed changes in the global-average ocean mixed layer temperature is far too large to be caused by satellite-observed variations in the radiative input into the ocean brought about by cloud variations (my wording).

He gets a ratio of about 20:1 for non-radiatively forced (i.e. non-cloud) temperature changes versus radiatively (mostly cloud) forced variations. If that 20:1 number is indeed good, then we would have to agree this is strong evidence against our view that a significant part of temperature variations are radiatively forced. (It looks like Andy will be revising this downward, although it’s not clear by how much because his paper is ambiguous about how he computed and then combined the radiative terms in the equation, below.)

The numbers he uses to do this, however, are quite suspect. Dessler uses NONE of the 3 most direct estimates that most researchers would use for the various terms. (A clarification on this appears below). Why? I know we won’t be so crass as to claim in our next peer-reviewed publication (as he did in his, see The Ugly, below) that he picked certain datasets because they best supported his hypothesis.

The following graphic shows the relevant equation, and the numbers he should have used since they are the best and most direct observational estimates we have of the pertinent quantities. I invite the more technically inclined to examine this. For those geeks with calculators following along at home, you can run the numbers yourself:

Here I went ahead and used Dessler’s assumed 100 meter depth for the ocean mixed layer, rather than the 25 meter depth we used in our last paper. (It now appears that Dessler will be using a 700 m depth, a number which was not mentioned in his preprint. I invite you to read his preprint and decide whether he is now changing from 100 m to 700 m as a result of issues I have raised here. It really is not obvious from his paper what he used).

Using the above equation, if I assumed a feedback parameter λ=3 Watts per sq. meter per degree, that 20:1 ratio Dessler gets becomes 2.2:1. If I use a feedback parameter of λ=6, then the ratio becomes 1.7:1. This is basically an order of magnitude difference from his calculation.
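For the geeks following along at home, here is a sketch, in Python, of how a calculation of this kind is structured. Every time series in it is a synthetic stand-in — these are NOT the Levitus, HadSST, or CERES data, and the partitioning of the net flux into forcing and feedback is my shorthand reconstruction, not necessarily Dessler’s — so the printed numbers mean nothing in themselves. The point is to show where the assumed feedback parameter λ and the assumed mixed-layer depth enter the ratio:

```python
# Structural sketch only: synthetic monthly anomalies, illustrative
# magnitudes, NOT real observational data.
import math, random
random.seed(0)

RHO_CP = 1025.0 * 4186.0        # seawater volumetric heat capacity, J/(m^3 K)
SECONDS_PER_MONTH = 2.63e6

def forcing_ratio(depth_m, lam, n_months=120):
    Cp = RHO_CP * depth_m       # areal heat capacity of the mixed layer, J/(m^2 K)
    # synthetic stand-in anomalies:
    dT = [random.gauss(0.0, 0.1) for _ in range(n_months)]    # mixed-layer temp, K
    net = [random.gauss(0.0, 1.0) for _ in range(n_months)]   # net radiative flux, W/m^2
    # temperature tendency converted to an implied heating rate, W/m^2
    dTdt = [Cp * (dT[i + 1] - dT[i]) / SECONDS_PER_MONTH
            for i in range(n_months - 1)]
    # energy budget: Cp dT/dt = F_nonrad + Net, with Net = F_rad - lam*dT,
    # so F_nonrad = Cp dT/dt - Net  and  F_rad = Net + lam*dT
    f_nonrad = [dTdt[i] - net[i] for i in range(n_months - 1)]
    f_rad = [net[i] + lam * dT[i] for i in range(n_months - 1)]
    sd = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
    return sd(f_nonrad) / sd(f_rad)

for lam in (3.0, 6.0):
    print(f"lambda = {lam}: ratio = {forcing_ratio(100.0, lam):.1f}")
```

Note that the areal heat capacity Cp scales directly with the assumed mixed-layer depth, and λ changes how much of the net flux gets counted as radiative forcing — which is why the assumed values of both matter so much to the final ratio.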

Again I ask: why did Dessler choose NOT to use the 3 most obvious and best sources of data to evaluate the terms in the above equation?

(1) Levitus for observed changes in the ocean mixed layer temperature (it now appears he will be using a number consistent with the Levitus 0-700 m layer);

(2) CERES Net radiative flux for the total of the 2 radiative terms in the above equation (this looks like it could be a minor source of difference, except it appears he put all of his Rcld variability in the radiative forcing term, which he claims helps our position — but running the numbers reveals the opposite is true, since his Rcld actually contains both forcing and feedback components which partially offset each other);

(3) HadSST for sea surface temperature variations (this will likely be the smallest source of difference).
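The offsetting-components point in item (2) can be demonstrated with a toy calculation. Everything here is synthetic — a deliberately simple construction of mine, not Dessler’s actual Rcld — but it shows how lumping a forcing term together with an opposing feedback term shrinks the apparent variability of the forcing:

```python
# Toy demonstration: if a lumped "cloud" radiative term contains both a
# forcing part and a feedback part of opposite sign, the two partially
# cancel, and the variability of the sum understates the forcing.
import math, random
random.seed(1)

n = 240
lam = 3.0                                             # assumed feedback, W/m^2/K
forcing = [random.gauss(0.0, 1.0) for _ in range(n)]  # cloud forcing anomaly
temp = [0.2 * f for f in forcing]                     # toy temperature response
feedback = [-lam * t for t in temp]                   # radiative feedback opposes it
r_cld = [forcing[i] + feedback[i] for i in range(n)]  # lumped "cloud" term

sd = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
print(f"sd(forcing) = {sd(forcing):.2f}, sd(forcing + feedback) = {sd(r_cld):.2f}")
```

In this toy setup the lumped term has only 40% of the forcing’s variability — so treating it as pure forcing (or pure feedback) misstates what the underlying cloud forcing is doing.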

The Use of AMIP Models to Claim our Lag Correlations Were Spurious

I will admit, this was pretty clever…but at this early stage I believe it is a red herring.

Dessler’s Fig. 1 shows lag correlation coefficients that, I admit, do look kind of like the ones we got from satellite (and CMIP climate model) data. The claim is that since the AMIP model runs do not allow clouds to cause surface temperature changes, this means the lag correlation structures we published are not evidence of clouds causing temperature change.

Following are the first two objections which immediately come to my mind:

1) Imagine (I’m again talking mostly to you geeks out there) a time series of temperature represented by a sine wave, and then a lagged feedback response represented by another sine wave. If you then calculate regression coefficients between those 2 time series at different time leads and lags (try this in Excel if you want), you will indeed get the lag correlation structure we see in the satellite data.
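For those who would rather not fire up Excel, here is that two-sine-wave experiment in a few lines of Python (the period and lag are arbitrary choices of mine, just for illustration):

```python
# Two sine waves -- "temperature" and a delayed "feedback" response --
# regressed against each other at a range of leads and lags.
import math

N = 360        # months of synthetic data
PERIOD = 48    # months per cycle (arbitrary)
LAG = 4        # feedback lags temperature by 4 months (arbitrary)

temp = [math.sin(2 * math.pi * i / PERIOD) for i in range(N)]
feedback = [math.sin(2 * math.pi * (i - LAG) / PERIOD) for i in range(N)]

def regression_slope(x, y):
    """OLS slope of y on x (both series are zero-mean sines here)."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / sxx

# slope at each lead/lag: positive k means feedback is shifted later
slopes = {}
for k in range(-12, 13):
    if k >= 0:
        slopes[k] = regression_slope(temp[:N - k], feedback[k:])
    else:
        slopes[k] = regression_slope(temp[-k:], feedback[:N + k])

peak = max(slopes, key=lambda k: slopes[k])
print(f"regression slope peaks at lag {peak} months")  # peaks at the built-in lag of 4
```

The regression coefficient peaks at the built-in lag and falls off on either side — the same kind of lead-lag structure we published from the satellite data, produced here by nothing more than a delayed response.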

But look at what Dessler has done: he has used models which DO NOT ALLOW cloud changes to affect temperature, in order to support his case that cloud changes do not affect temperature! While I will have to think about this some more, it smacks of circular reasoning. He could have more easily demonstrated it with my 2 sine waves example.

Assuming there is causation in only one direction to produce evidence there is causation in only one direction seems, at best, a little weak.

2) In the process, though, what does his Fig. 1 show that is significant to feedback diagnosis, if we accept that all of the radiative variations are, as Dessler claims, feedback-induced? Exactly what the new paper by Lindzen and Choi (2011) explores: that there is some evidence of a lagged response of radiative feedback to a temperature change.

And, if this is the case, then why isn’t Dr. Dessler doing his regression-based estimates of feedback at the time lag of maximum response, as Lindzen now advocates?

Steve McIntyre, to whom I have provided the data, is also examining this as one of several statistical issues. So, Dessler’s Fig. 1 actually raises a critical issue in feedback diagnosis that he has yet to address.

3. THE UGLY

(MOST, IF NOT ALL, OF THESE OBJECTIONS WILL BE ADDRESSED IN DESSLER’S UPDATE OF HIS PAPER BEFORE PUBLICATION)

The new paper contains a few statements which the reviewers should not have allowed to be published because they either completely misrepresent our position, or accuse us of cherry picking (which is easy to disprove).

Misrepresentation of Our Position

Quoting Dessler’s paper, from the Introduction:

“Introduction
The usual way to think about clouds in the climate system is that they are a feedback… …In recent papers, Lindzen and Choi [2011] and Spencer and Braswell [2011] have argued that reality is reversed: clouds are the cause of, and not a feedback on, changes in surface temperature. If this claim is correct, then significant revisions to climate science may be required.”

But we have never claimed anything like “clouds are the cause of, and not a feedback on, changes in surface temperature”! We claim causation works in BOTH directions, not just one direction (feedback) as he claims. Dr. Dessler knows this very well, and I would like to know:

1) what he was trying to accomplish by such a blatant misrepresentation of our position, and

2) how all of the peer reviewers of the paper, who (if they are competent) should be familiar with our work, allowed such a statement to stand?

Cherry picking of the Climate Models We Used for Comparison

This claim has been floating around the blogosphere ever since our paper was published. To quote Dessler:

“SB11 analyzed 14 models, but they plotted only six models and the particular observational data set that provided maximum support for their hypothesis. “

How is picking the 3 most sensitive models AND the 3 least sensitive models going to “provide maximum support for (our) hypothesis”? If I had picked ONLY the 3 most sensitive, or ONLY the 3 least sensitive, that might be cherry picking…depending upon what was being demonstrated.

And where is the evidence those 6 models produce the best support for our hypothesis? I would have had to run hundreds of combinations of the 14 models to accomplish that. Is that what Dr. Dessler is accusing us of?

Instead, the point of using the 3 most sensitive and 3 least sensitive models was to emphasize that not only are the most sensitive climate models inconsistent with the observations, so are the least sensitive models.

Remember, the IPCC’s best estimate of 3 deg. C warming is almost exactly the warming produced by averaging the full range of its models’ sensitivities together. The satellite data depart substantially from that. I think inspection of Dessler’s Fig. 2 supports my point.

But, since so many people are wondering about the 8 models I left out, here are all 14 of the models’ separate results, in their full, individual glory:

I STILL claim there is a large discrepancy between the satellite observations and the behavior of the models.

CONCLUSION

These are my comments and views after having had the new paper in hand for only one day. It will take weeks, at a minimum, to further explore all of the issues raised by Dessler (2011).

Based upon the evidence above, I would say we are indeed going to respond with a journal submission to answer Dessler’s claims. I hope that GRL will offer us as rapid a turnaround as Dessler got in the peer review process. Feel free to take bets on that. 🙂

And, to end on a little lighter note, we were quite surprised to see this statement in Dessler’s paper in the Conclusions (italics are mine):

“These calculations show that clouds did not cause significant climate change over the last decade (over the decades or centuries relevant for long-term climate change, on the other hand, clouds can indeed cause significant warming).”

Long term climate change can be caused by clouds??! Well, maybe Andy is finally seeing the light! 😉 (Nope. It turns out he meant ” *RADIATIVE FEEDBACK DUE TO* clouds can indeed cause significant warming”. An obvious, minor typo. My bad.)