The Good, The Bad, and The Ugly: My Initial Comments on the New Dessler 2011 Study

September 7th, 2011

UPDATE: I have been contacted by Andy Dessler, who is now examining my calculations, and we are working to resolve a remaining difference there. Also, apparently his paper has not been officially published, and so he says he will change the galley proofs as a result of my blog post; here is his message:

“I’m happy to change the introductory paragraph of my paper when I get the galley proofs to better represent your views. My apologies for any misunderstanding. Also, I’ll be changing the sentence “over the decades or centuries relevant for long-term climate change, on the other hand, clouds can indeed cause significant warming” to make it clear that I’m talking about cloud feedbacks doing the action here, not cloud forcing.”

Update #2 (Sept. 8, 2011): I have made several updates as a result of correspondence with Dessler, which will appear underlined, below. I will leave it to the reader to decide whether it was our Remote Sensing paper that should not have passed peer review (as Trenberth has alleged), or Dessler’s paper meant to refute our paper.

NOTE: This post is important, so I’m going to sticky it at the top for quite a while.
While we have had only one day to examine Andy Dessler’s new paper in GRL, I do have some initial reaction and calculations to share. At this point, it looks quite likely we will be responding to it with our own journal submission… although I doubt we will get the fast-track, red carpet treatment he got.

There are a few positive things in this new paper which make me feel like we are at least beginning to talk the same language in this debate (part of The Good). But, I believe I can already demonstrate some of The Bad, for example, showing Dessler is off by about a factor of 10 in one of his central calculations.

Finally, Dessler must be called out on The Ugly things he put in the paper (which he has now agreed to change).

1. THE GOOD

Estimating the Errors in Climate Feedback Diagnosis from Satellite Data

We are pleased that Dessler now accepts that there is at least the *potential* of a problem in diagnosing radiative feedbacks in the climate system *if* non-feedback cloud variations were to cause temperature variations. It looks like he understands the simple-forcing-feedback equation we used to address the issue (some quibbles over the equation terms aside), as well as the ratio we introduced to estimate the level of contamination of feedback estimates. This is indeed progress.

He adds a new way to estimate that ratio, and gets a number which — if accurate — would indeed suggest little contamination of feedback estimates from satellite data. This is very useful, because we can now talk about numbers and how good various estimates are, rather than responding to hand waving arguments over whether “clouds cause El Nino” or other red herrings.

I have what I believe to be good evidence, though, that his calculation is off by a factor of 10 or so. More on that under THE BAD, below.

Comparisons of Satellite Measurements to Climate Models

Figure 2 in his paper, we believe, helps make our point for us: there is a substantial difference between the satellite measurements and the climate models. He tries to minimize the discrepancy by putting 2-sigma error bounds on the plots and claiming the satellite data are not necessarily inconsistent with the models.

But this is NOT the same as saying the satellite data SUPPORT the models. After all, the IPCC’s best-estimate projection of future warming from a doubling of CO2 (3 deg. C) is almost exactly the average of all of the models’ sensitivities! So, when the satellite observations depart substantially from the average behavior of the models, this raises an obvious red flag.

Massive changes in the global economy based upon energy policy are not going to happen, if the best the modelers can do is claim that our observations of the climate system are not necessarily inconsistent with the models.

(BTW, a plot of all of the models, which so many people have been clamoring for, will be provided in The Ugly, below.)

2. THE BAD

The Energy Budget Estimate of How Much Clouds Cause Temperature Change

While I believe he gets a “bad” number, this is the most interesting and most useful part of Dessler’s paper. He uses the terms in the forcing-feedback equation we use (which is based upon simple energy budget considerations) to claim that the energy required to cause the observed changes in the global-average ocean mixed layer temperature is far too large to have been supplied by the satellite-observed variations in the radiative input into the ocean brought about by cloud variations (my wording).

He gets a ratio of about 20:1 for non-radiatively forced (i.e. non-cloud) temperature changes versus radiatively (mostly cloud) forced variations. If that 20:1 number is indeed good, then we would have to agree this is strong evidence against our view that a significant part of temperature variations are radiatively forced. (It looks like Andy will be revising this downward, although it’s not clear by how much because his paper is ambiguous about how he computed and then combined the radiative terms in the equation, below.)

The numbers he uses to do this, however, are quite suspect. Dessler uses NONE of the 3 most direct estimates that most researchers would use for the various terms. (A clarification on this appears below). Why? I know we won’t be so crass as to claim in our next peer-reviewed publication (as he did in his, see The Ugly, below) that he picked certain datasets because they best supported his hypothesis.

The following graphic shows the relevant equation, and the numbers he should have used since they are the best and most direct observational estimates we have of the pertinent quantities. I invite the more technically inclined to examine this. For those geeks with calculators following along at home, you can run the numbers yourself:

Here I went ahead and used Dessler’s assumed 100 meter depth for the ocean mixed layer, rather than the 25 meter depth we used in our last paper. (It now appears that Dessler will be using a 700 m depth, a number which was not mentioned in his preprint. I invite you to read his preprint and decide whether he is now changing from 100 m to 700 m as a result of issues I have raised here. It really is not obvious from his paper what he used).

Using the above equation, if I assume a feedback parameter λ=3 Watts per sq. meter per degree, that 20:1 ratio Dessler gets becomes 2.2:1. If I instead use a feedback parameter of λ=6, the ratio becomes 1.7:1. Either way, this is roughly an order of magnitude smaller than his calculation.
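For the geeks who want to run this kind of calculation themselves, here is a minimal sketch of the energy-budget bookkeeping involved. To be clear, this is NOT Dessler’s code or ours: the monthly time series below are random placeholders standing in for the Levitus mixed-layer temperatures, the CERES net radiative fluxes, and the HadSST anomalies, and the mixed-layer depth and feedback parameter are assumptions you can change:

import numpy as np

SECONDS_PER_MONTH = 86400.0 * 30.4
rho, cp = 1025.0, 4000.0           # seawater density (kg/m^3) and specific heat (J/kg/K)
h = 100.0                          # assumed mixed-layer depth (m); we used 25 m in our paper
C = rho * cp * h                   # areal heat capacity of the mixed layer (J/m^2/K)
lam = 3.0                          # assumed net feedback parameter (W/m^2/K)

rng = np.random.default_rng(0)
n = 120                            # ten years of monthly anomalies (placeholder length)
T_mix = rng.normal(0.0, 0.1, n)    # placeholder for Levitus mixed-layer temperature anomalies (K)
R_cld = rng.normal(0.0, 0.5, n)    # placeholder for cloud-induced radiative anomalies (W/m^2)
T_sfc = rng.normal(0.0, 0.1, n)    # placeholder for HadSST surface temperature anomalies (K)

# Left-hand side of C*dT/dt = R_cld + F_nonrad - lam*T, expressed as a flux (W/m^2)
dTdt_flux = C * np.gradient(T_mix) / SECONDS_PER_MONTH

# Rearranging gives the non-radiative (ocean heat transport) term as a residual
F_nonrad = dTdt_flux - R_cld + lam * T_sfc

ratio = np.std(F_nonrad) / np.std(R_cld)
print(f"non-radiative : radiative forcing ratio ~ {ratio:.1f} : 1")

Plug in the real Levitus, CERES and HadSST anomalies and the λ of your choice, and you can see for yourself how sensitive that ratio is to those choices.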

Again I ask: why did Dessler choose to NOT use the 3 most obvious and best sources of data to evaluate the terms in the above equation?:
(1) Levitus for observed changes in the ocean mixed layer temperature; (it now appears he will be using a number consistent with the Levitus 0-700 m layer).

(2) CERES net radiative flux for the total of the 2 radiative terms in the above equation. (This looks like it could be a minor source of difference, except that he appears to have put all of his Rcld variability in the radiative forcing term. He claims this helps our position, but running the numbers reveals the opposite, since his Rcld actually contains both forcing and feedback components which partially offset each other.)

(3) HadSST for sea surface temperature variations (this will likely be the smallest source of difference).

The Use of AMIP Models to Claim our Lag Correlations Were Spurious

I will admit, this was pretty clever…but at this early stage I believe it is a red herring.

Dessler’s Fig. 1 shows lag correlation coefficients that, I admit, do look kind of like the ones we got from satellite (and CMIP climate model) data. The claim is that since the AMIP model runs do not allow clouds to cause surface temperature changes, this means the lag correlation structures we published are not evidence of clouds causing temperature change.

Following are the first two objections which immediately come to my mind:

1) Imagine (I’m again talking mostly to you geeks out there) a time series of temperature represented by a sine wave, and then a lagged feedback response represented by another sine wave. If you then calculate regression coefficients between those 2 time series at different time leads and lags (try this in Excel if you want, or see the sketch after these two objections), you will indeed get a lag correlation structure like the one we see in the satellite data.

But look at what Dessler has done: he has used models which DO NOT ALLOW cloud changes to affect temperature, in order to support his case that cloud changes do not affect temperature! While I will have to think about this some more, it smacks of circular reasoning. He could have more easily demonstrated it with my 2 sine waves example.

Assuming there is causation in only one direction to produce evidence there is causation in only one direction seems, at best, a little weak.

2) In the process, though, what does his Fig. 1 show that is significant to feedback diagnosis, if we accept that all of the radiative variations are, as Dessler claims, feedback-induced? Exactly what the new paper by Lindzen and Choi (2011) explores: that there is some evidence of a lagged response of radiative feedback to a temperature change.

And, if this is the case, then why isn’t Dr. Dessler doing his regression-based estimates of feedback at the time lag of maximum response, as Lindzen now advocates?

Steve McIntyre, to whom I have provided the data for him to explore, is also examining this as one of several statistical issues. So, Dessler’s Fig. 1 actually raises a critical issue in feedback diagnosis he has yet to address.
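Here is the two-sine-wave illustration from objection (1) above, written out as a short script instead of an Excel exercise. The period, lag, and noise level are arbitrary choices made purely for illustration:

import numpy as np

n = 240                              # months of synthetic data
t = np.arange(n)
period = 48.0                        # arbitrary ~4-year oscillation (months)
lag0 = 3                             # assumed lag of the radiative response behind temperature (months)

temp = np.sin(2 * np.pi * t / period)
rad = 2.0 * np.roll(temp, lag0)      # "feedback" responding 3 months after the temperature wave
rad += np.random.default_rng(1).normal(0.0, 0.3, n)   # a little noise

def lag_regression(x, y, lag):
    # Regression slope of y on x, with y shifted by 'lag' months (lag > 0 means y lags x)
    if lag > 0:
        x_, y_ = x[:-lag], y[lag:]
    elif lag < 0:
        x_, y_ = x[-lag:], y[:lag]
    else:
        x_, y_ = x, y
    return np.polyfit(x_, y_, 1)[0]

for lag in range(-12, 13, 3):
    print(f"lag {lag:+3d} months: regression coefficient = {lag_regression(temp, rad, lag):+.2f}")

The regression coefficients peak near the assumed 3-month lag and fall off on either side, which is the kind of lead-lag structure being discussed here, produced without any climate model at all.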

3. THE UGLY

(MOST, IF NOT ALL, OF THESE OBJECTIONS WILL BE ADDRESSED IN DESSLER’S UPDATE OF HIS PAPER BEFORE PUBLICATION)

The new paper contains a few statements which the reviewers should not have allowed to be published because they either completely misrepresent our position, or accuse us of cherry picking (which is easy to disprove).

Misrepresentation of Our Position

Quoting Dessler’s paper, from the Introduction:

“Introduction
The usual way to think about clouds in the climate system is that they are a feedback… …In recent papers, Lindzen and Choi [2011] and Spencer and Braswell [2011] have argued that reality is reversed: clouds are the cause of, and not a feedback on, changes in surface temperature. If this claim is correct, then significant revisions to climate science may be required.”

But we have never claimed anything like “clouds are the cause of, and not a feedback on, changes in surface temperature”! We claim causation works in BOTH directions, not just one direction (feedback) as he claims. Dr. Dessler knows this very well, and I would like to know:

1) what he was trying to accomplish by such a blatant misrepresentation of our position, and

2) how did all of the peer reviewers of the paper, who (if they are competent) should be familiar with our work, allow such a statement to stand?

Cherry Picking of the Climate Models We Used for Comparison

This claim has been floating around the blogosphere ever since our paper was published. To quote Dessler:

“SB11 analyzed 14 models, but they plotted only six models and the particular observational data set that provided maximum support for their hypothesis. “

How is picking the 3 most sensitive models AND the 3 least sensitive models going to “provide maximum support for (our) hypothesis”? If I had picked ONLY the 3 most sensitive, or ONLY the 3 least sensitive, that might be cherry picking…depending upon what was being demonstrated.

And where is the evidence those 6 models produce the best support for our hypothesis? I would have had to run hundreds of combinations of the 14 models to accomplish that. Is that what Dr. Dessler is accusing us of?

Instead, the point of using the 3 most sensitive and 3 least sensitive models was to emphasize that not only are the most sensitive climate models inconsistent with the observations, so are the least sensitive models.

Remember, the IPCC’s best estimate of 3 deg. C warming is almost exactly the warming produced by averaging the full range of its models’ sensitivities together. The satellite data depart substantially from that. I think inspection of Dessler’s Fig. 2 supports my point.

But, since so many people are wondering about the 8 models I left out, here are all 14 of the models’ separate results, in their full, individual glory:

I STILL claim there is a large discrepancy between the satellite observations and the behavior of the models.

CONCLUSION

These are my comments and views after having only 1 day since we received the new paper. It will take weeks, at a minimum, to further explore all of the issues raised by Dessler (2011).

Based upon the evidence above, I would say we are indeed going to respond with a journal submission to answer Dessler’s claims. I hope that GRL will offer us as rapid a turnaround as Dessler got in the peer review process. Feel free to take bets on that. 🙂

And, to end on a little lighter note, we were quite surprised to see this statement in Dessler’s paper in the Conclusions (italics are mine):

“These calculations show that clouds did not cause significant climate change over the last decade (over the decades or centuries relevant for long-term climate change, on the other hand, clouds can indeed cause significant warming).”

Long term climate change can be caused by clouds??! Well, maybe Andy is finally seeing the light! 😉 (Nope. It turns out he meant ” *RADIATIVE FEEDBACK DUE TO* clouds can indeed cause significant warming”. An obvious, minor typo. My bad.)

Dessler vs. Rick Perry: Is the 2011 Texas Drought Evidence of Human-Caused Climate Change?

September 5th, 2011

One of the most annoying things about the climate change debate is that any regional weather event is blamed on humans, if even only partly. Such unscientific claims cannot be supported by data — they are little more than ambiguous statements of faith.

The current “exceptional” Texas drought is no exception. People seem to have short memories…especially if they were born after most of the major climate events of the past occurred.

Andy Dessler recently made what I’m sure he thought was a safe claim when faulting Texas Gov. Rick Perry for being “cavalier about climate change” (as if we could stop climate from changing by being concerned about it).

Dessler said, “..warming has almost certainly made the (Texas) heat wave and drought more extreme than it would otherwise have been.”

This clever tactic of claiming near-certainty of at least SOME effect of humans on weather events was originally invented by NASA’s James Hansen in his 1988 Senate testimony for Al Gore, an event that became the turning point for raising public awareness of “global warming” (oops, I’m sorry, I mean climate change).

The trouble is that climate change theory predicts changes, up and down, in just about anything you can imagine. So, anything unusual that happens anywhere, anytime, is deemed “consistent” with global warming.

But this tactic can work both ways — a specific drought might have instead been made LESS severe by the general tendency toward MORE rainfall, which is a much more robust prediction of the climate models with warming.

For example, let’s look at June-July total rainfall over the whole contiguous U.S. — which is only 1.6% of the Earth’s surface — over the last 100+ years (August data are not yet posted at NCDC):

What we see are some major drought events, and 2011 is not one of the big ones. The Big Kahuna was the Dust Bowl of the 1930’s. The 1950’s also experienced record droughts (see the animation here). These were before increasing CO2 in the atmosphere could be reasonably blamed for anything, except maybe enhancing plant growth a little.

Even NOAA’s Tom Karl back in 1981, before global warming politics took over his job description at NCDC, authored a paper on how the 1980 drought (which was pretty darn bad, long-lived, and widespread) was less severe than those in the 1930s and 1950s:

Note the price tag of the 1980 drought: $43 Billion. They are saying the current TX-OK drought will run somewhere north of $5 Billion.

But what do we ALSO see in the long term in the above U.S. rainfall plot? If anything, an UPWARD trend in rainfall. This is for the whole U.S., not just Texas….

Yes, I Know Texas is Like a Whole Other Country…
…but it is only 0.14% of the surface of the Earth. It is much easier for naturally-occurring stagnant weather patterns to cause drought (or flood) conditions over one, or even several states, because the descending (or ascending) portions of weather systems cover these smaller regional scales. It’s rare for them to cover the whole U.S.

So, now let’s look at what the rainfall record looks like for Texas:

Even though the August data are not yet available at this writing, I’m quite sure this year’s Texas drought will indeed be a record one….at least in the rather short (in climate parlance) period of record (just over 100 years) that we have enough rainfall data to analyze.

But what else do we see in the record? How about that big rainfall PEAK in 2007? I’ll bet someone can dig up an “expert” back in 2007 saying the Texas floods of 2007 were also caused by global warming.

And note that the long-term rainfall trend in Texas is not downward.

Surely, Dr. Dessler knows that a single data point (2011) does not constitute a “trend”.

The fact is that record dry (and wet) years in relatively small regions are actually quite common…because they usually happen in different places each year. Weather records are location-specific. This year is Texas’ super-drought year. Last year it was in part of Ukraine. Next year it will be somewhere else, maybe multiple places.

And even if droughts (or floods) do end up becoming more frequent, the question of just how much of that change can be blamed on humans versus Mother Nature still remains unanswered…unless you accept the pseudo-scientific faith-based statements put out by the IPCC leadership.

More Thoughts on the War Being Waged Against Us

September 5th, 2011

After having a day or so to digest some of what others have said about this whole mess, I’ve been trying to find better ways of expressing the science which is being disputed here. I’ve also gone back and tried to figure out exactly which part of our analysis was (supposedly) in error.

A Re-Examination of our Paper
So, first I went back and re-read our paper to find out what we did that was so seriously in error that it caused the journal’s Editor-in-Chief to resign (but not retract the paper?).

My conclusion is that it is still one damn fine and convincing paper. The evidence verges on being indisputable.

Not only did our paper not ignore previous work on the subject (as Kevin Trenberth has accused us of doing); our main purpose was to show why the data analysis methods commonly used in previous work were wrong. To accuse us of ignoring previous work reveals either total ignorance or deception on the part of our critics. (Publishing a paper that “ignored previous work” was a central reason given by the Editor-in-Chief for his resignation).

The key figures in our paper are Fig. 3 & Fig. 4. We reveal the large discrepancy between climate models and observations in how the Earth gains & loses energy to space during warming and cooling, and show based upon basic forcing-feedback theory why most previous estimates of feedbacks from observational data are (1) virtually worthless, and (2) have likely given the illusion of higher climate sensitivity than what really exists in nature.

It is something we have shown before using phase space analysis.

We are told our paper will indeed be disputed this week, as Andy Dessler has hurriedly written and gotten favorable peer review on a paper in Geophysical Research Letters. (Gee, I wonder if the peer reviewers were also associated with the IPCC, whose models they are trying to protect from scrutiny?)

We Need Scientific Analysis, Not Opinion Polls of Scientists
What is particularly frustrating in all this is the lack of people who are willing to actually read our papers and examine the evidence. Most, if not all, of our critics could not even explain what we have shown with the evidence. They simply assume we must be wrong.

They instead resort to nearly libelous ad hominem attacks, and hand-waving objections which are either straw men, red herrings, or just plain false.

They claim the model we used was “bad” (even though it is commonly used in many previous studies, and was recommended to us by one of the leading climate modelers in the world), and that it was “tuned” to match the data. The last claim is absolutely hilarious, since the more complex climate models they use are constantly being re-tuned by small armies of scientists in efforts to get them to better agree with the observed behavior of the climate system.

Our critics then repeat each others’ talking points to the press and in blogs, and since few outsiders are willing to actually read our papers, the public resorts to simply accepting opinions they hear through the various media outlets.

Where Have All the Real Scientists Gone?
The basic issue we research is not that difficult to understand. And unless a few of you physicist-types out there get involved and provide some truly independent analysis of all this, the few of us who are revealing why the IPCC climate models being used to predict global warming are nowhere close to having been “validated” are going to lose this battle.

We simply cannot compete with a good-ole-boy, group think, circle-the-wagons peer review process which has been rewarded with billions of research dollars to support certain policy outcomes.

It is obvious to many people what is going on behind the scenes. The next IPCC report (AR5) is now in preparation, and there is a gut-busting effort going on to make sure that either (1) no scientific papers get published which could get in the way of the IPCC’s politically-motivated goals, or (2) any critical papers that DO get published are discredited with any and all means available.

We are constantly expected to meet a higher standard than our critics hold themselves to when it comes to getting research proposals funded or research results published. This war was going on many years before the ClimateGate e-mails were leaked and revealed the central players’ active interference in the peer review process. We seldom complained about this professional bias against us because it ends up sounding like sour grapes.

But when we are actively being accused of what the other side is guilty of, I will not stay silent.

And (BTW) we get no funding from Big Oil or other private energy interests. Another urban legend.

I hate to say it, but we need some sharper tools in our shed than we have right now. And the fresh eyes we need cannot have the threat of a loss of government funding hanging over their heads if what they find happens to disagree with Al Gore, James Hansen, et al.

A Primer on Our Claim that Clouds Cause Temperature Change

September 3rd, 2011

…and Why Dessler, Trenberth, and the IPCC are Wrong

After the resignation of the Editor-in-Chief at Remote Sensing over the publication of our paper in that journal, I thought it would be good to summarize as simply as I can what the controversy is all about. [I am also including Trenberth in this discussion because there is a misperception that the paper by Trenberth et al. (2010), which only dealt with the tropics, was ignored in our analysis. Believe it or not, it’s quite common to ignore previous papers that are not relevant to your own paper. 🙂 Also, Trenberth sat next to me during congressional testimony where he confidently asserted (as I recall) “clouds don’t cause climate change”.]

Are Clouds Capable of Causing Temperature Changes?
At the heart of this debate is whether cloud changes, through their ability to alter how much sunlight is allowed in to warm the Earth, can cause temperature change.

We claim they can, and have demonstrated so with both phase space plots of observed temperature versus Earth radiative budget variations here, and with lag-regression plots of the same data here, and with a forcing-feedback model of the average climate system in both of those publications. (The model we used was suggested to us by Isaac Held, Princeton-GFDL, who is hardly a global warming “skeptic”.)

The Dessler and Trenberth contrary view – as near as I can tell – is that clouds cannot cause temperature change, unless those cloud changes were themselves caused by some previous temperature change. In other words, they believe cloud changes can always be traced to some prior temperature change. This temperature-forcing-clouds direction of causation is “cloud feedback”.

Put more simply, Dessler and Trenberth believe causation between temperature and clouds only flows in one direction:

Temperature Change => Cloud Change,

whereas we and others believe (and have demonstrated) it flows in both directions,

Temperature Change <= => Cloud Change.

Why is this Important?

Because it affects our ability to find the Holy Grail of climate research: cloud feedback. Even the IPCC admits the biggest uncertainty in how much human-caused climate change we will see is the degree to which cloud feedback [temperature change => cloud change] will magnify (or reduce) the weak direct warming tendency from more CO2 in the atmosphere.

The IPCC claim is that clouds will change in response to warming in ways which magnify that warming (positive cloud feedback), but by an unknown amount. All of the 20+ climate models tracked by the IPCC exhibit weakly to strongly positive cloud feedbacks.

But we claim (and have demonstrated) that causation in the opposite direction [cloud change => temperature change] gives the illusion of positive cloud feedback, even if negative cloud feedback really exists. Thus, any attempt to estimate feedback in the real climate system must also address this source of “contamination” of the feedback signal.

It would be difficult for me to overstate the importance of this issue to global warming theory. Sufficiently positive cloud feedback could cause a global warming Armageddon. Sufficiently negative cloud feedback could more than cancel out any other positive feedbacks in the climate system, and relegate manmade global warming to the realm of just an academic curiosity.
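To put some rough numbers on that, here is a back-of-the-envelope illustration of how the assumed cloud feedback changes the equilibrium warming for a doubling of CO2. The feedback values are purely illustrative assumptions, not diagnosed quantities:

F_2x = 3.7              # radiative forcing from doubled CO2 (W/m^2)
lambda_planck = 3.2     # no-feedback (Planck) response (W/m^2/K)
other_fb = 1.2          # assumed net of the other (water vapor, lapse rate, albedo) feedbacks (W/m^2/K)

for cloud_fb in (-1.0, 0.0, +1.0):       # assumed negative, zero, positive cloud feedback (W/m^2/K)
    lambda_net = lambda_planck - other_fb - cloud_fb
    print(f"cloud feedback {cloud_fb:+.1f} W/m^2/K -> equilibrium 2xCO2 warming ~ {F_2x / lambda_net:.1f} deg. C")

With these made-up numbers, the same CO2 forcing gives anywhere from about 1.2 to 3.7 deg. C of eventual warming, depending only on the sign and size of the assumed cloud feedback.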

So, How Can We Know the Difference in these Two Directions of Causation?
There is one big difference between clouds-causing-temperature change (our view of what happens), and temperature-causing-cloud change (which is cloud feedback).

Cloud feedback happens rapidly, in a matter of days to a few weeks at the very most, due to the rapidity with which the atmosphere adjusts to a surface temperature change. In this paper, we even showed evidence that the peak net radiative feedback (from clouds + temperature + water vapor) occurs within a couple of days of peak temperature.

I have more extensive evidence now that the lag is closer to zero days.

In contrast, causation in the opposite direction (clouds forcing temperature change) involves a time lag of many months, due to the time it takes for the immense thermal inertia of the ocean to allow a temperature response to a change in absorbed sunlight.
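A rough scale estimate shows why. The e-folding response time of an ocean mixed layer of depth h under a net feedback parameter λ is roughly ρ·cp·h/λ; the values below are assumptions for illustration only:

rho, cp = 1025.0, 4000.0        # seawater density (kg/m^3) and specific heat (J/kg/K)
lam = 3.0                       # assumed net feedback parameter (W/m^2/K)

for h in (25.0, 100.0):         # mixed-layer depths (m) mentioned elsewhere in these posts
    tau_seconds = rho * cp * h / lam
    print(f"h = {h:5.0f} m  ->  response time ~ {tau_seconds / (86400 * 30.4):.0f} months")

Even a shallow 25 m mixed layer gives a response time on the order of a year; the atmosphere, by contrast, adjusts in days to weeks.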

As mentioned above, the large time lag involved in clouds-causing-temperature change can be demonstrated with either lag regression, or phase space plots of the data. There is no other explanation for this behavior than the one we have published.

We even see this behavior in the IPCC climate models themselves….every one of them.

But Why Does it Even Matter Which Direction the Causation Takes?
What we have shown repeatedly is that if there is any clouds-forcing-temperature signal present, it will always decorrelate the data (because of the inherent time lag between a cloud change and the ocean temperature response), which then confounds the estimation of feedback in statistical comparisons of the two kinds of data.

The existence of very low statistical correlation coefficients in all of the previous studies attempting to diagnose feedback in the traditional manner is, by itself, evidence of this effect. For example, the data Dessler analyzed had a correlation coefficient of about 0.1 (as far as I can tell, anyway…for some reason he chose not to list this very basic statistic in his paper. Why did the peer reviewers not catch such an obvious omission?).
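For those who want to see this decorrelation effect for themselves, here is a minimal sketch using the same simple forcing-feedback equation discussed above, C·dT/dt = S + N − λT, where S is non-radiative forcing and N is radiative (e.g. cloud) forcing. The satellite-like quantity regressed against temperature is (N − λT). All parameter values are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)
dt = 86400.0 * 30.4                  # one month (s)
n = 1200                             # 100 years of monthly steps
lam = 3.0                            # "true" feedback parameter (W/m^2/K)
C = 1025.0 * 4000.0 * 25.0           # heat capacity of a 25 m mixed layer (J/m^2/K)

def rednoise(sigma, phi=0.9):
    # AR(1) "red" noise with standard deviation sigma (forcing that persists from month to month)
    e = rng.normal(0.0, 1.0, n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i-1] + e[i]
    return sigma * x / np.std(x) if sigma > 0 else np.zeros(n)

def run(sigma_S, sigma_N):
    # Integrate C*dT/dt = S + N - lam*T, then regress the satellite-like quantity (N - lam*T) on T
    S = rng.normal(0.0, sigma_S, n)  # non-radiative forcing (W/m^2), kept white for simplicity
    N = rednoise(sigma_N)            # radiative (cloud) forcing (W/m^2)
    T = np.zeros(n)
    for i in range(1, n):
        T[i] = T[i-1] + dt * (S[i-1] + N[i-1] - lam * T[i-1]) / C
    R = N - lam * T                  # what the radiative-budget instrument would see
    return np.corrcoef(T, R)[0, 1], np.polyfit(T, R, 1)[0]

for sigma_N in (0.0, 0.5, 2.0):
    corr, slope = run(sigma_S=1.0, sigma_N=sigma_N)
    print(f"radiative forcing std {sigma_N:3.1f} W/m^2:  corr = {corr:+.2f},  diagnosed lambda = {-slope:+.2f}  (true = {lam})")

As the radiative forcing grows, the correlation collapses toward the ~0.1 level mentioned above, and the regression-diagnosed λ is biased well below its true value of 3, i.e., toward the illusion of a more sensitive climate system.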

But Couldn’t the Cloud Changes Have Been Produced by Some Previous Temperature Change?
This is a question I hear repeatedly. I will address it in 2 ways.

First, I believe the simple answer is “no”, because temperature-causing-cloud changes (cloud feedback) occurs very rapidly, with little time lag. This is because the atmosphere responds rapidly to a surface temperature change, in a matter of days to weeks at the most.

Secondly, just for the sake of argument, let’s assume our critics are right, and there IS a substantial time lag in the cloud feedback response to a temperature change. As I have challenged Dessler to do, if he really believes that is happening, then he should do LAGGED regression to estimate feedback…that is, adjust for the time lag in his regression analysis.

And when he does that, his weak positive cloud feedback diagnosis will suddenly turn into a negative feedback diagnosis. I’ve done it, and it is what Lindzen and Choi did in their recently published paper, which resulted in a diagnosis of strongly negative feedback.

We will see when Dessler’s new paper appears, reportedly being published this coming week in GRL, whether he will include time lags in his analysis.

But What Else Could Cause Clouds to Change, Besides Temperature?
Any “expert” who asks such a naive question obviously has little training in meteorology. Unfortunately, this is indeed the case for many climate scientists.

Cloud formation is influenced by countless processes…the presence of cloud condensation nuclei, the temperature lapse rate and temperature inversions, wind shear, the presence of fronts, changes in ocean upwelling, to name a few.

The climate system is a non-linear dynamical system, and it is constantly changing. Chaos is not just a short term phenomenon affecting weather. I think that long-time scale quasi-chaotic changes in ocean circulation, like that associated with the Pacific Decadal Oscillation, are capable of causing climate change. The great climate shift of 1977 is evidence of that.

Even the IPCC and the climate modelers know that the huge reflective regions of marine stratocumulus over the eastern ocean basins have a dramatic effect on climate, and so any changes in upwelling of cool water in these regions can then indirectly cause global warming or cooling.

Of course, there is also the Svensmark et al. theory of cosmic rays indirectly forcing cloud cover, and I suspect there are effects on cloud formation we have not even discovered yet.

Just because we do not understand these things well enough to put them in a climate model does not mean they don’t exist.

What it All Means
This cloud issue has become very contentious because, if we (or those working on the cosmic ray effect on clouds) are correct, it means Mother Nature is perfectly capable of causing her own climate change.

And this possibility cannot be permitted by the IPCC, because it raises the question of whether climate change, both past and future, is more natural than anthropogenic.

What is particularly discouraging is that the vast majority of scientists contacted by reporters to comment on our paper clearly had not even read the paper. They just repeated what other scientists had said. And I doubt even those original scientists read it. All they know is that it dissed the climate models, and so it must be wrong.

[We have even had papers rejected by peer reviewers who we KNOW didn’t read the paper. They objected to “claims” we never even made in our paper. This is the sad state of peer review when a scientific discipline is so politicized.]

Unfortunately, the cloud feedback holy grail, for as important as it is to knowing how much impact humans have on the climate system, still cannot be reliably diagnosed from our observations of the climate system. We have shown clear evidence here and here that the dominant influence in the satellite observations and in the models is clouds-causing-temperature change. And we have shown theoretically that in such a situation, one cannot diagnose a feedback — it is lost in the noise.

And if you try to diagnose feedback from satellite data like Dessler has, it will usually give the illusion of positive feedback — even if negative feedback is present.

Any agreement between models and observations found by studies like Dessler’s in such statistics probably just means they have similar levels of cloud-causing-temperature change, not similar feedbacks.

At the end of the day, the dirty little secret is that there is still no way to test the IPCC climate models for their feedback behavior, which means there is no way to know which (if any of them) is even close to being correct in its predictions for the future.

The very fact that the 20+ climate models the IPCC tracks still span just as wide a range of feedbacks as climate models did 20 years ago is evidence by itself that the climate community still can’t demonstrate what the real cloud feedbacks in the climate system are. Otherwise, they would tune their models accordingly.

The disconcerting conclusion is that global warming-related policy decisions are being guided by models which still have no way to be tested in their long-term predictions.

Finally, the fact that the media and pundits like Al Gore have been so successful at convincing the public that the climate models are reliable for forecasting the future shows that IPCC scientists have a much, much bigger problem with the media misrepresenting their work than I do.

And I don’t see those scientists trying to set that record straight.

UAH Global Temperature Update for August, 2011: +0.33 deg. C

September 2nd, 2011

NOTE: Updated with tropical sea surface temperatures.

The global average lower tropospheric temperature anomaly for August, 2011 retreated a little, to +0.33 deg. C (click on the image for a LARGE version):

Note that this month I have taken the liberty of adding a 3rd order polynomial fit to the data (courtesy of Excel). This is for entertainment purposes only, and should not be construed as having any predictive value whatsoever.
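For anyone who would rather reproduce the “entertainment only” curve with something other than Excel, a third-order polynomial fit is a one-liner. The short array below just reuses the 2011 global values from the table that follows as a stand-in; the real fit was done over the full anomaly record:

import numpy as np

anomalies = np.array([-0.010, -0.020, -0.101, 0.117, 0.133, 0.315, 0.374, 0.325])  # 2011 global values (deg. C)
months = np.arange(anomalies.size)

coeffs = np.polyfit(months, anomalies, deg=3)   # cubic fit, as in the plot
fitted = np.polyval(coeffs, months)
print("cubic coefficients:", np.round(coeffs, 4))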

Here are the stats…we are beginning to see cooling in the tropics from La Nina conditions which are re-emerging there:

YR MON GLOBAL NH SH TROPICS
2011 1 -0.010 -0.055 +0.036 -0.372
2011 2 -0.020 -0.042 +0.002 -0.348
2011 3 -0.101 -0.073 -0.128 -0.342
2011 4 +0.117 +0.195 +0.039 -0.229
2011 5 +0.133 +0.145 +0.121 -0.043
2011 6 +0.315 +0.379 +0.250 +0.233
2011 7 +0.374 +0.344 +0.404 +0.204
2011 8 +0.325 +0.323 +0.327 +0.157

The global sea surface temperatures from AMSR-E through the end of August are shown next. The trend line is, again, for entertainment purposes only:

The tropical SST data, of course, show the coolness of La Nina (and previous warmth of El Nino) more clearly, since they are events which have their maximum temperature signature in the tropics:

Editor-in-Chief of Remote Sensing Resigns from Fallout Over Our Paper

September 2nd, 2011

[NOTE: the August, 2011 temperature update appears below this post.]

SCORE:
IPCC :1
Scientific Progress: 0

[also see updates at end of post]

It has been brought to my attention that, as a result of all the hoopla over our paper recently published in Remote Sensing, the Editor-in-Chief, Wolfgang Wagner, has resigned. His editorial explaining his decision appears here.

First, I want to state that I firmly stand behind everything that was written in that paper.

But let’s look at the core reason for the Editor-in-Chief’s resignation, in his own words, because I want to strenuously object to it:

…In other words, the problem I see with the paper by Spencer and Braswell is not that it declared a minority view (which was later unfortunately much exaggerated by the public media) but that it essentially ignored the scientific arguments of its opponents. This latter point was missed in the review process, explaining why I perceive this paper to be fundamentally flawed and therefore wrongly accepted by the journal

But the paper WAS precisely addressing the scientific arguments made by our opponents, and showing why they are wrong! That was the paper’s starting point! We dealt with specifics, numbers, calculations…while our critics only use generalities and talking points. There is no contest, as far as I can see, in this debate. If you have some physics or radiative transfer background, read the evidence we present, the paper we were responding to, and decide for yourself.

If some scientists would like to demonstrate in their own peer-reviewed paper where *anything* we wrote was incorrect, they should submit a paper for publication. Instead, it appears the IPCC gatekeepers have once again put pressure on a journal for daring to publish anything that might hurt the IPCC’s politically immovable position that climate change is almost entirely human-caused. I can see no other explanation for an editor resigning in such a situation.

People who are not involved in scientific research need to understand that the vast majority of scientific opinions spread by the media recently, as a result of the fallout over our paper, were not even based on those scientists having read the paper. That much was obvious from the statements made to the press.

Kudos to Kerry Emanuel at MIT, and a couple other climate scientists, who actually read the paper before passing judgment.

I’m also told that RetractionWatch has a new post on the subject. Their reporter told me this morning that this was highly unusual, to have an editor-in-chief resign over a paper that was not retracted.

Apparently, peer review is now carried out by reporters calling scientists on the phone and asking their opinion on something most of them do not even do research on. A sad day for science.

UPDATE #1: Since I have been asked this question….the editor never contacted me to get my side of the issue. He apparently only sought out the opinions of those who probably could not coherently state what our paper claimed, and why.

UPDATE #2: This ad hominem-esque Guardian article about the resignation quotes an engineer (engineer??) who claims we have a history of publishing results which later turn out to be “wrong”. Oh, really? Well, in 20 years of working in this business, the only indisputable mistake we ever made (which we immediately corrected, and even published our gratitude in Science to those who found it) was in our satellite global temperature monitoring, which ended up being a small error in our diurnal drift adjustment, and even that ended up being within our stated error bars anyway. Instead, our recent papers have been pointing out the continuing mistakes OTHERS have been making, which is why our article was entitled “On the Misdiagnosis of….” Everything else has been in the realm of other scientists improving upon what we have done, which is how science works.

UPDATE #3: At the end of the Guardian article, it says Andy Dessler has a paper coming out in GRL next week, supposedly refuting our recent paper. This has GOT to be a record turnaround for writing a paper and getting it peer reviewed. And, as usual, we NEVER get to see papers that criticize our work before they get published.

The Al Gore Show: 24 Hours of Denying Reality

August 29th, 2011

Maybe the best way to summarize the main message of this post is this:

There have been no weather events observed to date – including Hurricane Irene — which can be reasonably claimed to be outside the realm of natural climate variability.

Now, you can believe – as Al Gore claims – that the present warm period we are experiencing has caused more hurricanes, more tornadoes, too much rain, too little rain, too much snow, too little snow, etc., but those are matters of faith, not of observable scientific reality.

Until a month or so ago, we were near record lows in global tropical cyclone activity, after a precipitous 6-year drop following the most recent 2005 peak in activity (click for full size version):

From what I can tell at Ryan Maue’s website, it sounds like global activity is now back up and running about normal.

Also, we have not had a Cat 3 or stronger hurricane make landfall in the U.S. in almost 6 years now, which is the longest ‘drought’ for U.S. landfalling major hurricanes on record.

There is even published evidence that the 1970s and 1980s might have experienced the lowest levels of hurricane activity in 270 years (Nyberg et al. 2007 Nature 447: 698-702), and that the 20th Century (a period of warming) experienced less hurricane activity than in previous centuries (Chenoweth and Divine 2008 Geochemistry, Geophysics, Geosystems).

Claims that warming “should” or “will” cause more hurricanes are based upon theory, that’s all. What I have listed above is based upon historical events, which suggest (if anything) that periods of warmth might also be periods of fewer hurricanes, not more.

24 Hours of Denying Reality

On September 14, Al Gore will host a “global” event called 24 Hours of Reality, which is part of his Climate Reality Project. As the website states:

“24 Hours of Reality will focus the world’s attention on the full truth, scope, scale and impact of the climate crisis. To remove the doubt. Reveal the deniers. And catalyze urgency around an issue that affects every one of us.”

From what I have been hearing, Mr. Gore will be emphasizing record weather events as proof of anthropogenic global warming. What most people don’t realize is that you can have a 100-year weather record event every year, if they occur in different places.

Besides, as a meteorologist I must question the whole idea of a 100-year event. Since even the longest weather station datasets only go back about 100 years, it is questionable whether we can even say what constitutes a 100-year event.

I especially dislike Gore’s and others’ use of the pejorative “denier”. Even some climate scientists who should know better have started using the term.

What exactly does Mr. Gore think we “deny”? Do we deny climate? No, we have been studying climate since before he could spell the word.

Do we deny global warming? No, we believe it has indeed warmed in the last few hundred years, just like it did before the Medieval Warm Period around 1000 AD:

So what do we deny, if anything? Well, what *I* deny is that we can say with any level of certainty how much of our recent warmth is due to humanity’s greenhouse gas emissions versus natural climate variability.

No one pays me to say this. It’s the most obvious scientific conclusion based upon the evidence. When the IPCC talks about the high “probability” that warming in the last 50 years is mostly manmade, they are talking about their level of faith. Statistical probabilities do not apply to one-of-a-kind, theoretically-expected events.

I could have done better in my career if I played along with the IPCC global warming talking points, which would have led to more funded contracts and more publications.

It is much easier to get published if you include phrases like, “…this suggests anthropogenic global warming could be worse than previously thought” in your study.

In contrast, Mr. Gore has made hundreds of millions of dollars by preaching his message of a “climate crisis”.

I would say that it is Mr. Gore who is the “climate denier”, since he denies the role of nature in climate variability. He instead chooses to use theory as his “reality”.

What I worry about is what will happen if we get another Hurricane Andrew (1992) which hit Miami as a Cat 5, or Camille in 1969, also a Cat 5. The reporters will probably have heart attacks.

O’Reilly, O’Bama, and the O’Conomy

August 22nd, 2011

..or, It’s Not About Money…It’s Our Standard of Living

I just endured a rather inane discussion on the O’Reilly Factor with actor/pundit Wayne Rogers and economist/comedian/actor/pundit Ben Stein, over whether President Obama helps or hurts the economy.

The debate quickly turned, as it often does, to whether it would help the economy to tax the rich more.

What annoys me the most about such debates is that they equate “money” with our standard of living.

They are not the same thing.

If you think it’s about money, then let’s just have the government print a million dollars for every man, woman, and child, and we can all sit at home and order stuff over the internet.

Oh…except the stuff we want isn’t being made anymore, because all of the workers are at home ordering stuff over the internet.

The only way for us to raise our standard of living is for people to provide as many goods and services as possible to each other which are needed and wanted. (Money is just a convenient means of exchange of goods and services which almost never have equal value…otherwise, we could just barter).

And keeping our standard of living requires rewarding (1) innovation, and (2) efficiency. It almost always involves mass production…which means business — usually of the “Big” variety — which in turn requires lots of people having jobs.

Anything that government does which stands in the way of business being allowed to do what it does best hurts productivity, which in turn reduces our standard of living.

As long as people insist on arguing over where the ‘money’ will come from, rather than what needs to be done to raise our standard of living (or even just maintain it), we will continue to be misled by politicians who might be well intentioned, but are clueless about what is required for prosperity to exist.

You can learn more about such basic economic concepts — which have been known for centuries, but every generation seems to have trouble internalizing — in my book Fundanomics.

Deep Ocean Temperature Change Spaghetti: 15 Climate Models Versus Observations

August 14th, 2011

The following comparison between the 20th Century runs from most (15) of the IPCC AR4 climate models, and Levitus observations of ocean warming during 1955-1999, further bolsters the case for a relatively low climate sensitivity: estimated here to be about 1.3 deg. C for a future doubling of atmospheric CO2. This is quite a bit lower than the IPCC’s best estimate of 3 deg. C warming.

But the behavior of the models’ temperatures in the deep ocean was not at all what I expected. They say “too many cooks spoil the broth”. Well, it looks like 15 climate modeling groups make spaghetti out of ocean temperature trends.

Deep Ocean Spaghetti

The deep-ocean temperature trends in the 15 models which had complete ocean data archived for the period 1955-1999 are surprising, because I expected to see warming at all depths. Instead, the models exhibit wildly different behaviors, with deep-ocean cooling just as likely as warming depending upon the model and ocean layer in question (click for the full-size version):


Three of the models actually produced average cooling of the full depth of the oceans while the surface warmed, which seems physically implausible to say the least. More on that in a minute.

The most common difference between the models and the observations down to 700 m (the deepest level for which we have Levitus observations to compare to) is that the models tend to warm the ocean too much. Of those models that don’t, almost all produce unexpected cooling below the mixed layer (approximately the top 100 m of the ocean).

From what I understand, the differences between the various models’ temperature trends are due to some combination of at least 3 processes:

1) CLIMATE SENSITIVITY: More sensitive models should store more heat in the ocean over time; this is the relationship we want to exploit to estimate the sensitivity of the real climate system from the rates of observed warming compared to the rates of warming in the climate models.

2) CHANGES IN VERTICAL MIXING OVER TIME: The deep ocean is filled with cold, dense water formed at high latitudes, while the upper layers are warmed by the sun. Vertical mixing acts to reduce that temperature difference. Thus, if there is strengthening of ocean mixing over time, there would be deep warming and upper ocean cooling, as the vertical temperature differential is reduced. On the other hand, weakening mixing over time would do the opposite, with deep ocean cooling and upper ocean warming. These two effects, which can be seen in a number of the models, should cancel out over the full depth of the ocean.

3) SPURIOUS TEMPERATURE TRENDS IN THE DEEP OCEAN: This is a problem that the models apparently have not fully addressed. Because it takes roughly 1,000 years for the ocean circulation to overturn, it takes a very long run of a climate model before the model’s deep ocean settles into a stable temperature, say to within 0.1 deg. C. While some knowledgeable expert reading this might want to correct me, it appears to me that some of these models have spurious temperature trends, unrelated to the CO2 (and other) forcings imposed on the models during 1955-1999. This is likely due to insufficient “spin-up” time for the model to reach a stable deep-ocean temperature. Until this problem is fixed, I don’t see how models can address the extent to which the extra heat from “global warming” is (or isn’t) being mixed into the deep ocean. Maybe the latest versions of the climate models, which will be archived in the coming months, do a better job.

Of course, we would like to exploit process (1) to get an estimate of how sensitive the real climate system is, using the Levitus observations (green curve in the above plot). Unfortunately, the 2nd and 3rd processes causing temperature trends unrelated to upper ocean warming seem to be a real problem in most of these models.

Choosing the Best Subset of Models
Obviously, using models that produce a global net cooling of the oceans during a period of surface warming (1955-1999) cannot be expected to provide a useful comparison to observations. Maybe someone can correct me, but it appears those models do not even conserve energy.

For quantitative comparison to the Levitus observations, I decided to accept only those models whose monthly mixed layer temperature variations during 1955-1999 had at least a 0.7 correlation with the Levitus observations in the 0-50 m layer over the same period. In other words, I omitted the models that behaved least like the observations.
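The screening step itself is simple enough to sketch in a few lines. The series below are random placeholders; the real calculation uses each model’s monthly 0-50 m temperatures and the corresponding Levitus observations for 1955-1999:

import numpy as np

rng = np.random.default_rng(3)
n_months = 45 * 12                                    # 1955-1999, monthly
levitus_0_50m = rng.normal(0.0, 0.1, n_months)        # placeholder for the observed 0-50 m anomalies

models = {f"model_{k:02d}": rng.normal(0.0, 0.1, n_months) for k in range(15)}   # placeholder model output

accepted = {name: series for name, series in models.items()
            if np.corrcoef(series, levitus_0_50m)[0, 1] >= 0.7}
print(f"{len(accepted)} of {len(models)} models pass the 0.7 correlation screen")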

The remaining 4 models’ trends are shown in the next plot, along with their average (click for large version):

Note that the 4-model average (black curve) shows warming down to about 2,500 meters depth. Now that we have excluded the models that are the most unrealistic, we can make a more meaningful quantitative comparison between the Levitus observations (which I have extrapolated to zero warming at 3,000 m depth) and the models.

Estimating Climate Sensitivity from the Levitus Observations
As reported by Forster & Taylor (2006 J. Climate), the average surface warming produced by 2100 in these 4 models (based upon the SRES A1B emissions scenario) is 3.9 deg. C. If we scale that by the smaller amount of warming seen in the observations, we get the following plot for estimated warming by 2100, based upon averaging of the trends to various depths (click for large version):

While the Levitus observations in the mixed layer are consistent with the model predictions of about 3 deg. C of warming by 2100, the much weaker warming in the entire 0-700 m layer suggests a much lower rate of future warming: about 1.3 deg. C or so.
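The scaling behind that ~1.3 deg. C figure is just a ratio. The two trend values below are placeholders chosen only to illustrate the arithmetic (they reproduce the stated result but are not the actual diagnosed trends):

model_warming_2100 = 3.9      # deg. C by 2100, average of the 4 accepted models (Forster & Taylor 2006, A1B)
obs_trend_0_700m = 0.10       # placeholder for the observed 0-700 m Levitus warming
model_trend_0_700m = 0.30     # placeholder for the 4-model average 0-700 m warming

scaled_warming = model_warming_2100 * obs_trend_0_700m / model_trend_0_700m
print(f"implied warming by 2100 ~ {scaled_warming:.1f} deg. C")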

If I loosen the requirements on the number of models accepted, and include 9 (rather than 4) out of 15, the predicted rate of warming increases a little, to 1.5 deg. C.

Discussion
Since this result uses the IPCC models themselves to “calibrate” the Levitus observations, it cannot be faulted for using a model that is “too simple”. To the extent the average IPCC model mimics nature, the models themselves suggest the relatively weak ocean warming observed during 1955-1999 indicates relatively low climate sensitivity.

Previous investigators (as well as the IPCC AR4 report) have claimed that warming of the oceans is “consistent with” anthropogenic forcing of the climate system. But they did not say exactly how consistent, in a quantitative sense.

The IPCC AR4 report admits that the climate models tend to warm the deep ocean too much, but the report puts error bars on both the models and observations which overlap each other, so then it can be claimed the overlap is evidence of “agreement” between models and observations.

The trouble with the ‘overlapping error bars’ argument – which seems to be a favorite statistical ploy of the IPCC – is that it cuts both ways: the other ends of the error bars, where there is no overlap, will suggest even larger disagreement between models and observations than if no error bars were used in the first place!

Nothing is gained with the statistical obfuscation of using overlapping error bars…except to demonstrate that the models and observations do not necessarily disagree.

That is very different from saying they do agree.

THE BOTTOM LINE
It would be difficult to overstate the importance of the results, if they hold up to scrutiny: The observed rate of warming of the ocean has been too weak to be consistent with a sensitive climate system. This is demonstrated with the IPCC models themselves.

The resulting climate sensitivity (around 1.3 deg. C) just happens to be about the same sensitivity I have been getting using the simple forcing-feedback-diffusion model to match the Levitus observations of ocean warming during 1955-2010 directly.
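For completeness, here is a minimal sketch of what a “forcing-feedback-diffusion” model of this general kind looks like: a mixed layer exchanging heat diffusively with a deep layer, forced at the top and damped by a net feedback λ. This is only my illustration of the idea, not the actual model or its tuning; every parameter value and the idealized forcing ramp are assumptions:

import numpy as np

dt = 86400.0 * 30.4                   # one month (s)
n = 56 * 12                           # 1955-2010, monthly steps
lam = 2.8                             # assumed net feedback parameter (W/m^2/K), ~1.3 deg. C per 2xCO2
kappa = 1.0e-4                        # assumed vertical diffusivity (m^2/s)
h_mix, h_deep = 50.0, 650.0           # assumed layer depths (m)
rho, cp = 1025.0, 4000.0
C_mix, C_deep = rho * cp * h_mix, rho * cp * h_deep
dz = 0.5 * (h_mix + h_deep)           # distance between layer midpoints (m)

forcing = np.linspace(0.0, 1.6, n)    # idealized ramp standing in for the 1955-2010 forcing history (W/m^2)

T_mix, T_deep = np.zeros(n), np.zeros(n)
for i in range(1, n):
    flux_down = rho * cp * kappa * (T_mix[i-1] - T_deep[i-1]) / dz   # diffusive heat flux (W/m^2)
    T_mix[i] = T_mix[i-1] + dt * (forcing[i-1] - lam * T_mix[i-1] - flux_down) / C_mix
    T_deep[i] = T_deep[i-1] + dt * flux_down / C_deep

print(f"mixed-layer warming 1955-2010: {T_mix[-1]:+.2f} deg. C")
print(f"deep-layer warming 1955-2010:  {T_deep[-1]:+.2f} deg. C")
print(f"implied equilibrium 2xCO2 warming: {3.7 / lam:.1f} deg. C")

Tuning would consist of adjusting λ (and, to a lesser extent, κ and the layer depths) until the simulated warming matches the Levitus curves; the smaller the observed ocean warming for a given forcing history, the larger the λ required, and the lower the implied climate sensitivity.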

Finally, it should be mentioned the above analysis assumes that there has been no significant natural source of warming during 1955-1999. If there has, then the diagnosed climate sensitivity would be even lower still.

I would be surprised if none of the climate modelers have performed a basic analysis similar to the one above. But since the results would be so damaging to the IPCC’s claims of much greater climate sensitivity (greater future warming) I would expect those results would never see the light of day.

Is Gore’s Missing Heat Really Hiding in the Deep Ocean?

August 7th, 2011

NOTE: For those who are offended by my bringing up Al Gore in this post (but are apparently not offended by Gore falsely accusing scientists like me of being ‘global warming deniers’), I suggest you just focus on the evidence I present. You are invited to offer an alternative explanation for the evidence, but I will not allow you to divert attention from it through irrelevant “copy and paste” factoids you have gathered from other scientific publications. If you persist, I will be forced to adopt the RealClimate tactic of deleting comments, which so far I have been able to avoid on this blog. We’ll just call it “fighting fire with fire”.

As I and others have pointed out, the 20th Century runs of the IPCC climate models have, in general, created more virtual warming in the last 50 years than the real climate system has warmed.

That statement is somewhat arguable, though, since the modelers can run a number of realizations, each with its own “natural” year-to-year internal climate variability, and get different temperature trends for any given 50-year period.

Furthermore, uncertainty over how fast heat is being mixed into the deep ocean also complicates matters. If extra surface heating from more CO2 is being mixed deeper and faster than the modelers have assumed, then climate models warming the surface too fast in the past 50 years does not necessarily mean we will not see their forecasts come true eventually.

As Kevin Trenberth has recently alluded to, it only delays the day of reckoning.

Are More Expensive Models Better?
I have tried in recent posts to estimate climate sensitivity by using a simple forcing-feedback-diffusion model, and tuning it to match observations (which is exactly what the Big Boys do), but it appears that our critics will always question the validity of a simple model, even though it contains the same fundamental physics the IPCC models must also abide by.

So, let’s skip the simple model entirely, and just compare the IPCC climate models to the observations.

IPCC Models versus Ocean Observations, 1955-2000
Since so many people are now pointing to ocean heat content increases, rather than surface warming, as a better index of how sensitive our climate system is, let’s see how observed ocean warming compares to the warming simulated by two very different IPCC climate models.

We have begun analyzing the deep ocean temperature output from the 20th Century runs of the IPCC AR4 climate models. The first two we chose were one of the most sensitive models (IPSL-CM4) and one of the least sensitive models (NCAR PCM1).

There is a simple way to use these models to get some idea of what the observed ocean warming tells us about climate sensitivity.

We can simply compare each of the models’ rate of ocean warming during 1955-2000 to their known climate sensitivity, and then see what the warming rate in the Levitus ocean temperature observations suggests for a climate sensitivity in the real climate system.

The following plot shows the temperature trends, as a function of ocean depth, between 1955 (when the ocean observations start) and 2000 (when the climate model experiments end) for those 2 models as well as in the Levitus observations (click for larger version):

Clearly, the most sensitive model (IPSL-CM4) warms the ocean the most, the least sensitive model (NCAR PCM1) warms the ocean much less, and the Levitus observations appear to exhibit the least warming. (NOTE: I have linearly extrapolated the Levitus observations from the deepest reported level, 700 m, to an assumed zero trend at 2,000 m depth.)

If we compute the average warming trends for the 0-700 m layer for the 2 models, and compare them to the known sensitivity of those models, we get the 2 blue dots in the following plot (we will be adding as many models to this as possible later; click for larger version):

If we then use the model relationship between 1955-2000 warming and climate sensitivity (the solid line), we see that the warming trend in the Levitus observations falls on that line at a climate sensitivity of 1.3 deg. C.

If you are wondering what the results are if we go deeper, from 700 meters to 2,000 meters deep (keeping in mind we have assumed that the observed warming goes to zero at 2,000 m depth), the answer is that it is nearly the same: 1.4 deg. C, rather than 1.3 deg. C.

This is similar to the kinds of numbers I have been getting recently using a simple forcing-feedback-diffusion model and matching the Levitus observations directly with it. This level of warming is below the 1.5 deg. lower limit that the IPCC has set for total warming in response to 2XCO2.

Obviously, we need to add more of the IPCC models to this comparison, which we will be doing in the coming weeks, to see if there is indeed a strong relationship between model warming and model sensitivity, which there should be if the different models used similar climate forcings.

But the results should not depart too much from what is shown above because the line must go through (0,0) on the graph (zero climate sensitivity means zero warming), and the upper end of the line will be fixed by the 3 most sensitive IPCC climate models, of which IPSL-CM4 is one.
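The interpolation itself is straightforward to sketch. The two sensitivities below are the commonly quoted IPCC AR4 values for these models; the warming numbers (and the Levitus value) are placeholders used only to show how the line-through-the-origin estimate works:

import numpy as np

# (1955-2000 ocean warming, equilibrium 2xCO2 sensitivity) for the two models
model_points = {
    "IPSL-CM4":  (0.35, 4.4),   # warming placeholder (deg. C), sensitivity ~4.4 deg. C
    "NCAR PCM1": (0.17, 2.1),   # warming placeholder (deg. C), sensitivity ~2.1 deg. C
}
warming = np.array([v[0] for v in model_points.values()])
sens = np.array([v[1] for v in model_points.values()])

# Least-squares slope of a line forced through (0, 0): sensitivity = k * warming
k = np.sum(warming * sens) / np.sum(warming ** 2)

levitus_warming = 0.10   # placeholder for the observed 0-700 m warming (deg. C)
print(f"implied climate sensitivity ~ {k * levitus_warming:.1f} deg. C")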

Could Climate Sensitivity be even Lower?
The above analysis assumes there have been no natural forcings of warming. But to the extent that recent warming was partly due to some natural process, this would mean climate sensitivity is even less.

Discussion
Once again we see evidence that the IPCC models are too sensitive, which means they are predicting too much warming for our future, which means Mr. Gore needs to chill out a bit.

Also, the list of modelers’ potential excuses for their models warming more than observed is rapidly dwindling. For example,

1) If the above results are any indication, it is unlikely the heat is hiding in the deep ocean.

2) Blaming Chinese coal-fired power plants for a lack of warming is just taking the modelers’ anthropocentrism to an even higher plane. There seems to be no good evidence to support such a claim anyway.

3) Another trick the IPCC uses is to put error bars on both the observations and the model results until they overlap. It is then claimed that models and observations “agree” to within the margin of error. But what they don’t realize with this last bit of statistical obfuscation is that they are also admitting there is a HUGE disagreement between models and observations when one goes to the other end of those error bars.

“Overlapping error bars” is the last resort for getting two numbers to appear to agree better than they really do.

It’s time for climate modelers to face up to the explanation they have been avoiding at all cost: the climate system is simply not nearly as sensitive as they claim it is.

If they ever have to admit the climate system is insensitive, it is the end of the IPCC and the policy changes that institution was originally formed to advance.