Our Refutation of Dessler (2010) is Accepted for Publication

July 15th, 2011

Some of you might remember last year’s little dust-up between Andy Dessler and me over feedbacks in the climate system.

Our early-2010 paper showed extensive evidence of why previous attempts to diagnose feedbacks (which determine climate sensitivity) have likely led to overestimates of how sensitive the climate system is to forcings like that from increasing CO2. The basic reason is that internal radiative forcing from natural cloud variations causes a temperature-radiation relationship in the data which gives the illusion of high climate sensitivity, even if climate sensitivity is very low.

Dessler’s late-2010 paper basically blew off our arguments and proceeded to diagnose cloud feedback from satellite data in the traditional manner. His justification was that since (1) most of the temperature variability during the satellite record was due to El Nino and La Nina (which is true), and (2) no one has published evidence that ‘clouds cause El Nino and La Nina’, he could ignore our arguments.

Well, our paper entitled On the Misdiagnosis of Surface Temperature Feedbacks from Variations in Earth’s Radiant Energy Balance, which refutes Dessler’s claim, has just been accepted for publication. In it we show clear evidence that cloud changes DO cause a large amount of temperature variability during the satellite period of record, which then obscures the identification of temperature-causing-cloud changes (cloud feedback).

Along with that evidence, we also show the large discrepancy between the satellite observations and IPCC models in their co-variations between radiation and temperature:

Given the history of the IPCC gatekeepers in trying to kill journal papers that don’t agree with their politically-skewed interpretations of science (also see here, here, here, here), I hope you will forgive me for holding off on giving the name of the journal until it is actually published.

But I did want to give them plenty of time to work on ignoring our published research as they write the next IPCC report. πŸ™‚

And this is not over…I am now writing up what I consider to be our most convincing evidence yet that the climate system is relatively insensitive.

Atlantis Launch SRB Camera Videos

July 14th, 2011

This is SO cool. Cameras mounted on both of the solid rocket boosters during the final Shuttle launch show what it would be like to ride one from the launch pad into space, fall back through the atmosphere, and then splash into the ocean. The total video runs 32 minutes, with multiple camera views from both boosters. I watched the whole thing, and it was worth it.

Understanding James Hansen’s View of Our Climate Future

July 13th, 2011

I’ve been wading through James Hansen’s recent 52-page unpublished paper explaining why he thinks the cooling effect of manmade sulfate aerosols has been underestimated by climate modelers.

This is the same theme as the “cooling from Chinese pollution is canceling out carbon dioxide warming” you might have heard about recently.

As I read Hansen’s paper, I stumbled upon a sentence on page 23 that sounds like one I just wrote in a new paper we are preparing. I’m going to use Hansen’s statement — which I agree with — because it provides a good introduction to understanding the basics of climate change theory:

“…surface temperature change depends upon three factors: (1) the net climate forcing, (2) the equilibrium climate sensitivity, and (3)…..the rate at which heat is transported into the deeper ocean (beneath the mixed layer).”

To better understand those 3 factors, consider what controls how much a pot of water on the stove will warm over a short period of time. The temperature rise depends upon (1) how much you turn up the stove, or cover up the pot with a lid (“forcing”); (2) how fast the pot can lose extra heat to its surroundings as it warms, thus limiting the temperature rise (“climate sensitivity”); and (3) the depth of the water in the pot.
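The pot-of-water analogy maps directly onto a toy zero-dimensional energy balance equation, C dT/dt = F − λT. Here is a minimal sketch of that equation in code; all parameter values are illustrative round numbers, not taken from any model discussed in this post:

```python
# Toy zero-dimensional energy-balance model: C dT/dt = F - lam*T.
# (1) forcing = F, W/m^2 ("how much you turn up the stove")
# (2) lam = net feedback parameter, W/(m^2 K); larger lam means faster heat
#     loss as the system warms, i.e. LOWER climate sensitivity
# (3) depth = mixed-layer depth, m ("depth of the water in the pot")
# All values below are illustrative, not from any model discussed here.

RHO_CP = 4.18e6            # volumetric heat capacity of seawater, J/(m^3 K)
SECONDS_PER_YEAR = 3.15576e7

def run(forcing, lam, depth, years, dt=0.1):
    """Integrate the energy balance; returns the temperature anomaly (K)."""
    C = RHO_CP * depth     # heat capacity per unit area, J/(m^2 K)
    T = 0.0
    for _ in range(int(years / dt)):
        T += (forcing - lam * T) / C * dt * SECONDS_PER_YEAR
    return T

# Same forcing and depth, two feedback strengths; equilibrium warming = F/lam.
print(run(3.7, 3.3, 50, 100))   # insensitive system: ~1.1 K
print(run(3.7, 1.0, 50, 100))   # sensitive system:   ~3.7 K
```

Note that a deeper mixed layer (a fuller pot) does not change the equilibrium warming F/λ, only how long it takes to get there, which is exactly the distinction between factors (2) and (3).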

Most people working in this business (including the IPCC) agree that probably the biggest uncertainty in determining the extent to which manmade global warming is something we need to worry about is #2, climate sensitivity.

But not Hansen.

Hansen believes he knows climate sensitivity very accurately, based upon paleoclimate theories of what caused temperature changes hundreds of thousands to millions of years ago, and how large those temperature changes were. Most of Hansen’s climate sensitivity claims are based upon the Ice Ages and the Interglacial periods.

I must admit, it astounds me how some scientists can be so sure of theories which involve events in the distant past that we cannot measure directly. Yet we measure the entire Earth every day with a variety of satellite instruments, and we are still trying to figure out from that abundance of data how today’s climate system works!

In Hansen’s case, in order to explain the amount of warming in recent decades, he thinks he knows #2 (the climate sensitivity) is quite high, and so he has been experimenting with various realistic values for #3 (the assumed rate of heat diffusion into the deep ocean) and has decided that factor #1 (the radiative forcing of the climate system) has not been as large as everyone has been assuming. Again, this is in order to explain why surface warming has not been as strong as expected.

Now, I tend to agree with Hansen that the main portion of that forcing, from increasing CO2 (a warming effect), is known pretty well. (Please, no flaming from the sky dragon slayers out there…I already know about your arguments). So in his view there MUST be some cooling influence canceling it out. That’s where the extra dose of aerosol cooling comes in.

All modelers have already fudged in various amounts of cooling from sulfate aerosols in order to prevent their climate models from warming more than has been observed. But Hansen ALSO thinks that the real climate system does not mix heat into the deep ocean as fast as the IPCC climate models do.

Unfortunately, correcting this error (if it exists, which I think it does) would push the models in the direction of too much surface warming. Therefore, Hansen thinks this must then mean that the IPCC models are assuming too much forcing (the only remaining possibility of the 3 factors).

Of course, as Hansen correctly points out, accurate global measurements of the cooling effect of aerosols are essentially non-existent. How convenient. This means that modelers can continue to use increasing amounts of sulfate aerosols as an “excuse” for a lack of recent warming, despite the lack of quantitative evidence that this is actually occurring.

As I sometimes point out, this line of reasoning verges on blaming NO climate change on humans, too.

It is unfortunate that so few of us (me, Lindzen, Douglass, and a few others) are actively researching the OTHER possibility: that climate sensitivity has been greatly overestimated. Lindzen and Choi have a new paper in press on the subject, and Braswell and I have another that I expect to be accepted for publication in the next few days.

I sort of understand the reluctance to research the possibility, though. If climate sensitivity is low, then global warming, climate disruption, tipping points, and carbon footprints all suddenly lose their interest. And we can’t have that.

I agree with Hansen that our best line of observational evidence is how fast the oceans warm — at all depths. Our recent work on estimating climate sensitivity from the rate of warming between 1955-2010 at different depths has been very encouraging, something which I hope to provide an update on soon.

Global SST Update: Still No Sign of Resumed Warming

July 8th, 2011

Here’s the global average sea surface temperature (SST) update from AMSR-E on NASA’s Aqua satellite, updated through yesterday, July 7, 2011:

The anomalies are relative to the existing period of record, which begins in June 2002.

As can be seen, the SSTs have not quite recovered from the coolness of the recent La Nina.

Something else I track is the ocean cloud water anomalies, also from AMSR-E, which I have calibrated in terms of anomalies in reflected sunlight based upon Aqua CERES data:

I watch this because it often predicts future SST behavior. For instance, the circled portion in 2010 shows a period of enhanced reflection of sunlight (thus reduced solar input into the ocean), and this corresponded to strong cooling of SSTs during 2010, as seen in the first graph.

So, the recent new enhancement of cloudiness (smaller circle) suggests a fall in SST in the next month or so. After that, it generally takes another month or so before ocean changes are transferred to the atmosphere over the global oceans through enhanced or decreased convective overturning and precipitation.

More on the Divergence Between UAH and RSS Global Temperatures

July 8th, 2011

After talking with John Christy, I decided I should further expound upon the points I made in my last post.

The issue is that the two main satellite-based records of global lower tropospheric temperature change have been diverging in the last 10 years, with the RSS version giving cooler anomalies than our (UAH) version in recent years, as shown in the following plot:
(the RSS anomalies have been re-computed to be relative to the 1981-2010 period we use in the UAH dataset)

Ten years ago, this meant that the AGW folks were claiming RSS was right and we (UAH) were wrong, since the RSS global warming trends were greater than ours.

But now the shoe is on the other foot, and the RSS linear trend since January 1998 has actually cooled slightly (-0.03 deg. C per decade) while ours has warmed slightly (almost +0.05 deg. C per decade).

John works hard at making our dataset as good as it can be, and has correctly reminded me that he and others have published several peer-reviewed papers in recent years on the accuracy of the UAH dataset:

Christy, J.R. and Norris, W.B. 2006. Satellite and VIZ-radiosonde intercomparisons for diagnosis of nonclimatic influences. Journal of Atmospheric and Oceanic Technology 23: 1181-1194.

Christy, J.R., Norris, W.B., Spencer, R.W. and Hnilo, J.J. 2007. Tropospheric temperature change since 1979 from tropical radiosonde and satellite measurements. Journal of Geophysical Research 112: doi:10.1029/2005JD0068.

Christy, J.R. and Norris, W.B. 2009. Discontinuity issues with radiosondes and satellite temperatures in the Australia region 1979-2006. Journal of Atmospheric and Oceanic Technology 25: doi:10.1175/2008JTECHA1126.1.

Christy, J.; Herman, B.; Pielke, Sr., R.; Klotzbach, P.; McNider, R.; Hnilo, J.; Spencer, R.; Chase, T. et al. (2010). “What Do Observational Datasets Say About Modeled Tropospheric Temperature Trends Since 1979?”. Remote Sensing 2 (9): 2148. doi:10.3390/rs2092148

Douglass, D. and J. R. Christy, 2009: Limits on CO2 climate forcing from recent temperature data of Earth, Energy & Environment, Volume 20, Numbers 1-2, January 2009 , pp. 177-189(13) doi:10.1260/095830509787689277.

Randall, R.M. and Herman, B.M. 2008. Using limited time period trends as a means to determine attribution of discrepancies in microwave sounding unit derived tropospheric temperature time series. Journal of Geophysical Research: doi:10.1029/2007JD008864.

Bengtsson, L. and K.I.Hodges, On the Evaluation of Temperature Trends in the Tropical Troposphere, Clim. Dyn., doi 10.1007/s00382-009-0680-y, 2009.

These papers either directly or indirectly address the quality of the UAH datasets, including comparisons to the RSS datasets.

Based upon the evidence to date, it is pretty clear that (1) the UAH dataset is more accurate than RSS, and that (2) the RSS practice of using a climate model to correct for the effect of diurnal drift of the satellite orbits on the temperature measurements is what is responsible for the spurious behavior noted in the above graph.

Our concerns about the diurnal drift adjustment issue have been repeatedly passed on to RSS in recent years.

What Will the Next IPCC Report Say?

As an aside, it will be interesting to see how the next IPCC report handles the various global temperature datasets. There have been a few recent papers that have gone to great pains to explain away the lack of long-term warming in the satellite and radiosonde data (the missing “hot spot”) by trying to infer its presence from upper tropospheric wind data (a dubious technique, since geostrophic balance is a poor assumption in the tropics), or by using a few outlier radiosonde stations with poor quality control and spurious warming trends.

Call me a cynic, but I think we can expect the IPCC to simply ignore (or at most brush aside) any published evidence that does not fit the AGW template.

On the Divergence Between the UAH and RSS Global Temperature Records

July 7th, 2011

…or, OMG! HAS UAH BEEN BOUGHT OFF BY GREENPEACE!?

Over the last ten years or so there has been a growing inconsistency between the UAH and Remote Sensing Systems versions of the global average lower tropospheric temperature anomalies. Since I sometimes get the question why there is this discrepancy, I decided it was time to address it.

If we look at the entire 30+ year record, we see that the UAH and RSS temperature variations look very similar, with a correlation coefficient of 0.963 and linear trends which are both about +0.14 deg. C per decade:


(In the above plot I have re-computed the RSS anomalies so they are relative to the 1981-2010 average annual cycle we use; this does not affect the trends…just makes it more of an apples-to-apples comparison).

But if we examine a time series of the DIFFERENCE between the two temperature records, we see some rather interesting structure:


(Note: I have applied a 3-month smoother to the data to reduce noise).

As can be seen, over the last 10 years or so the RSS temperatures have been cooling relative to the UAH temperatures (or UAH warming relative to RSS…same thing). The discrepancy is pretty substantial: since 1998, the divergence is over 50% of the long-term temperature trends seen in both datasets.
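The recipe behind this kind of comparison is simple: difference the two monthly series, smooth, and fit linear trends. Here is a sketch using synthetic data in place of the actual UAH and RSS anomalies (the trend values are chosen to resemble the post-1998 numbers quoted above, but the series themselves are fake):

```python
import numpy as np

# Synthetic monthly anomalies standing in for the real UAH and RSS data.
rng = np.random.default_rng(42)
n_months = 156                          # Jan 1998 through Dec 2010
years = np.arange(n_months) / 12.0
uah = 0.005 * years + rng.normal(0.0, 0.05, n_months)   # ~ +0.05 C/decade
rss = -0.003 * years + rng.normal(0.0, 0.05, n_months)  # ~ -0.03 C/decade

# Difference series, then a 3-month running mean to reduce noise.
diff = uah - rss
smoothed = np.convolve(diff, np.ones(3) / 3.0, mode="valid")

# Linear trends via least squares, converted to deg C per decade.
uah_trend = np.polyfit(years, uah, 1)[0] * 10.0
rss_trend = np.polyfit(years, rss, 1)[0] * 10.0
print(f"UAH {uah_trend:+.3f}, RSS {rss_trend:+.3f} deg C/decade")
```

The same three steps (difference, smooth, fit), applied to the actual anomaly files, are all that is behind the plots discussed in this post.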

WHY THE DIVERGENCE?

So, why the discrepancy? Well, if it was OUR (UAH) data that was cooling relative to RSS, people would accuse us of being bought off by Exxon-Mobil (I wish!…still waiting for that check..). At least that has been the history of this debate.

But now WE are the ones with “excess” warming. So where are the accusations that RSS is being bought off by Big Oil?

Hmmmm?

(It’s OK, we are used to the hypocrisy. πŸ™‚ )

Anyway, my UAH cohort and boss John Christy, who does the detailed matching between satellites, is pretty convinced that the RSS data is undergoing spurious cooling because RSS is still using the old NOAA-15 satellite, which has a decaying orbit, and correcting for the resulting diurnal drift with a climate model that does not quite match reality. We have not used NOAA-15 for trend information in years…we use the NASA Aqua AMSU, since that satellite carries extra fuel to maintain a precise orbit.

Of course, this explanation is just our speculation at this point, and more work would need to be done to determine whether this is the case. The RSS folks are our friends, and we both are interested in building the best possible datasets.

But, until the discrepancy is resolved to everyone’s satisfaction, those of you who REALLY REALLY need the global temperature record to show as little warming as possible might want to consider jumping ship, and switch from the UAH to RSS dataset.

It’s OK, we’ve developed thick skin over the years. πŸ™‚ You can always come home later.

Gee, I wonder if some of that green money will start flowing our way now? I’m not going to hold my breath.

UAH Global Temperature Update for June, 2011: +0.31 deg. C

July 7th, 2011

Post-La Nina Warming Continues
The global average lower tropospheric temperature anomaly for June, 2011 increased to +0.31 deg. C (click on the image for a LARGE version):

The Northern Hemisphere, Southern Hemisphere, and Tropics all experienced temperature anomaly increases in June:

YR MON GLOBAL NH SH TROPICS
2011 1 -0.010 -0.055 +0.036 -0.372
2011 2 -0.020 -0.042 +0.002 -0.348
2011 3 -0.101 -0.073 -0.128 -0.342
2011 4 +0.117 +0.195 +0.039 -0.229
2011 5 +0.133 +0.145 +0.121 -0.043
2011 6 +0.314 +0.377 +0.251 +0.235

I would like to remind everyone that month-to-month changes in global-average tropospheric temperature have a large influence from fluctuations in the average rate of heat transfer from the ocean to the atmosphere. In other words, they are not of radiative origin (e.g. not from greenhouse gases). El Nino/La Nina is probably the most dramatic example of this kind of activity, but there are also “intraseasonal oscillations” in the ocean-atmosphere energy exchanges occurring on an irregular basis, too.

YEARLY temperature averages probably provide a better indication of the existence of radiative forcings on the climate system (whether warming or cooling). Nevertheless, we must remember that even DECADAL time scale (or longer) changes in the ocean circulation could also be involved, which can cause long-term climate change independent of any kind of greenhouse gas (or cosmic ray-induced) radiative forcing. (That last sentence has not been approved by the IPCC…but I don’t really care.)
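To illustrate the point about yearly averages, here is the 2011 year-to-date mean computed from the GLOBAL column of the table above; the large month-to-month swings mostly cancel:

```python
# Year-to-date mean of the monthly global anomalies from the table above:
# the monthly swings (ocean-atmosphere heat exchange) largely average out.
monthly_global_2011 = [-0.010, -0.020, -0.101, +0.117, +0.133, +0.314]
ytd_mean = sum(monthly_global_2011) / len(monthly_global_2011)
print(round(ytd_mean, 3))   # -> 0.072
```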

FUNDANOMICS: The Free Market, Simplified

July 4th, 2011


I’m pretty excited that today (Independence Day, 2011) is the release date for my new book, Fundanomics: The Free Market, Simplified.

Our friend, Josh, did the cover art and it perfectly captures one of the book’s main messages: the greatest prosperity for ALL in a society is achieved when people are free to benefit from their good ideas.

In Chapter 1, A Tale of Two Neanderthals, Borgg and Glogg are the tribe’s firestarters, who get the idea to invent firesticks (matches). This leads to a system of trading with a neighboring tribe which has many great hunters, and as a result the inventors’ tribe never goes hungry again.

But the favored treatment the inventors receive from the tribe’s elders later leads to resentment in the tribe, and people forget how much better off they all are than before — even the poorest among them. Technology and prosperity might change, but human nature does not.

Simply put, a successful economy is just people being allowed to provide as much stuff as possible for each other that is needed and wanted. Economics-wise, everything else is details. When we allow politicians and opportunistic economists to fool us into supporting a variety of technical and murky government “fixes” for the economy, we lose sight of the fundamental motivating force which must be preserved for prosperity to exist: Liberty and the pursuit of happiness.

The main role of the government in the economy is to help ensure people play fair…and then get out of the way.

I devote each chapter to a common economic myth.

For example, it’s not about money, which has no inherent value and is simply a convenient means of exchange of goods and services that is more efficient than bartering.

It’s not even about “jobs”, because it makes all the difference what is done in those jobs. Many poor countries have a much lower standard of living than ours, yet fuller employment. If we want full employment, just have half the population dig holes in the ground and the other half can fill them up again. The goal is a higher standard of living…not just “jobs”.

And the desire of some for a “more level playing field” and for “spreading the wealth around” is simply pandering to selfishness and laziness. The truth is that most of the wealth has already been spread around, in the form of a higher standard of living. If we do not allow the few talented and risk-taking people among us to have at least the hope of personally benefiting in proportion to their good ideas, then economic progress stops.

The good news is that those few talented people need help, which is where most of the rest of us come in. One person with a new idea for a computer cannot design, manufacture, market, distribute, and sell millions of computers to the rest of society. They need our help, and in the process everyone benefits.

I also examine the role of various government economic programs, most of which end up hurting more than helping. A major reason why the government is so prone to failure is the lack of disincentives against failure in government service. In the real marketplace failures are not rewarded, which helps keep us on the right track to prosperity.

Even the truly needy in our country would be better off if we allowed private charitable organizations, rather than inefficient government bureaucracies, to compete for the public’s donations.

I’ve been interested in basic economics for the last 25 years, but frustrated by the technical details (marginal costs, money supply, etc.) that too often scare people away from understanding the most basic forces which propel societies to ever higher standards of living. Now, with our country facing tough decisions about our financial future, I decided it was time to stop yelling at the idiots on TV (and giving away all my ideas to talk show hosts) and put the material in a short — less than 100 pages — book that would be approachable by anyone.

I’ll be signing the first 500 copies. The price is $12.95 (including free shipping in the U.S.) You can see all of the chapter first pages at Fundanomics.org. I think this book would be especially valuable to homeschoolers.

(NOTE REGARDING COMMENTS, BELOW: In response to a comment that it was ironic for a scientist whose research is 100% funded by the U.S. Government to be against wasteful government spending, my statement that “I view my job a little like a legislator” has caused quite a stir, especially over at ThinkProgress. This was a rather poor analogy…my point was that a federally-funded person like myself can be against excess government spending, just as some federally-funded legislators are, that’s all. I did not mean to imply I wanted to be a de facto legislator. The context of the full comment, below, should have made that clear.

And, once again, ThinkProgress reveals the hypocrisy of those who think it’s OK for Al Gore to play a climate scientist, or NASA’s James Hansen to actively campaign for Malthusian energy policy changes and for presidential candidates – in violation of the Hatch Act, as NASA employees are told during their annual ethics training classes.)

More Evidence that Global Warming is a False Alarm: A Model Simulation of the last 40 Years of Deep Ocean Warming

June 25th, 2011

NOTE: I am making available the Excel spreadsheet with the simple forcing-feedback-diffusion model so people can experiment with it. The spreadsheet is fairly self-explanatory. THE DIFFUSION COEFFICIENTS CANNOT BE VARIED TOO DRASTICALLY SINCE, WITH A MONTHLY TIME STEP, THE MODEL WILL CREATE UNSTABLE TEMPERATURE OSCILLATIONS. This is a common problem with numerical integration, which could be eliminated by reducing the time step, but I wanted to keep the model file size manageable:
simple-forcing-feedback-ocean-heat-diffusion-model-v1.0
FOLLOWUP NOTE: The above spreadsheet has an error in the equations, which does not change the conclusions, but affects the physical consistency of the calculations. The heat capacity used for water is 10 times too low, and the diffusion coefficients are also 10x too low. Those errors cancel out. I will post a new spreadsheet when I get back to the office, as I am on travel now.
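The instability warning above is the standard stability constraint on explicit diffusion schemes: the time step must satisfy Δt ≤ Δz²/(2κ), or neighboring layers over-correct each other each step and the solution oscillates and grows. A minimal demonstration follows; the layer thickness and diffusivity are illustrative, not the spreadsheet’s values:

```python
import numpy as np

# Why the diffusion coefficients (or the time step) cannot be made too
# large: explicit diffusion is only stable when dt <= dz**2 / (2 * kappa).
# dz and kappa below are illustrative, NOT the spreadsheet's values.

def diffuse(profile, kappa, dz, dt, steps):
    """Explicit 1-D diffusion between layers, insulated at both ends."""
    T = np.array(profile, dtype=float)
    for _ in range(steps):
        flux = kappa * np.diff(T) / dz   # heat flux between adjacent layers
        T[:-1] += flux * dt / dz         # upper layer of each pair
        T[1:]  -= flux * dt / dz         # lower layer of each pair
    return T

dz = 50.0                                # layer thickness, m
kappa = 1.0e-4                           # diffusivity, m^2/s
dt_limit = dz**2 / (2.0 * kappa)         # stability limit, ~1.25e7 s

T0 = [1.0, 0.0, 0.0, 0.0]
stable = diffuse(T0, kappa, dz, 0.5 * dt_limit, 200)
unstable = diffuse(T0, kappa, dz, 2.0 * dt_limit, 200)
print(stable)                   # relaxes smoothly toward a uniform 0.25
print(np.abs(unstable).max())   # oscillates and blows up
```

With the half-limit step, the profile relaxes monotonically toward its uniform equilibrium; doubling the step past the limit makes the error grow without bound. Shrinking the time step restores stability at the cost of more steps, which is exactly the file-size trade-off mentioned in the note.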

NASA’s James Hansen is probably right about this point: the importance of ocean heat storage to a better understanding of how sensitive the climate system is to our greenhouse gas emissions. The more efficient the oceans are at storing excess heat during warming, the slower will be the surface temperature response of the climate system to an imposed energy imbalance.

Unfortunately, the uncertainties over the rate at which vertical mixing takes place in the ocean allow climate modelers to dismiss a lack of recent warming by simply asserting that the deep oceans must somehow be absorbing the extra heat. Think Trenberth’s “missing heat”. (For a discussion of the complex processes involved in ocean mixing see here.)

Well, maybe what is really missing is the IPCC’s willingness to admit the climate system is simply not as sensitive to our greenhouse gas emissions as they claim it is. Maybe the missing heat is missing because it does not really exist.

This is where we can learn from the 40+ year record of deep ocean temperature changes. Even the 2007 IPCC report admitted the oceans have warmed more slowly at depth than the climate models can explain.

Here I will show quantitatively with a simple forcing-feedback-diffusion model that recent ocean warming is actually consistent with a climate sensitivity which is so low that the IPCC considers it very unlikely.

I will also show how disingenuous the IPCC 2007 report was in presenting the ocean warming evidence to support its view that anthropogenic global warming will be a serious problem.

The Need for Ocean Layers in a Climate Model

Ten years ago Dick Lindzen used a simple climate model to look into the issue of whether the rate of ocean warming at 500 meters depth might help us better understand how sensitive the climate system is. He concluded that it is the surface temperature that is most important, not what happens deeper down in the ocean.

Conceptually, the line of reasoning here would be that the deep ocean can’t warm unless the surface warms first, and the rate of surface warming is largely controlled by atmospheric feedbacks.

In contrast, Roger Pielke, Sr. has always maintained that the heat content of the ocean is what we should be monitoring, not the surface temperature, to better understand how sensitive the climate system is to various forcings.

I WILL say I firmly believe that the surface temperature is THE MOST important temperature in the climate system. This is because (1) the surface is where most sunlight is absorbed, (2) the atmosphere is then convectively coupled to the surface, and (3) the surface and atmosphere together are the ONLY way for the Earth to radiatively cool to space in the face of continuous solar heating.

But, as we will see, the detailed profile of recent warming with depth in the ocean does appear to have additional information about climate sensitivity that is not apparent from surface warming alone.

Explaining the Observed Heating Profile of the Ocean

The following picture is worth a thousand words….but I will try to use fewer than that. First, it is partly a reworking of Fig. 9.15 of the IPCC 2007 report, showing the substantial discrepancy between observed global-average warming of the oceans to 700 meters depth (red curve), and warming estimated from the PCM1 climate model (solid green curve) for the 40-year period ending (I believe) around 2005 (click for the large version).

The green dashed line is my simple model simulation of the PCM1 model’s 40-year warming profile, where I used the GISS yearly global climate forcings to force temperature changes in the 30-layer model, where all layers have adjustable diffusion coefficients.

To get this match to the PCM1 model results, I specified the known PCM1 model net feedback parameter (1.8 Watts per sq. meter per degree), and then adjusted the diffusion coefficients between the simple model’s 50-meter thick layers extending from the surface to a depth of 1,500 meters until I got good agreement between my simple model and the PCM1 model results.

As you can see, my model fit (green dashed line) to the PCM1 model results (green solid line) is pretty good. (Some small amount of warming occurs all the way to the 1,500 m bottom of the model, although it is extremely weak). This demonstrates that the simple model can basically replicate the behavior of the much more complex PCM1 model, albeit for only global-average results.

Next, after I got the simple model to mimic the PCM1 model, I tried to explain the observed (red) curve by adjusting (1) the model sensitivity (assumed feedback parameter) and (2) the diffusion coefficients, until the model explained the actual observed warming profile. Interestingly, the diffusion coefficients only needed to be changed for the top three ocean layers (down to 200 m depth, which would be about the bottom of the thermocline). The rest of the diffusion coefficients remained the same as in the PCM1-matching simulation.

The result is the blue curve. Significantly, the simple model required a feedback parameter equivalent to a climate sensitivity of only 1.3 deg. C in response to a doubling of CO2. This is well below the range of warming the IPCC claims is most likely (2.5 to 4 deg. C).

What It Means

The bottom line is that 40 years of warming of the 0-700 meter ocean layer has been so modest that, even if we assume it was caused by the GISS forcings (which Hansen believes will eventually cause strong warming), it corresponds to low climate sensitivity anyway.

In other words, the oceans have not warmed enough to support the IPCC’s predictions of future warming.

The problem in the IPCC models seems to be that they mix excess heat too rapidly from the mixed layer into the deep ocean. This allows the models to retain high climate sensitivity, while limiting the amount of surface warming they produce to match the observed warming to date.

Voila! The models can thus “explain” the surface temperature record AND STILL predict strong warming for the future.

Even though the model I use is admittedly simple, this does not really matter because, in the global average, long-term temperature change is only a function of 3 basic processes:

(1) the strength of the forcing (imposed energy imbalance on the climate system, due to whatever);

(2) the strength of the climate system’s resistance to that forcing (net feedback, which determines climate sensitivity); and,

(3) the rate of ocean mixing (which affects surface temperature, which affects the rate of energy loss to space through feedback processes).

Everything else is details.
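Those three processes are simple enough to string together in a few lines of code. The sketch below is a generic mixed-layer-plus-diffusion column in the spirit of the simple model described above; the forcing, feedback parameter, and diffusion coefficient are all illustrative stand-ins, not the values used in the 30-layer spreadsheet model:

```python
import numpy as np

# Generic forcing-feedback-diffusion column combining the three processes:
# (1) a constant forcing and (2) a net feedback act on the top (mixed)
# layer, while (3) diffusion mixes heat downward. All parameter values are
# illustrative stand-ins, not those of the spreadsheet model in this post.
RHO_CP = 4.18e6        # volumetric heat capacity of seawater, J/(m^3 K)
DT = 30 * 86400.0      # one-month time step, s

def simulate(years, forcing, lam, kappa, n_layers=30, dz=50.0):
    """Temperature-anomaly profile (K) after `years` of constant forcing.
    forcing in W/m^2, lam in W/(m^2 K), kappa in m^2/s."""
    T = np.zeros(n_layers)
    C = RHO_CP * dz    # heat capacity of one layer, J/(m^2 K)
    for _ in range(int(years * 12)):
        T[0] += (forcing - lam * T[0]) * DT / C   # forcing + feedback
        flux = kappa * np.diff(T) / dz            # mixing between layers
        T[:-1] += flux * DT / dz
        T[1:]  -= flux * DT / dz
    return T

insensitive = simulate(40, 1.0, lam=2.8, kappa=2e-5)
sensitive   = simulate(40, 1.0, lam=1.0, kappa=2e-5)
print(insensitive[0], sensitive[0])   # surface warms more when lam is small
```

Fitting a model of this form to an observed warming profile is conceptually what the analysis above does: hold the forcing fixed, and adjust the feedback parameter and the layer-by-layer diffusion coefficients until the simulated profile matches the observations.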

How the IPCC Cheated

In the process of this little study I learned that the IPCC’s 2007 presentation of the ocean warming data was, at best, disingenuous. Here’s the original PCM1 panel of Fig. 9.15 from the 2007 IPCC report, from which I took numbers to re-plot on the figure, above:

With this figure, the IPCC was cleverly able to make it LOOK like there was general agreement between their climate models (green shaded area) and observations (red curve), with no less than four ploys:

1) They chose a climate model (PCM1) that is the 2nd LEAST sensitive of the twenty-something climate models they survey. PCM1 produces even less warming than the IPCC’s official projected range of warming from a doubling of CO2.

2) For the PCM1 model results, they presented a rather broad range of warming (green shaded area), meant to represent natural climate variations about the average warming produced by the model. In this way, they were able to get the weak observed warming to better overlap with the model produced warming, suggesting agreement.

3) They omitted the 0 deg. (no temperature change) vertical line from the figure, the presence of which would have visually revealed the significant discrepancy between the PCM1 model results and the observations.

4) They made the ocean depth scale nonlinear, which disproportionately emphasized the agreement in the relatively shallow mixed layer of the ocean, while downplaying the rather large discrepancy deeper down. But there is NO physical reason to make the ocean depth scale nonlinear; the total heat carrying capacity of the ocean varies linearly with depth, not non-linearly.

Conclusion

It appears that the vertical profile of ocean warming could be a key ingredient in getting a better idea of how sensitive the climate system is to our greenhouse gas emissions. The results here suggest the warming has been considerably weaker than what would be expected for a sensitive climate system.

The sensitivity number I estimate — 1.3 deg. C — arguably puts future warming in the realm of “eh, who cares?”

It will be interesting to see how the next IPCC report, now in the early stages of preparation, explains away the increasing discrepancies between their climate models and the observations. Since IPCC outcomes are ultimately driven by desired governmental policies and politicians, rather than science, I’m sure the wordsmithing (and figuresmithing) will be artfully done.

IMPORTANT NOTE: The above model simulation does not account for the possibility that some of recent warming could have been due to one or more natural processes, in which case the diagnosed climate sensitivity would be even lower.

UAH Temperature Update for May, 2011: +0.13 deg. C

June 7th, 2011

Little Change from Last Month
The global average lower tropospheric temperature anomaly for May, 2011 was just about the same as last month: up slightly to +0.13 deg. C (click on the image for a LARGE version):

Note the tropics continue to warm as La Nina fades:

YR MON GLOBAL NH SH TROPICS
2011 1 -0.010 -0.055 +0.036 -0.372
2011 2 -0.020 -0.042 +0.002 -0.348
2011 3 -0.101 -0.073 -0.128 -0.342
2011 4 +0.117 +0.195 +0.039 -0.229
2011 5 +0.131 +0.143 +0.120 -0.044

I have also updated the global sea surface temperature (SST) anomalies computed from AMSR-E through yesterday, June 6 (note that the base period is different, so the zero line is different than for the lower tropospheric temperature plot above):
