TRMM Satellite Suggests July 2009 Not a Record for Sea Surface Temperatures

August 26th, 2009

(UPDATED 8/26/09 at 13:30 CDT: Added a new plot comparing TMI and AMSR-E global monthly SSTs)

NOAA/NCDC recently announced that July 2009 set a new record high global sea surface temperature (SST) for the month of July, just edging out July 1998. This would be quite significant, since July 1998 was very warm due to a strong El Nino, whereas last month (July 2009) came as we were just heading into an El Nino that has hardly gotten rolling yet.

If July was indeed a record, one might wonder if we are about to see a string of record warm months if a moderate or strong El Nino does sustain itself, with that natural warming being piled on top of the manmade global warming that the “scientific consensus” is so fond of.

I started out looking at the satellite microwave SSTs from the AMSR-E instrument on NASA’s Aqua satellite. Even though those data only extend back to 2002, I thought they would provide a sanity check. My last post described a significant discrepancy I found between the NOAA/NCDC “ERSST” trend and the satellite microwave SST trend (from the AMSR-E instrument on Aqua) over the last 7 years…but with the AMSR-E giving a much warmer July 2009 anomaly than the NCDC claimed existed! The discrepancy was so large that my sanity check turned into me going a little insane trying to figure it out.

So, since we have another satellite dataset with a longer record that allows a direct comparison between 1998 and 2009, I decided to analyze the full record from the TRMM Microwave Imager (TMI). The TRMM satellite covers the latitudes between 40N and 40S, so a small amount of Northern Hemisphere ocean is missed, and a large chunk of the ocean around Antarctica is missed as well. But since my analysis of the ERSST and AMSR-E SST data suggested the discrepancy between them was actually between these latitudes as well, I decided that the results should give a pretty good independent check on the NOAA numbers. All of the original data that went into the averaging came from the Remote Sensing Systems (RSS) website, SSMI.com. Anomalies were computed about the mean annual cycle from data over the whole period of record.
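For those who want to see concretely what “anomalies computed about the mean annual cycle” means, here is a minimal sketch in Python rather than my usual FORTRAN (the input series is hypothetical):

```python
import numpy as np

# Compute anomalies about the mean annual cycle: for each calendar month,
# subtract that month's average over the whole period of record.
def monthly_anomalies(monthly_means, start_month=1):
    """monthly_means: consecutive monthly averages; start_month: 1-12."""
    x = np.asarray(monthly_means, dtype=float)
    months = (np.arange(x.size) + start_month - 1) % 12
    anom = np.empty_like(x)
    for m in range(12):
        sel = months == m
        anom[sel] = x[sel] - x[sel].mean()   # remove this month's climatology
    return anom
```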

The results are shown in the following three panels. The first panel shows monthly SST anomalies since January 1998, and as can be seen July 2009 came in about 0.06 deg. C below July 1998. At face value, this suggests that July 2009 might not have been a record. And as you can see from the first 3 weeks of August data, it looks like this month will come in even cooler.

[Figure: TMI-SST-comparisons-1998-2009]
Now, if you are wondering how accurate these monthly anomalies are, the second panel shows the validation statistics that RSS archives in near-real time. Out of the 5 different classes of in situ validation data, I chose just the moored buoys due to their large volume of data (over 200,000 matchups between buoys and satellite observations), and a relatively fixed geographic coverage (unlike drifting buoys). As can be seen, the TMI SST record shows superb long-term stability. The 0.15 deg. C cool bias in the TMI measurements is from the “cool skin” effect, with water temperatures in the upper few millimeters being slightly cooler on average than the SSTs measured by the buoys, typically at a depth around 1 meter.

The third and final panel in the above figure shows that a substantial fraction of the monthly SST variability from year to year is due to the Southern Oscillation (El Nino/La Nina) and the Pacific Decadal Oscillation (PDO). Each of these indices has a correlation of 0.33 with SST for monthly averages over the 40N-40S latitude band, while their sum (taking the negative of the SOI first) is correlated at 0.39. I did not look at lag correlations, which might be higher, and it looks like some additional time averaging would increase the correlation.
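The correlation calculation itself is nothing exotic; a sketch of it in Python (with hypothetical, time-aligned monthly arrays, standardizing the indices before summing them) would look something like this:

```python
import numpy as np

# Correlate monthly SST anomalies with the SOI and PDO indices, and with
# their standardized sum (flipping the sign of the SOI first, as described).
def index_correlations(sst_anom, soi, pdo):
    z = lambda x: (x - x.mean()) / x.std()      # standardize before summing
    r_soi = np.corrcoef(sst_anom, -soi)[0, 1]   # negative SOI goes with warm SST
    r_pdo = np.corrcoef(sst_anom, pdo)[0, 1]
    r_sum = np.corrcoef(sst_anom, z(-soi) + z(pdo))[0, 1]
    return r_soi, r_pdo, r_sum
```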

I will post again when I have new information on my previously reported discrepancy between NOAA’s results and the AMSR-E results. That is still making me a little crazy.

8/26/09 13:30 CDT UPDATE

I computed the monthly global (60N to 60S latitudes) AMSR-E SST anomalies, adjusted them for the difference in annual cycles with the longer TMI record, and then plotted the AMSR-E and TMI SST anomalies together. Even though the TMI cannot measure poleward of 40 deg. latitude (N or S), we see reasonable agreement between the two products.

[Figure: TMI-AMSRE-SST-comparisons-1998-2009]
None of this proves that July 2009 was not a record warm month in ocean surface temperatures, but it does cast significant doubt on the claim. The focus on a single month, though, misses the big picture: recent years have yet to reach the warmth of 1998. Only time will tell whether we get another year that approaches that unusual event.


Something’s Fishy With Global Ocean Temperature Measurements

August 22nd, 2009

(edited 8/23/09 0710 CDT: Changed plots & revised text to reflect the fact that NCDC, not CRU, is apparently the source of the SST dataset; also added discussion of possible RFI contamination of the satellite measurements)

(edited 8/22/09 1415 CDT: added plot of trend differences by month at bottom)

In my previous blog posting I showed the satellite-based global-average monthly sea surface temperature (SST) variations since mid-2002, which was when the NASA Aqua satellite was launched carrying the Advanced Microwave Scanning Radiometer for EOS (AMSR-E). The AMSR-E instrument (which I serve as the U.S. Science Team Leader for) provides nearly all-weather SST measurements.

The plot I showed yesterday agreed with the NOAA announcement that July 2009 was unusually warm…NOAA claims it was even a new record for July based upon their 100+ year record of global SSTs.

But I didn’t know just HOW warm, since our satellite data extend back to only 2002. So, I decided to download the NOAA/NCDC SST data from their website — which do NOT include the AMSR-E measurements — to do a more quantitative comparison.

From the NOAA data, I computed monthly anomalies in exactly the same manner I computed them with the AMSR-E data, that is, relative to the June 2002 through July 2009 period of record. The results (shown below) were so surprising, I had to go to my office this Saturday morning to make sure I didn’t make a mistake in my processing of the AMSR-E data.

[Figure: Global-SST-NCDC-vs-AMSRE]

As can be seen, the satellite-based temperatures have been steadily rising relative to the conventional SST measurements, with a total linear divergence of 0.15 deg. C over the 7-year period of record.

If the satellite data are correct, then this means that the July 2009 SSTs reached a considerably higher record temperature than NOAA has claimed. The discrepancy is huge in terms of climate measurements; the trend in the difference between the two datasets shown in the above figure is the same size as the anthropogenic global warming signal expected by the IPCC.

I have no idea what is going on here. Frank Wentz and Chelle Gentemann at Remote Sensing Systems have been very careful about tracking the accuracy of the AMSR-E SST retrievals with millions of buoy measurements. I checked the daily statistics they post at their website, and I don’t see anything like what is shown in the above figure.

Is it possible that the NCDC SST temperature dataset has been understating recent warming? I don’t know…I’m mystified. Maybe Frank, Chelle, Phil Jones, or some enterprising blogger out there can figure this one out.

UPDATE #1 (8/22/09)
Here are the trend differences between the satellite and in-situ data, broken out by calendar month. The problem seems to be mainly a Northern Hemisphere warm-season phenomenon.

[Figure: Global-SST-NCDC-vs-AMSRE-trend-diff-by-month]
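In case anyone wants to reproduce this kind of breakdown, here is a sketch of the calculation in Python (the two input arrays are hypothetical, time-aligned monthly anomaly series beginning in June 2002):

```python
import numpy as np

# For each calendar month, fit a linear trend to the satellite-minus-in-situ
# difference across years.
def trend_diff_by_month(sat, insitu, start_year=2002, start_month=6):
    diff = np.asarray(sat, dtype=float) - np.asarray(insitu, dtype=float)
    idx = np.arange(diff.size) + (start_month - 1)
    months = idx % 12 + 1              # calendar month, 1..12
    years = start_year + idx // 12     # calendar year of each sample
    trends = {}
    for m in range(1, 13):
        sel = months == m
        if sel.sum() >= 3:             # need a few Julys, Augusts, etc.
            # linear trend of the difference series, deg. C per year
            trends[m] = np.polyfit(years[sel], diff[sel], 1)[0]
    return trends
```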

UPDATE #2 (8/23/09)
Anthony Watts has suggested that the radio frequency interference (RFI) that we see in the AMSR-E 6.9 GHz data over land might be gradually invading the ocean as more boats install various kinds of microwave transmitters. While it’s hard for me to believe such an effect could be this strong (we have never seen obvious evidence of oceanic RFI before), this is still an interesting hypothesis, so this week I will examine the daily 1/4 deg. grids of AMSR-E SST and compute a spatial “speckle” statistic to see if there is any evidence of this kind of interference increasing over time. I should note that we HAVE seen more RFI reflected off the ocean from geostationary TV communication satellites in the AMSR-E data in recent years.
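To be concrete about what I mean by a “speckle” statistic, one simple way such a diagnostic might be constructed (this is my own sketch for illustration, not a settled design) is to compare each grid cell to the median of its eight neighbors and count isolated outliers, since RFI tends to show up as isolated hot pixels. An upward drift in that outlier fraction over the mission would support the interference hypothesis:

```python
import numpy as np

# Fraction of grid cells that deviate strongly from their local neighborhood.
def speckle_fraction(sst, threshold=1.0):
    """sst: 2-D daily 1/4-deg SST grid, with np.nan over land/rain/no-data."""
    padded = np.pad(sst, 1, constant_values=np.nan)
    h, w = sst.shape
    # stack the 8 neighbor grids so we can take a median across them
    neighbors = np.stack([padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)])
    local_median = np.nanmedian(neighbors, axis=0)    # NaN where no valid neighbors
    resid = sst - local_median
    valid = np.isfinite(resid)
    return np.mean(np.abs(resid[valid]) > threshold)  # fraction of "speckled" cells
```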


Record July 2009 Sea Surface Temperatures? The View from Space

August 21st, 2009

Since NOAA has announced that their data show July 2009 global-average sea surface temperatures (SSTs) reaching a record high for the month of July, I thought I would take a look at what the combined AMSR-E & TMI instruments on NASA’s Aqua and TRMM satellites (respectively) had to say. I thought it might at least provide an independent sanity check since NOAA does not include these satellite data in their operational product.

The SSTs from AMSR-E are geographically the most complete record of global SSTs available, since the instrument is a microwave radiometer and can measure the surface through most cloud conditions. AMSR-E (launched on Aqua in May 2002) provides truly global coverage, while the TMI (which was launched on TRMM in late 1997) does not, so the combined SST product produced by Frank Wentz’s Remote Sensing Systems provides complete global coverage only since the launch of Aqua (mid-2002). Through a cooperative project between RSS, NASA, and UAH, the digital data are available from the same (NASA Discover) website where our daily tropospheric temperatures are displayed, but for the SSTs you have to read the daily binary files and compute the anomalies yourself. I use FORTRAN for this, since it’s the only programming language I know.
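For anyone who wants to try it in something other than FORTRAN, the general shape of the processing looks like the Python sketch below. The grid dimensions, flag values, and byte scaling here are assumptions for illustration; the actual file layout is documented by RSS and should be followed. The one genuinely important detail is the cos(latitude) area weighting when averaging a lat/lon grid:

```python
import numpy as np

NLAT, NLON = 720, 1440  # 1/4-deg global grid (assumed layout)

def read_daily_sst(path):
    raw = np.fromfile(path, dtype=np.uint8).reshape(NLAT, NLON).astype(float)
    raw[raw > 250] = np.nan   # assumed flag values: land, rain, no data
    return raw * 0.15 - 3.0   # assumed scaling from byte counts to deg. C

def global_mean_sst(grid):
    # weight each cell by cos(latitude): 1/4-deg cells shrink toward the poles
    lats = np.deg2rad(np.linspace(-89.875, 89.875, NLAT))
    w = np.broadcast_to(np.cos(lats)[:, None], grid.shape)
    valid = np.isfinite(grid)
    return np.sum(np.where(valid, grid * w, 0.0)) / np.sum(w * valid)
```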

As can be seen in the following plot of running 11-day average anomalies, July 2009 was indeed the warmest month during the relatively short Aqua satellite period of record, with the peak anomaly occurring about July 18.

[Figure: AMSRE-SST-global]

The large and frequent swings in global average temperature are real, and result from changes in the rate at which water evaporates from the ocean surface. These variations are primarily driven by tropical Intraseasonal Oscillations, which change tropical-average surface winds by about 2 knots from lowest wind conditions to highest wind conditions.

As can be seen, the SSTs started to fall fast during the last week of July. If you are wondering what I think they will do in the coming months, well, that’s easy…I have no clue.


July 2009 Global Temperature Update: +0.41 deg. C

August 5th, 2009


YR MON GLOBE NH SH TROPICS
2009 1 +0.304 +0.443 +0.165 -0.036
2009 2 +0.347 +0.678 +0.016 +0.051
2009 3 +0.206 +0.310 +0.103 -0.149
2009 4 +0.090 +0.124 +0.056 -0.014
2009 5 +0.045 +0.046 +0.044 -0.166
2009 6 +0.003 +0.031 -0.025 -0.003
2009 7 +0.410 +0.211 +0.609 +0.427

July 2009 experienced a large jump in the global average tropospheric temperature anomaly, from +0.00 deg. C in June to +0.41 deg. C in July, with the tropics and southern hemisphere showing the greatest warming.

NOTE: For those who are monitoring the daily progress of global-average temperatures here, we will be switching from NOAA-15 to Aqua AMSU in the next few weeks, which will provide more accurate tracking on a daily basis. We will be including both our lower troposphere (LT) and mid-tropospheric (MT) pre-processing of the data.


My Favorite Renewable Energy Concept: The Solar Updraft Tower

August 5th, 2009

There are many different ways that you can extract usable energy from sunlight, and they all have their advantages and disadvantages. Historically, the biggest disadvantage has been cost when compared to more traditional sources of energy, such as coal-fired power plants. If solar power were an economical and practical alternative to other forms of energy generation, it would already be deployed on a wide scale.

I think it is only a matter of time before renewable energy sources become more cost competitive. The question is which methods make the most sense. My favorite idea is the ‘Solar Tower’ (or ‘solar updraft tower’, or ‘solar chimney’), an artist’s rendering of which is shown below.

[Figure: Solar-Tower]

While most people have never heard of it, the Solar Tower design was implemented on a small scale in Spain years ago to test the concept. More recently, a privately-funded company called EnviroMission has been working toward the construction of one or more 200 megawatt power plants in the Australian Outback. The company has also been actively pursuing plans to build power plants in China and Nevada.

The design appeals to me because it harnesses the weather, albeit on a small scale. Specifically, it collects the daily production of warm air that forms near the ground, and funnels all of that warm air into a chimney where turbines are located to extract energy from the rising air. It’s a little like wind tower technology, but rather than just extracting energy from whatever horizontally-flowing wind happens to be passing by, the Solar Tower concentrates all of that warm air heated by the ground into the central tower, or chimney, where the air naturally rises. Even on a day with no wind, the solar tower will be generating electricity while conventional wind towers are sitting there motionless.

The total amount of energy that can be generated by a Solar Tower depends upon two main factors: (1) how much land area is covered by the clear canopy, and (2) the total height of the tower. EnviroMission’s baseline design has included a glass canopy covering up to several square miles of desert land, and a tower 1,000 meters tall. Such a tower would be the tallest manmade structure of any kind in the world, although more recently EnviroMission has been talking about several smaller-scale power plants as a more cost-effective approach. Since the design is proprietary, details have remained secret.

Since the Solar Tower is based upon physical processes that people like me deal with routinely in our research, I can immediately see ways in which the efficiency of the design can be maximized. For instance, a third major factor that also determines how much energy would be generated is the temperature difference between the power plant’s surroundings and the air underneath the canopy. After all, it is that temperature difference which provides the energy source, since warm air is less dense than cool air, and so ‘wants’ to rise.
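Putting rough numbers on those three factors shows why tower height matters so much: the ideal “chimney” efficiency of a solar updraft tower grows linearly with height, roughly as g*H/(cp*T0). Every number in the sketch below is my own illustrative assumption, not an EnviroMission design value, but the result lands in the neighborhood of the advertised 200 MW class:

```python
# Back-of-envelope Solar Tower output (all values are illustrative assumptions).
g, cp_air, T0 = 9.81, 1005.0, 300.0  # gravity, air heat capacity (J/kg/K), ambient K

tower_height  = 1000.0   # m
canopy_area   = 25.0e6   # m^2, roughly 10 square miles of glass canopy
solar_flux    = 800.0    # W/m^2 of sunshine reaching the canopy (daytime)
collector_eff = 0.5      # fraction of sunlight that ends up heating the air
turbine_eff   = 0.8      # fraction of the updraft power the turbines capture

# Ideal "chimney" efficiency: the fraction of collected heat converted into
# the kinetic energy of the updraft, growing linearly with tower height.
chimney_eff = g * tower_height / (cp_air * T0)

power_watts = solar_flux * canopy_area * collector_eff * chimney_eff * turbine_eff
print(f"chimney efficiency:    {chimney_eff:.1%}")           # about 3%
print(f"estimated peak output: {power_watts / 1e6:.0f} MW")  # order of 200+ MW
```

The temperature difference between the air under the canopy and the ambient air is what sets the strength of the updraft in the first place.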

This means that you could increase the power plant’s output by building it where the sand is quite reflective (bright), and then covering the ground under the canopy area with black rock — say crushed lava rock — which would then get the hottest when the sun shines on it.

One of the advantages of a Solar Tower over using photovoltaic cells to generate electricity is that the Solar Tower keeps generating electricity even after the sun goes down. Because the ground under the canopy stays warm at night, it continues to warm the air while the land around the canopy cools much more rapidly. This maintains a temperature difference between the canopy-covered air and the plant’s surroundings, which translates into continued energy generation at night. Additionally, the Solar Tower does not require the huge volume of water that coal-fired plants use.

From what I’ve read, the Solar Tower is potentially cost-competitive with coal-fired power plants, but the investment in infrastructure is large, and there is still some uncertainty (and therefore investment risk) involved in just how efficient Solar Towers would be.

If the government insists on providing subsidies for renewable energy, I think Solar Towers might be one of the best investments of the public’s money. But the last I knew, the U.S. Department of Energy was not actively pursuing the Solar Tower technology. I have no idea whether government involvement would help or hurt the private efforts of EnviroMission. Ultimately, the technology needs to be sustainable from a cost standpoint, and when that point is reached it is best if the government stays out of the way.

But just from a political standpoint, I think the Obama administration would benefit by pursuing Solar Towers as a national goal to reduce our dependence on foreign energy. If electricity in the sunnier parts of the country were cheap enough, then plug-in hybrid cars would become more popular there as well, which would reduce our dependence on foreign oil.

A very cool computer-generated video tour of an EnviroMission Solar Tower design can be viewed here (be sure to turn the sound up!). As the video shows, a Solar Tower 1,000 meters tall would also provide quite a tourist attraction.


STILL No Tropical Storms? Must Be Global Warming

August 3rd, 2009

So, where are all of the news stories about the fact we’ve had no tropical storms yet this year? As can be seen in the following graphic, as of this date in 2005 we already had 8 named storms in the Atlantic basin. And tomorrow, August 4, that number will increase to 9. In 2005 we were even told to expect more active hurricane seasons from now on because of global warming.

[Figure: named-storms-climatology]

Of course, even though it is interesting that the 2009 tropical season is off to such a slow start, it may well have no significance in terms of long-term trends. But the lack of news coverage on the subject does show the importance of unbiased reporting when it comes to global warming. Let me explain.

Let’s say we really were in a slow, long-term cooling trend. What if the media decided they would only do news stories when there are record high temperatures or heat waves, ignoring record cold, and would then attribute those events to human-caused global warming? This would end up making the public fearful of global warming, even if the real threat was from global cooling.

The public expects – or used to expect – the media to report on all sides of important issues, so that we can be better informed on the state of the world. There have always been high temperature records set, and there have always been heat waves. In some sense, unusual weather is normal. It might not happen every day, but you can be assured, it will happen. But reporting on heat-related events while ignoring cold temperature records or events that do not support the claims of global warming theorists will lead to a bias in the way the public views climate change.

Of course, someone might come along and claim that global warming has disrupted tropical storm activity this year, and so an unusually quiet season will also be claimed as evidence of global warming. This has already happened to some extent with cold weather and more snow being blamed on global warming.

But when the natural climate cycle deniers reach that level of desperation, they only appear that much more ridiculous to those of us who have not yet lost our ability to reason.


Rise of the Natural Climate Cycle Deniers

July 29th, 2009

Those who promote the theory that mankind is responsible for global warming have been working for the past 20 years on a revisionist climate history: one in which climate was always in a harmonious state of balance until mankind came along and upset that balance.

The natural climate cycle deniers have tried their best to eliminate the Medieval Warm Period and the Little Ice Age from climate data records by constructing the uncritically acclaimed and infamous “hockey stick” of global temperature variations (or non-variations) over the last one- to two-thousand years.

Before being largely discredited by a National Academies review panel, this ‘poster child’ for global warming was heralded as proof of the static nature of the climate system, and that only humans had the power to alter it.

While the panel was careful to point out that the hockey stick might be correct, they said that the only thing science could say for sure is that it has been warmer lately than at any time in the last 400 years. Since most of those 400 years were during the Little Ice Age, I would say this is a good thing. It’s like saying this summer has been warmer than any period since…last fall.

These deniers claim that the Medieval Warm Period was only a regional phenomenon, restricted to Europe. Same for the Little Ice Age. Yet when a killer heat wave occurred in France in 2003, they hypocritically insisted that this event had global significance, caused by anthropogenic ‘global’ warming.

The strong warming that occurred up until 1940 is similarly a thorn in the side of the natural climate cycle deniers, since atmospheric carbon dioxide increases from fossil fuel burning before 1940 were too meager to have caused it. So, the ‘experts’ are now actively working on reducing the magnitude of that event by readjusting some ship measurements of ocean temperatures from that era.

Yet, they would never dream of readjusting the more recent thermometer record, which clearly has localized urban heat island effects that have not yet been removed (e.g., see here and here). As Dick Lindzen of MIT has pointed out, it is highly improbable that every adjustment the climate revisionists ever make to the data should always just happen to be in the direction of agreeing with the climate models.

Of course, global warming has indeed occurred…just as global cooling has occurred before, too. While the global warming ‘alarmists’ claim we ‘skeptics’ have our heads stuck in the sand about the coming climate catastrophe, they don’t realize their heads are stuck in the sand about natural climate variability. Their repeated references to skeptics’ views as “denying global warming” are evidence of either their dishonesty, or their stupidity.

The climate modelers’ predictions of the coming global warming Armageddon are of a theoretical event in the distant future, created by mathematical climate models, and promoted by scientists and politicians who have nothing to lose since it will be decades before they are proved wrong. They profess the utmost confidence in these theoretical predictions, yet close their eyes and ears to the natural rhythms exhibited by nature, both in the living and non-living realms, in the present, and in the previously recorded past.

They readily admit that cycles exist in weather, but cannot (or will not) entertain the possibility that cycles might occur in climate, too. Every change the natural cycle deniers see in nature is inevitably traced to some evil deed done by humans. They predictably prognosticate such things as, “If this trend continues, the Earth will be in serious trouble”. To them, the behavior of nature is simple, static, always in balance – if not sacred…in a quasi-scientific sort of way, of course.

They cannot conceive of nature changing all by itself, even though evidence of that change is all around us. Like the more activist environmentalists, their romantic view of a peaceful, serene natural world ignores the stark reality that most animals on the Earth are perpetually locked in a life-or-death struggle for existence. The balances that form in nature are not harmonious, but unsteady and contentious stalemates — like the Cold War between the United States and the former Soviet Union.

Meanwhile, humans are doing just what the other animals are doing: modifying and consuming their surroundings in order to thrive. The deniers curiously assert that all other forms of life on the planet have the ‘right’ to do this – except humans.

And when the natural cycle deniers demand changes in energy policy, most of them never imagine that they might personally be inconvenienced by those policies. Like Al Gore, Robert F. Kennedy, Jr., and Leonardo DiCaprio, they scornfully look down upon the rest of humanity for using up the natural resources that they want for themselves.

And the few who freely choose to live such a life then want to deny others the freedom to choose, by either regulating or legislating everyone else’s behavior to conform to their own behavior.

The natural climate cycle deniers’ supposedly impartial science is funded by government research dollars that would mostly dry up if the fears of manmade global warming were to evaporate. With contempt they point at the few million dollars that Exxon-Mobil spent years ago to support a few scientists who maintained a healthy skepticism about the science, while the scientific establishment continues to spend tens of billions of your tax dollars.

So, who has the vested financial interest here?

Even the IPCC in its latest (2007) report admits that most of the warming in the last 50 years might be natural in origin — although they consider it very unlikely, with (in their minds) less than 10% probability. So, where is the 10% of the global warming research budget to study that possibility? It doesn’t exist, because — as a few politicians like to remind us — “the science is settled”.

The natural climate cycle deniers claim to own the moral high ground, because they are saving future generations from the ravages of (theoretical) anthropogenic climate change. A couple of them have called for trials and even executions of scientists who happen to remain skeptical of humanity being guilty of causing climate change.

Yet the energy policies they advocate are killing living, breathing poor people around the world, today. Those who are barely surviving in poverty are being pushed over the edge by rising corn prices (because of ethanol production), and decimated economies from increasing regulation and taxation of carbon based fuels in countries governed by self-righteous elites.

But the tide is turning. As the climate system stubbornly refuses to warm as much as 95% of the climate models say it should, the public is turning skeptical as well. Only time will tell whether our future is one of warming, or of cooling. But if the following average of 18 proxies for global temperatures over the last 2,000 years is any indication, it is unlikely that global temperatures will remain constant for very long.

[Figure: Loehle 2,000-year proxy temperature reconstruction]

The graph shows an average of 18 non-tree ring proxies of temperature from 12 locations around the Northern Hemisphere, published by Craig Loehle in 2007 and later revised in 2008. It clearly shows that natural climate variability happens, with features that coincide with known events in human history.

As Australian geologist Bob Carter has been emphasizing, we shouldn’t be worrying about manmade climate change. We should instead fear that which we know occurs: natural climate change. Unfortunately, it is the natural climate cycle deniers who are now in control of the money, the advertising, the news reporting, and the politicians.


New Study in Science Magazine: Proof of Positive Cloud Feedback?

July 26th, 2009

(edited 12:15 p.m. 7/26/09 for clarity)
(3:25 p.m. DOH! Hawaii IS part of the U.S…)

I’m getting a lot of e-mails asking about a new study by Clement et al. published last week in Science, which shows that since the 1950s, periods of warmth over the northeastern Pacific Ocean have coincided with less cloud cover. The authors cautiously speculate that this might be evidence of positive cloud feedback.

This would be bad news for the Earth and its inhabitants since sufficiently strong positive cloud feedbacks would have the potential of amplifying the small amount of direct warming from our carbon dioxide emissions to disastrous proportions.

The authors are appropriately cautious about the interpretation of their results, which are indeed interesting. The very fact that the only 2 IPCC climate models that behaved in a manner similar to the observations were the most sensitive AND the least sensitive models shows that interpretation of the study results as proof of positive cloud feedback would be very premature.

But how could such a dichotomy exist? How could what seems to be clear evidence of positive feedback in the observations agree with both the climate model that predicts the MOST global warming for our future, as well as the model that predicts the LEAST warming for our future?

In my view, the interpretation of their results in terms of cloud feedback has the same two problems that previous studies have had. These problems have to do with (1) the regional character of the study, and (2) the issue of causation when analyzing cloud and temperature changes.

Problem #1: Interpretation of Feedbacks from a Regionally-Limited Study
Back in 1991, Ramanathan and Collins published a paper in Nature which indirectly argued for negative cloud feedbacks based upon the observation that regional warming in the deep tropics is accompanied by more convective cloud (thunderstorm) activity, which then shades the ocean from solar heating. This was the basis for what became known as the “Thermostat Hypothesis”, an unfortunate name since there are many potential thermostatic mechanisms in the climate system.

But as subsequently pointed out by other researchers, cloud feedbacks cannot be deduced based upon the behavior of only one branch of vertical atmospheric circulation systems — in their case, the ascending branches of the tropical and subtropical circulation systems known as the Hadley and Walker circulations. This is because a change in the ascending branch of these circulations, which occurs over the warmest waters of the deep tropics, is always accompanied by a change in the descending branch, and the two changes usually largely cancel out.

The new Clement et al. study has the same problem, but in their case they studied changes in the strength of one portion of the descending branch of an atmospheric circulation system, generally between Hawaii and Mexico, where there is little precipitation and relatively sunny conditions prevail. So, even if the regional cloud response they measured was indeed feedback in origin (the ‘causation’ issue which I address as Problem #2, below), it must be lumped in with whatever regional changes occurred elsewhere in concert with it before one can meaningfully address cloud feedbacks.

Unfortunately, feedback diagnosis is further complicated by the fact that these atmospheric circulation cells are interconnected all around the world. And since cloud feedbacks are, strictly speaking, most meaningfully addressed only when the whole circulation system is included — both ascending and descending branches — we need global measurements. Except for our relatively recent satellite monitoring capabilities, though, we do not have sufficiently accurate cloud measurements over all regions of the Earth for any extended period of time, certainly not for a record like the ship observations extending back to the 1950s used in the Clement study.

But there is a bigger – and less well appreciated – problem in the inference of positive cloud feedback from studies like that of Clement et al.: that of causation.

Problem #2: The Importance of Causation in Determining Cloud Feedbacks
I am now convinced that the causation issue is at the heart of most misunderstandings over feedbacks. It is the issue that I spend most of my research time on, and we have been studying it with output from 18 of the fully coupled climate models tracked by the IPCC, a simple climate model we developed, and with satellite data.

[As an aside, as a result of reviews of our extensive paper on the subject that was submitted to the Journal of Geophysical Research, we are revamping the paper at the request of the journal editor, and working on a resubmission. So, I am hopeful that it will eventually be published in some form in JGR.]

Using the example of the new Clement et al. study (the observation that periods of unusual warmth on the northeast Pacific coincided with periods of less cloud cover)…what if most of that warming was actually caused by the decrease in clouds, rather than the decrease in clouds being caused by the warming (which would be, by definition, positive cloud feedback)? In other words, what if causation is actually working in the opposite direction to feedback?

It turns out that, when a circulation-induced change in clouds causes a temperature change, it can almost totally obscure the signature of feedback — even if the feedback is strongly negative. This is easily demonstrated with a simple forcing-feedback model, which is what one portion of our new JGR paper submission deals with. It was also demonstrated theoretically in our 2008 Journal of Climate paper.
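Here is a toy version of that kind of forcing-feedback demonstration. The parameter values are my own illustrative assumptions, not those from our papers; the point is only that internally generated (non-feedback) cloud forcing contaminates the regression used to diagnose feedback:

```python
import numpy as np

np.random.seed(0)

cp  = 8.4e8    # ocean mixed-layer heat capacity, J/m^2/K (assumed)
lam = 3.0      # true (negative) feedback parameter, W/m^2/K (assumed)
dt  = 86400.0  # one-day time step
n   = 30 * 365 # thirty years of daily data

# Internally generated cloud "forcing": red noise decorrelating over ~10 days
N = np.zeros(n)
for i in range(1, n):
    N[i] = 0.9 * N[i - 1] + np.random.normal(0.0, 1.0)

# Integrate cp * dT/dt = N - lam * T
T = np.zeros(n)
for i in range(1, n):
    T[i] = T[i - 1] + dt * (N[i - 1] - lam * T[i - 1]) / cp

# Diagnose "feedback" the usual way: regress the net radiative flux anomaly
# against temperature. The internal cloud forcing contaminates the slope.
flux = lam * T - N                   # net loss of radiant energy, W/m^2
lam_diagnosed = np.polyfit(T, flux, 1)[0]

print(f"true feedback:      {lam:.2f} W/m^2/K")
print(f"diagnosed feedback: {lam_diagnosed:.2f} W/m^2/K (biased toward zero)")
```

In runs like this, the regression recovers only a fraction of the feedback that was specified, even though the specified feedback is strongly negative.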

In other words, a cloud change causing a temperature change gives the illusion of positive feedback – even if negative feedback is present.

In the case of the new Science magazine study, one of the major changes seen in temperature and cloudiness (and other weather parameters) occurred during the Great Climate Shift of 1977, an event which is known to have been accompanied by changes in atmospheric circulation patterns, such as immediate and prolonged warming in Alaska and the Arctic. And since circulation changes can cause cloud changes, this is one example of a situation where one can be fooled regarding the direction of causation.

This issue of the direction of causation is not easy to get around. While I’ve found some researchers who think this is only a matter of semantics, and who claim that all we need to know is how clouds and temperature vary together, our 2008 J. Climate paper demonstrated that this is definitely not the case.

The bottom line is that it is very difficult to infer positive cloud feedback from observations of warming accompanying a decrease in clouds, because a decrease in clouds causing warming will always “look like” positive feedback.

Based upon the few conversations I have had with other researchers in the field on this subject, the issue of causation remains a huge source of confusion among climate experts on the subject of feedbacks. Even though our 2008 Journal of Climate paper on the subject outlined the basic issue, the climate research community has still failed to grasp its significance. Hopefully, the more thorough treatment we provide in our JGR resubmission will help the community better understand the problem — if it ever gets published.


How Do Climate Models Work?

July 13th, 2009

Since fears of manmade global warming — and potential legislation or regulations of carbon dioxide emissions — are based mostly upon the output of climate models, it is important for people to understand the basics of what climate models are, how they work, and what their limitations are.

Climate Models are Computer Programs

Generally speaking, a climate model is a computer program mostly made up of mathematical equations. These equations quantitatively describe how atmospheric temperature, air pressure, winds, water vapor, clouds, and precipitation all respond to solar heating of the Earth’s surface and atmosphere. Also included are equations describing how the so-called “greenhouse” elements of the atmosphere (mostly water vapor, clouds, carbon dioxide, and methane) keep the lower atmosphere warm by providing a radiative ‘blanket’ that partly controls how fast the Earth cools by loss of infrared to outer space.

The equation computations are made at individual gridpoints on a three-dimensional grid covering the Earth (see image below).
[Figure: climate-model-1]

In “coupled” climate models, there are also equations describing the three-dimensional oceanic circulation, how it transports absorbed solar energy around the Earth, and how it exchanges heat and moisture with the atmosphere. Modern coupled climate models also include a land model that describes how vegetation, soil, and snow or ice cover exchange energy and moisture with the atmosphere.

You can make computer visualizations of how these processes evolve as the model is run on the computer, such as the nice example shown below produced by Gary Strand at the National Center for Atmospheric Research (NCAR). This particular image shows sea surface temperatures, near-surface winds, and sea ice concentrations in one of the NCAR models at some point during a run of the model on a supercomputer. [Note the model does not have an actual physical shape in the computer…it is just a long series of computations.]
[Figure: climate-model-2]

If you want to see how a climate model simulation evolves over time, a striking YouTube video of the NCAR CCSM climate model is shown here.

The Importance of Energy Balance in Climate Models

Climate models are usually used to study how the Earth’s climate might respond to small changes in either the intensity of sunlight being absorbed by the Earth, or in the case of anthropogenic global warming, the addition of manmade greenhouse gases that further reduce the atmosphere’s ability to cool to outer space.

For it is the balance between these two flows of radiant energy – solar energy into, and infrared energy out of, the climate system – that is believed to control the average temperature of the climate system over the long run. If the two radiant energy flows are in balance, then the average temperature of the climate system remains pretty constant. If they are out of balance, the average temperature of the climate system can be expected to change.

If this fundamental concept of “energy balance” sounds foreign to you, it shouldn’t because it is part of your everyday experience. For instance, the temperature of a pot of water warming on the stove will only increase as long as the rate of energy gain from the stove is greater than the rate of energy loss by the pot to its surroundings. Once the pot warms to the point where the rate of energy loss equals the rate of energy gain, its temperature then will remain constant.

Similarly, the temperature of the inside of a car sitting in the sun will increase only until the rate at which sunlight is absorbed by the car equals the rate at which heat is lost by the hot car to its surroundings.

In the same manner, any imbalance in the average flows of energy in and out of the climate system can be expected to cause a temperature change. When averaged over the whole Earth, the rates of absorbed solar energy and infrared energy lost are estimated to be about 235 or 240 Watts per square meter. I say “estimated” because our satellite system for measuring the radiative energy budget of the Earth is still not quite good enough to measure it to this level of absolute accuracy.

A variety of adjustable parameters in the model are tuned until the model approximates the average seasonal change in weather patterns around the world, and also absorbs sunlight and emits infrared energy to space at a global-average rate of about 235 or 240 Watts per sq. meter. The modelers tend to assume that if the model does a reasonably good job of mimicking these basic features of the climate system, then the model will be able to predict global warming. This might or might not be a good assumption – no one really knows.

It is also important to understand that even if a climate model handled 95% of the processes in the climate system perfectly, this does not mean the model will be 95% accurate in its predictions. All it takes is one important process to be wrong for the models to be seriously in error. For instance, how the model alters cloud cover with warming can make the difference between anthropogenic global warming being catastrophic, or just lost in the noise of natural climate variability.

Anthropogenic Global Warming in Climate Models

Our addition of carbon dioxide to the atmosphere by burning fossil fuels is estimated to have caused an imbalance of about 1.5 Watts per sq. meter between the 235 to 240 Watts per sq. meter of average absorbed sunlight and emitted infrared radiation. The extra CO2 makes the infrared greenhouse blanket covering the Earth slightly thicker. This energy imbalance is too small to be measured from satellites; it must be computed based upon theory.

So, if the Earth was initially in a state of energy balance, and the rate of sunlight being absorbed by the Earth was exactly 240 Watts per sq. meter, then the rate of infrared loss to outer space would have been reduced from 240 Watts per sq. meter to 238.5 Watts per sq. meter (240 minus 1.5).

This energy imbalance causes warming in the climate model. And since a warmer Earth (just like any warmer object) loses infrared energy faster than a cooler one, the modeled climate system will warm up until energy balance is restored. At that point, the rate at which infrared energy is lost to space once again equals the rate at which sunlight is absorbed by the Earth, and the temperature will once again remain fairly constant.
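This adjustment is easy to demonstrate numerically. Below is a minimal sketch of the process just described; the feedback parameter and ocean heat capacity are illustrative assumptions of mine, not values from any particular model:

```python
# Impose a 1.5 W/m^2 reduction in infrared loss and let the model ocean
# warm until the energy flows balance again.
F   = 1.5      # radiative forcing, W/m^2
lam = 3.3      # extra W/m^2 of infrared loss per deg. C of warming (assumed)
cp  = 8.4e8    # heat capacity of ~200 m of ocean, J/m^2/K (assumed)
dt  = 86400.0  # one-day time step, seconds

T = 0.0        # warming relative to the old equilibrium, deg. C
for day in range(30 * 365):
    imbalance = F - lam * T    # W/m^2 still out of balance
    T += dt * imbalance / cp   # warming is proportional to the imbalance

print(f"warming after 30 years:    {T:.2f} deg. C")
print(f"equilibrium warming F/lam: {F / lam:.2f} deg. C")
```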

What Determines How Much the Model will Warm?

The largest source of uncertainty in climate modeling is this: will the climate system act to reduce, or enhance, the small amount of CO2 warming?

The climate model (as well as the real climate system) has different ways of restoring balance after an energy imbalance like that from adding CO2 to the atmosphere. The simplest response would be for the temperature alone to increase. For instance, it can be calculated theoretically that the ~40% increase in atmospheric CO2 humans are believed to have caused in the last 150 years would only cause about 0.5 deg. C of warming to restore energy balance. This theoretical response is called the “no feedback” case because nothing other than temperature changed.
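For those who like to see where a number like that comes from, here is the back-of-envelope version, treating the Earth as a blackbody radiating at its effective temperature of about 255 K (my simplification, good only for a rough estimate):

$$\lambda_0 = \frac{d(\sigma T^4)}{dT} = 4\sigma T_e^3 \approx 4\,(5.67\times10^{-8})\,(255)^3 \approx 3.8\ \mathrm{W\,m^{-2}\,K^{-1}}$$

$$\Delta T_{\text{no feedback}} \approx \frac{F}{\lambda_0} \approx \frac{1.5\ \mathrm{W\,m^{-2}}}{3.8\ \mathrm{W\,m^{-2}\,K^{-1}}} \approx 0.4\ \mathrm{deg.\ C}$$

which is in the neighborhood of the 0.5 deg. C figure quoted above; the exact number depends on how the forcing and the effective temperature are estimated.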

But a change in temperature can be expected to change other elements of the climate system, like clouds and water vapor. These other, indirect changes are called feedbacks, and they can either amplify the CO2-only warming, or reduce it. As shown in the following figure, all 20+ climate models currently tracked by the United Nations’ Intergovernmental Panel on Climate Change (IPCC) now amplify the warming.
[Figure: 21-ipcc-climate-models]

This amplification is mostly through an increase in water vapor — Earth’s main greenhouse gas — and through a decrease in low- and middle-altitude clouds, the primary effect of which is to let more sunlight into the system and cause further warming. In other words, the models amplify the CO2 warming with positive water vapor feedback and positive cloud feedback.
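Written as an equation (this is the standard textbook form, not any one model’s internal machinery), the equilibrium warming for a forcing F is

$$\Delta T = \frac{F}{\lambda_0 - \sum_i \lambda_i}$$

where $\lambda_0$ is the no-feedback response discussed earlier and each $\lambda_i$ is a feedback. Positive feedbacks (water vapor and clouds, in the IPCC models) shrink the denominator and amplify the warming; negative feedbacks enlarge it and damp the warming.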

But is this the way that the real climate system operates?

Uncertainties in Climate Model Cloud and Water Vapor Processes

The climate model equations are only approximations of the physical processes that occur in the atmosphere. While some of those approximations are highly accurate, some of the most important ones from the standpoint of climate change are unavoidably crude. This is because the real processes they represent are either (1) too complex to include in the model and still have the model run fast on a computer, or (2) because our understanding of those processes is still too poor to accurately model them with equations.

This is especially true for cloud formation and dissipation, which in turn has a huge impact on how much sunlight is absorbed by the climate system. The amount of cloud cover generated in the model in response to solar heating helps control the Earth’s temperature, so the manner in which clouds change with warming is of huge importance to global warming predictions.

Climate modelers are still struggling to get the models to produce cloud cover amounts and types like those seen in different regions, and during different seasons. The following NASA MODIS image of the western U.S. and eastern Pacific Ocean shows a wide variety of cloud types which are controlled by a variety of different processes.
[Figure: blue-marble-west-us]

The complexity of clouds is intuitively understood by everyone, experts and non-experts alike. It is probably safe to say that all climate modelers recognize that the modeling of cloud behavior accurately is very difficult, and is something which has not yet been achieved in global climate models.

All of the IPCC climate models reduce low- and middle-altitude cloud cover with warming, a positive feedback. This is the main reason for the differences in warming produced by different climate models (Trenberth and Fasullo, 2009). I predict that this kind of model behavior will eventually be shown to be incorrect. And while the authors of that study were loath to admit it, there is already some evidence showing up in the peer-reviewed scientific literature that this is the case (Spencer et al., 2007; Caldwell and Bretherton, 2009).

I believe that the modelers have mistakenly interpreted decreased cloud cover with warming in the real climate system as positive cloud feedback (warming causing a cloud decrease), when in reality it was actually the decrease in clouds that mostly caused the warming. This is basically an issue of causation: one direction of causation has been ignored when trying to estimate causation in the opposite direction (Spencer and Braswell, 2008).

The fundamental issue of causation in climate modeling isn’t restricted to just clouds. While warming will, on average, cause an increase in low-level water vapor, precipitation systems control the water vapor content of most of the rest of the atmosphere. As shown in the following illustration, while evaporation over most of the Earth’s surface is continuously trying to enhance the Earth’s natural greenhouse effect by adding water vapor, precipitation is continuously reducing the greenhouse effect by converting that water vapor into clouds, then into precipitation.
[Figure: evaporation-precipitation]

But while the physics of evaporation at the Earth’s surface is understood pretty well, the processes controlling the conversion of water vapor into precipitation in clouds are complex and remain rather mysterious. And it is the balance between these two processes — evaporation and precipitation — that determines atmospheric humidity.

Even in some highly complex ‘cloud resolving models’ – computer models that use much more complex computations to actually ‘grow’ clouds in the models – the point at which a cloud starts precipitating in the model is given an ad hoc constant value. I consider this to be a huge source of uncertainty, and one that is not appreciated even by most climate modelers. The modelers tune the models to approximate the average relative humidity of the atmosphere, but we still do not understand from ‘first principles’ why the average humidity has its observed value. We would have to thoroughly understand all of the precipitation processes, which we don’t.
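To make the “ad hoc constant” concrete: the classic Kessler-type autoconversion rule used in many cloud models looks like the sketch below (the constants are typical textbook values, not those of any particular model). Change q_crit and you change the simulated precipitation efficiency, and with it the humidity of the model atmosphere.

```python
def kessler_autoconversion(q_cloud, k=1.0e-3, q_crit=0.5e-3):
    """Rate (kg/kg/s) at which cloud water is converted to rain.

    q_cloud: cloud water mixing ratio, kg/kg
    k:       rate constant, 1/s -- an ad hoc tuning parameter
    q_crit:  the hard-wired threshold discussed above; no rain forms
             until the cloud water content exceeds it
    """
    return k * max(q_cloud - q_crit, 0.0)
```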

In the end, many of the approximations in climate models will probably end up being not very important for forecasting climate change…but it takes only one critical process to be wrong for model projections of warming to be greatly in error. The IPCC admits that their largest source of uncertainty is low cloud feedback, that is, how low cloud cover will change with warming. And, as just mentioned, I believe how precipitation efficiency might change with temperature is also a wild card in climate model predictions.

Sources of Global Warming: Humans or Nature?

At this point hopefully you understand that climate modelers think global warming is the result of humans ‘upsetting’ the Earth’s radiative energy balance. And I agree with them that adding more carbon dioxide to the atmosphere must have some effect…but how large is this change in comparison to the energy imbalances the climate system imposes upon itself?

It turns out that the modelers have made a critical assumption that ends up leading to their conclusion that the climate system is very sensitive to our greenhouse gas emissions: that the climate system was in a state of energy balance in the first place.

There is a pervasive, non-scientific belief in the Earth sciences that nature is in a fragile state of balance. Whether it is ecosystems or the climate system, you will hear or read scientists’ claims about the supposed fragility of nature.

But this is a subjective concept, not a scientific one. Still, it makes its way into the scientific literature (read the abstract to this seminal paper on the first satellite measurements of the Earth’s energy budget…look for “delicately balanced”). Just because nature tends toward a balance does not mean that balance is in any way ‘fragile’. And what does ‘fragile’ even mean when nature itself is always upsetting that balance anyway?

Why is this important to climate modeling? Because if climate researchers ignore naturally-induced climate variability, and instead assume that most climate changes are due to the activity of humans, they will inevitably come to the conclusion that the climate system is fragile: that is, that feedbacks are positive. It’s a little like some ancient tribe of people believing that severe weather events are the result of their moral transgressions.

If the warming observed during the 20th Century was due to human greenhouse gas emissions, then the climate system must be pretty sensitive (positive feedbacks). But if the warming was mostly due to a natural change in cloud cover, then the climate system is more likely to be insensitive (negative feedbacks). And there is no way to know whether natural cloud changes occurred during that time simply because our global cloud observations over the last century are nowhere near accurate enough.

So, climate modelers simply assume that there are no natural long-term changes in clouds, water vapor, etc. But they do not realize that in the process they will necessarily come to the conclusion that the climate system is very sensitive (feedbacks are positive). As a result, they program climate models so that they are sensitive enough to produce the warming in the last 50 years with increasing carbon dioxide concentrations. They then point to this as ‘proof’ that the CO2 caused the warming, but this is simply reasoning in a circle.

Climate modelers have simply assumed that the Earth’s climate system was in a state of energy balance before humans started using fossil fuels. But as is evidenced by the following temperature reconstruction for the last 2,000 years (from Loehle, 2007), continuous changes in temperature necessarily imply continuous changes in the Earth’s energy balance.
[Figure: 2000-years-of-global-temperatures-small]

And while changes in solar activity are one possible explanation for these events, it is also possible that there are long-term, internally-generated fluctuations in global energy balance brought about by natural cloud or water vapor fluctuations. For instance, a change in cloud cover will change the amount of sunlight being absorbed by the Earth, thus changing global temperatures. Or, a change in precipitation processes might alter how much of our main greenhouse gas — water vapor — resides in the atmosphere. Changes in either of these will cause global warming or global cooling.

But just like that ancient tribe, not understanding that there are physical processes at work in nature that cause storms to occur, climate modelers tend to view climate change as something that is largely human in origin – presumably the result of our immoral burning of fossil fuels.

Faith-Based Climate Modeling

There is no question that much expense and effort has gone into the construction and improvement of climate models. But that doesn’t mean those models can necessarily predict climate 20, 50, or 100 years from now. Ultimately, the climate researcher (and so the politician) must take as a matter of faith that today’s computerized climate models contain all of the important processes necessary to predict global warming.

This is why validating the predictions of any theory is so important to the progress of science. The best test of a theory is to see whether the predictions of that theory end up being correct. Unfortunately, we have no good way to rigorously test climate models in the context of the theory that global warming is manmade. While some climate modelers will claim that their models produce the same “fingerprint” of manmade warming as seen in nature, there really is no such fingerprint. This is because warming due to more carbon dioxide is, for all practical purposes, indistinguishable from warming due to, say, a natural increase in atmospheric water vapor.

The modeler will protest, “But what could cause such a natural change in water vapor?” Well, how about just a small change in atmospheric circulation patterns causing a decrease in low cloud cover over the ocean? That would cause the oceans to warm, which would then warm and humidify the global atmosphere (Compo and Sardeshmukh, 2009). Or how about the circulation change causing a change in wind shear across precipitation systems? This would lead to a decrease in precipitation efficiency, leading to more water vapor in the atmosphere, also leading to a natural ‘greenhouse’ warming (Renno et al., 1994).

To reiterate, just because we don’t understand all of the ways in which nature operates doesn’t mean that we humans are responsible for the changes we see in nature.

The natural changes in climate I am talking about can be thought of as ‘chaos’. Even though all meteorologists and climate researchers agree that chaos occurs in weather, climate modelers seem to not entertain the possibility that climate can be chaotic as well (Tsonis et al., 2007). If they did believe that was possible, they would then have to seriously consider the possibility that most of the warming we saw in the 20th Century was natural, not manmade. But the IPCC remains strangely silent on this issue.

The modelers will claim that their models can explain the major changes in global average temperatures over the 20th Century. While there is some truth to that, it is (1) not likely that theirs is a unique explanation, and (2) not an actual prediction, since the answer (the actual temperature measurements) was known beforehand.

If instead the modelers were NOT allowed to see the temperature changes over the 20th Century, and were then asked to produce a ‘hindcast’ of global temperatures, that would have been a valid prediction. But instead, years of considerable trial-and-error work have gone into getting the climate models to reproduce the 20th Century temperature history, which was already known to the modelers. Some of us would call this just as much an exercise in statistical ‘curve-fitting’ as ‘climate model improvement’.

The point is that, while climate models currently offer one possible explanation for climate change (humanity’s greenhouse gas emissions), it is by no means the only possible one. And any modeler who claims they have found the only possible cause of global warming is being either disingenuous, or they have let their faith overpower their ability to reason.

Even the IPCC (2007) admits there is a 10% chance that they are wrong about humans being responsible for most of the warming observed in the last 50 years. That, by itself, shows that anyone who says “the science is settled” doesn’t know what they are talking about.

Conclusion

There is no question that great progress has been made in climate modeling. I consider computer modeling to be an absolutely essential part of climate research. After all, without running numbers through physical equations in a theoretically-based model, you really can not claim that you understand very much about how climate works.

But given all of the remaining uncertainties, I do not believe we can determine — with any objective level of confidence — whether any of the current model projections of future warming can be believed. Any scientist who claims otherwise either has political or other non-scientific motivations, or they are simply being sloppy.


REFERENCES CITED:

Caldwell, P., and C. S. Bretherton, 2009. Response of a subtropical stratocumulus-capped mixed layer to climate and aerosol changes. Journal of Climate, 22, 20-38.

Compo, G.P., and P. D. Sardeshmukh, 2009. Oceanic influences on recent continental warming, Climate Dynamics, 32, 333-342.

Intergovernmental Panel on Climate Change, 2007. Climate Change 2007: The Physical Science Basis, report, 996 pp., Cambridge University Press, New York City.

Loehle, C., 2007. A 2,000-year global temperature reconstruction based on non-tree ring proxy data. Energy & Environment, 18, 1049-1058.

Renno, N.O., K.A. Emanuel, and P.H. Stone, 1994. A radiative-convective model with an explicit hydrologic cycle: 1. Formulation and sensitivity to model parameters. Journal of Geophysical Research, 99, 14,429-14,441.

Spencer, R.W., W.D. Braswell, J.R. Christy, and J. Hnilo, 2007. Cloud and radiation budget changes associated with tropical intraseasonal oscillations. Geophysical Research Letters, 34, L15707, doi:10.1029/2007GL029698.

Spencer, R.W., and W.D. Braswell, 2008. Potential biases in cloud feedback diagnosis: A simple model demonstration. Journal of Climate, 21, 5624-5628.

Trenberth, K.E., and J.T. Fasullo, 2009. Global warming due to increasing absorbed solar radiation. Geophysical Research Letters, 36, L07706, doi:10.1029/2009GL037527.

Tsonis, A. A., K. Swanson, and S. Kravtsov, 2007. A new dynamical mechanism for major climate shifts. Geophysical Research Letters, 34, L13705, doi:10.1029/2007GL030288.

June 2009 Global Temperature Anomaly Update: 0.00 deg. C

July 3rd, 2009

(edit at 3:10 pm CDT: changed “tropics” to “Southern Hemisphere”)

YR MON GLOBE   NH   SH   TROPICS
2009   1   0.304   0.443   0.165   -0.036
2009   2   0.347   0.678   0.016   0.051
2009   3   0.206   0.310   0.103   -0.149
2009   4   0.090   0.124   0.056   -0.014
2009   5   0.045   0.046   0.044   -0.166
2009   6   0.001   0.032   -0.030   -0.003

1979-2009 Graph

June 2009 saw another — albeit small — drop in the global average temperature anomaly, from +0.04 deg. C in May to 0.00 deg. C in June, with the coolest anomaly (-0.03 deg. C) in the Southern Hemisphere. The decadal temperature trend for the period December 1978 through June 2009 remains at +0.13 deg. C per decade.

NOTE: A reminder for those who are monitoring the daily progress of global-average temperatures here:

(1) Only use channel 5 (“ch05”), which is what we use for the lower troposphere and middle troposphere temperature products.
(2) Compare the current month to the same calendar month from the previous year (which is already plotted for you).
(3) The progress of daily temperatures (the current month versus the same calendar month from one year ago) should only be used as a rough guide for how the current month is shaping up because they come from the AMSU instrument on the NOAA-15 satellite, which has a substantial diurnal drift in the local time of the orbit. Our ‘official’ results presented above, in contrast, are from AMSU on NASA’s Aqua satellite, which carries extra fuel to keep it in a stable orbit. Therefore, there is no diurnal drift adjustment needed in our official product.