The Global Average Urban Heat Island Effect in 2000 Estimated from Station Temperatures and Population Density Data

March 3rd, 2010

UPDATE #1 (12:30 p.m. CST, March 3): Appended new discussion & plots showing importance of how low-population density stations are handled.

UPDATE #2 (9:10 a.m. CST, March 4): Clarifications on methodology and answers to questions.

ABSTRACT
Global hourly surface temperature observations and 1 km resolution population density data for the year 2000 are used together to quantify the average urban heat island (UHI) effect. While the rate of warming with population increase is the greatest at the lowest population densities, some warming continues with population increases even for densely populated cities. Statistics like those presented here could be used to correct the surface temperature record for spurious warming caused by the UHI effect, providing better estimates of temperature trends.

METHOD
Using NOAA’s International Surface Hourly (ISH) weather data from around the world during 2000, I computed daily, monthly, and then 1-year average temperatures for each weather station. For a station to be used, a daily average temperature computation required the 4 synoptic temperature observations at 00, 06, 12, and 18 UTC; a monthly average required at least 20 good days per month; and a yearly average required all 12 months.
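
For those who want to see the bookkeeping, here is a minimal Python sketch of that screening logic. The observation layout (a dictionary keyed by month, day, and UTC hour) is hypothetical and is not the actual ISH file format.

    import numpy as np

    SYNOPTIC_HOURS = (0, 6, 12, 18)

    def station_annual_mean(obs):
        """Return the 1-year (2000) mean temperature for one station, or None
        if the station fails the screening described above.

        obs[(month, day, hour_utc)] -> temperature in deg C (hypothetical layout).
        """
        monthly = []
        for month in range(1, 13):
            daily = []
            for day in range(1, 32):
                temps = [obs.get((month, day, h)) for h in SYNOPTIC_HOURS]
                if all(t is not None for t in temps):   # need all 4 synoptic obs
                    daily.append(np.mean(temps))
            if len(daily) < 20:                         # need at least 20 good days
                return None                             # ...so the whole year is rejected
            monthly.append(np.mean(daily))
        return float(np.mean(monthly))                  # all 12 months present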

For each of those weather station locations I also stored the average population density from the 1 km gridded global population density data archived at the Socioeconomic Data and Applications Center (SEDAC).
[Figure: pop-density-2000]

All station pairs within 150 km of each other had their 1-year average difference in temperature related to their difference in population. Averaging of these station pairs’ results was done in 10 population bins each for Station1 and Station2, with bin boundaries at 0, 20, 50, 100, 200, 400, 800, 1600, 3200, 6400, and 50000 persons per sq. km.
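
As a rough illustration of the pairing and binning step, here is a Python sketch using a great-circle distance test. The station list structure is hypothetical, and the actual distance computation used may differ.

    import numpy as np

    # Bin boundaries from the text, in persons per sq. km
    BIN_EDGES = np.array([0, 20, 50, 100, 200, 400, 800, 1600, 3200, 6400, 50000])

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in km."""
        R = 6371.0
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dp, dl = p2 - p1, np.radians(lon2 - lon1)
        a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
        return 2.0 * R * np.arcsin(np.sqrt(a))

    def pop_bin(density):
        """Index (0-9) of the population-density bin containing `density`."""
        return min(int(np.searchsorted(BIN_EDGES, density, side="right")) - 1,
                   len(BIN_EDGES) - 2)

    def close_pairs(stations, max_km=150.0):
        """Yield all (i, j) station index pairs within max_km of each other.
        `stations` is a hypothetical list of dicts with 'lat' and 'lon' keys."""
        for i in range(len(stations)):
            for j in range(i + 1, len(stations)):
                a, b = stations[i], stations[j]
                if haversine_km(a["lat"], a["lon"], b["lat"], b["lon"]) <= max_km:
                    yield i, j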

Because some stations are located next to large water bodies, I used an old USAF 1/6 deg lat/lon percent water coverage dataset to ensure that there was no more than a 20% difference in the percent water coverage between the two stations in each match-up. (I believe this water coverage dataset is no longer publicly available).

Elevation effects were estimated by regressing station pair temperature differences against station elevation differences, which yielded a cooling rate of 5.4 deg. C per km increase in station elevation. Then, all station temperatures were adjusted to sea level (0 km elevation) with this relationship.
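
A sketch of that elevation step, assuming the lapse rate is estimated by a simple least-squares fit to the station-pair differences (the exact regression details are not spelled out above):

    import numpy as np

    def adjust_to_sea_level(temps_c, elevs_km, pairs):
        """Estimate a lapse rate from station-pair differences and adjust all
        station temperatures to 0 km elevation.

        temps_c, elevs_km : arrays of station annual-mean temperature (deg C)
                            and elevation (km)
        pairs             : sequence of (i, j) station-pair index tuples
        """
        dT = np.array([temps_c[i] - temps_c[j] for i, j in pairs])
        dz = np.array([elevs_km[i] - elevs_km[j] for i, j in pairs])
        lapse = np.polyfit(dz, dT, 1)[0]      # deg C per km; about -5.4 per the text
        return np.asarray(temps_c) - lapse * np.asarray(elevs_km)   # extrapolate to sea level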

After all screening, a total of 10,307 unique station pairs were accepted for analysis from 2000.

RESULTS & DISCUSSION
The following graph shows the average rate of warming with population density increase (vertical axis), as a function of the average populations of the station pairs. Each data point represents a population bin average for the intersection of a higher population station with its lower-population station mate.
[Figure: pop-density-vs-rate-of-ISH-station-warming]

Using the data in the above graph, we can now compute average cumulative warming from a population density of zero, the results of which are shown in the next graph. [Note that this step would be unnecessary if every populated station location had a zero-population station nearby. In that case, it would be much easier to compute the average warming associated with a population density increase.]
[Figure: ISH-station-warming-vs-pop-density]
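
The exact integration scheme is not described above, but the idea can be sketched as follows: treat each bin's warming rate as applying over the population interval between adjacent bin-average populations, and accumulate upward from zero. A Python sketch under that assumption:

    import numpy as np

    def cumulative_warming(bin_mean_pop, warming_rate):
        """Build a cumulative UHI warming curve (relative to zero population)
        from binned warming rates -- a sketch, not the exact method used.

        bin_mean_pop : average population density of each bin (persons/sq. km),
                       in increasing order
        warming_rate : average warming per unit population-density increase in
                       each bin (deg C per person per sq. km)
        """
        pop = np.concatenate(([0.0], np.asarray(bin_mean_pop, dtype=float)))
        steps = np.asarray(warming_rate, dtype=float) * np.diff(pop)
        return pop[1:], np.cumsum(steps)      # population axis and cumulative deg C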

This graph shows that the most rapid rate of warming with population increase is at the lowest population densities. The non-linear relationship is not a new discovery, as it has been noted by previous researchers who found an approximate logarithmic dependence of warming on population.

Significantly, this means that long-term temperature records from more rural stations could contain greater spurious warming than records from cities. For instance, a population density increase from 0 to 20 people per sq. km gives a warming of +0.22 deg. C, whereas a densely populated location already at 1,000 people per sq. km needs an additional 1,500 people per sq. km (to 2,500 people per sq. km) to get the same 0.22 deg. C of warming. (Of course, if one can find stations whose environment has not changed at all, that would be the preferred situation.)

Since this analysis used only 1 year of data, other years could be examined to see how robust the above relationship is. Also, since there are gridded population data for 1990, 2000, and 2010 (estimated), one could examine whether there is any indication of the temperature-population relationship changing over time.

This is the type of information which I can envision being used to adjust station temperatures throughout the historical record, even as stations come, go, and move. As mentioned above, the elevation adjustment for individual stations can be done fairly easily, and the population adjustments could then be done without having to inter-calibrate stations.

Such adjustments help to maximize the number of stations used in temperature trend analysis, rather than simply throwing data out. Note that the philosophy here is not to provide the best adjustment for each individual station, but to apply adjustments for spurious effects which, when averaged over all stations, remove the effect from the station-average. This keeps the analysis simple and reproducible.

UPDATE #1:
The above results are quite sensitive to how the stations with very low population densities are handled. I’ve recomputed the above results by adding a single data point representing 724 more station pairs where BOTH stations are within the lowest population density category: 0 to 20 people per sq. km. This increases the signal of warming at low population densities, from the previously mentioned +0.22 deg C warming from zero to 20 people per sq. km, to +0.77 deg. C of warming.
[Figure: ISH-station-warming-vs-pop-density-with-lowest-bin-full]

This is over a factor of 3 more warming from 0 to 20 persons per sq. km with the additional data. This is important because most weather observation sites have relatively low population densities: in my dataset, I find that one-half of all stations have population densities below 100 persons per sq. km. The following plot zooms in on the lower left corner of the previous plot so you can better see the warming at the lowest population densities.

[Figure: ISH-station-warming-vs-pop-density-with-lowest-bin-full-0-to-200]

Clearly, any UHI adjustments to past thermometer data will depend upon how the UHI effect is quantified at these very low population densities.

Also, since I didn’t mention it earlier, I should clarify that population density is just an accessible index that is presumed to be related to how much the environment around the thermometer site has been modified over time, by replacing vegetation with manmade structures. Population density is not expected to always be a good index of this modification — for instance, population densities at large airports can be expected to be low, but the surrounding runway surfaces and airplane traffic can be expected to cause considerable spurious warming, much more than would be expected for their population density.

UPDATE #2: Clarifications and answers to questions

After sifting through the 212 comments posted in the last 12 hours at Anthony Watts’ site, I thought I would answer those concerns that seemed most relevant.

Many of the questions and objections posted there were actually answered by other people’s posts — see especially the 2 comments by Jim Clarke at time stamps 18:23:56 & 01:32:40. Clearly, Jim understood what I did, why I did it, and phrased the explanations even better than I could have.

Some readers were left confused since my posting was necessarily greatly simplified; the level of detail for a journal submission would increase by about a factor of ten. I appreciate all the input, which has helped clarify my thinking.

RATIONALE FOR THE STUDY

While it might not have been obvious, I am trying to come up with a quantitative method for correcting past temperature measurements for the localized warming effects due to the urban heat island (UHI) effect. I am generally including in the “UHI effect” any replacement of natural vegetation by manmade surfaces, structures and active sources of heat. I don’t want to argue about terminology, just keep things simple.

For instance, the addition of an outbuilding and a sidewalk next to an otherwise naturally-vegetated thermometer site would be considered UHI-contaminated. (As Roger Pielke, Sr., has repeatedly pointed out, changes in land use, without the addition of manmade surfaces and structures, can also cause temperature changes. I consider this to be a much more difficult influence to correct for in the global thermometer data.)

The UHI effect leads to a spurious warming signal which, even though only local, has been given global significance by some experts. Many of us believe that as much as 50% (or more) of the “global warming” signal in the thermometer data could actually be from local UHI effects. The IPCC community, in contrast, appears to believe that the thermometer record has not been substantially contaminated.

Unless someone quantitatively demonstrates that there is a significant UHI signal in the global thermometer data, the IPCC can claim that global temperature trends are not substantially contaminated by such effects.

If there were sufficient thermometer data scattered around the world that are unaffected by UHI effects, then we could simply throw away all of the contaminated data. A couple of people wondered why this is not done. I believe that there is not enough uncontaminated data to do this, which means we must find some way of correcting for UHI effects that exist in most of the thermometer data — preferably extending back 100 years or more.

Since population data is one of the few pieces of information that we have long term records for, it makes sense to determine if we can quantify the UHI effect based upon population data. My post introduces a simple method for doing that, based upon the analysis of global thermometer and population density data for a single year, 2000. The analysis needs to be done for other years as well, but the high-resolution population density data only extends back to 1990.

Admittedly, if we had good long-term records of some other variable that was more closely related to UHI, then we could use that instead. But the purpose here is not to find the best way to estimate the magnitude of TODAY’S UHI effect, but to find a practical way to correct PAST thermometer data. What I posted was the first step in that direction.

Clearly, satellite surveys of land use change in the last 10 or 20 years are not going to allow you to extend a method back to 1900. Population data, though, ARE available (although of arguable quality). But no method will be perfect, and all possible methods should be investigated.

STATION PAIRING

My goal is to quantify how much of a UHI temperature rise occurs, on average, for any population density, compared to a population density of zero. We can not do this directly because that would require a zero-population temperature measurement near every populated temperature measurement location. So, we must do it in a piecewise fashion.

For every closely-spaced station pair in the world, we can compare the temperature difference between the 2 stations to the population density difference between the two station locations. Using station pairs is easily programmable on a computer, allowing the approximately 10,000 temperature measurement sites to be processed relatively quickly.

Using a simple example to introduce the concept, theoretically one could compute:

1) how much average UHI warming occurs from going from 0 to 20 people per sq. km, then
2) the average warming going from 20 to 50 people per sq. km, then
3) the average warming going from 50 to 100 people per sq. km,
etc.

If we can compute all of these separate statistics, we can determine how the UHI effect varies with population density, from 0 up to the highest population densities.

Unfortunately, the populations of any 2 closely-spaced stations will be highly variable, not neatly ordered like this simple example. We need some way of handling the fact that stations do NOT have population densities exactly at 0, 20, 100 (etc.) persons per sq. km., but can have ANY population density. I handle this problem by doing averaging in specific population intervals.

For each pair of closely spaced stations, if the higher-population station is in population interval #3, and the lower population station is in population interval #1, I put that station pair’s year-average temperature difference in a 2-dimensional (interval#3, interval#1) population “bin” for later averaging.

For each population bin, I compute not only the average temperature difference across all station pairs falling in that bin, but also the average populations of the higher- and lower-population stations in that bin. We will need those statistics later for the calculation of how temperature increases with population density.

Note that we can even compute the temperature difference between stations in the SAME population bin, as long as we keep track of which one has the higher population and which has the lower population. If the population densities for a pair of stations are exactly the same, we do not include that pair in the averaging.
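
In code form, the bin bookkeeping described above might look something like the following Python sketch. The pair records are a hypothetical layout, not my actual program.

    import numpy as np

    NBINS = 10   # the 10 population intervals listed in the METHOD section

    def accumulate_bins(pair_records):
        """Accumulate station-pair statistics into 2-D population bins.

        Each record is (t_hi, t_lo, pop_hi, pop_lo, bin_hi, bin_lo), where
        'hi'/'lo' refer to the higher- and lower-population station of the pair.
        """
        dT_sum = np.zeros((NBINS, NBINS))   # sum of year-average temperature differences
        p_hi   = np.zeros((NBINS, NBINS))   # sum of higher-station population densities
        p_lo   = np.zeros((NBINS, NBINS))   # sum of lower-station population densities
        count  = np.zeros((NBINS, NBINS))

        for t_hi, t_lo, pop_hi, pop_lo, bin_hi, bin_lo in pair_records:
            if pop_hi == pop_lo:            # identical densities: pair is skipped
                continue
            dT_sum[bin_hi, bin_lo] += t_hi - t_lo
            p_hi[bin_hi, bin_lo]   += pop_hi
            p_lo[bin_hi, bin_lo]   += pop_lo
            count[bin_hi, bin_lo]  += 1

        with np.errstate(divide="ignore", invalid="ignore"):   # empty bins become NaN
            return dT_sum / count, p_hi / count, p_lo / count, count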

The fact that the greatest warming RATE is observed at the lowest population densities is not a new finding. My comment that the greatest amount of spurious warming might therefore occur at the rural (rather than urban) sites, as a couple of people pointed out, presumes that rural sites tend to increase in population over the years. This might not be the case for most rural sites.

Also, as some pointed out, the UHI warming will vary with time of day, season, geography, wind conditions, etc. These are all mixed in together in my averages. But the fact that a UHI signal clearly exists without any correction for these other effects means that the global warming over the last 100 years measured using daily max/min temperature data has likely been overestimated. This is an important starting point, and its large-scale, big-picture approach complements the kind of individual-station surveys that Anthony Watts has been performing.

Spurious Warming in the Jones U.S. Temperatures Since 1973

February 27th, 2010

INTRODUCTION
As I discussed in my last post, I’m exploring the International Surface Hourly (ISH) weather data archived by NOAA to see how a simple reanalysis of original weather station temperature data compares to the Jones CRUTem3 land-based temperature dataset.

While the Jones temperature analysis relies upon the GHCN network of ‘climate-approved’ stations whose number has been rapidly dwindling in recent years, I’m using original data from stations whose number has actually been growing over time. I use only stations operating over the entire period of record, so there are no spurious temperature trends caused by stations coming and going over time. Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.

U.S. TEMPERATURE TRENDS, 1973-2009
I compute average monthly temperatures in 5 deg. lat/lon grid squares, as Jones does, and then compare the two different versions over a selected geographic area. Here I will show results for the 5 deg. grids covering the United States for the period 1973 through 2009.

The following plot shows that the monthly U.S. temperature anomalies from the two datasets are very similar (anomalies in both datasets are relative to the 30-year base period from 1973 through 2002). But while the monthly variations are very similar, the warming trend in the Jones dataset is about 20% greater than the warming trend in my ISH data analysis.
[Figure: CRUTem3-and-ISH-US-1973-2009]
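
For reference, the trend comparison is just a least-squares fit to each monthly anomaly series. A minimal Python sketch, with hypothetical series names:

    import numpy as np

    def trend_per_decade(monthly_anomalies):
        """Least-squares linear trend of a monthly anomaly series, deg C/decade."""
        t = np.arange(len(monthly_anomalies)) / 12.0    # time in years
        return np.polyfit(t, monthly_anomalies, 1)[0] * 10.0

    # Hypothetical usage with the two U.S. series plotted above:
    # trend_per_decade(crutem3_us) / trend_per_decade(ish_us)   # about 1.2 per the text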

This is a little curious since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely are causing a spurious warming effect, and yet the Jones dataset which IS (I believe) adjusted for UHI effects actually has somewhat greater warming than the ISH data.

A plot of the difference between the two datasets is shown next, which reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.
[Figure: CRUTem3-minus-ISH-US-1973-2009]

While it might be a little premature to blame these spurious transitions on the Jones dataset, I use only those stations operating over the entire period of record, which Jones does not do, so it is difficult to see how these features could have arisen from my analysis. Also, the number of 5 deg. grid squares used in this comparison remained the same throughout the 37-year period of record (23 grids).

The decadal temperature trends by calendar month are shown in the next plot. We see in the top panel that the greatest warming since 1973 has been in the months of January and February in both datasets. But the bottom panel suggests that the stronger warming in the Jones dataset seems to be a warm season, not winter, phenomenon.
[Figure: CRUTem3-vs-ISH-US-1973-2009-by-calendar-month]

THE NEED FOR NEW TEMPERATURE REANALYSES
I suspect it would be difficult to track down the precise reasons why the differences in the above datasets exist. The data used in the Jones analysis has undergone many changes over time, and the more complex and subjective the analysis methodology, the more difficult it is to ferret out the reasons for specific behaviors.

I am increasingly convinced that a much simpler, objective analysis of original weather station temperature data is necessary to better understand how spurious influences might have impacted global temperature trends computed by groups such as CRU and NASA/GISS. It seems to me that a simple and easily repeatable methodology should be the starting point. Then, if one can demonstrate that the simple temperature analysis has spurious temperature trends, an objective and easily repeatable adjustment methodology should be the first choice for an improved version of the analysis.

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance. Once one starts making subjective adjustments of individual stations’ data, the ability to replicate work becomes almost impossible.

Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be for other, independent researchers to perform their own global temperature analyses. In my experience, better methods of data analysis come from the ideas of individuals, not from the majority rule of a committee.

Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect. The recent paper by McKitrick and Michaels suggests that a substantial UHI influence continues to infect the GISS and CRU temperature datasets.

In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction. Coincidentally, this is also the conclusion of a recent post on Anthony Watts’ blog, discussing a new paper published by SPPI.

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.


New Work on the Recent Warming of Northern Hemispheric Land Areas

February 20th, 2010

INTRODUCTION

Arguably the most important data used for documenting global warming are surface station observations of temperature, with some stations providing records back 100 years or more. By far the most complete data available are for Northern Hemisphere land areas; the Southern Hemisphere is chronically short of data since it is mostly oceans.

But few stations around the world have complete records extending back more than a century, and even some remote land areas are devoid of measurements. For these and other reasons, analysis of “global” temperatures has required some creative data massaging. Some of the necessary adjustments include: switching from one station to another as old stations are phased out and new ones come online; adjusting for station moves or changes in equipment types; and adjusting for the Urban Heat Island (UHI) effect. The last problem is particularly difficult since virtually all thermometer locations have experienced an increase in manmade structures replacing natural vegetation, which inevitably introduces a spurious warming trend over time of an unknown magnitude.

There has been a lot of criticism lately of the two most publicized surface temperature datasets: those from Phil Jones (CRU) and Jim Hansen (GISS). One summary of these criticisms can be found here. These two datasets are based upon station weather data included in the Global Historical Climate Network (GHCN) database archived at NOAA’s National Climatic Data Center (NCDC), a reduced-volume and quality-controlled dataset officially blessed by your government for climate work.

One of the most disturbing changes over time in the GHCN database is a rapid decrease in the number of stations over the last 30 years or so, after a peak in station number around 1973. This is shown in the following plot which I pilfered from this blog.

Given all of the uncertainties raised about these data, there is increasing concern that the magnitude of observed ‘global warming’ might have been overstated.

TOWARD A NEW SATELLITE-BASED SURFACE TEMPERATURE DATASET

We have started working on a new land surface temperature retrieval method based upon the Aqua satellite AMSU window channels and “dirty-window” channels. These passive microwave estimates of land surface temperature, unlike our deep-layer temperature products, will be empirically calibrated with several years of global surface thermometer data.

The satellite has the benefit of providing global coverage nearly every day. The primary disadvantages are (1) the best (Aqua) satellite data have been available only since mid-2002; and (2) the retrieval of surface temperature requires an accurate adjustment for the variable microwave emissivity of various land surfaces. Our method will be calibrated once, with no time-dependent changes, using all satellite-surface station data matchups during 2003 through 2007. Using this method, if there is any spurious drift in the surface station temperatures over time (say due to urbanization) this will not cause a drift in the satellite measurements.
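
Conceptually, a fixed empirical calibration of this sort amounts to a single regression fit to the 2003 through 2007 matchups, applied unchanged thereafter. Here is a greatly simplified Python sketch of that idea; it is not the actual retrieval algorithm, and it ignores the land-surface emissivity adjustment entirely.

    import numpy as np

    def fit_fixed_calibration(tb_matchups, t_station):
        """Fit a single, time-invariant linear mapping from AMSU channel
        brightness temperatures to surface temperature using matchup data.

        tb_matchups : (n_matchups, n_channels) brightness temperatures (K)
        t_station   : (n_matchups,) matched surface station temperatures (deg C)
        """
        X = np.column_stack([np.ones(len(t_station)), tb_matchups])
        coef, *_ = np.linalg.lstsq(X, t_station, rcond=None)
        return coef                            # fixed once; never re-tuned later

    def retrieve_temperature(tb_new, coef):
        """Apply the fixed calibration to new satellite observations."""
        X = np.column_stack([np.ones(len(tb_new)), tb_new])
        return X @ coef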

Despite the shortcomings, such a dataset should provide some interesting insights into the ability of the surface thermometer network to monitor global land temperature variations. (Sea surface temperature estimates are already accurately monitored with the Aqua satellite, using data from AMSR-E).

THE INTERNATIONAL SURFACE HOURLY (ISH) DATASET

Our new satellite method requires hourly temperature data from surface stations to provide +/- 15 minute time matching between the station and the satellite observations. We are using the NOAA-merged International Surface Hourly (ISH) dataset for this purpose. While these data have not had the same level of climate quality tests the GHCN dataset has undergone, they include many more stations in recent years. And since I like to work from the original data, I can do my own quality control to see how my answers differ from the analyses performed by other groups using the GHCN data.

The ISH data include globally distributed surface weather stations since 1901, and are updated and archived at NCDC in near-real time. The data are available for free to .gov and .edu domains. (NOTE: You might get an error when you click on that link if you do not have free access. For instance, I cannot access the data from home.)

The following map shows all stations included in the ISH dataset. Note that many of these are no longer operating, so the current coverage is not nearly this complete. I have color-coded the stations by elevation (click on image for full version).
[Figure: ISH-station-map-1901-thru-2009]

WARMING OF NORTHERN HEMISPHERIC LAND AREAS SINCE 1986

Since it is always good to immerse yourself into a dataset to get a feeling for its strengths and weaknesses, I decided I might as well do a Jones-style analysis of the Northern Hemisphere land area (where most of the stations are located). Jones’ version of this dataset, called “CRUTem3NH”, is available here.

I am used to analyzing large quantities of global satellite data, so writing a program to do the same with the surface station data was not that difficult. (I know it’s a little obscure and old-fashioned, but I always program in Fortran). I was particularly interested to see whether the ISH stations that have been available for the entire period of record would show a warming trend in recent years like that seen in the Jones dataset. Since the first graph (above) shows that the number of GHCN stations available has decreased rapidly in recent years, would a new analysis using the same number of stations throughout the record show the same level of warming?

The ISH database is fairly large, organized in yearly files, and I have been downloading the most recent years first. So far, I have obtained data for the last 24 years, since 1986. The distribution of all stations providing fairly complete time coverage since 1986, having observations at least 4 times per day, is shown in the following map.
[Figure: ISH-station-map-1986-thru-2009-6-hrly]

I computed daily average temperatures at each station from the observations at 00, 06, 12, and 18 UTC. For stations with at least 20 days of such averages per month, I then computed monthly averages throughout the 24 year period of record. I then computed an average annual cycle at each station separately, and then monthly anomalies (departures from the average annual cycle).
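
A minimal Python sketch of the anomaly step for one station; the 12-column monthly array is a hypothetical layout, not my actual program structure.

    import numpy as np

    def station_anomalies(monthly_means):
        """Departures from a station's own average annual cycle.

        monthly_means : (n_years, 12) array of monthly means, with NaN where a
                        month failed the 20-good-days screening
        """
        annual_cycle = np.nanmean(monthly_means, axis=0)   # 12 calendar-month means
        return monthly_means - annual_cycle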

Similar to the Jones methodology, I then averaged all station month anomalies in 5 deg. grid squares, and then area-weighted those grids having good data over the Northern Hemisphere. I also recomputed the Jones NH anomalies for the same base period for a more apples-to-apples comparison. The results are shown in the following graph.
[Figure: ISH-vs-CRUTem3NH-1986-thru-2009]
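
The area weighting amounts to weighting each 5 deg. grid box by the cosine of its central latitude. A minimal sketch for one month, assuming the grid anomalies and latitudes are already in hand:

    import numpy as np

    def nh_mean(grid_anomalies, grid_center_lats):
        """Area-weighted Northern Hemisphere average of gridded anomalies for
        one month, using cos(latitude) weights and ignoring empty grid boxes.
        """
        w = np.cos(np.radians(grid_center_lats))
        good = np.isfinite(grid_anomalies)
        return np.sum(grid_anomalies[good] * w[good]) / np.sum(w[good])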

I’ll have to admit I was a little astounded at the agreement between Jones’ and my analyses, especially since I chose a rather ad-hoc method of data screening that was not optimized in any way. Note that the linear temperature trends are essentially identical; the correlation between the monthly anomalies is 0.91.

One significant difference is that my temperature anomalies are, on average, larger than Jones’ by a factor of 1.36. My first suspicion is that Jones has relatively more tropical than high-latitude area in his averages, which would mute the signal. I did not have time to verify this.

Of course, an increasing urban heat island effect could still be contaminating both datasets, resulting in a spurious warming trend. Also, when I include years before 1986 in the analysis, the warming trends might start to diverge. But at face value, this plot seems to indicate that the rapid decrease in the number of stations included in the GHCN database in recent years has not caused a spurious warming trend in the Jones dataset — at least not since 1986. Also note that December 2009 was, indeed, a cool month in my analysis.

FUTURE PLANS
We are still in the early stages of development of the satellite-based land surface temperature product, which is where this post started.

Regarding my analysis of the ISH surface thermometer dataset, I expect to extend the above analysis back to 1973 at least, the year when a maximum number of stations were available. I’ll post results when I’m done.

In the spirit of openness, I hope to post some form of my derived dataset — the monthly station average temperatures, by UTC hour — so others can analyze it. The data volume will be too large to post at this website, which is hosted commercially; I will find someplace on our UAH computer system so others can access it through ftp.

While there are many ways to slice and dice the thermometer data, I do not have a lot of time to devote to this side effort. I can’t respond to all the questions and suggestions you e-mail me on this subject, but I promise I will read them.


January 2010 Global Tropospheric Temperature Map

February 9th, 2010

Here’s the UAH lower tropospheric temperature anomaly map for January, 2010. As can be seen, Northern Hemispheric land, on the whole, is not as cold as many of us thought (click on image for larger version). Below-normal areas were restricted to parts of Russia and China, most of Europe, and the southeastern United States. Most of Canada and Greenland were well above normal:
[Figure: UAH_LT_2010_01_grid]
It should also be remembered that lower tropospheric temperature anomalies for one month over a small region are not necessarily going to look like surface temperature anomalies.

Since January 2010 was the third-warmest month in the 32-year satellite record, it might be of interest to compare the above patterns with the warmest month of record, April, 1998, which was an El Nino year, too:
[Figure: UAH_LT_1998_04_grid]


Some Thoughts on the Warm January, 2010

February 8th, 2010

I continue to get lots of e-mails asking how global average tropospheric temperatures for January, 2010 could be at a record high (for January, anyway, in the 32 year satellite record) when it seems like it was such a cold January where people actually live.

I followed up with a short sea surface temperature analysis from AMSR-E data which ended up being consistent with the AMSU tropospheric temperatures.

I’m sure part of the reason is warm El Nino conditions in the Pacific. Less certain is my guess that when the Northern Hemisphere continents are unusually cold in winter, ocean surface temperatures, at least in the Northern Hemisphere, should be unusually warm. But this is just speculation on my part, based on the idea that cold continental air masses can intensify when they get land-locked, with less flow of maritime air masses over the continents, and less flow of cold air masses over the ocean. Maybe the Arctic Oscillation is an index of this, as a few of you have suggested, but I really don’t know.

Also, remember that there are always quasi-monthly oscillations in the amount of heat flux from the ocean to the atmosphere, primarily in the tropics, which is why a monthly up-tick in tropospheric temperatures is usually followed by a down-tick the next month, and vice-versa.

So, it could be that all factors simply conspired to give an unusually warm spike in January…only time will tell.

But this event has also spurred me to do something I’ve been putting off for years, which is develop limb corrections for the Aqua AMSU instrument. This will allow us to make global grids from the data (current grids are still based upon NOAA-15, which we know has a spurious warming over land areas from orbital decay and a changing local observation time). Since the Aqua AMSU is the first instrument on a satellite whose orbit is actively maintained, there will be no problem with those data since Aqua came online in mid-2002.

[Don’t get confused here…we use NOAA-15 AMSU ONLY to get spatial patterns, which are then forced to match the Aqua AMSU measurements when averaged in latitude bands. So, using NOAA-15 data does not corrupt the global or latitude-band averages…but they do affect how the warm and cool patterns are partitioned between land and ocean.]

I might also extend the analysis to specifically retrieve near-surface temperatures over land. I did this several years ago with SSM/I data over land, but never tried to get it published. It could be that such a comparison between AMSU surface and near-surface channels will uncover some interesting things about the urban heat island effect, since I use hourly surface temperature observations as training data in that effort.


NASA Aqua Sea Surface Temperatures Support a Very Warm January, 2010

February 4th, 2010

When I saw the “record” warmth of our UAH global-average lower tropospheric temperature (LT) product (warmest January in the 32-year satellite record), I figured I was in for a flurry of e-mails: “But this is the coldest winter I’ve seen since there were only 3 TV channels! How can it be a record warm January?”

Sorry, folks, we don’t make the climate…we just report it.

But, I will admit I was surprised. So, I decided to look at the AMSR-E sea surface temperatures (SSTs) that Remote Sensing Systems has been producing from NASA’s Aqua satellite since June of 2002. Even though the SST data record is short, and an average over the global ice-free oceans is not the same as a true global average, the two do tend to vary together on monthly or longer time scales.

The following graph shows that January, 2010, was indeed warm in the sea surface temperature data:
[Figure: AMSR-E-SST-thru-Jan-2010]
But it is difficult to compare the SST product directly with the tropospheric temperature anomalies because (1) they are each relative to different base periods, and (2) tropospheric temperature variations are usually larger than SST variations.

So, I recomputed the UAH LT anomalies relative to the SST period of record (since June, 2002), and plotted the variations in the two against each other in a scatterplot (below). I also connected the successive monthly data points with lines so you can see the time-evolution of the tropospheric and sea surface temperature variations:
[Figure: UAH-LT-vs-AMSR-E-SST-thru-Jan-2010]
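
Re-expressing the LT anomalies relative to the SST period of record is just a matter of removing the mean over the common period. A minimal sketch, with hypothetical variable names:

    import numpy as np

    def rebaseline(anomalies, in_base_period):
        """Shift an anomaly series so its mean over a new base period is zero.

        in_base_period : boolean array marking months inside the new base
                         period (here, June 2002 onward)
        """
        return np.asarray(anomalies) - np.mean(np.asarray(anomalies)[in_base_period])

    # Hypothetical usage:
    # lt_rebased = rebaseline(uah_lt_monthly, months_since_jun2002_mask)
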
As can be seen, January, 2010 (in the upper-right portion of the graph) is quite consistent with the average relationship between these two temperature measures over the last 7+ years.

[NOTE: While the tropospheric temperatures we compute come from the AMSU instrument that also flies on the NASA Aqua satellite, along with the AMSR-E, there is no connection between the calibrations of these two instruments.]


January 2010 UAH Global Temperature Update +0.72 Deg. C

February 4th, 2010

UPDATE (4:00 p.m. Feb. 4): I’ve determined that the warm January 2010 anomaly IS consistent with AMSR-E sea surface temperatures from NASA’s Aqua satellite…I will post details later tonight or in the a.m. – Roy


YR MON GLOBE NH SH TROPICS
2009 01 +0.304 +0.443 +0.165 -0.036
2009 02 +0.347 +0.678 +0.016 +0.051
2009 03 +0.206 +0.310 +0.103 -0.149
2009 04 +0.090 +0.124 +0.056 -0.014
2009 05 +0.045 +0.046 +0.044 -0.166
2009 06 +0.003 +0.031 -0.025 -0.003
2009 07 +0.411 +0.212 +0.610 +0.427
2009 08 +0.229 +0.282 +0.177 +0.456
2009 09 +0.422 +0.549 +0.294 +0.511
2009 10 +0.286 +0.274 +0.297 +0.326
2009 11 +0.497 +0.422 +0.572 +0.495
2009 12 +0.288 +0.329 +0.246 +0.510
2010 01 +0.724 +0.841 +0.607 +0.757

[Figure: UAH_LT_1979_thru_Jan_10]

The global-average lower tropospheric temperature anomaly soared to +0.72 deg. C in January, 2010. This is the warmest January in the 32-year satellite-based data record.

The tropics and Northern and Southern Hemispheres were all well above normal, especially the tropics where El Nino conditions persist. Note the global-average warmth is approaching the warmth reached during the 1997-98 El Nino, which peaked in April of 1998.

This record warmth will seem strange to those who have experienced an unusually cold winter. While I have not checked into this, my first guess is that the atmospheric general circulation this winter has become unusually land-locked, allowing cold air masses to intensify over the major Northern Hemispheric land masses more than usual. Note this ALSO means that not as much cold air is flowing over and cooling the ocean surface compared to normal. Nevertheless, we will double check our calculations to make sure we have not made some sort of Y2.01K error (insert smiley). I will also check the AMSR-E sea surface temperatures, which have also been running unusually warm.

After last month’s accusations that I’ve been ‘hiding the incline’ in temperatures, I’ve gone back to also plotting the running 13-month averages, rather than 25-month averages, to smooth out some of the month-to-month variability.

We don’t hide the data or use tricks, folks…it is what it is.

[NOTE: These satellite measurements are not calibrated to surface thermometer data in any way, but instead use on-board redundant precision platinum resistance thermometers (PRTs) carried on the satellite radiometers. The PRT’s are individually calibrated in a laboratory before being installed in the instruments.]


Evidence for Natural Climate Cycles in the IPCC Climate Models’ 20th Century Temperature Reconstructions

January 27th, 2010

What can we learn from the IPCC climate models based upon their ability to reconstruct the global average surface temperature variations during the 20th Century?

While the title of this article suggests I’ve found evidence of natural climate cycles in the IPCC models, it’s actually the temperature variability the models CANNOT explain that ends up being related to known climate cycles. After an empirical adjustment for that unexplained temperature variability, it is shown that the models are producing too much global warming since 1970, the period of most rapid growth in atmospheric carbon dioxide. This suggests that the models are too sensitive, in which case they are forecasting too much future warming, too.

Climate Models’ 20th Century Runs
We begin with the IPCC’s best estimate of observed global average surface temperature variations over the 20th Century, from the “HadCRUT3” dataset. (Monthly running 3-year averages are shown throughout.) Of course, there are some serious concerns over the validity of this observed temperature record, especially over the strength of the long-term warming trend, but for the time being let’s assume it is correct (click on image to see a large version).
[Figure: IPCC-17-model-20th-Century-vs-HadCRUT3-large]

Also shown in the above graph is the climate model temperature reconstruction for the 20th Century averaged across 17 of the 21 climate models which the IPCC tracks. For the 20th Century reconstructions included in the PCMDI archive of climate model experiments, each modeling group was asked to use whatever forcings they believed were involved in producing the observed temperature record. Those forcings generally include increasing carbon dioxide, various estimates of aerosol (particulate) pollution, and for some of the models, volcanoes. (Also shown are polynomial fits to the curves, to allow a better visualization of the decadal time scale variations.)

There are a couple of notable features in the above chart. First, the average warming trend across all 17 climate models (+0.64 deg C per century) exactly matches the observed trend…I didn’t plot the trend lines, which lie on top of each other. This agreement might be expected since the models have been adjusted by the various modeling groups to best explain the 20th Century climate.

The more interesting feature, though, is the inability of the models to mimic the rapid warming before 1940, and the lack of warming from the 1940s to the 1970s. These two periods of inconvenient temperature variability are well known: (1) the pre-1940 warming was before atmospheric CO2 had increased very much; and (2) the lack of warming from the 1940s to the 1970s was during a time of rapid growth in CO2. In other words, the stronger warming period should have been after 1940, not before, based upon the CO2 warming effect alone.

Natural Climate Variability as an Explanation for What The Models Can Not Mimic
The next chart shows the difference between the two curves in the previous chart, that is, the 20th Century temperature variability the models have not, in an average sense, been able to explain. Also shown are three known modes of natural variability: the Pacific Decadal Oscillation (PDO, in blue); the Atlantic Multidecadal Oscillation (AMO, in green); and the negative of the Southern Oscillation Index (SOI, in red). The SOI is a measure of El Nino and La Nina activity. All three climate indices have been scaled so that their net amount of variability (standard deviation) matches that of the “unexplained temperature” curve.
[Figure: IPCC-17-model-20th-Century-vs-HadCRUT3-residuals-vs-PDO-AMO-SOI-large]

As can be seen, the three climate indices all bear some level of resemblance to the unexplained temperature variability in the 20th Century.

An optimum linear combination of the PDO, AMO, and SOI that best matches the models’ “unexplained temperature variability” is shown as the dashed magenta line in the next graph. There are some time lags included in this combination, with the PDO preceding temperature by 8 months, the SOI preceding temperature by 4 months, and the AMO having no time lag.
[Figure: IPCC-17-model-20th-Century-vs-HadCRUT3-residuals-vs-PDO-AMO-SOI-fit-large]
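
The exact fitting procedure is not spelled out above, but the basic idea is a lagged least-squares combination of the three indices. Here is a Python sketch under that assumption; it is an illustration of the concept, not the actual calculation.

    import numpy as np

    def lagged_combination(residual, pdo, amo, soi, lag_pdo=8, lag_soi=4):
        """Least-squares combination of PDO, AMO and -SOI matching the models'
        unexplained temperature variability, with the index leads quoted above.
        All inputs are equally long monthly series (a sketch of the idea only).
        """
        def lead(x, k):
            # line up the index value from k months earlier with the current month
            return np.roll(np.asarray(x, dtype=float), k)

        X = np.column_stack([
            np.ones(len(residual)),
            lead(pdo, lag_pdo),                       # PDO leads temperature by 8 months
            np.asarray(amo, dtype=float),             # AMO with no lag
            lead(-np.asarray(soi, dtype=float), lag_soi),   # negative SOI, 4-month lead
        ])
        start = max(lag_pdo, lag_soi)                 # drop months wrapped by np.roll
        coef, *_ = np.linalg.lstsq(X[start:], np.asarray(residual, dtype=float)[start:],
                                   rcond=None)
        return X @ coef, coef          # fitted natural component and its weights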

This demonstrates that, at least from an empirical standpoint, there are known natural modes of climate variability that might explain at least some portion of the temperature variability seen during the 20th Century. If we exclude the post-1970 data from the above analysis, the best combination of the PDO, AMO, and SOI results in the solid magenta curve. Note that it does a somewhat better job of capturing the warmth around 1940.

Now, let’s add this natural component in with the original model curve we saw in the first graph, first based upon the full 100 years of overlap:
[Figure: IPCC-17-model-20th-Century-vs-HadCRUT3-residuals-vs-PDO-AMO-SOI-fit-2-large]

We now find a much better match with the observed temperature record. But we see that the post-1970 warming produced by the combined physical-statistical model tends to be over-stated, by about 40%. If we use the 1900 to 1970 overlap to come up with a natural variability component, the following graph shows that the post-1970 warming is overstated by even more: 74%.
[Figure: IPCC-17-model-20th-Century-vs-HadCRUT3-residuals-vs-PDO-AMO-SOI-fit-3-large]

Interpretation
What I believe this demonstrates is that after known, natural modes of climate variability are taken into account, the primary period of supposed CO2-induced warming during the 20th Century – that from about 1970 onward – does not need as strong a CO2-warming effect as is programmed into the average IPCC climate model. This is because the natural variability seen BEFORE 1970 suggests that part of the warming AFTER 1970 is natural! Note that I have deduced this from the IPCC’s inherent admission that they can not explain all of the temperature variability seen during the 20th Century.

The Logical Absurdity of Some Climate Sensitivity Arguments
This demonstrates one of the absurdities (Dick Lindzen’s term, as I recall) in the way current climate change theory works: For a given observed temperature change, the smaller the forcing that caused it, the greater the inferred sensitivity of the climate system. This is why Jim Hansen believes in catastrophic global warming: since he thinks he knows for sure that a relatively tiny forcing caused the Ice Ages, then the greater forcing produced by our CO2 emissions will result in even more dramatic climate change!

But taken to its logical conclusion, this relationship between the strength of the forcing, and the inferred sensitivity of the climate system, leads to the absurd notion that an infinitesimally small forcing causes nearly infinite climate sensitivity(!) As I have mentioned before, this is analogous to an ancient tribe of people thinking their moral shortcomings were responsible for lightning, storms, and other whims of nature.

This absurdity is avoided if we simply admit that we do not know all of the natural forcings involved in climate change. And the greater the number of natural forcings involved, the less we have to worry about human-caused global warming.

The IPCC, though, never points out this inherent source of bias in its reports. But the IPCC can not admit to scientific uncertainty…that would reduce the chance of getting the energy policy changes they so desire.


Is Spencer Hiding the Increase? We Report, You Decide

January 16th, 2010

One of the great things about the internet is people can post anything they want, no matter how stupid, and lots of people who are incapable of critical thought will simply accept it.

I’m getting emails from people who have read blog postings accusing me of “hiding the increase” in global temperatures when I posted our most recent (Dec. 2009) global temperature update. In addition to the usual monthly temperature anomalies on the graph, for many months I have also been plotting a smoothed version, with a running 13 month average. The purpose of such smoothing is to better reveal longer-term variations, which is how “global warming” is manifested.

But on the latest update, I switched from 13 months to a running 25 month average instead. It is this last change which has led to accusations that I am hiding the increase in global temperatures. Well, here’s a plot with both running averages in addition to the monthly data. I’ll let you decide whether I have been hiding anything:
[Figure: UAH-LT-13-and-25-month-filtering]
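
For anyone who wants to reproduce the two smoothed curves, a centered running mean is all that is involved. A minimal Python sketch, with a hypothetical series name:

    import numpy as np

    def running_mean(series, window):
        """Centered running mean; months without a full window are left as NaN."""
        out = np.full(len(series), np.nan)
        half = window // 2
        for i in range(half, len(series) - half):
            out[i] = np.mean(series[i - half:i + half + 1])
        return out

    # e.g. running_mean(uah_lt, 13) and running_mean(uah_lt, 25) give the two
    # smoothed curves in the plot above.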

Note how the new 25-month smoother minimizes the warm 1998 temperature spike, which is the main reason why I switched to the longer averaging time. If anything, this ‘hides the decline’ since 1998…something I feared I would be accused of for sure after I posted the December update.

But just the opposite has happened, with accusations I have hidden the increase. Go figure.


A Demonstration that Global Warming Predictions are Based More On Faith than On Science

January 12th, 2010

I’m always searching for better and simpler ways to explain the reason why I believe climate researchers have overestimated the sensitivity of our climate system to increasing carbon dioxide concentrations in the atmosphere.

What follows is a somewhat different take than I’ve used in the past. In the following cartoon, I’ve illustrated 2 different ways to interpret a hypothetical (but realistic) set of satellite observations that indicate (1) warming of 1 degree C in global average temperature, accompanied by (2) an increase of 1 Watt per sq. meter of extra radiant energy lost by the Earth to space.
[Figure: Three-cases-global-forcing-feedback]

The ‘consensus’ IPCC view, on the left, would be that the 1 deg. C increase in temperature was the cause of the 1 Watt increase in the Earth’s cooling rate. If true, that would mean that a doubling of atmospheric carbon dioxide by late in this century (a 4 Watt decrease in the Earth’s ability to cool) would eventually lead to 4 deg. C of global warming. Not good news.

But those who interpret satellite data in this way are being sloppy. For instance, they never bother to investigate exactly WHY the warming occurred in the first place. As shown on the right, natural cloud variations can do the job quite nicely. To get a net 1 Watt of extra loss, you can (for instance) have 2 Watts of forcing from the cloud change (a reduction in the Earth’s cooling rate) causing the 1 deg. C of warming, and then a feedback response to that warming of an extra 3 Watts of radiative loss.

The net result still ends up being a loss of 1 extra Watt, but in this scenario, a doubling of CO2 would cause little more than 1 deg. C of warming since the Earth is so much more efficient at cooling itself in response to a temperature increase.
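
Written out explicitly, using the round numbers already quoted above (1 deg. C of warming, 1 Watt of net extra loss, and a 4 Watt reduction in cooling ability for doubled CO2):

    # Two interpretations of the same observation: 1 deg C warming, 1 W/m^2 net extra loss
    feedback_ipcc = 1.0 / 1.0    # W/m^2 of extra loss per deg C (all of it assumed feedback)
    feedback_alt  = 3.0 / 1.0    # if 2 W/m^2 was cloud forcing, the feedback is 3 W/m^2 per deg C
    co2_doubling  = 4.0          # W/m^2 reduction in the Earth's ability to cool

    print(co2_doubling / feedback_ipcc)   # 4.0 deg C of eventual warming
    print(co2_doubling / feedback_alt)    # ~1.3 deg C, "little more than 1 deg C"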

Of course, you can choose other combinations of forcing and feedback, and end up deducing just about any amount of future warming you want. Note that the major uncertainty here is what caused the warming in the first place. Without knowing that, there is no way to know how sensitive the climate system is.

And that lack of knowledge has a very interesting consequence. If there is some forcing you are not aware of, you WILL end up overestimating climate sensitivity. In this business, the less you know about how the climate system works, the more fragile the climate system looks to you. This is why I spend so much time trying to separately identify cause (forcing) and effect (feedback) in our satellite measurements of natural climate variability.

As a result of this inherent uncertainty regarding causation, climate modelers are free to tune their models to produce just about any amount of global warming they want to. It will be difficult to prove them wrong, since there is as yet no unambiguous interpretation of the satellite data in this regard. They can simply assert that there are no natural causes of climate change, and as a result they will conclude that our climate system is precariously balanced on a knife edge. The two go hand-in-hand.

Their science thus enters the realm of faith. Of course, there is always an element of faith in scientific inquiry. Unfortunately, in the arena of climate research the level of faith is unusually high, and I get the impression most researchers are not even aware of its existence.