The Great Global Warming Blunder: How Mother Nature Fooled the World’s Top Climate Scientists

April 20th, 2010

Today (April 20) is the official release date of my new book entitled: “The Great Global Warming Blunder: How Mother Nature Fooled the World’s Top Climate Scientists“, published by Encounter Books.

About one-half of Blunder is a non-technical description of our new peer-reviewed and soon-to-be-published research which supports the opinion that a majority of Americans already hold: that warming in recent decades is mostly due to a natural cycle in the climate system — not to an increase in atmospheric carbon dioxide from fossil fuel burning.

Believe it or not, this potential natural explanation for recent warming has never been seriously researched by climate scientists. The main reason they have ignored this possibility is that they cannot think of what might have caused it.

You see, climate researchers are rather myopic. They think that the only way for global-average temperatures to change is for the climate system to be forced ‘externally’…by a change in the output of the sun, or by a large volcanic eruption. These are events which occur external to the normal, internal operation of the climate system.

But what they have ignored is the potential for the climate system to cause its own climate change. Climate change is simply what the system does, owing to its complex, dynamic, chaotic internal behavior.

As I travel around the country, I find that the public instinctively understands the possibility that there are natural climate cycles. Unfortunately, it is the climate “experts” who have difficulty grasping the concept. This is why I am taking my case to the public in this book. The climate research community long ago took the wrong fork in the road, and I am afraid that it might be too late for them to turn back.

NATURE’S SUNSHADE: CLOUDS
The most obvious way for warming to be caused naturally is for small, natural fluctuations in the circulation patterns of the atmosphere and ocean to result in a 1% or 2% decrease in global cloud cover. Clouds are the Earth’s sunshade, and if cloud cover changes for any reason, you have global warming — or global cooling.

How could the experts have missed such a simple explanation? Because they have convinced themselves that only a temperature change can cause a cloud cover change, and not the other way around. The issue is one of causation. They have not accounted for cloud changes causing temperature changes.

The experts have simply mixed up cause and effect when observing how clouds and temperature vary. The book reveals a simple way to determine the direction of causation from satellite observations of global average temperature and cloud variations. And that new tool should fundamentally change how we view the climate system.

Blunder also addresses a second major mistake that arises from ignoring the effect of natural cloud variations on temperature: the illusion that the climate system is very sensitive. The experts claim that, since our climate system is very sensitive, our carbon dioxide emissions are all that is needed to explain global warming, and so there is no need to look for alternative explanations.

But I show that the experts have merely reasoned themselves in a circle on this subject. When properly interpreted, our satellite observations actually reveal that the system is quite IN-sensitive. And an insensitive climate system means that nature does not really care whether you travel by jet, or how many hamburgers or steaks you eat.

CARBON DIOXIDE: FRIEND OR FOE?
The supposed explanation that global warming is due to increasing atmospheric carbon dioxide from our burning of fossil fuels turns out to be based upon little more than circumstantial evidence. It is partly a symptom of our rather primitive understanding of how the climate system works.

And I predict that the proposed cure for global warming – reducing greenhouse gas emissions – will someday seem as outdated as using leeches to cure human illnesses.

Nevertheless, despite the fact that scientific knowledge is continually changing, it is increasingly apparent that the politicians are not going to let little things like facts get in their way. For instance, a new draft climate change report was released by the U.S. yesterday (April 19) which, in part, says: “Global warming is unequivocal and primarily human-induced … Global temperature has increased over the past 50 years. This observed increase is due primarily to human-induced emissions of heat-trapping gases.”

You see, the legislative train left the station many years ago, and no amount of new science will slow it down as it accelerates toward its final destination: forcibly reducing greenhouse gas emissions.

But in Blunder I address what other scientists should have the courage to admit: that maybe putting more CO2 in the atmosphere is a good thing. Given that it is necessary for life on Earth, the amount of CO2 in the atmosphere is surprisingly small. We already know that nature is gobbling up 50% of what humanity produces, no matter how fast we produce it. So, it is only logical to address the possibility that nature — that life on Earth — has actually been starved for carbon dioxide.

This should give you some idea of the major themes of my new book. I am under no illusion that the book will settle the scientific debate over global warming.

To the contrary — I am hoping the debate will finally begin.


The Spencers’ Swimming Pool Goes Solar

April 14th, 2010

I have always been intrigued by solar power. Getting free energy from the sun is an attractive idea — if one ignores the fact that the equipment necessary to convert that “free” energy into a useful form can get a little pricey.

So, combining my interest in solar power with my wife’s desire that our swimming pool warm up faster in the spring (and stay warm later in the fall), I had an excuse to finally build a solar heater for our swimming pool.

Now, I could have bought one of the many products on the market for doing this, but what fun would that be? I wanted to build something from scratch, something that would help start conversation when people visit.

And if it actually worked, that would be even better.

And now, after about 6 hours and $260 invested, I have a portable system that is producing “free” solar energy and dumping it into the pool. Yes, I know I could have built it cheaper…but that wasn’t my goal.

I started with the observation that our garden hoses get really hot when sitting in the sun. So, I thought, why not use black garden hoses as the solar collector? I then computed how much area is covered by a 100 foot garden hose…not very much…just over 6 sq. ft. Since I wanted to go with expensive “eco”-rated lead-free rubber hoses, I didn’t want to have to buy too many of those puppies.

So, since I knew that commercial solar collectors had water tubes embedded in black, solar-absorbent sheets of metal (which is where most of the solar energy is absorbed), I decided I would attach the rubber hose to a homemade collector. The collector surface would then transmit that extra solar energy to the water hoses.

I started with a 4×8 foot sheet of Styrofoam insulation board, about 1 inch thick, to keep the collector lightweight and reduce heat losses through the back of the collector surface. I cut the 4×8 sheet in half, so I could make two separate 4×4 foot collectors, permitting easier carrying and storage when not in use.

For the collector surface I bought a 50 foot roll, 20 inches wide, of aluminum flashing used for roofing applications. I cut 4 foot lengths and glued down 3 of them to each Styrofoam sheet with construction adhesive. Aluminum has a very high thermal conductivity, about 9,000 times that of air, which is what you want for a solar collector. You want your materials to conduct most of the heat to the water circulating through the tubes before the surrounding air has a chance of stealing it away.

Now that I had 2 aluminum-covered Styrofoam sheets, the next step was to spray paint them black. Black is always the best solar absorber color…that’s why it’s black! Black reflects virtually no sunlight, and any sunlight that is not reflected from a solid, opaque surface must then be absorbed…which is what you want in a solar collector.

I used 1 can of flat black enamel spray paint for each of the two 4×4 foot collector surfaces. By the time the paint was dry, it was late enough in the morning for the sun to start peeking over the trees behind our house and start warming the collector surfaces. I put my hand on one — OUCH! Too hot to touch!…I thought to myself, “this is a good thing”.

The next step was to lay out 100 feet of black rubber garden hose on each sheet in a uniform spiral pattern, which then results in about 2 inches of collector surface separating each coil. Rubber has a thermal conductivity about 6 times that of air, so it is nowhere near as good a solar collector material as aluminum or copper. Copper tubing would have been a much better choice, with a thermal conductivity over 15,000 times that of air – but it would have also been much more expensive.

Next I needed to attach the hoses to the painted aluminum in such a way that the hot aluminum would efficiently transmit heat to the cooler water-filled hose. I had read somewhere that common silicone caulk has about 10 times the thermal conductivity of air, so my plan was to attach the hoses to the collector surfaces with black caulk.

But first I needed to get the garden hose to stay coiled in place, so I used a hot glue gun to tack it down. I quickly found that the hot collector surface and black hose sitting in the sun were too hot for the hot glue to solidify! How am I going to get around this problem?

I decided I would cool the hose by starting to pump pool water through it before the collectors were finished. I attached the $40 submersible fountain pump I bought at Lowes to one end of the hose, lowered it to the bottom of the pool, then draped the other end of the hose over the edge of the pool for the return flow. I plugged the pump in and, Voila!, my solar collection system was working before it was even assembled!

I proceeded to tack the hose down into position with the hot glue gun, which was a pain since hot glue does not stick to cool rubber worth a darn. I then used about 4 tubes of black silicone caulk on each of the collectors to seal both sides of the hose where it met the aluminum surface. This was the most tedious part of the job, with almost 400 feet of caulk applied to almost 200 feet of hose.

As seen in the above photo, I made the collectors so they could be attached in series. That way, I could construct and add as many collectors as I wanted to the system.

After everything was hooked up and running, I checked to see how fast my 35 Watt pump was pumping water through the hose – I measured about 2 gallons per minute, which is 120 gallons per hour (gph). This is much less than the pump’s rating of 300 to 500 gph, but that’s due to the large amount of friction within 200 feet of garden hose.

The above picture was taken at 11:45 a.m. (1 hour before solar noon) on April 14, 2010. At that time, the two collectors together were raising the water temperature from 77 deg. F at the pump inlet to 85 deg. F at the outlet, a temperature increase of 8 degrees. So, every 60 seconds, the collectors together were warming 2 gallons of water by 8 deg. F. When you run the numbers, this ends up being an energy transfer rate of about 2.3 kilowatts.

Since the area of each of the two collectors is about 1.5 sq. meters, this means about 800 Watts per sq. meter of heat flux was being usefully generated by the collectors. I’m guesstimating that this would be about 80% efficiency, assuming about 1,000 Watts per sq. meter is falling on my collectors at this time. (The sun’s elevation in the sky was approaching 65 degrees at this time, and I had the collectors tilted toward the sun at about 15 degrees.) By the way, none of the numbers I have come up with here are meant to be very accurate.
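For anyone who wants to check the arithmetic, here is a minimal sketch (in Python, my own illustration rather than anything from this post) of the back-of-the-envelope calculation, using the flow rate, temperature rise, collector area, and assumed insolation quoted above:

```python
# Back-of-the-envelope check of the collector output quoted above.
# All input values are the rough estimates from the text.

GAL_TO_KG = 3.785          # 1 US gallon of water is about 3.785 kg
SPECIFIC_HEAT = 4186.0     # J per kg per deg C for liquid water

flow_gpm = 2.0             # measured flow, gallons per minute
delta_T_F = 8.0            # 77 F in, 85 F out
delta_T_C = delta_T_F * 5.0 / 9.0

# Energy added to the water each second (watts)
mass_per_sec = flow_gpm * GAL_TO_KG / 60.0
power_W = mass_per_sec * SPECIFIC_HEAT * delta_T_C
print(f"Collector output: {power_W/1000:.1f} kW")    # about 2.3 kW

# Rough efficiency, assuming ~1,000 W/m^2 of sunshine on ~3 m^2 of collector
collector_area_m2 = 2 * 1.5
insolation_W_m2 = 1000.0
flux_W_m2 = power_W / collector_area_m2
print(f"Useful flux: {flux_W_m2:.0f} W/m^2, "
      f"efficiency ~{100 * flux_W_m2 / insolation_W_m2:.0f}%")
```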

Due to shading by trees, our pool gets only about 5 ½ hours of direct sunlight each day, between 11 a.m. and 4:30 p.m., with solar noon occurring at 12:45 p.m. Since the elevation of the sun in the sky changes during that time, let’s assume I get the equivalent of 4 hours of solar energy at the rate mentioned above, measured at 11:45 a.m.

Our pool is a rather small fiberglass one, holding 6,600 gallons of water. I compute from the above numbers that the solar collection system adds about an extra 0.7 deg. F of warming on a sunny day, which is a 30% enhancement to the 2 deg. F of warming the pool experiences naturally on a sunny day.

What would it cost to heat the same amount of water with electricity? If I can get 2.3 kilowatts of heat input for 4 hours, that’s 9.2 kilowatt-hours of energy, which at our electric rate of about 9 cents per kwh, is only 83 cents worth of electricity per sunny day.

Hmmm.

For my investment of $260, at a daily savings of 83 cents, I will need to operate the solar collectors for about 310 days (!) before the cost per kilowatt-hour drops to what I could have gotten from an electric pool heater. If we use the system for 30 sunny days in the spring, and then 30 sunny days in the fall (which seems unlikely), that would take about 5 years. Of course, an electric pool heater would have cost something to buy, too.
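And here is the same sort of rough check for the daily totals and the payback estimate. The 4 equivalent hours of full output, the pool volume, and the electric rate are the values assumed above; the pool-warming number it produces comes out a bit lower (about 0.6 deg. F) than the roughly 0.7 deg. F quoted, which is within the stated roughness of these numbers.

```python
# Daily totals for the pool, using the rough numbers from the text.

GAL_TO_KG = 3.785
SPECIFIC_HEAT = 4186.0      # J/(kg K)

power_kW = 2.3              # collector output estimated above
equiv_hours = 4.0           # assumed equivalent hours of full output per sunny day
energy_kWh = power_kW * equiv_hours            # about 9.2 kWh
energy_J = energy_kWh * 3.6e6

pool_gal = 6600.0
pool_kg = pool_gal * GAL_TO_KG
delta_T_F = energy_J / (pool_kg * SPECIFIC_HEAT) * 9.0 / 5.0
print(f"Pool warming per sunny day: ~{delta_T_F:.1f} deg F")

rate_per_kWh = 0.09
savings_per_day = energy_kWh * rate_per_kWh    # about 83 cents
payback_days = 260.0 / savings_per_day         # about 310 days
print(f"Electricity equivalent: ${savings_per_day:.2f}/day, "
      f"payback ~{payback_days:.0f} sunny days")
```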

So, maybe this project did not make much sense economically. But, looking on the bright side, what I gain from my investment is (1) a longer swimming season, (2) a conversation starter, and (3) an extra blog posting.


Correction to UAH v5.3 Global Gridpoint Temperature Dataset

April 14th, 2010

The grid-based monthly anomaly satellite temperature files mounted on our server prior to 14 April 2010 were affected by an error in our recent merging of NOAA-18 into the data stream. This was corrected on 13 April and uploaded on 14 April.

The affected files are:

tXXmonamg.YYYY_5.3 where XX is lt, mt and ls, and YYYY is year.

uahncdc.XX where XX is lt, mt and ls.

We are sorry for this problem, and we thank the alert users around the world (e.g., Alessandro Patrignani and Javier Arroyo) who so quickly spotted the error, which showed up as a step-jump in the difference between land and ocean temperature anomalies beginning with NOAA-18 in 2005.


The Illusion of a Sensitive Climate System: A Stovetop Demonstration

April 10th, 2010

(edited for clarity, 8:15 a.m. CDT April 10)

Whether it is the Earth’s climate system experiencing warming, or a pot of water being placed on a warm stovetop, the fundamental basis of why the temperature of something changes is the same: If energy is being absorbed faster than it is being lost, then warming will result.

This is illustrated in the following two plots, which show what happens when such an “energy imbalance” is imposed on a pot of water (or on the Earth’s climate system): warming that is at first rapid, but then slows as the system approaches a new state of energy equilibrium, in which the rate of energy loss by the system has increased to match the rate of energy gain. Once these two flows of energy become equal again, the temperature stops changing.

In the context of global warming, more CO2 added to the atmosphere from humanity’s burning of fossil fuels slightly reduces the Earth’s ability to cool to outer space through infrared energy loss, contributing to the Earth’s so-called ‘greenhouse effect’. This “heat radiation”, by the way, is also one of the energy loss mechanisms for a pot of water on the stove.

In global warming theory, this increase in the greenhouse gas content of the atmosphere causes an energy imbalance, which then causes warming. The warming then increases the rate of infrared energy loss until energy equilibrium is once again restored — but now at a higher temperature. This is the basic mechanism behind the theory of manmade global warming.
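For the technically inclined, here is a minimal sketch of this warming-toward-a-new-equilibrium behavior as a toy energy balance model. The heat capacity, feedback parameter, and forcing values are illustrative choices of my own, not numbers from the plots above.

```python
import numpy as np

# Simple energy balance: C dT/dt = F - lam * T
# A constant forcing F is switched on at t = 0; temperature warms quickly
# at first, then levels off as the extra energy loss (lam * T) catches up.
# The numbers below are purely illustrative.

C   = 1.0e8   # heat capacity, J per m^2 per K (roughly a ~25 m ocean mixed layer)
lam = 3.0     # feedback parameter, W per m^2 per K (extra energy loss per deg of warming)
F   = 4.0     # imposed energy imbalance, W per m^2

dt = 86400.0                      # one-day time step, in seconds
t = np.arange(0, 10 * 365) * dt   # ten years
T = np.zeros_like(t)
for i in range(1, len(t)):
    T[i] = T[i-1] + dt * (F - lam * T[i-1]) / C

print(f"Warming after 1 year  : {T[365]:.2f} K")
print(f"Warming after 10 years: {T[-1]:.2f} K (equilibrium = F/lam = {F/lam:.2f} K)")
```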

THE TEMPERATURE “SENSITIVITY” OF THE SYSTEM
Returning to the above plot, if we know (1) the amount of energy imbalance imposed upon the system — whether a pot of water or the Earth — and we know (2) how much the system then warms as a result, we then know the temperature “sensitivity” of the system to an energy imbalance.

For instance, if a small energy imbalance leads to a large temperature change, that is called high sensitivity. This is how all climate models now behave, and is the basis for Jim Hansen’s and Al Gore’s fear of a global warming Armageddon.

But alternatively, if a large energy imbalance causes only a small temperature change, then that is called low sensitivity, which is how I, MIT’s Dick Lindzen, and a minority of other climate researchers believe the climate system behaves.

THE ILLUSION OF A SENSITIVE CLIMATE SYSTEM
I believe that climate researchers have fooled themselves into believing that the climate system is very sensitive. The reason why is related to a real-world complication to the above simplified example: when we compare a warm year to a cool year in the real climate system, we are NOT looking at the equilibrium response at one of those temperatures, to an energy imbalance imposed at the other temperature. That would be a very special case indeed, and it is one that never happens in the real world.

To see what usually happens in the real climate system, let’s return to the example of a pot of water on the stove. Imagine we keep turning the stove up and down, over and over. This will result in the water warming and cooling as the temperature responds to the ever-changing energy imbalance of the system.

Now imagine we have measured both the energy imbalances and the temperatures over time so that we can compare them. If we compare all of the times when the water was warmer to all of the times when the water was cooler, what we will find is that the energy imbalance during the warmer periods is very nearly the same as during the cooler periods.

And if a big temperature difference corresponds to only a small change in energy imbalance, this then ‘looks like’ a highly sensitive system…even if the system has very low sensitivity!

If we just turn the stove up once, and then let the system come to a new state of equilibrium, then we really can measure the sensitivity of the system. But if we keep turning the stove up and down, this is no longer possible.

In the real world, the climate system is almost never in a state of energy equilibrium. Chaotic changes in the average cloud cover of the Earth are like the stove being turned up and down, since the amount of sunlight being absorbed by the climate system is “turned up and down” by the ever-changing cloud cover.

As a result, satellite measurements of the Earth energy imbalance will show that there is, on average, only a small energy imbalance difference between warm years and cool years. This gives the illusion of a sensitive climate system, even if the system is very IN-sensitive.

Again, the illusion arises because we try to measure the sensitivity of the climate system based upon a false assumption: that different temperature states of the Earth correspond to a change from energy dis-equilibrium at one temperature, to energy equilibrium at the other temperature. This is almost never the case…yet it IS the only case in which the sensitivity of the system can be measured! Researchers up to this point have been trying to diagnose climate sensitivity from observations of natural climate variations based upon a false assumption.
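To see how this illusion shows up numerically, here is a minimal sketch of a forcing-feedback toy model of the general kind described above. The specific numbers, the red-noise forcing, and the use of ordinary regression are my own illustrative choices, not taken from our paper: random, cloud-like radiative forcing drives the temperature, and regressing the satellite-style radiative imbalance against temperature then returns a feedback parameter far smaller than the one actually built into the model, i.e., the system looks far more sensitive than it really is.

```python
import numpy as np

# Toy forcing-feedback model:  C dT/dt = S(t) - lam_true * T
# S(t) is random "internal radiative forcing" (e.g., chaotic cloud variations).
# A satellite would see the net radiative imbalance  N = S - lam_true * T.
# Diagnosing feedback by regressing N against T then underestimates lam_true,
# which makes the system look MORE sensitive than it really is.

rng = np.random.default_rng(42)

C        = 1.0e8    # heat capacity, J m^-2 K^-1 (illustrative mixed layer)
lam_true = 3.0      # true feedback parameter, W m^-2 K^-1 (low sensitivity)
dt       = 86400.0  # one-day steps
ndays    = 30 * 365

# Red-noise radiative forcing, a stand-in for chaotic cloud variations
S = np.zeros(ndays)
for i in range(1, ndays):
    S[i] = 0.95 * S[i-1] + rng.normal(0.0, 1.0)

T = np.zeros(ndays)
for i in range(1, ndays):
    T[i] = T[i-1] + dt * (S[i-1] - lam_true * T[i-1]) / C

N = S - lam_true * T                     # what the satellite would measure
lam_diag = -np.polyfit(T, N, 1)[0]       # regression-diagnosed feedback

print(f"True feedback parameter   : {lam_true:.2f} W/m^2/K")
print(f"Regression-diagnosed value: {lam_diag:.2f} W/m^2/K  (biased low)")
```

In this pure internal-radiative-forcing case the regression slope collapses toward zero, which is exactly the “apparently very sensitive” behavior described above.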

This issue will be addressed at length, along with theoretical model demonstrations, in our new research paper which has just been accepted for publication in the Journal of Geophysical Research.


Update: Cloud Forcing Paper Finally Accepted to JGR

April 9th, 2010

After about 2 years of multiple submissions and rewrites, our paper on the contaminating effect of natural, internally-forced global cloud variations on the diagnosis of climate feedbacks has finally been accepted by Journal of Geophysical Research. I do not yet have an estimated publication date, and please don’t ask for a pre-publication copy — I do not want to jeopardize its publication in any way.

The main message of the paper is that feedbacks are, in general, not observable in the real climate system because natural variations in cloud cover obscure them. This is the cause-versus-effect issue I have been harping on for years: You cannot measure cloud FEEDBACK (temperature changes causing cloud changes) unless you can quantify and remove the effect of internal radiative FORCING (cloud changes causing temperature changes). Causation in one direction must be accounted for in order to measure causation in the other direction.

We use a combination of (1) 9 years of global satellite data, (2) a simple forcing-feedback model of climate variability, and (3) output from the IPCC climate models, to demonstrate various aspects of this issue. We also show the only circumstances under which feedback CAN be measured in satellite data…and what that feedback looks like.

What I find fascinating is that, after outright rejection of the paper by reviewers, we had to go back to the very basics in order to convince reviewers of what we were saying, and take them through the whole issue of forcing-versus-feedback one step at a time. For instance, too many researchers have been misled by the simple, hypothetical example of an instantaneous doubling of atmospheric CO2, the warming that results, and the estimation of feedback from that forcing and temperature response. We show why this simple example offers NO USEFUL GUIDANCE for estimating feedbacks in the real climate system, and will seriously mislead us if we do try to use it.

The issue we address in this paper is not even new! Other published papers have included the “internal radiative forcing” term in their forcing-feedback equations…they just never explored the impact that their neglect of that term would have on the diagnosis of feedback.

My hope is that other researchers will read this paper and come to a much better understanding of why our thinking on the subject of diagnosing feedbacks in the real climate system has remained so muddled for so long.


MARCH 2010 UAH Global Temperature Update: +0.65 deg. C

April 5th, 2010


YR MON GLOBE NH SH TROPICS
2009 1 0.252 0.472 0.031 -0.065
2009 2 0.247 0.569 -0.074 -0.044
2009 3 0.191 0.326 0.056 -0.158
2009 4 0.162 0.310 0.013 0.012
2009 5 0.140 0.160 0.120 -0.057
2009 6 0.044 -0.011 0.100 0.112
2009 7 0.429 0.194 0.665 0.507
2009 8 0.242 0.229 0.254 0.407
2009 9 0.504 0.590 0.417 0.592
2009 10 0.361 0.335 0.387 0.381
2009 11 0.479 0.458 0.536 0.478
2009 12 0.283 0.350 0.215 0.500
2010 1 0.649 0.861 0.437 0.684
2010 2 0.603 0.725 0.482 0.792
2010 3 0.653 0.853 0.454 0.726

[Figure: UAH_LT_1979_thru_Mar_10]

The global-average lower tropospheric temperature continues to be quite warm: +0.65 deg. C for March, 2010. This is about the same as January. Global average sea surface temperatures (not shown) remain high.

As a reminder, last month we changed to Version 5.3 of our dataset, which accounts for the mismatch between the average seasonal cycle produced by the older MSU and the newer AMSU instruments. This affects the value of the individual monthly departures, but does not affect the year-to-year variations, and thus the overall trend remains the same as in Version 5.2.

ALSO…we have now added the NOAA-18 AMSU, which provides data since June of 2005. The local observation time of NOAA-18 (now close to 2 p.m., ascending node) is similar to that of NASA’s Aqua satellite (about 1:30 p.m.). The temperature anomalies listed above have changed somewhat as a result of adding NOAA-18.

[NOTE: These satellite measurements are not calibrated to surface thermometer data in any way, but instead use on-board redundant precision platinum resistance thermometers (PRTs) carried on the satellite radiometers. The PRTs are individually calibrated in a laboratory before being installed in the instruments.]


Direct Evidence that Most U.S. Warming Since 1973 Could Be Spurious

March 16th, 2010

INTRODUCTION
My last few posts have described a new method for quantifying the average Urban Heat Island (UHI) warming effect as a function of population density, using thousands of pairs of temperature measuring stations within 150 km of each other. The results supported previous work which had shown that UHI warming increases logarithmically with population, with the greatest warming per unit increase in population density occurring at the lowest population densities.

But how does this help us determine whether global warming trends have been spuriously inflated by such effects remaining in the leading surface temperature datasets, like those produced by Phil Jones (CRU) and Jim Hansen (NASA/GISS)?

While my quantifying the UHI effect is an interesting exercise, the existence of such an effect spatially (with distance between stations) does not necessarily prove that there has been a spurious warming in the thermometer measurements at those stations over time. The reason is that, to the extent that the population density of each thermometer site does not change over time, the various levels of UHI contamination at different thermometer sites would probably have little influence on long-term temperature trends. Urbanized locations would indeed be warmer on average, but “global warming” would affect them in about the same way as the more rural locations.

This hypothetical situation seems unlikely, though, since population does indeed increase over time. If we had sufficient truly-rural stations to rely on, we could just throw all the other UHI-contaminated data away. Unfortunately, there are very few long-term records from thermometers that have not experienced some sort of change in their exposure…usually the addition of manmade structures and surfaces that lead to spurious warming.

Thus, we are forced to use data from sites with at least some level of UHI contamination. So the question becomes, how does one adjust for such effects?

As the provider of the officially-blessed GHCN temperature dataset that both Hansen and Jones depend upon, NOAA has chosen a rather painstaking approach in which the long-term temperature records from individual thermometer sites undergo homogeneity “corrections”, mainly based upon (presumably spurious) abrupt temperature changes over time. The coming and going of some stations over the years further complicates the construction of temperature records extending back 100 years or more.

All of these problems (among others) have led to a hodgepodge of complex adjustments.

A SIMPLER TECHNIQUE TO LOOK FOR SPURIOUS WARMING

I like simplicity of analysis — whenever possible, anyway. Complexity in data analysis should only be added when it is required to elucidate something that is not obvious from a simpler analysis. And it turns out that a simple analysis of publicly available raw (not adjusted) temperature data from NOAA/NCDC, combined with high-resolution population density data for those temperature monitoring sites, shows clear evidence of UHI warming contaminating the GHCN data for the United States.

I will restrict the analysis to 1973 and later since (1) this is the primary period of warming allegedly due to anthropogenic greenhouse gas emissions; (2) the period having the largest number of monitoring sites has been since 1973; and (3) a relatively short 37-year record maximizes the number of continuously operating stations, avoiding the need to handle transitions as older stations stop operating and newer ones are added.

Similar to my previous posts, for each U.S. station I average together four temperature measurements per day (00, 06, 12, and 18 UTC) to get a daily average temperature (GHCN uses daily max/min data). There must be at least 20 days of such data for a monthly average to be computed. I then include only those stations having at least 90% complete monthly data from 1973 through 2009. Annual cycles in temperature and anomalies are computed from each station separately.

I then compute multi-station average anomalies in 5×5 deg. latitude/longitude boxes, and then compare the temperature trends for the represented regions to those in the CRUTem3 (Phil Jones’) dataset for the same regions. But to determine whether the CRUTem3 dataset has any spurious trends, I further divide my averages into 4 population density classes: 0 to 25; 25 to 100; 100 to 400; and greater than 400 persons per sq. km. The population density data is at a nominal 1 km resolution, available for 1990 and 2000…I use the 2000 data.
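For readers who want to see the bookkeeping laid out explicitly, here is a schematic sketch of the screening, anomaly, and gridding rules just described. It is my own illustration, not the actual analysis code; the DataFrame layout, column names, and helper functions are assumptions made for the example.

```python
import numpy as np
import pandas as pd

# Schematic sketch of the screening and averaging rules described above.
# The DataFrame layouts and column names are assumptions for illustration:
#   daily : one row per station per day, with columns
#           station_id, date (datetime64), tavg (mean of the 00/06/12/18 UTC obs)
#   meta  : one row per station, with columns
#           station_id, lat, lon, pop_density (persons per sq. km)

POP_BINS = [0, 25, 100, 400, np.inf]   # population density class boundaries

def monthly_means(daily, min_days=20):
    """Monthly station means, kept only if at least min_days days are present."""
    g = daily.groupby(["station_id", daily["date"].dt.to_period("M")])["tavg"]
    kept = g.mean()[g.count() >= min_days]
    return kept.rename("tmon").reset_index()

def complete_stations(monthly, start=1973, end=2009, min_frac=0.90):
    """Keep only stations with at least 90% of the months from 1973 through 2009."""
    nmonths = (end - start + 1) * 12
    counts = monthly.groupby("station_id")["tmon"].count()
    good = counts[counts >= min_frac * nmonths].index
    return monthly[monthly["station_id"].isin(good)]

def station_anomalies(monthly):
    """Remove each station's own average annual cycle."""
    clim = monthly.groupby(["station_id", monthly["date"].dt.month])["tmon"].transform("mean")
    return monthly.assign(anom=monthly["tmon"] - clim)

def gridded_class_averages(anoms, meta):
    """Average the anomalies in 5x5 deg boxes, separately for each population class."""
    m = anoms.merge(meta, on="station_id")
    m["box"] = list(zip(5 * np.floor(m["lat"] / 5), 5 * np.floor(m["lon"] / 5)))
    m["pop_class"] = pd.cut(m["pop_density"], POP_BINS)
    return m.groupby(["box", "pop_class", m["date"].dt.year], observed=True)["anom"].mean()
```

The gridded, class-averaged anomalies can then be compared, region by region, to the corresponding CRUTem3 averages.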

All of these restrictions then result in 24 to 26 5-deg grid boxes over the U.S. having all population classes represented over the 37-year period of record. In comparison, the entire U.S. covers about 40 grid boxes in the CRUTem3 dataset. While the following results are therefore for a regional subset (at least 60%) of the U.S., we will see that the CRUTem3 temperature variations for the entire U.S. do not change substantially when all 40 grids are included in the CRUTem3 averaging.

EVIDENCE OF A LARGE SPURIOUS WARMING TREND IN THE U.S. GHCN DATA

The following chart shows yearly area-averaged temperature anomalies from 1973 through 2009 for the 24 to 26 5-deg. grid squares over the U.S. having all four population classes represented (as well as a CRUTem3 average temperature measurement). All anomalies have been recomputed relative to the 30-year period, 1973-2002.

The heavy red line is from the CRUTem3 dataset, and so might be considered one of the “official” estimates. The heavy blue curve is the lowest population class. (The other 3 population classes clutter the figure too much to show, but we will soon see those results in a more useful form.)

Significantly, the warming trend in the lowest population class is only 47% of the CRUTem3 trend, a factor of two difference.

Also interesting is that in the CRUTem3 data, 1998 and 2006 would be the two warmest years during this period of record. But in the lowest population class data, the two warmest years are 1987 and 1990. When the CRUTem3 data for the whole U.S. are analyzed (the lighter red line), the ranking of the two warmest years is swapped: 2006 is first and 1998 is second.

From looking at the warmest years in the CRUTem3 data, one gets the impression that each new high-temperature year supersedes the previous one in intensity. But the low-population stations show just the opposite: the intensity of the warmest years is actually decreasing over time.

To get a better idea of how the calculated warming trend depends upon population density for all 4 classes, the following graph shows – just like the spatial UHI effect on temperatures I have previously reported on – that the warming trend goes down nonlinearly as the population density of the stations decreases. In fact, extrapolation of these results to zero population density might produce little warming at all!

This is a very significant result. It suggests the possibility that there has been essentially no warming in the U.S. since the 1970s.

Also, note that the highest population class actually exhibits slightly more warming than that seen in the CRUTem3 dataset. This provides additional confidence that the effects demonstrated here are real.

Finally, the next graph shows the difference between the CRUTem3 results and the lowest population density class results seen in the first graph above. This provides a better idea of which years contribute to the large difference in warming trends.

Taken together, I believe these results provide powerful and direct evidence that the GHCN data still has a substantial spurious warming component, at least for the period (since 1973) and region (U.S.) addressed here.

There is a clear need for new, independent analyses of the global temperature data…the raw data, that is. As I have mentioned before, we need independent groups doing new and independent global temperature analyses — not international committees of Nobel laureates passing down opinions on tablets of stone.

But, as always, the analysis presented above is meant more for stimulating thought and discussion, and does not equal a peer-reviewed paper. Caveat emptor.


Urban Heat Island, a US-versus-Them Update

March 11th, 2010

My post from yesterday showed a rather unexpected difference between the United States and the rest of the world in the average urban heat island (UHI) temperature-population relationship. Updated results shown below have now reduced that discrepancy…but not removed it.

I have now included more station temperature and population data by removing my requirement that two neighboring temperature measurement stations must have similar fractions of water coverage (lakes, coastlines, etc.). The results (shown below, second panel) reveal less of a discrepancy between the U.S. and the rest of the world than in my previous post. The US now shows weak warming at the lowest population densities, rather than cooling as was presented yesterday.

Also, I adjusted the population bin boundaries used for averaging to provide more uniform numbers of station pairs per bin. This has reduced the differences between individual years (top panel), suggesting more robust results. It has also increased the overall UHI warming effect, with about 1.0 deg. C average warming at a population density of 100 persons per sq. km.
[Figure: ISH-UHI-warming-global-and-US-non-US]


Global Urban Heat Island Effect Study: An Update

March 10th, 2010

This is an update to my previous post describing a new technique for estimating the average amount of urban heat island (UHI) warming accompanying an increase in population density. The analysis is based upon 4x per day temperature observations in the NOAA International Surface Hourly (ISH) dataset, and on 1 km population density data for the year 2000.

I’m providing a couple of charts with new results, below. The first chart shows the global average warming-versus-population-density-increase relationship for each year from 2000 to 2009. They all show clear evidence of UHI warming, even for small population density increases at very low population density. A population density of only 100 persons per sq. km exhibits average warming of about 0.8 deg. C compared to a nearby unpopulated temperature monitoring location.
[Figure: ISH-UHI-warming-global-by-year]

In this analysis, the number of independent temperature monitoring stations having at least 1 neighboring station with a lower population density within 150 km of it, increased from 2,183 in 2000, to 4,290 in 2009…an increase by a factor of 2 in ten years. The number of all resulting station pairs increased from 9,832 in 2000 to 30,761 in 2009, an increase of 3X.
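For those curious about the mechanics, here is a highly simplified sketch of one way to form such station pairs. It is my own illustration of the general idea (pairing stations within 150 km of each other and differencing their temperatures and population densities), not the actual analysis code, and the function names and inputs are assumptions for the example.

```python
import numpy as np

# Illustrative station-pairing sketch.  Inputs are equal-length arrays of
# station latitude, longitude, population density, and mean temperature.

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in km (inputs in degrees)."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def uhi_pairs(lat, lon, pop, temp, max_km=150.0):
    """For every pair of stations within max_km of each other, return the
    (higher-minus-lower) population density difference and the corresponding
    temperature difference, higher-density station minus lower-density station."""
    dpop, dtemp = [], []
    n = len(lat)
    for i in range(n):
        for j in range(i + 1, n):
            if great_circle_km(lat[i], lon[i], lat[j], lon[j]) <= max_km:
                hi, lo = (i, j) if pop[i] >= pop[j] else (j, i)
                dpop.append(pop[hi] - pop[lo])
                dtemp.append(temp[hi] - temp[lo])
    return np.array(dpop), np.array(dtemp)

# The resulting pairs can then be averaged in population-density bins to give
# the average UHI warming as a function of population density.
```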

The next chart shows how the results for the U.S. differ from non-US stations. In order to beat down the noise for the US-only results, I included all ten years (2000 thru 2009) in the analysis. The US results are obviously different from the non-US stations, with much less warming with an increase in population density, and even evidence of an actual slight cooling for the lowest population categories.
[Figure: ISH-UHI-US-vs-nonUS-2000-2009]

The cooling signal appeared in 5 of the 10 years, not all of them, a fact I am mentioning just in case someone asks whether it existed in all 10 years. I don’t know the reason for this, but I suspect that a little thought from Anthony Watts, Joe D’Aleo & others will help figure it out.

John Christy has agreed to co-author a paper on this new technique, since he has more experience than I do publishing in this area of research (UHI & land use change effects on thermometer data). We have not yet decided which journal to submit to.


February 2010 UAH Global Temperature Update: Version 5.3 Unveiled

March 5th, 2010

UPDATED: 2:16 p.m. CST March 6, 2010: Added a plot of the differences between v5.3 and v5.2.


YR MON GLOBE NH SH TROPICS
2009 1 0.213 0.418 0.009 -0.119
2009 2 0.220 0.557 -0.117 -0.091
2009 3 0.174 0.335 0.013 -0.198
2009 4 0.135 0.290 -0.020 -0.013
2009 5 0.102 0.109 0.094 -0.112
2009 6 0.022 -0.039 0.084 0.074
2009 7 0.414 0.188 0.640 0.479
2009 8 0.245 0.243 0.247 0.426
2009 9 0.502 0.571 0.433 0.596
2009 10 0.353 0.295 0.410 0.374
2009 11 0.504 0.443 0.565 0.482
2009 12 0.262 0.331 0.190 0.482
2010 1 0.630 0.809 0.451 0.677
2010 2 0.613 0.720 0.506 0.789

[Figure: UAH_LT_1979_thru_Feb_10]

The global-average lower tropospheric temperature remained high, at +0.61 deg. C for February, 2010. This is about the same as January, which in our new Version 5.3 of the UAH dataset was +0.63 deg. C. February was the second warmest February in the 32-year record, behind Feb. 1998, which was itself the second warmest of all months. The El Nino is still the dominant temperature signal; many people living in Northern Hemisphere temperate zones were still experiencing colder than average weather.

The new dataset version does not change the long-term trend in the dataset, nor does it yield revised record months; it does, however, reduce some of the month-to-month variability, which has been slowly increasing over time.

Version 5.3 accounts for the mismatch between the average seasonal cycle produced by the older MSU and the newer AMSU instruments. This affects the value of the individual monthly departures, but does not affect the year to year variations, and thus the overall trend remains the same.

Here is a comparison of v5.2 and v5.3 for global anomalies in lower tropospheric temperature.

YR MON v5.2 v5.3
2009 1 0.304 0.213
2009 2 0.347 0.220
2009 3 0.206 0.174
2009 4 0.090 0.135
2009 5 0.045 0.102
2009 6 0.003 0.022
2009 7 0.411 0.414
2009 8 0.229 0.245
2009 9 0.422 0.502
2009 10 0.286 0.353
2009 11 0.497 0.504
2009 12 0.288 0.262
2010 1 0.721 0.630
2010 2 0.740 0.613

trends since 11/78: +0.132 (v5.2), +0.132 (v5.3) deg. C per decade

The following discussion is provided by John Christy:
As discussed in our running technical comments last July, we have been looking at making an adjustment to the way the average seasonal cycle is removed from the newer AMSU instruments (since 1998) versus the older MSU instruments. At that time, others (e.g. Anthony Watts) brought to our attention the fact that UAH data tended to have some systematic peculiarities with specific months, e.g. February tended to be relatively warmer while September was relatively cooler in these comparisons with other datasets. In v5.2 of our dataset we relied considerably on the older MSUs to construct the average seasonal cycle used to calculate the monthly departures for the AMSU instruments. This created the peculiarities noted above. In v5.3 we have now limited this influence.

UPDATE: The following chart, which differences the v5.3 and v5.2 versions of the dataset, clearly illustrates the spurious component of the seasonal cycle that has been removed:
[Figure: TLT_GL_v.5.2_vs_v5.3]

The adjustments are very minor in terms of climate as they impact the relative departures within the year, not the year-to-year variations. Since the errors are largest in February (almost 0.13 C), we believe that February is the appropriate month to introduce v5.3 where readers will see the differences most clearly. Note that there is no change in the long term trend as both v5.2 and v5.3 show +0.132 C/decade. All that happens is a redistribution of a fraction of the anomalies among the months. Indeed, with v5.3 as with v5.2, Jan 2010 is still the warmest January and February 2010 is the second warmest Feb behind Feb 1998 in the 32-year record.
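To make concrete what “removing the average seasonal cycle” means, here is a minimal sketch of the operation. The function and its inputs are illustrative only (nothing here is the actual UAH processing code), but it shows why the choice of climatology changes individual monthly departures while leaving the year-to-year variations and the long-term trend essentially unchanged.

```python
import numpy as np

# Illustrative sketch: removing an average seasonal cycle from a monthly series.
# temps  : 1-D array of monthly mean temperatures
# months : calendar month (1-12) of each element
# clim_mask : boolean array selecting which part of the record defines the
#             climatology (e.g., the older-instrument years vs. the full record)

def anomalies(temps, months, clim_mask):
    """Subtract, from every month, the average of that calendar month
    computed over the subset of the record selected by clim_mask."""
    temps = np.asarray(temps, dtype=float)
    months = np.asarray(months)
    clim_mask = np.asarray(clim_mask, dtype=bool)
    anoms = np.empty_like(temps)
    for m in range(1, 13):
        in_month = (months == m)
        clim = temps[in_month & clim_mask].mean()
        anoms[in_month] = temps[in_month] - clim
    return anoms

# Switching the climatology period shifts each calendar month's anomalies by a
# (month-dependent) constant, so annual averages and trends are nearly unchanged.
```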

For a more detailed discussion of this issue written last July, email John Christy at christy@nsstc.uah.edu for the document.

[NOTE: These satellite measurements are not calibrated to surface thermometer data in any way, but instead use on-board redundant precision platinum resistance thermometers (PRTs) carried on the satellite radiometers. The PRTs are individually calibrated in a laboratory before being installed in the instruments.]