Fire & Water: Some Thoughts on Wood Stove Design and Efficiency

February 18th, 2011

Sometimes I have to get away from the climate stuff for a while. This is one of those times.

Also, each year at this time my wife asks how we can get our swimming pool to warm up quicker this spring. Even after 20 years, global warming hasn’t helped a darn bit.

She also always mentions wood heat as a possibility. I have always discounted the idea as too involved a project.

Well, this year we’re gonna git ‘er done. Last year I built a homemade solar pool heater. This year we are going to add some of that concentrated, carbon-based fuel to our energy portfolio.

After all, we DO have lots of wood available to us behind our house. Mature hardwoods, and the old trees just fall over and rot. I believe one of our white oaks dates to before our country WAS a country.

So, how to make a wood stove that can heat swimming pool water? Over the years, I’ve had enough experience with wood burning fireplaces, free-standing wood stoves, thermodynamics, radiative and convective heat transfer, buoyancy of heated air, etc., that I think I could help come up with a good stove design.

And ‘Uncle Lou’ (my wife’s sister’s husband) up in Sault Sainte Marie, Michigan has a lifetime of building and welding and fixing and fabricating. So, he’s helping me design a stainless steel wood stove with an outer water jacket that I’ll pump pool water through to heat the pool. We will use stainless steel to help keep iron out of the pool water.
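Just to get a feel for the scale of the job, here is a back-of-the-envelope sketch (in Python) of how much wood it would take to warm a pool by a few degrees. Every number in it is an illustrative assumption of mine (pool size, wood energy content, stove-to-water efficiency), not something from this post, and it ignores the heat the pool loses along the way.

```python
# Back-of-the-envelope estimate: wood needed to warm a pool by a few degrees.
# All parameters below are illustrative assumptions, not numbers from the post.

POOL_VOLUME_L    = 60_000     # hypothetical in-ground pool, liters
WATER_CP         = 4186.0     # J per kg per deg C (specific heat of water)
WATER_DENSITY    = 1.0        # kg per liter
DELTA_T          = 5.0        # desired warm-up, deg C
WOOD_ENERGY      = 16e6       # J per kg, roughly, for seasoned hardwood
STOVE_EFFICIENCY = 0.60       # assumed fraction of combustion heat delivered to the water jacket

# Energy the pool water must absorb (ignoring losses from the pool itself)
pool_mass_kg = POOL_VOLUME_L * WATER_DENSITY
energy_needed_J = pool_mass_kg * WATER_CP * DELTA_T

# Wood required, given the assumed stove efficiency
wood_kg = energy_needed_J / (WOOD_ENERGY * STOVE_EFFICIENCY)

print(f"Energy needed: {energy_needed_J/1e6:.0f} MJ")                     # ~1260 MJ here
print(f"Wood required at {STOVE_EFFICIENCY:.0%} efficiency: {wood_kg:.0f} kg")  # ~130 kg here
```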

Meanwhile, I’ve been reading about the newer EPA-certified stove designs – which is all you can buy anymore — that provide a hotter fire with more complete combustion of wood, rather than losing the gases and smoke out the chimney like the older “smoke dragon” designs do. I had no idea that (dry) wood could be so completely burned that there is little or no smoke at all. Cool!

The modern advance in wood stove technology is, simply put, to create a hotter fire with sufficient oxygen supply to burn all the wood and its byproducts.

To achieve this, the firebox is better insulated, and a pre-heated supply of air is made available in the upper portion of the firebox through perforated stainless steel secondary burn tubes so the wood gases and smoke can burn.

I’m sure many of you have these stoves, which are the only ones sold for inside residential use anymore. The secondary burn tubes produce beautiful, “ghost” flames, helping to ignite the wood gases and smoke that used to just go up the chimney.

So, this got me to thinking about the optimum stove design that would provide maximum efficiency, that is, the maximum amount of heat energy from the burning wood transferred into your home (or my swimming pool water).

The goal is pretty simple: burn the wood and its gases as completely as possible and let as little heat escape out the chimney as possible. But even after hundreds of years of experience, people are still debating the best way to accomplish that.

I was thinking about the efficiency of a car engine as an analogy…but it is totally wrong. 100% efficiency for a car engine would be for all of the energy created by burning fuel to go into the mechanical work of pushing the pistons, turning the engine, and creating motion, with zero waste heat.

The wood stove is just the opposite, though. We want to create as much “waste” heat as possible, with as little mechanical energy as possible used to “push” the air through the system.

So, what are the limits to a 100% efficient wood stove?

First, you must recognize that you have to lose SOME heat out the chimney. It is the warm air in the chimney which provides the buoyancy (lift) needed to draw more air into the firebox. But the greater the volume of air flowing out the chimney, and the higher its temperature, the lower the efficiency of the stove for heating purposes.
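As an aside, the chimney's "sucking" power can be sketched with the standard stack-effect relation: the draft pressure scales with chimney height and with how much warmer the flue gas is than the outside air. The numbers below are illustrative assumptions, not measurements from any actual stove.

```python
# Stack-effect draft: warmer flue gas is less dense than outside air, and the
# density difference over the chimney height is what "pulls" fresh air into the
# firebox. All numbers here are illustrative assumptions.

G        = 9.81     # m/s^2
RHO_OUT  = 1.25     # kg/m^3, outside air density on a cool day
H        = 5.0      # m, assumed chimney height
T_OUT_K  = 275.0    # outside air temperature, Kelvin
T_FLUE_K = 500.0    # assumed average flue-gas temperature, Kelvin

# Approximate draft pressure (Pa): rho_out * g * h * (1 - T_out / T_flue)
draft_pa = RHO_OUT * G * H * (1.0 - T_OUT_K / T_FLUE_K)
print(f"Draft: {draft_pa:.1f} Pa")   # about 28 Pa for these numbers
```

Cooling the exhaust lowers T_flue and therefore the draft, which is the tradeoff described above.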

Next, achieving the higher temperatures required in the firebox for more complete combustion means more insulation, which means a reduction in the rate of heat flow to the room — which is opposite to the whole point of heating with a wood stove in the first place.

Now, I realize a hotter fire which is burning fuel more completely might actually lead to an increase in heat transferred to the room….but, all other things being equal, more insulation MUST, by itself, reduce the rate of heat flow compared to less insulation. Simple thermodynamics.

It’s an interesting dichotomy, trying to increase the efficiency of these stoves. On the one hand you need to MINIMIZE the loss of heat from the firebox in order to attain the high temperatures required for more complete combustion. But you also want to MAXIMIZE the loss of heat by the stove to the room. That’s the whole point of using the stove.

But this really isn’t a dichotomy if you realize that you are only insulating a portion of the stove – the firebox – to achieve the high temperatures and more complete combustion. If you can then route the hot gases leaving the firebox through a different part of the stove before going up the chimney, you then have the opportunity to extract the extra heat you generated from more complete combustion at the higher temperatures created within the (well insulated) firebox.

In other words, the firebox portion of the stove is primarily the energy generation portion of the system, and the rest of the stove that the hot gases pass through is the heat recovery portion of the system.

What is needed is a way to provide the hot gases leaving the firebox a greater opportunity to transfer their heat through the stove to its surroundings. A longer path through the stove, with multiple baffles conducting heat to the outside of the stove, would be one way to accomplish this.

Another would be to have a system of fins inside. Either way, you need to get the hot gas to come in contact with as much stove inner surface as possible, to maximize conduction of the heat to the outside of the stove, before all the heat goes up the chimney.

Now, obviously, you can’t remove so much heat from the exhaust that the air in the chimney is no longer buoyant, because then you will lose the stove’s “sucking” power for the fresh air it needs to burn the wood. An insulated chimney will help keep those gases as warm as possible through the entire path length of the chimney.

The air supply is of particular interest to me. (After all, I am a meteorologist. We know air.) Why doesn’t a bonfire, with an unlimited supply of fresh air, burn all of the wood gases and smoke completely? It’s because as soon as a flame develops, it gets turbulently mixed with cooler ambient air, reducing the temperature of the mixture below what is necessary to burn the wood gases and smoke.

An analogy is the entrainment of environmental air into a convective cloud, which reduces the cloud’s ability to produce precipitation…a key component of the atmosphere’s heat engine.

So, in the modern wood stove they put tubes heated by the fire in the firebox to deliver an additional “secondary” source of air – very hot air — to the upper part of the firebox where the hot gases and smoke naturally collect. The pre-heating of the air is necessary for combustion of those gases to occur.

But after thinking and reading about this, I don’t really see the need for a distinction between “primary” and “secondary” air sources for a wood stove. All that is needed is a sufficient supply of pre-heated air to the whole fire. The secondary burn technology seems to me to be a retrofit to fix a problem that could just as easily have been fixed by reworking the primary air supply.

So, Uncle Lou and I have been discussing a way to preheat ALL of the air that enters the firebox, one that includes as its first ‘stop’ the window in the door, since a steady stream of hot fresh air is also needed to keep the window clean.

Of course, this is all in the design phase right now. Unfortunately, as Bert once told Ernie on Sesame Street regarding building a lemonade stand, “It’s easy to have ideas. It’s not so easy to make them work.”

So, if you don’t hear a progress report in a month or two, you’ll know the project was a failure. At least I don’t have to worry about burning the swimming pool down.

So, now the REAL stove experts out there can chime in and tell me where I’m wrong in my newbie analysis of wood stoves. It’s OK…I’m used to it.

Radiative Changes Over the Global Oceans During Warm and Cool Events

February 9th, 2011

In my continuing efforts to use satellite observations to test climate models that predict global warming, I keep trying different ways to analyze the data.

Here I’ll show how the global oceanic radiative budget changes during warm and cool events, which are mostly due to El Niño and La Niña (respectively). By ‘radiative budget’ I am talking about top-of-atmosphere absorbed sunlight and emitted infrared radiation.

I’ve condensed the results down to a single plot, which is actually a pretty good learning tool. It shows how radiative energy accumulates in the ocean-atmosphere system during warming, and how it is then lost again during cooling.

[If you are wondering how radiative ‘feedback’ fits into all this — oh, and I KNOW you are — imbalances in the net radiative flux at the top of the atmosphere can be thought of as some combination of forcing and feedback, which always act to oppose each other. A radiative imbalance of 2 Watts per sq. meter could be due to 3 Watts of forcing and -1 Watt of feedback, or 7 Watts of forcing and -5 Watts of feedback (where ‘feedback’ here includes the direct Planck temperature response of infrared radiation to temperature). Unfortunately, we have no good way of knowing the proportions of forcing and feedback, and it is feedback that will determine how much global warming we can expect from forcing agents like more atmospheric carbon dioxide.]
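For readers who like the bookkeeping written out, the standard linearized form of that statement is below (my notation, not the post's): the measured imbalance N mixes forcing and feedback, so the same N is consistent with many different combinations of the two.

```latex
% Linearized global energy budget (standard notation, not from the post):
%   N = net top-of-atmosphere radiative imbalance (W m^-2)
%   F = radiative forcing;  \lambda \Delta T = feedback response to the warming \Delta T
N \;=\; F \;-\; \lambda\,\Delta T
% The example in the text: N = 2 can come from F = 3 with \lambda\Delta T = 1,
% or from F = 7 with \lambda\Delta T = 5; the measurement alone cannot tell which.
```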

But for now let’s ignore that conceptual distinction, and just talk about radiative imbalances. This simplifies things since more energy input should be accompanied by a temperature rise, and more energy loss should be accompanied by a temperature fall. Conservation of energy.

And, as we will see from the data, that is exactly what happens.

We analyzed the 20th Century runs from a total of 14 IPCC climate models that Forster & Taylor (2006 J. Climate) also provided a diagnosed long-term climate sensitivity for. In order to isolate the variability in the models on time scales less than ten years or so, I removed the low-frequency variations with a 6th order polynomial fit to the surface temperature and radiative flux anomalies. It’s the short-term variability we can test with short term satellite datasets.
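As a concrete illustration of that detrending step, here is a minimal Python sketch of removing the low-frequency variability with a 6th-order polynomial fit. The variable names and the synthetic example are mine; this is not the code actually used on the model output.

```python
import numpy as np

def remove_low_frequency(anomalies, order=6):
    """Subtract a low-order polynomial fit to isolate sub-decadal variability.

    'anomalies' is a 1-D monthly time series (e.g., surface temperature or
    TOA radiative flux anomalies from a 20th Century model run).
    """
    t = np.arange(len(anomalies))
    coeffs = np.polyfit(t, anomalies, order)   # 6th-order fit to the full record
    low_freq = np.polyval(coeffs, t)           # the slow, century-scale variation
    return anomalies - low_freq                # what is left: the short-term signal

# Example with synthetic data (a slow curve plus interannual noise):
months = np.arange(1200)                       # a 100-year monthly record
series = 1e-6 * (months - 600)**2 + np.random.normal(0, 0.1, months.size)
short_term = remove_low_frequency(series)
```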

I’ve already averaged the results for the 5 models that had below-average climate sensitivity, and the 9 models that had above-average climate sensitivity.

The curves in the following plot are lag regression coefficients, which can be interpreted as the rate of radiative energy gain (or loss) per degree C of temperature change, at various time lags. A time lag of zero months can be thought of as the month of temperature maximum (or minimum). I actually verified this interpretation by examining composite warm and cold events from the CNRM-CM3 climate model run, which exhibits strong El Niño and La Niña activity.
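And here is one way such lag regression coefficients might be computed: regress the radiative flux anomalies against the temperature anomalies at each lead or lag. Again, this is a sketch of the idea, not the actual analysis code.

```python
import numpy as np

def lag_regression(temp, flux, max_lag=17):
    """Regression coefficient of flux on temperature at each lead/lag (months).

    Negative lags: flux leads temperature (radiative input before the peak);
    positive lags: flux lags temperature (radiative response after the peak).
    Returns (lags, coefficients in W m-2 per deg C).
    """
    lags = np.arange(-max_lag, max_lag + 1)
    coefs = []
    for lag in lags:
        if lag < 0:
            x, y = temp[-lag:], flux[:lag]     # flux shifted earlier in time
        elif lag > 0:
            x, y = temp[:-lag], flux[lag:]     # flux shifted later in time
        else:
            x, y = temp, flux
        # slope of the least-squares fit of flux anomalies on temperature anomalies
        coefs.append(np.polyfit(x, y, 1)[0])
    return lags, np.array(coefs)
```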

Also shown are satellite-based results, from detrended HadSST2 global sea surface temperature anomalies and satellite-measured anomalies in radiative fluxes from the Terra CERES instrument, for the 10-year period from March 2000 through June 2010.

The most obvious thing to note is that in the months running up to a temperature maximum (minimum), the global oceans are gaining (losing) extra radiative energy. This is true of all of the climate models, and in the satellite observations.

The above plot is possibly a more intuitive way to look at the data than the ‘phase space’ plots I’ve been pushing the last few years. One of the KEY things it shows is that doing these regressions only at ZERO time lag (as Dessler did in his 2010 cloud feedback paper, and as all previous researchers have done) really has very little meaning. Because of the time lags involved in the temperature response to radiative imbalances, one MUST do these analyses taking into account the time lag behavior if one is to have any hope of diagnosing feedback. At zero time lag, there is very little signal at all to analyze.

So, What Does This Tell Us About the Climate Models Used to Predict Global Warming?

Of course, what I am ultimately interested in is whether the satellite data can tell us anything that might allow us to determine which of the climate models are closer to reality in terms of their global warming predictions.

And, as usual, the results shown above do not provide a clear answer to that question.

Now, the satellite observations DO suggest that there are larger radiative imbalances associated with a given surface temperature change than the climate models exhibit. But the physical reason why this is the case cannot be determined without other information.

It could be due to a greater depth of water being involved in temperature changes in the real climate system, versus in climate models, on these time scales. Or, maybe the extra radiative input seen in the satellite data during warming is being offset by greater surface evaporation rates than the models produce.

But remember, conceptually these radiative changes are some combination of forcing and feedback, in unknown amounts. What I call forcing is what some people call “unforced internal variability” – radiative changes not due to feedback (by definition, the direct or indirect result of surface temperature changes). They are probably dominated by quasi-chaotic, circulation-induced variations in cloud cover, but could also be due to changes in free-tropospheric humidity.

Now, if we assume that the radiative changes AFTER the temperature maximum (or minimum) are mostly a feedback response, then one might argue that the satellite data shows more negative feedback (lower climate sensitivity) than the models do. The only trouble with that is that I am showing averages across models in the above plot. One of the MORE sensitive models actually had larger excursions than the satellite data exhibit!

So, while the conclusion might be true…the evidence is not exactly ironclad.

Also, while I won’t show the results here, there are other analyses that can be done. For instance: How much total energy do the models (versus observations) accumulate over time during the warming episodes? During the cooling episodes? And does that tell us anything? So far, based upon the analysis I’ve done, there is no clear answer. But I will keep looking.

In the meantime, you are free to interpret the above graph in any way you want. Maybe you will see something I missed.

UAH Update for January 2011: Global Temperatures in Freefall

February 2nd, 2011

…although this, too, shall pass, when La Nina goes away.

[Graph: UAH_LT_1979_thru_Jan_2011]


YR MON GLOBE NH SH TROPICS
2010 1 0.542 0.675 0.410 0.635
2010 2 0.510 0.553 0.466 0.759
2010 3 0.554 0.665 0.443 0.721
2010 4 0.400 0.606 0.193 0.633
2010 5 0.454 0.642 0.265 0.706
2010 6 0.385 0.482 0.287 0.485
2010 7 0.419 0.558 0.280 0.370
2010 8 0.441 0.579 0.304 0.321
2010 9 0.477 0.410 0.545 0.237
2010 10 0.306 0.257 0.356 0.106
2010 11 0.273 0.372 0.173 -0.117
2010 12 0.181 0.217 0.145 -0.222
2011 1 -0.009 -0.055 0.038 -0.369

LA NINA FINALLY BEING FELT IN TROPOSPHERIC TEMPERATURES
January 2011 experienced a precipitous drop in lower tropospheric temperatures over the tropics, Northern Hemisphere, and Southern Hemisphere. This was not unexpected, since global average sea surface temperatures have been falling for many months; as is usually the case with La Nina, the ocean surface got a head start on the troposphere.

This is shown in the following plot (note the shorter period of record, and different zero-baseline):

SO WHY ALL THE SNOWSTORMS?
While we would like to think our own personal experience of the snowiest winter ever in our entire, Methuselah-ian lifespan has some sort of cosmic — or even just global — significance, I would like to offer this plot of global oceanic precipitation variations from the same instrument that measured the above sea surface temperatures (AMSR-E on NASA’s Aqua satellite):

Note that precipitation amounts over the global-average oceans vary by only a few percent. What this means is that when one area gets unusually large amounts of precipitation, another area must get less.

Precipitation is always associated with rising air, and so a large vigorous precipitation system in one location means surrounding regions must have enhanced sinking air (with no precipitation).

In the winter, of course, the relatively warmer oceans next to cold continental air masses lead to snowstorm development in coastal areas. If the cold air mass over the Midwest and eastern U.S. is not dislodged by warmer Pacific air flowing in from the west, then the warm oceanic air from the Gulf of Mexico and western Atlantic keeps flowing up and over the cold dome of air, producing more snow and rain. The “storm track” and jet stream location follow that boundary between the cold and warm air masses.


A Challenge to the Climate Research Community

February 2nd, 2011

I’ve been picking up a lot of chatter in the last few days about the ‘settled science’ of global warming. What most people don’t realize is that the vast majority of published research on the topic simply assumes that warming is manmade. It in no way “proves” it.

If the science really is that settled, then this challenge should be easy:

Show me one peer-reviewed paper that has ruled out natural, internal climate cycles as the cause of most of the recent warming in the thermometer record.

Studies that have suggested that an increase in the total output of the sun cannot be blamed, do not count…the sun is an external driver. I’m talking about natural, internal variability.

The fact is that the ‘null hypothesis’ of global warming has never been rejected: That natural climate variability can explain everything we see in the climate system.

OMG! ANOTHER GLOBAL WARMING SNOWSTORM!!

January 31st, 2011

I really can’t decide whether I should hate Al Gore… or thank him for giving me something to write about.

He has caused the spread of more pseudo-scientific incompetence on the subject of global warming (I’m sorry — climate change) than any climate scientist could possibly have ever accomplished. Who else but a politician could spin so much certainty out of a theory?

As someone who has lived and breathed meteorology and climate for 40 years now, I can assure you that this winter’s storminess in the little 2% patch of the Earth we like to call the ‘United States of America’ has nothing to do with your SUV.

Natural climate variability? Maybe.

But I would more likely chalk it up to something we used to call “WEATHER”.

Let me give you a few factoids:

1) No serious climate researcher — including the ones I disagree with — believes global warming can cause colder weather. Unless they have become delusional as a result of some sort of mental illness. One of the hallmarks of global warming theory is LESS extratropical cyclone activity — not more.

2) If some small region of the Earth is experiencing unusually persistent storminess, you can bet some other region is experiencing unusually quiet weather. You see, in the winter we get these things called ‘storm tracks’….

3) Evidence for point #2 is that we now have many years of global satellite measurements of precipitation which show that the annual amount of precipitation that falls on the Earth stays remarkably constant from year to year. The AREAS where it occurs just happen to move around a whole lot. Again, we used to call that “weather”.

4) Global average temperature anomalies (departures from seasonal norms) have been falling precipitously for about 12 months now. Gee, maybe these snowstorms are from global cooling! Someone should look into that! (I know…cold and snow from global cooling sounds crazy….I’m just sayin’….)

I could go on and on.

Now, I know I’m not going to change the minds of any of the True Believers…those who read all of Reverend Al’s sermons, and say things like, “You know, global warming can mean warmer OR colder, wetter OR drier, cloudier OR sunnier, windier OR calmer, …”. Can I get an ‘amen’??

But I hope I can still save a few of those out there who are still capable of independent reasoning and thought.

NOW can I go to bed?

UPDATE: Further Evidence of Low Climate Sensitivity from NASA’s Aqua Satellite

January 28th, 2011

After yesterday’s post, I decided to run the simple forcing-feedback model we developed to mimic the Aqua satellite observations of global oceanic temperature and radiative flux variations.

I’ve also perused the comments people have made there, and will try to clarify what I’ve done and why it’s important.

First of all, when we are trying to figure out whether “global warming” is mostly manmade or natural, and how the climate system responds to forcing, the importance of changes in the global radiative budget (something I, and even the IPCC, emphasize) cannot be overstated.

Changes in the global-average radiative budget are about the only way for the Earth to warm or cool on time scales of years or longer (unless there is some sort of change in geothermal heat flux…we won’t even go there.)

What we want to know, ultimately, is how much warming will result from the radiative imbalance caused by adding CO2 to the atmosphere. It is natural to try to answer that question by examining how Mother Nature handles things when there are natural, year-to-year warmings and coolings. I believe that the NASA satellite assets we have in orbit right now are going to go a long way toward providing that answer.

The answer depends upon how clouds, evaporation, water vapor, etc., change IN RESPONSE TO a temperature change, thus further altering the radiative balance and final temperature response. This is called feedback, and it is traditionally referenced to a surface temperature change.

The GOOD news is that we have pretty accurate satellite measurements of the rather small, year-to-year changes in global radiative fluxes over the last 10 years, as well as of the temperature changes that accompanied them.

The BAD news is that, even if those measurements were perfect, determining feedback (temperature causing radiative changes) is confounded by natural forcings (radiative changes causing temperature changes).

This interplay between natural variations in global-average temperature and radiative flux is always occurring, with the two intermingled, and the goal is to somehow disentangle them to get at the feedback part.

Keep in mind that “feedback” in the climate system is more of a conceptual construct. It isn’t something we can measure directly with an instrument, like temperature. But the feedback concept is useful because we are pretty sure that elements of the climate system (e.g. clouds) WILL change in response to any radiative imbalance imposed upon the system, and those changes will either AMPLIFY or REDUCE the temperature changes resulting from the initial imbalance. (While it might not be exactly the same kind of feedback electrical engineers deal with, there is currently no better term to describe the process…a process which we know must be occurring, and must be understood in order to better predict human-caused global warming.)

More than any other factor, feedbacks will determine whether anthropogenic global warming is something we need to worry about.

An Example from the Kitchen
While this might all seem rather cryptic, ALL of these processes have direct analogs to a pot of water warming on the stove. You can turn the heat up on the stove (forcing), and the water will warm. But if you also remove the lid in proportion to the stove being turned up (negative feedback), you can reduce the warming. It’s all based upon energy conservation concepts, which ordinary people are exposed to every day.

The IPCC believes Mother Nature covers up the pot even more as the stove is turned up, causing even more warming in response to a “forcing”.

I think they are wrong.

NASA Aqua Satellite Observations of the Global Oceans
Similar to what I plotted yesterday, the following plot shows time-lagged regression coefficients between time series of global oceanic radiative flux (from the CERES instrument on Aqua), and sea surface temperature (from AMSR-E on Aqua). Yesterday’s plot also showed the results when I used the Hadley Center’s SST measurements (the dashed line in that graph), and the results were almost identical. But since I’m the U.S. Science Team Leader for AMSR-E, I’ll use it instead. 🙂

The way these regression coefficients can be interpreted is that they quantify the rate at which radiative energy is GAINED by the global ocean during periods when SST is rising, and the rate at which radiative energy is LOST when SST is falling. Conceptually, the vertical line at zero months time lag can be thought of as corresponding to the time of peak SST.

The Simple Model “Best” Match to the Satellite Data
I’ve run our simple forcing-feedback model (originally suggested to us by Isaac Held at Princeton) to try to match the satellite observations. I force the model with quasi-random time variations in the global radiative energy budget — representing, say, natural, quasi-chaotic variations in cloud cover — and then see how the model temperatures respond. The model has been available here for many months now, if you want to play with it.

The model’s response to these radiative forcings depends upon how I set the model’s: (1) ocean mixing depth, which will determine how much the temperature will change for a given energy imbalance imposed upon the model, and (2) feedback parameter, which is what we ultimately want to determine from the satellite data.

I found that a 70 meter deep layer provided about the right RATIO between the satellite-observed monthly radiative variations (0.8 Watts per sq. meter standard deviation) and SST variations (0.08 deg. C standard deviation). At the same time, I had to adjust the magnitude of the radiative forcing to get about the right ABSOLUTE MAGNITUDES for those standard deviation statistics, too.

The “best fit” I got after about an hour of fiddling around with the inputs is represented by the blue curve in the above chart. Importantly, the assumed feedback parameter (5.5) is solidly in “negative feedback” territory. IF this was the true feedback operating in the real climate system on the long time scales of ‘global warming’, it would mean that our worries over anthropogenic global warming have been, for all practical purposes, a false alarm.
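For anyone who wants to experiment along these lines, below is a minimal sketch of the kind of mixed-layer forcing-feedback model being described: the heat capacity of a 70 meter ocean layer, a quasi-random radiative forcing, and a feedback term proportional to the temperature anomaly. The 70 m depth and the 5.5 W m-2 K-1 feedback parameter come from the text above; the white-noise forcing, the time step, and everything else are illustrative assumptions, and the forcing amplitude would have to be tuned (as described above) to match the observed standard deviations.

```python
import numpy as np

# Minimal mixed-layer forcing-feedback model:  Cp * dT/dt = F(t) - lambda * T
# The 70 m depth and lambda = 5.5 W m-2 K-1 follow the post; the forcing noise
# model and time step are illustrative assumptions only.

RHO_W    = 1025.0          # kg/m^3, seawater density
CP_W     = 4000.0          # J/kg/K, approximate seawater specific heat
DEPTH    = 70.0            # m, assumed ocean mixed-layer depth
LAMBDA   = 5.5             # W m-2 K-1, net feedback parameter ("negative feedback")
DT       = 30.0 * 86400.0  # one-month time step, seconds
N_MONTHS = 120             # a 10-year run

heat_capacity = RHO_W * CP_W * DEPTH          # J m-2 K-1

rng = np.random.default_rng(0)
forcing = rng.normal(0.0, 1.0, N_MONTHS)      # quasi-random radiative forcing, W m-2
temp = np.zeros(N_MONTHS)                     # temperature anomaly, deg C

for i in range(1, N_MONTHS):
    net_flux = forcing[i] - LAMBDA * temp[i - 1]            # TOA imbalance this month
    temp[i] = temp[i - 1] + net_flux * DT / heat_capacity   # integrate the energy balance

net = forcing - LAMBDA * temp
print(f"std(net radiative flux) = {net.std():.2f} W m-2")
print(f"std(temperature)        = {temp.std():.3f} deg C")
```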

The Simple Model Run With the IPCC’s Average Feedback

At this point, a natural question is, How does the simple model behave if I run it with a feedback typical of the IPCC climate models? The average net feedback parameter across the IPCC models is about 1.4 Watts per sq. meter per degree, and the following plot shows the simple model’s response to that feedback value compared to the satellite observations.

A comparison between the 2 charts above would seem to indicate that the satellite data are more consistent with negative feedback (which, if you are wondering, is a net feedback parameter greater than 3.2 W m-2 K-1) than they are with positive feedback. But it could be that feedbacks diagnosed from the IPCC models only over the global oceans will be necessary to provide a more apples-to-apples comparison on this point.
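A quick note on where that 3.2 number comes from, and what such feedback parameters imply for warming at doubled CO2, using standard relations that are not spelled out in the post:

```latex
% The no-feedback (Planck) response is roughly \lambda_0 \approx 3.2 W m^-2 K^-1,
% so a net feedback parameter above 3.2 means the other feedbacks add to that
% restoring response (net negative feedback); below 3.2 means they oppose it.
% With the standard ~3.7 W m^-2 forcing for doubled CO2, the equilibrium warming is
\Delta T_{2\times\mathrm{CO_2}} \;\approx\; \frac{3.7\ \mathrm{W\,m^{-2}}}{\lambda_{\mathrm{net}}}
% e.g. \lambda_net = 5.5 gives roughly 0.7 deg C, while the IPCC-average 1.4 gives roughly 2.6 deg C.
```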

Important Caveat
While it would be tempting to think that the IPCC models are potentially invalidated by this comparison, Dessler (2010) has correctly pointed out that the short-term feedback behavior of the IPCC models appears to have little or no relationship to their long-term climate sensitivity.

In other words, even if short-term feedbacks in the real climate system are strongly negative, this doesn’t prove the long-term global warming in the models is wrong.

In fact, NO ONE HAS YET FOUND A WAY WITH OBSERVATIONAL DATA TO TEST CLIMATE MODEL SENSITIVITY. This means we have no idea which of the climate models’ projections are more likely to come true.

This dirty little secret of the climate modeling community is seldom mentioned outside the community. Don’t tell anyone I told you.

This is why climate researchers talk about probable ranges of climate sensitivity. Whatever that means!…there is no statistical probability involved with one-of-a-kind events like global warming!

There is HUGE uncertainty on this issue. And I will continue to contend that this uncertainty is a DIRECT RESULT of researchers not distinguishing between cause and effect when analyzing data.

Toward Improved Climate Sensitivity Estimates
As I mentioned yesterday, Dessler (2010) only addressed ZERO-time lag relationships, as did all previous investigators doing similar kinds of work. In contrast, the plots I am presenting here (and in yesterday’s post) show how these regression coefficients vary considerably with time lag. In fact, at zero time lag, the relationships become virtually meaningless. Cause and effect are hopelessly intertwined.

But we CAN measure radiative changes BEFORE a temperature peak is reached, and in the months FOLLOWING the peak. Using such additional “degrees of freedom” in data analysis will be critical if we are to ever determine climate sensitivity from observational data. I know that Dick Lindzen is also advocating the very same point. If you are a lay person who understands this, can I get an “amen”? Because, so far, other climate researchers are keeping their mouths shut.

It is imperative that the time lags (at a minimum) be taken into account in such studies. Our previous paper (Spencer & Braswell, 2010) used phase space plots as a way of illustrating time lag behavior, but it could be that plots like I have presented here would be more readily understood by other scientists.

Unfortunately, the longer the climate community establishment keeps its head in the sand on this issue, the more foolish it will look in the long run.

New Results on Climate Sensitivity: Models vs. Observations

January 27th, 2011

Partly as a result of my recent e-mail debate with Andy Dessler on cloud feedbacks (the variable most likely to determine whether we need to worry about manmade global warming), I have once again returned to an analysis of the climate models and the satellite observations.

I have just analyzed the 20th Century runs from the IPCC’s three most sensitive models (those producing the most global warming), and the 3 least sensitive models (those that produce the least global warming), and compared their behavior to the 10 years of global temperature and radiative budget data Dessler analyzed (as did Spencer & Braswell, 2010).

The following plot shows the most pertinent results. While it requires some explanation, an understanding of it will go a long way to better appreciating not only how climate models and the real world differ, but also what happens when the Earth warms and cools from year-to-year…say from El Nino or La Nina.

What the plot shows is (on the vertical axis) how much net loss or gain in radiant energy occurs for a given amount of global-average surface warming, at different time lags relative to that temperature peak (on the horizontal axis). You can click on the graph to get a large version.

All observations are shown with black curves; the climate model relationships are shown in either red (3 models that predict the most global warming during the 21st Century), or blue (the 3 models predicting the least warming). Let’s examine what these curves tell us:

1) RADIATIVE ENERGY ACCUMULATES DURING WARMING IN ADVANCE OF THE TEMPERATURE PEAK: In the months preceding a peak in global temperatures (the left half of the graph), both models and observations show the Earth receives more radiant energy than it loses (try not to be confused by the negative sign). This probably occurs from circulation-induced changes in cloud cover, most likely a decrease in low clouds letting more sunlight in (“SW” means shortwave, i.e. solar)…although an increase in high cloud cover or tropospheric humidity could also be involved, which causes a reduction in the rate of infrared (longwave, or “LW”) energy loss. This portion of the graph supports my (and Lindzen’s) contention that El Nino warming is partly a radiatively-driven phenomenon. [The curves with the much larger excursions are for oceans-only, from instruments on NASA’s Aqua satellite. The larger excursions are likely related to the higher heat capacity of the oceans: it takes more radiative input to cause a given amount of surface warming of the oceans than of the land.]

2) RADIATIVE ENERGY IS LOST DURING COOLING AFTER THE TEMPERATURE PEAK: In the months following a peak in global average temperature, there is a net loss of radiative energy by the Earth. Note that THIS is where there is more divergence in the behavior of the climate models, and the observations. While all the climate models showed about the same amount of radiative input per degree of warming, during the cooling period there is a tendency for the least sensitive climate models (blue curves) to lose more energy than the sensitive models. NOTE that this distinction is NOT apparent at zero time lag, which is the relationship examined by Dessler 2010.

WHAT DOES THE DIVERGENCE BETWEEN THE MODELS DURING THE COOLING PERIOD MEAN?
Why would the climate models that produce less global warming during the 21st Century (blue curves) tend to lose MORE radiant energy for a given amount of surface temperature cooling? The first answer that comes to my mind is that a deeper layer of the ocean is involved during cooling events in these models.

For instance, look at the red curve with the largest dots…the IPCC’s most sensitive model. During cooling, the model gives up much less radiant energy to space than it GAINED during the surface warming phase. The most obvious (though not necessarily correct) explanation for this is that this model (MIROC-Hires) tends to accumulate energy in the ocean over time, causing a spurious warming of the deep ocean.

These results suggest that much more can be discerned about the forcing and feedback behavior of the climate system when time lags between temperature and radiative changes are taken into account. This is why Spencer & Braswell (2010) examined phase space plots of the data, and why Lindzen is emphasizing time lags in 2 papers he is currently struggling to get through the peer review cycle.

SO WHICH OF THE CLIMATE MODELS IS MORE LIKELY TO BE CORRECT?

This is a tough one. The above plot seems to suggest that the observations favor a low climate sensitivity…maybe even less than any of the models. But the results are less than compelling.

For instance, at 3 months after the temperature peak, the conclusion seems clear: the satellite data show a climate system less sensitive than even the least sensitive model. But by 9 months after the temperature peak, the satellite observations show the same relationship as one of the most sensitive climate models.

So, I’m sure that you can look at this chart and see all kinds of relationships that support your view of climate change, and that’s fine. But *MY* contention is that we MUST move beyond the simplistic statistics of the past (e.g., regressions only at zero time lag) if we are to get ANY closer to figuring out whether the observed behavior of the real climate system supports either (1) a resilient climate system virtually immune to the activities of humans, or (2) a climate system that is going to punish our use of fossil fuels with a global warming Armageddon.

The IPCC is no nearer to answering that question than they were 20 years ago. Why?

Dessler-Spencer Cloud Feedback Debate Update

January 20th, 2011

The e-mail debate I have been having with Andy Dessler over his recent paper purporting to show positive cloud feedback in 10 years of satellite data appears to have reached an impasse.

Dick Lindzen has chimed in on my side in recent days, but Andy continues to claim that – at least during the 2000-2010 period in question — I have provided no evidence that clouds cause climate variations.

This is remarkably similar to how Kevin Trenberth rebutted my last congressional testimony…”clouds don’t cause climate change”, is approximately what I recall Kevin saying.

So, let’s return to Andy Dessler’s main piece of evidence, which is Fig. 2 from his paper, showing how monthly, global-average changes in (1) clouds and (2) surface temperature relate to each other, in the satellite observations (top panel), and in the ECHAM climate model (bottom panel, click for large version):

Andy has fitted regression lines to the data, and both have a slope approaching zero (for some reason, I can’t even find correlation coefficients in his paper). He claims these regression slopes support positive cloud feedback, in both the satellite observations and the climate model.

Now, why do I (and Dick Lindzen) disagree with this interpretation of the data? Because, while feedback is — by definition — temperature change (the horizontal axis) causing a cloud-induced radiative change (the vertical axis), NO ACCOUNTING HAS BEEN MADE FOR CAUSATION IN THE OPPOSITE DIRECTION.

And as shown most recently by Spencer & Braswell (2010, SB2010), any non-feedback source of cloud variations will (necessarily) cause a temperature response that is highly DE-correlated…just as we see in the satellite data! In fact, we showed that a near-zero regression slope can occur with even strongly NEGATIVE cloud feedback.

The bottom line is that you cannot use simple regression to infer cloud feedbacks from data like those seen in Dessler’s data plots.
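To make that point concrete, here is a toy Python version of the Spencer & Braswell (2010)-style argument (my own construction, not their published calculation): random, non-feedback cloud forcing drives the temperature in a simple mixed-layer model with a strongly negative prescribed feedback, and the zero-lag regression of the resulting radiative variations against temperature still comes out near zero.

```python
import numpy as np

# Toy version of the Spencer & Braswell (2010)-style argument (my construction,
# not their published calculation). Random "non-feedback" cloud forcing drives
# temperature in a simple mixed-layer model with a strongly negative prescribed
# feedback; a zero-lag regression of the total radiative variation on temperature
# then recovers a slope near zero, not the prescribed feedback.

rng = np.random.default_rng(1)
N_MONTHS = 1200
DT = 30.0 * 86400.0                       # one-month time step, seconds
C = 1025.0 * 4000.0 * 50.0                # heat capacity of an assumed 50 m ocean layer
TRUE_LAMBDA = 6.0                         # prescribed feedback parameter, W m-2 K-1

# Red-noise cloud forcing (an assumed character, for illustration only)
forcing = np.zeros(N_MONTHS)
for i in range(1, N_MONTHS):
    forcing[i] = 0.9 * forcing[i - 1] + rng.normal(0.0, 0.5)

# Temperature response of the mixed layer
temp = np.zeros(N_MONTHS)
for i in range(1, N_MONTHS):
    net = forcing[i] - TRUE_LAMBDA * temp[i - 1]
    temp[i] = temp[i - 1] + net * DT / C

# What a zero-lag regression "sees": forcing and feedback hopelessly mixed.
# With feedback alone the slope would be -TRUE_LAMBDA; with forcing driving the
# temperature it collapses toward zero.
measured_flux = forcing - TRUE_LAMBDA * temp
slope = np.polyfit(temp, measured_flux, 1)[0]
print(f"prescribed feedback: -{TRUE_LAMBDA:.1f}; zero-lag regression slope: {slope:.2f} W m-2 K-1")
```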

This is not a new claim…there have been earlier papers cautioning against inferring cloud feedback (a specific kind of causation) from such data. The first two papers that come to mind are Aires & Rossow (2003 QJRMS), and Stephens (2005 J Climate). Nevertheless, researchers continue to use such statistics to try to justify the claimed reality of continuing climate model projections of strong global warming.

I’m sorry, but finding some statistical relationship with a near-zero correlation in BOTH the satellite data AND in the climate model behavior is (in my opinion) nowhere near proving that climate models are useful for long-term predictions of the climate system.

If that makes me a “denier”, so be it.

Dec. 2010 UAH Global Temperature Update: +0.18 deg. C

January 3rd, 2011

UPDATE #1 (1/3/11, 2:50 p.m. CST): Graph fixed…it was missing Dec. 2010.

UPDATE #2 (1/3/11, 3:25 p.m. CST): Appended global sea surface temperature anomalies from AMSR-E.

NEW 30-YEAR BASE PERIOD IMPLEMENTED!


YR MON GLOBE NH SH TROPICS
2010 1 0.542 0.675 0.410 0.635
2010 2 0.510 0.553 0.466 0.759
2010 3 0.554 0.665 0.443 0.721
2010 4 0.400 0.606 0.193 0.633
2010 5 0.454 0.642 0.265 0.706
2010 6 0.385 0.482 0.287 0.485
2010 7 0.419 0.558 0.280 0.370
2010 8 0.441 0.579 0.304 0.321
2010 9 0.477 0.410 0.545 0.237
2010 10 0.306 0.257 0.356 0.106
2010 11 0.273 0.372 0.173 -0.117
2010 12 0.180 0.213 0.147 -0.221


[Graph: UAH_LT_1979_thru_Dec_10]

NEW 30-YEAR BASE PERIOD IMPLEMENTED!
Sorry for yelling like that, but if you have been following our global tropospheric temperature updates every month, you will have to re-calibrate your brains because we have just switched from a 20-year base period (1979-1998) to a more traditional 30-year base period (1981-2010), like the one NOAA uses for climate “normals”.

This change from a 20 to a 30 year base period has 2 main impacts:

1) because the most recent decade averaged somewhat warmer than the previous two decades, the anomaly values will be about 0.1 deg. C lower than they used to be. This does NOT affect the long-term trend of the data…it only reflects a change in the zero-level, which is somewhat arbitrary.

2) the 30-year average annual cycle shape will be somewhat different, and more representative of what is “normal” for the satellite record than the 20-year period was; as a result, the month-to-month changes in the anomalies might be slightly less “erratic” in appearance. (Some enterprising person should check into that with the old versus new anomaly datasets.)
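For anyone who wants to check these two points for themselves, the anomaly calculation itself is simple. Here is a sketch of how a monthly anomaly series is recomputed against a different base period; the function and its inputs are illustrative, not the actual UAH processing code.

```python
import numpy as np

def monthly_anomalies(temps, years, base_start, base_end):
    """Anomalies relative to each calendar month's mean over a chosen base period.

    temps: monthly absolute (or brightness) temperatures, starting in January;
    years: the calendar year of each entry.
    """
    temps = np.asarray(temps, dtype=float)
    years = np.asarray(years)
    month_index = np.arange(len(temps)) % 12           # 0 = Jan ... 11 = Dec
    anomalies = np.empty_like(temps)
    for m in range(12):
        in_month = month_index == m
        in_base = in_month & (years >= base_start) & (years <= base_end)
        normal = temps[in_base].mean()                  # that month's base-period "normal"
        anomalies[in_month] = temps[in_month] - normal
    return anomalies

# Recomputing with base_start/base_end = 1981/2010 instead of 1979/1998 raises each
# monthly "normal" (the recent decade was warmer), so every anomaly shifts down by
# roughly 0.1 deg C; month-to-month differences and the long-term trend are unchanged.
```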

Note that the tropics continue to cool as a result of the La Nina still in progress, and the Northern Hemisphere also cooled in December, more consistent with the anecdotal evidence. 🙂

I will provide a global sea surface temperature update later today.

WHO WINS THE RACE FOR WARMEST YEAR?
As far as the race for warmest year goes, 1998 (+0.424 deg. C) barely edged out 2010 (+0.411 deg. C), but the difference (0.01 deg. C) is nowhere near statistically significant. So feel free to use or misuse those statistics to your heart’s content.

THE DISCOVER WEBSITE: NOAA-15 PROBLEMS STARTING IN MID-DECEMBER
For those tracking our daily updates of global temperatures at the Discover website, remember that only 2 “channels” can be trusted for comparing different years to each other, and they are the only ones posted there that come from NASA’s Aqua satellite:

1) only ch. 5 data should be used for tracking tropospheric temperatures,
2) the global-average “sea surface” temperatures are from AMSR-E on Aqua, and should be accurate.

The rest of the channels come from the AMSU on the 12 year old NOAA-15 satellite, WHICH IS NOW EXPERIENCING LARGE AMOUNTS OF MISSING DATA AS OF AROUND DECEMBER 20, 2010. This is why some of you have noted exceptionally large temperature changes in late December. While we wait for NOAA to investigate, it seems like more than coincidence that the NOAA-15 AMSU status report had a December 17 notice that the AMSU scan motor position was being reported incorrectly due to a bit error.

The notice says the problem has been sporadic but increasing over time, as has the amount of missing data I have seen during my processing. At this early stage, I am guessing that the processing software cannot determine which direction the instrument is pointing when making its measurements, and so the data from the radiometer are not being processed.

The daily NOAA-15 AMSU imagery available at the Discover website shows that the data loss is much more in the Northern Hemisphere than the Southern Hemisphere, which suggests that the temperature of the instrument is probably involved in the bit error rate. But at this point, this is all my speculation, based upon my past experience studying how the temperature of these instruments vary throughout the orbit as the solar illumination of the spacecraft varies.

SST UPDATE FROM AMSR-E

The following plot shows global average sea surface temperatures from the AMSR-E instrument over the lifetime of the Aqua satellite, through Dec 31, 2010. The SSTs at the end of December suggest that the tropospheric temperatures in the previous graph (above) still have a ways to fall in the coming months to catch up to the ocean, which should now be approaching its coolest point if it follows the course of previous La Ninas.


Why Most Published Research Findings are False

January 3rd, 2011

Those aren’t my words — it’s the title of a 2005 article, brought to my attention by Cal Beisner, which uses probability theory to “prove” that “…most claimed research findings are false”. While the article comes from the medical research field, it is sufficiently general that some of what it discusses can be applied to global warming research as well.

I would argue that the situation is even worse for what I consider to be the central theory of the climate change debate: that adding greenhouse gases to the atmosphere causes significant warming of the climate system. Two corollaries of that theory are that (1) the warming we have seen in recent decades is human-caused, and (2) significant warming will continue into the future as we keep using fossil fuels.

The first problem I see with scientifically determining whether the theory of anthropogenic global warming (AGW) is likely to be true is that it is a one-of-a-kind event. This immediately reduces our scientific confidence in pinpointing the cause of warming. The following proxy reconstruction of temperature variations over the last 2,000 years suggests that global warming (and cooling) are the rule, not the exception, and so greenhouse gas increases in the last 100 years occurring during warming might be largely a coincidence.

Twice I have testified in Congress that unbiased funding on the subject of the causes of warming would be much closer to a reality if 50% of that money were devoted to finding natural reasons for climate change. Currently, that kind of research is almost non-existent.

A second, related problem is that we cannot put the Earth in the laboratory to run controlled experiments on. Now, we CAN determine in the laboratory that certain atmospheric constituents (water vapor, water droplets, carbon dioxide, methane) absorb and emit infrared energy…the physical basis for the so-called greenhouse effect. But the ultimate uncertainty over atmospheric feedbacks — e.g. determining whether cloud changes with warming reduce or amplify that warming — cannot be tested with any controlled experiment.

A third problem is the difficulty in separating cause from effect. Determining whether atmospheric feedbacks are positive or negative requires analysis of entire, quasi-global atmospheric circulation systems. Just noticing that more clouds tend to form over warm regions does not tell you anything useful about whether cloud feedbacks are positive or negative. Atmospheric and oceanic circulation systems involve all kinds of interrelated processes in which cause and effect must surely be operating. But separating cause from effect is something else entirely.

For example, just establishing that years experiencing global warmth have less cloud cover letting more sunlight in does not prove positive cloud feedback…simply because the warming could have been the result of — rather than the cause of — fewer clouds. This is the subject that Andy Dessler and I have been debating recently, and I consider it to be the Achilles heel of AGW theory.

After all, it is not the average role of clouds in the climate system that is being debated — we already know it is a cooling effect. It’s instead how clouds will change as a result of warming that we are interested in. Maybe they are the same thing (which is what I’m betting)…but so far, no one has found a way to prove or disprove it. And I believe cause-versus-effect is at the heart of that uncertainty.

A fourth problem with determining whether AGW theory is true or not is closely related to a similar problem medical research has — the source of funding. This has got to be one of the least appreciated sources of bias in global warming research. In pharmaceutical research, experimentally demonstrating the efficacy of some new drug might be influenced by the fact that the money for the research came from the company that developed the drug in the first place. This is partly why double-blind studies involving many participants (we have only one: Earth) were developed.

But in global warming research, there is a popular misconception that oil industry-funded climate research actually exists, and has skewed the science. I can’t think of a single scientific study that has been funded by an oil or coal company.

But what DOES exist is a large organization that has a virtual monopoly on global warming research in the U.S., and that has a vested interest in AGW theory being true: the U.S. Government. The idea that government-funded climate research is unbiased is laughable. The push for ever increasing levels of government regulation and legislation, the desire of government managers to grow their programs, the dependence of congressional funding of a problem on the existence of a “problem” to begin with, and the U.N.’s desire to find reasons to move toward global governance, all lead to inherent bias in climate research.

At least with medical research, there will always be funding because disease will always exist. But human-caused warming could end up being little more than a false alarm…as well as a black eye for the climate research community. And lest we forget, possibly the biggest funding-related source of bias in climate research is that research community of scientists. Everyone knows that if the AGW “problem” is no longer a problem, their source of research funding will disappear.

Sometimes I get accused of being a conspiracy nut for believing these things. Well, whoever accuses me of that has obviously not worked in government or spent much time dealing with program managers in Washington. There is no conspiracy, because these things are not done in secret. The U.N.’s Agenda 21 is there for all to read.

The bottom line is that there could scarcely be a more ill-posed scientific question than whether global warming is human-caused: a one-of-a-kind event, the Earth can’t be put into a laboratory to study, cause and effect are intermingled, and the political and financial sources of bias in the resulting research are everywhere.

So, when some scientist says we “know” that warming is human-caused, I cringe at the embarrassing abundance of scientific ignorance on display. No wonder the public doesn’t trust scientific predictions — just as suggested by the 2005 study I mentioned at the outset, those predictions have almost always been wrong!