Yes, Virginia, Cooler Objects Can Make Warmer Objects Even Warmer Still

July 23rd, 2010

Probably as the result of my recent post explaining in simple terms my “skepticism” about global warming being mostly caused by carbon dioxide emissions, I’m getting a lot of e-mail traffic from some nice folks who are trying to convince me that the physics of the so-called Greenhouse Effect are not physically possible.

More specifically, that adding CO2 to the atmosphere is not physically capable of causing warming.

These arguments usually involve claims that “back radiation” can not flow from the cooler upper layers of the atmosphere to the warmer lower layers. This back radiation is a critical component of the theoretical explanation for the Greenhouse Effect.

Sometimes the Second Law of Thermodynamics, or Kirchhoff’s Law of Thermal Radiation, is invoked in these arguments against back radiation and the greenhouse effect.

One of the more common statements is, “How can a cooler atmospheric layer possibly heat a warmer atmospheric layer below it?” The person asking the question obviously thinks the hypothetical case represented by their question is so ridiculous that no one could disagree with them.

Well, I’m going to go ahead and say it: THE PRESENCE OF COOLER OBJECTS CAN, AND DOES, CAUSE WARMER OBJECTS TO GET EVEN HOTTER.

In fact, this is happening all around us, all the time. The reason why we might be confused by the apparent incongruity of the statement is that we don’t spend enough time thinking about why the temperature of something is what it is.

How Cooler Objects Make Warmer Objects Even Hotter

One way to demonstrate the concept is with the following thought experiment, which I will model roughly after the Earth suspended in the cold of outer space. Even my oldest daughter, a realtor who has an aversion to things scientific, got the right answer when I used this example on her.

Imagine a heated plate in a cooled vacuum chamber, as in the first illustration, below. These chambers are used to test instruments and satellites that will be flown in space. Let’s heat the plate continuously with electricity. The plate can lose energy only through infrared (heat) radiation emitted toward the colder walls of the chamber, since there is no air in the vacuum chamber to conduct the heat away from the plate. (Similarly, there is no air in outer space to conduct heat away from the Earth in the face of solar heating.)

The plate will eventually reach a constant temperature (let’s say 150 deg. F.) where the rate of energy gain by the plate from electricity equals the rate of energy loss by infrared radiation to the cooled chamber walls.

Now, let’s put a second plate next to the first plate. The second plate will begin to warm in response to the infrared energy being emitted by the heated plate. Eventually the second plate will also reach a state of equilibrium, where its average temperature (let’s say 100 deg. F) stays constant with time. This is shown in the next illustration:

But what will happen to the temperature of the heated plate in the process? It will end up even hotter than it was before the cooler plate was placed next to it. This is because the second plate reduced the rate at which the first plate was losing energy.

(If you are unconvinced of this, then imagine that the second plate completely surrounds the heated plate. Will the heated plate remain at 150 deg., and not warm at all?)

Since the temperature of an object is a function of both energy gain AND energy loss, the temperature of the plate (or anything else) can be raised in 2 basic ways: (1) increase the rate of energy gain, or (2) decrease the rate of energy loss. The temperature of everything is determined by energy flows in and out, and one needs to know both to determine whether the temperature will go up or down. This is a consequence of the 1st Law of Thermodynamics involving conservation of energy.

Note that the above example involving 2 plates, one hotter than the other, is apparently where the greenhouse effect deniers (sorry, I couldn’t help myself) would claim the “physically impossible” has occurred: The presence of a colder object has caused a warmer object to become even hotter. Again, the reason the heated plate became even hotter is that the second plate has, in effect, “insulated” the first plate from its cold surroundings, keeping it warmer than if the second plate was not there.

The only way I know of to explain this is that it isn’t just the heated plate that is emitting IR energy, but also the second plate….as well as the cold walls of the vacuum chamber. The following illustration zooms in on the plates from our previous illustration:

What happens is that the second plate is heated by IR radiation being emitted by the first plate, raising its temperature. The second plate, in turn, cannot cool to the temperature of the vacuum chamber walls (0 deg. F) because it is not in direct contact with the refrigerant being used…it can only lose IR at a rate which increases with temperature, so it achieves some intermediate temperature.

Meanwhile, the cooler plate is emitting more radiation toward the hot plate than the cold walls of the vacuum chamber would have emitted. This changes the energy budget of the hot plate: despite a constant flow of energy into the plate from the electric heater, it has now lost some of its ability to cool through IR radiation. Its temperature then rises until it, once again, is emitting IR radiation at the same rate as it is receiving energy from its surroundings (and the electric heater).
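For readers who want to check the arithmetic, here is a minimal Python sketch of the two-plate thought experiment. It assumes ideal blackbody plates (emissivity of 1) that exchange radiation only with whatever directly faces each side; the 150 deg. F plate and 0 deg. F walls come from the illustrations above, while the geometry and the analytic shortcuts are illustrative assumptions, not a full radiative transfer calculation.

```python
# Equilibrium temperatures via the Stefan-Boltzmann law (T^4 scaling).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def f_to_k(t_f):
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

def k_to_f(t_k):
    return (t_k - 273.15) * 9.0 / 5.0 + 32.0

T_wall = f_to_k(0.0)      # chamber walls held at 0 deg. F
T1_alone = f_to_k(150.0)  # heated plate's equilibrium temperature, alone

# Electrical heating rate that sustains 150 deg. F when both faces of
# the plate radiate to the cold walls:
P = 2.0 * SIGMA * (T1_alone**4 - T_wall**4)

# Add a passive second plate facing one side of the heated plate.
# Plate 2 balance: absorbs from plate 1 on one face, emits to the wall
# from the other  ->  T2^4 = (T1^4 + T_wall^4) / 2.
# Plate 1 balance: P = sigma*(T1^4 - T_wall^4) + sigma*(T1^4 - T2^4),
# which simplifies to P = 1.5 * sigma * (T1^4 - T_wall^4).
T1_new = (P / (1.5 * SIGMA) + T_wall**4) ** 0.25
T2 = ((T1_new**4 + T_wall**4) / 2.0) ** 0.25

print(f"heated plate alone:        {k_to_f(T1_alone):6.1f} deg. F")
print(f"heated plate with plate 2: {k_to_f(T1_new):6.1f} deg. F (hotter)")
print(f"passive second plate:      {k_to_f(T2):6.1f} deg. F (still cooler)")
```

With these assumptions, the heated plate ends up near 182 deg. F and the passive plate near 112 deg. F. The exact numbers depend on the assumed geometry and emissivities, but the direction of the change is the point: the heated plate gets hotter.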

As we will see, below, in the case of the Earth being heated by the sun, the vacuum chamber “wall” (outer space) is close to absolute zero in temperature. Putting anything between that (essentially infinite) heat sink and the Earth’s surface will cause the surface to warm.

Examples are All Around Us

Examples of objects with lower temperatures causing objects with higher temperatures to become warmer still are all around us.

For instance, in terms of these most basic heating and cooling concepts (energy gain and energy loss), the same thing happens when you put a blanket over yourself when it is cold. The blanket stays cooler than your skin, but it nevertheless makes your skin warmer than if the cooler blanket was not there. Even though the direction of flow of heat never changes (it is always from warmer to cooler objects), a cooler object can still make a warm object even hotter.

It doesn’t matter what the mechanisms of energy transfer are….if the presence of a cooler object keeps a warmer object from losing energy as rapidly as before, the warm object will become even hotter.

But if you insist on another real-world example involving infrared radiation, rather than heat conduction, let’s use clouds at night. Almost everyone has experienced the fact that cloudy nights tend to be warmer than clear nights.

The most dramatic effect I’ve seen of this is in the winter, on a cold clear night with snow cover. The temperature will drop rapidly. But if a cloud layer moves in, the temperature will either stop dropping, or even warm dramatically.

This warming occurs because the cloud radiates much more IR energy downward than does a clear, dry atmosphere. This changes the energy budget of the surface dramatically, often causing warming — even though the cloud is usually at a lower temperature than the ground is. Even high-altitude cirrus clouds, at temperatures well below that of the surface, can cause warming.

So, once again, we see that the presence of a colder object can cause a warmer object to become warmer still.

Extending the Concept to the Atmosphere

As mentioned above, in the case of the cold depths of outer space surrounding the Earth’s solar-heated surface, ANY infrared absorber that gets between the Earth’s surface and space will cause the surface to warm.

This radiative insulating function occurs in the atmosphere because of the presence of greenhouse gases, that is, gases that absorb and emit significant amounts of infrared energy…(mostly water vapor, CO2, and methane). Clouds also contribute to the Greenhouse Effect.

Kirchhoff’s Law of thermal radiation says (roughly) that a good infrared absorber is an equally good infrared emitter. So, each layer of the atmosphere is continuously absorbing IR, as well as emitting it. This is what makes the Greenhouse Effect so much more difficult to understand conceptually than solar heating of the Earth. While the sun is a single source, and most of the energy absorbed by the Earth is absorbed at a single level (the surface of the ground), in the case of infrared energy, every layer becomes both a source of energy and an absorber of energy.

It also helps that our eyes are much more sensitive to solar radiation than they (or even our skin) are to infrared radiation. It’s more difficult to conceptualize that which you can’t see.

Our intuition begins to fail us when presented with this complexity. The following illustration shows some of these energy flows: just the IR being emitted upward and downward by different atmospheric layers. If I included arrows representing the IR energy being absorbed by those layers, too, it would become hopelessly indecipherable.

As a result of the atmosphere’s ability to radiatively insulate the Earth’s surface from losing infrared energy directly to the “cold” depths of outer space, the surface warms to a higher average temperature than it would have if the atmosphere was not there. The no-atmosphere, global average surface temperature has been theoretically calculated to be around 0 deg. F.

This, then, constitutes the basic mechanism of the Greenhouse Effect. Greenhouse gases represent a “radiative blanket” that keeps the Earth’s surface warmer than it would otherwise be without those gases present.
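One quick way to see the arithmetic of this “radiative blanket” is the standard textbook one-layer model: a solar-heated surface beneath a single, completely IR-opaque atmospheric layer. This is a common idealization rather than anything specific to this post; the 240 Watts per sq. meter of absorbed sunlight is the usual global-average figure.

```python
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
ABSORBED_SOLAR = 240.0  # W m^-2, typical global-average absorbed sunlight

# No greenhouse atmosphere: the surface radiates straight to space.
T_bare = (ABSORBED_SOLAR / SIGMA) ** 0.25

# One IR-opaque layer: the layer emits both up and down, so the surface
# receives sunlight PLUS downward IR  ->  T_surf^4 = 2 * T_bare^4.
T_surf = (2.0 ** 0.25) * T_bare

print(f"no-atmosphere surface: {T_bare:5.1f} K (roughly 0 deg. F)")
print(f"one-layer greenhouse:  {T_surf:5.1f} K (much warmer)")
```

The no-atmosphere answer comes out near 255 K, which is roughly the 0 deg. F quoted above, while the one-layer surface comes out near 303 K.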

In fact, research published in the 1960s showed that, if the current atmosphere suddenly became still – with no wind, evaporation, or convective overturning transporting excess energy from the surface to the upper atmosphere – the average surface temperature of the Earth would warm dramatically, from 0 deg. F with no greenhouse gases, to about 140 deg. F. That the real-world temperature is much lower, around 59 deg. F, is due to the cooling effects of weather transporting heat from the surface to the upper atmosphere through convective air currents.

Weather as we know it would not even exist without the greenhouse effect continuously destabilizing the vertical temperature profile of the atmosphere. Vertical air currents associated with weather act to stabilize the atmospheric temperature profile, but it is the greenhouse effect that keeps the process going by warming the lower atmosphere, and cooling the upper atmosphere, to the point where convection must occur.

What About Kirchhoff’s Law?
One of the statements of Kirchoff’s Law is:

At thermal equilibrium, the emissivity of a body (or surface) equals its absorptivity.

Many well-meaning people think that one of the consequences of Kirchhoff’s Law of radiation is that an individual layer of the atmosphere that absorbs infrared energy at a certain rate must also emit energy at the same rate. This is NOT true.

The rate of emission becoming the same as the rate of absorption occurs in the very special case where (1) the temperature has reached thermal equilibrium, and (2) that equilibrium is the result of only those two radiative flows, in and out of the object.

Interestingly, this condition of a layer emitting the same amount of IR as it is absorbing is virtually never met anywhere in the atmosphere. This is because of the vertical, convective flows which are also transporting energy between layers.

In the global average, air below about 5,000 feet in altitude is absorbing more infrared energy than it emits, while air above that altitude (up to the top of the troposphere, the 80% of the atmosphere where weather occurs) is losing infrared energy faster than it absorbs it.

The reason why these two regions stay at roughly a constant temperature, despite very different rates of infrared loss and gain, is convective heat transport by weather: air heated by sunlight absorbed at the Earth’s surface has its excess energy transported to the upper troposphere, where a lack of water vapor (Earth’s main greenhouse gas) allows that energy to escape more rapidly to space.

The 2nd Law of Thermodynamics: Can Energy “Flow Uphill”?
In the case of radiation, the answer to that question is, “yes”. While heat conduction by an object always flows from hotter to colder, in the case of thermal radiation a cooler object does not check what the temperature of its surroundings is before sending out infrared energy. It sends it out anyway, no matter whether its surroundings are cooler or hotter.

Yes, thermal conduction involves energy flow in only one direction. But radiation flow involves energy flow in both directions.

Of course, in the context of the 2nd Law of Thermodynamics, both radiation and conduction processes are the same in the sense that the NET flow of energy is always “downhill”, from warmer temperatures to cooler temperatures.

But, if ANY flow of energy “uphill” is totally repulsive to you, maybe you can just think of the flow of IR energy being in only one direction, but with its magnitude being related to the relative temperature difference between the two objects. The result will still be the same: The presence of a cooler object can STILL cause a warmer object to become even hotter.
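To make that bookkeeping concrete, here is a tiny sketch of the two-way exchange, with illustrative temperatures and blackbody (emissivity of 1) surfaces assumed:

```python
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
T_hot, T_cold = 330.0, 280.0   # illustrative temperatures, K

toward_cold = SIGMA * T_hot**4   # gross flow, hot object -> cold object
toward_hot = SIGMA * T_cold**4   # gross flow, cold object -> hot object

print(f"hot emits  {toward_cold:6.1f} W/m^2 toward the cold object")
print(f"cold emits {toward_hot:6.1f} W/m^2 toward the hot object")
print(f"NET flow:  {toward_cold - toward_hot:6.1f} W/m^2, hot to cold")
```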

Anyway, that’s my story, and I’m sticking to it. Until someone convinces me otherwise.

So, let the flaming begin! No, really, have fun…but if you want your comments to remain available for others to read, please keep it civil.


SUGGESTION FROM ROY (7:50 a.m. Monday, July 26): If you want to add intelligently to this discussion, you need to actually read (1) what I have said, and (2) what others have said. Chances are, your point has already been made and discussed.

Can Climate Feedbacks be Diagnosed from Satellite Data? Comments on the Murphy & Forster (2010) Critique of Spencer & Braswell (2008)

July 19th, 2010

There is a new paper in press at the Journal of Climate that we were made aware of only a few days ago (July 14, 2010). It specifically addresses our (Spencer & Braswell, 2008, hereafter SB08) claim that previous satellite-based diagnoses of feedback have substantial low biases, due to natural variations in cloud cover of the Earth.

This is an important issue. If SB08 are correct, then the climate system could be substantially more resistant to forcings – such as increasing atmospheric CO2 concentrations — than the IPCC “consensus” claims it is. This would mean that manmade global warming could be much weaker than currently projected. This is an issue that Dick Lindzen (MIT) has been working on, too.

But if the new paper (MF10) is correct, then current satellite estimates of feedbacks – despite being noisy – still bracket the true feedbacks operating in the climate system…at least on the relatively short (~10 years) time scales of the satellite datasets. Forster and Gregory (2006) present some of these feedback estimates, based upon older ERBE satellite data.

As we will see, and as is usually the case, some of the MF10 criticism of SB08 is deserved, and some is not.

First, a Comment on Peer Review at the Journal of Climate

It is unfortunate that the authors and/or an editor at Journal of Climate decided that MF10 would be published without asking me or Danny Braswell to be reviewers.

Their paper is quite brief, and is obviously in the class of a “Comments on…” paper, yet it will appear as a full “Article”. But a “Comments on…” classification would then have required the Journal of Climate to give us a chance to review MF10 and to respond. So, it appears that one or more people wanted to avoid any inconvenient truths.

Thus, since it will be at least a year before a response study by us could be published – and J. Climate seems to be trying to avoid us – I must now respond here, to help avoid some of the endless questions I will have to endure once MF10 is in print.

On the positive side, though, MF10 have forced us to go back and reexamine the methodology and conclusions in SB08. As a result, we are now well on the way to new results which will better optimize the matching of satellite-observed climate variability to the simple climate model, including a range of feedback estimates consistent with the satellite data. It is now apparent to us that we did not do a good enough job of that in SB08.

I want to emphasize, though, that our most recent paper now in press at JGR (Spencer & Braswell, 2010: “On the Diagnosis of Radiative Feedback in the Presence of Unknown Radiative Forcing”, hereafter SB10), should be referenced by anyone interested in the latest published evidence supporting our claims. It does not have the main shortcomings I will address below.

But for those who want to get some idea of how we view the specific MF10 criticisms of SB08, I present the following. Keep in mind this is after only three days of analysis.

There are 2 Big Picture Questions Addressed by SB08 & MF10

There are two overarching scientific questions addressed by our SB08 paper, and MF10’s criticisms of it:

(1) Do significant low biases exist in current, satellite-based estimates of radiative feedbacks in the climate system (which could suggest high biases in inferred climate sensitivity)?

(2) Assuming that low biases do exist, did we (SB08) do an adequate job of demonstrating their probable origin, and how large those biases might be?

I will address question 1 first.

Big Picture Question #1: Does a Problem Even Exist in Diagnosing Feedbacks from Satellite Data?

MF10 conclude their paper with the claim that standard regression techniques can be applied to satellite data to get useful estimates of climate feedback, an opinion we strongly disagree with.

Fortunately, it is easy to demonstrate that a serious problem does exist. I will do this using MF10’s own method: analysis of output from coupled climate models. But rather than merely referencing a previous publication which does not even apply to the problem at hand, I will show actual evidence from 18 of the IPCC’s coupled climate models.

The following plot shows the final 10 years of data from the 20th Century run of the FGOALS model, output from which is archived at PCMDI (Meehl et al., 2007). Plotted are global, 3-month-averaged net radiative flux anomalies (reflected solar plus emitted infrared) versus the corresponding surface temperature anomalies produced by the model.

This represents the kind of data which are used to diagnose feedbacks from satellite data. The basic question we are trying to answer with such a plot is: “How much more radiant energy does the Earth lose in response to warming?” The answer to that question would help determine how strongly (or weakly) the climate system might respond to increasing CO2 levels.

The question is whether the slope of the red regression fit to the 3-month data points in the above figure is an estimate of the net radiative feedback operating in the climate model, or not.

MF10 would presumably claim it is. We claim it is not, and furthermore that it will usually be biased low compared to the true feedback operating in the climate system. SB08 was our first attempt to demonstrate this with a simple climate model.

Well, the slope of 0.77 W m-2 K-1 in the above plot would correspond to a climate sensitivity in response to a doubling of atmospheric carbon dioxide (2XCO2) of (3.8/0.77=) 4.9 deg. C of global warming. [This assumes the widely accepted value near 3.8 W m-2 K-1 for the radiative energy imbalance of the Earth in response to 2XCO2].
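For reference, that sensitivity arithmetic is simply the assumed 2XCO2 forcing divided by the diagnosed feedback slope:

```python
F_2XCO2 = 3.8  # W m^-2, widely accepted radiative forcing for doubled CO2
slope = 0.77   # W m^-2 K^-1, regression-diagnosed feedback from the plot
print(f"implied 2XCO2 warming: {F_2XCO2 / slope:.1f} deg. C")  # -> 4.9
```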

But 4.9 deg. C of warming is more than DOUBLE the known sensitivity of this model, which is 2.0 to 2.2 deg. C (Forster & Taylor, J. Climate, 2006, hereafter FT06). This is clearly a large error in the diagnosed feedback.

As a statistician will quickly ask, though, does this error represent a bias common to most models, or is it just due to statistical noise?

To demonstrate this is no statistical outlier, the following plot shows regression-diagnosed versus “true” feedbacks diagnosed for 18 IPCC AR4 coupled climate models. We analyzed the output from the last 50 years of the 20th Century runs archived at PCMDI, computing average regression slopes in ten 5-year subsets of each model’s run, with 3-month average anomalies, then averaging those ten regression slopes for each model. Resulting climate sensitivities based upon those average regression slopes are shown separately for the 18 models in the next figure:

As can be seen, most models exhibit large biases – as much as 50 deg. C! — in feedback-inferred climate sensitivity, the result of low biases in the regression-diagnosed feedback parameters. Only 5 of the 18 IPCC AR4 models have errors in regression-inferred sensitivity less than 1 deg. C, and that is after beating down some noise with ten 5-year periods from each model! We can’t do that with only 10 years of satellite data.
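For those who want to see the bookkeeping of that chunk-and-average procedure, here is a sketch. The function and the demo data are hypothetical stand-ins, not the actual PCMDI processing code:

```python
import numpy as np

def mean_regression_slope(t_anom, flux_anom, chunk_len=20):
    """Regress flux on temperature in consecutive 5-year chunks
    (20 points of 3-month means each) and average the slopes."""
    slopes = []
    for start in range(0, len(t_anom) - chunk_len + 1, chunk_len):
        t = t_anom[start:start + chunk_len]
        f = flux_anom[start:start + chunk_len]
        slopes.append(np.polyfit(t, f, 1)[0])
    return float(np.mean(slopes))

# Demo with random stand-in data (200 points = 50 years of 3-month means):
rng = np.random.default_rng(0)
t = rng.normal(size=200)
flux = 1.5 * t + rng.normal(scale=0.5, size=200)  # "true" slope 1.5, plus noise
print(mean_regression_slope(t, flux))             # close to 1.5
```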

Now, note that as long as such large inferred climate sensitivities (50+ deg.!?) can be claimed to be supported by the satellite data, the IPCC can continue to warn that catastrophic global warming is a real possibility.

The real reason why such biases exist, however, is addressed in greater depth in our new paper (Spencer and Braswell, 2010). The low bias in diagnosed feedback (and thus high bias in climate sensitivity) is related to the extent to which time-varying radiative forcing, mostly due to clouds, contaminates the radiative feedback signal responding to temperature changes.

It is easy to get confused on the issue of using regression to estimate feedbacks because linear regression was ALSO used to get the “true” feedbacks in the previous figure. The difference is that, in order to do so, Forster and Taylor removed the large, transient CO2 radiative forcing imposed on the models in order to better isolate the radiative feedback signal. Over many decades of model run time, this radiative feedback signal then beats down the noise from non-feedback natural cloud variations.

Thus, diagnosing feedback accurately is fundamentally a signal-to-noise problem. Either any time-varying radiative forcing in the data must be relatively small to begin with, or it must be somehow estimated and then removed from the data.

It would be difficult to over-emphasize the importance of understanding the last paragraph.
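Here is a self-contained sketch of that signal-to-noise problem: drive a simple forcing-feedback model with red-noise “cloud” forcing, then regress the resulting net flux against temperature. The parameter values are illustrative choices, not the exact SB08 configuration, but the behavior is the one described above.

```python
import numpy as np

rng = np.random.default_rng(1)
LAMBDA_TRUE = 3.0            # true feedback, W m^-2 K^-1
CP = 1025.0 * 4186.0 * 50.0  # 50 m ocean mixed layer, J m^-2 K^-1
DT = 90 * 86400.0            # 3-month time step, s
n = 200                      # 50 years of 3-month means

# Internally generated, time-varying "cloud" forcing (red noise):
F = np.zeros(n)
for i in range(1, n):
    F[i] = 0.7 * F[i-1] + rng.normal(0.0, 1.0)

# Temperature responds to forcing minus feedback:
T = np.zeros(n)
for i in range(1, n):
    T[i] = T[i-1] + DT * (F[i-1] - LAMBDA_TRUE * T[i-1]) / CP

# A satellite sees forcing and feedback mixed together; regressing the
# net energy loss against temperature yields a biased-low "feedback":
net_loss = LAMBDA_TRUE * T - F
slope = np.polyfit(T, net_loss, 1)[0]
print(f"true feedback: {LAMBDA_TRUE:.2f}   regression-diagnosed: {slope:.2f}")
```

With these settings, the regression-diagnosed feedback comes out well below the true value of 3.0, for exactly the reason just given: time-varying forcing, not just feedback, is present in the data.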

So, Why Does the Murphy & Forster Example with the HadSM3 Model Give Accurate Feedbacks?

To support their case that there is no serious problem in diagnosing feedbacks from satellite data, MF10 use the example of Gregory et al. (2004 GRL, “A New Method for Diagnosing Radiative Forcing and Climate Sensitivity”). Gregory et al. analyzed the output of a climate model, HadSM3, and found that an accurate feedback could be diagnosed from the model output at just about any point during the model integration.

But the reason why Gregory et al. could do this, and why it has no consequence for the real world, is so obvious that I continue to be frustrated that so many climate experts still do not understand it.

The Gregory et al. HadSM3 model experiment used an instantaneous quadrupling (!) of the CO2 content of the model atmosphere. In such a hypothetical situation, there will be rapid warming, and thus a strong radiative feedback signal in response to that warming.

But this hypothetical situation has no analog in the real world. The only reason why one could accurately diagnose feedback in such a case is because the 4XCO2 radiative forcing is kept absolutely constant over time, and so the radiative feedback signal is not contaminated by it.

Again I emphasize, instantaneous and then constant radiative forcing has no analog in the real world. Experts’ use of such unrealistic cases has led to much confusion regarding the diagnosis of feedbacks from satellite data. In nature, ever-evolving, time-varying radiative forcings (what some call “unforced natural variability”) are almost always overpowering radiative feedback.

But does that mean that Spencer & Braswell (2008) did a convincing job of demonstrating how large the resulting errors in feedback diagnosis could be in response to such time-varying radiative forcing? Probably not.

Big Picture Question #2: Did Spencer & Braswell (2008) Do An Adequate Job of Demonstrating Why Feedback Biases Occur?

MF10 made two changes to our simple climate model which had large consequences: (1) they changed the averaging time of the model output to be consistent with the relatively short satellite datasets we have to compare to, and (2) they increased the assumed depth of the tropical ocean mixed layer from 50 meters to 100 meters in the simple model.

The first change, we agree, is warranted, and it indeed results in less dramatic biases in feedbacks diagnosed from the simple model. We have independently checked this with the simple model by comparing our new results to those of MF10.

The second change, we believe, is not warranted, and it pushes the errors to even smaller values. If anything, we think we can show that even 50 meters is probably too deep a mixed layer for the tropical ocean (what we addressed) on these time scales.

Remember, we are exploring why feedbacks diagnosed from satellite-observed, year-to-year climate variability are biased low, and on those short time scales, the equivalent mixing depths are pretty shallow. As one extends the time to many decades, the depth of ocean responding to a persistent warming mechanism increases to hundreds of meters, consistent with MF10’s claim. But for diagnosing feedbacks from satellite data, the time scales of variability affecting the data are 1 to only a few years.

But we have also discovered a significant additional shortcoming in SB08 (and MF10) that has a huge impact on the answer to Question #2: In addition to just the monthly standard deviations of the satellite-measured radiative fluxes and sea surface temperatures, we should have included (at least) one more important satellite statistic: the level of decorrelation of the data.

Our SB10 paper actually does this (which is why it should be referenced for the latest evidence supporting our claims). After accounting for the decorrelation in the data (which exists in ALL of the IPCC models, see the first figure, above, for an example) the MF10 conclusion that the ratio of the noise to signal (N/S) in the satellite data is only around 15% can not be supported.

Unfortunately, SB08 did not adequately demonstrate this with the satellite data. SB10 does…but does not optimize the model parameters that best match the satellite data. That is now the focus of our new work on the subject.

Since this next step was not obvious to us until MF10 caused us to go back and reexamine the simple model and its assumptions, this shows the value of other researchers getting involved in this line of research. For that we are grateful.

Final Comments

While the above comments deal with the “big picture” issues and implications of SB08, and MF10’s criticism of it, there are also a couple of errors and misrepresentations in MF10 that should be addressed, things that could have been caught had we been allowed to review their manuscript.

1) MF10 claim to derive a “more correct” analytical expression for the error in diagnosed feedback than the one SB08 provided. If anything, it is ours that is more correct. Their expression (the derivation of which we admit is impressive) is only correct for an infinite time period, which is irrelevant to the issue at hand, and will have errors for finite time periods. In contrast, our expression is exactly correct for a finite time series of data, which is what we are concerned with in the real world.

2) MF10 remove “seasonal cycles” from the randomly forced model data time series. Why would this be necessary for a model that has only random daily forcing? Very strange.

Despite the shortcomings, MF10 do provide some valuable insight, and some of what they present is indeed useful for advancing our understanding of what causes variations in the radiative energy budget of the Earth.

References
Forster, P. M., and J. M. Gregory (2006), The climate sensitivity and its components diagnosed from Earth Radiation Budget data, J. Climate, 19, 39-52.

Forster, P.M., and K.E. Taylor (2006), Climate forcings and climate sensitivities diagnosed from coupled climate model integrations, J. Climate, 19, 6181-6194.

Gregory, J.M., W. J. Ingram, M.A. Palmer, G.S. Jones, P.A. Stott, R.B. Thorpe, J.A. Lowe, T.C Johns, and K.D. Williams (2004), A new method for diagnosing radiative forcing and climate sensitivity, Geophys. Res. Lett., 31, L03205, doi:10.1029/2003GL018747.

Meehl, G. A., C. Covey, T. Delworth, M. Latif, B. McAvaney, J. F. B. Mitchell, R. J. Stouffer, and K. E. Taylor (2007), The WCRP CMIP3 multi-model dataset: A new era in climate change research, Bull. Am. Meteorol. Soc., 88, 1383-1394.

Murphy, D.M., and P.M. Forster (2010), On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, J. Climate, in press.

Spencer, R.W., and W.D. Braswell (2008), Potential biases in cloud feedback diagnosis: A simple model demonstration, J. Climate, 21, 5624-5628.

Spencer, R.W., and W.D. Braswell (2010), On the diagnosis of radiative feedback in the presence of unknown radiative forcing, J. Geophys. Res., doi:10.1029/2009JD013371, in press (accepted 12 April 2010).


My Global Warming Skepticism, for Dummies

July 17th, 2010

I receive many e-mails, and a recurring complaint is that many of my posts are too technical to understand. This morning’s installment arrived with the subject line, “Please Talk to Us”, and suggested I provide short, concise, easily understood summaries and explanations “for dummies”.

So, here’s a list of basic climate change questions, and brief answers based upon what I know today. I might update them as I receive suggestions and comments. I will also be adding links to other sources, and some visual aids, as appropriate.

Deja vu tells me I might have done this once before, but I’m too lazy to go back and see. So, I’ll start over from scratch. (Insert smiley)

It is important to understand at the outset that those of us who are skeptical of mankind’s influence on climate have a wide variety of views on the subject, and we can’t all be right. In fact, in this business, it is really easy to be wrong. It seems like everyone has a theory of what causes climate change. But it only takes one of us to be right for the IPCC’s anthropogenic global warming (AGW) house of cards to collapse.

As I like to say, taking measurements of the climate system is much easier than figuring out what those measurements mean in terms of cause and effect. Generally speaking, it’s not the warming that is in dispute…it’s the cause of the warming.

If you disagree with my views on something, please don’t flame me. Chances are, I’ve already heard your point of view; very seldom am I provided with new evidence I haven’t already taken into account.

1) Are Global Temperatures Rising Now? There is no way to know, because natural year-to-year variability in global temperature is so large, with warming and cooling occurring all the time. What we can say is that surface and lower atmospheric temperature have risen in the last 30 to 50 years, with most of that warming in the Northern Hemisphere. Also, the magnitude of recent warming is somewhat uncertain, due to problems in making long-term temperature measurements with thermometers without those measurements being corrupted by a variety of non-climate effects. But there is no way to know if temperatures are continuing to rise now…we only see warming (or cooling) in the rearview mirror, when we look back in time.

2) Why Do Some Scientists Say It’s Cooling, while Others Say that Warming is Even Accelerating?
Since there is so much year-to-year (and even decade-to-decade) variability in global average temperatures, whether it has warmed or cooled depends upon how far back you look in time. For instance, over the last 100 years, there was an overall warming which was stronger toward the end of the 20th Century. This is why some say “warming is accelerating”. But if we look at a shorter, more recent period of time, say since the record warm year of 1998, one could say that it has cooled in the last 10-12 years. But, as I mentioned above, neither of these can tell us anything about whether warming is happening “now”, or will happen in the future.

3) Haven’t Global Temperatures Risen Before? Yes. In the longer term, say hundreds to thousands of years, there is considerable indirect, proxy evidence (not from thermometers) of both warming and cooling. Since humankind can’t be responsible for these early events, this is evidence that nature can cause warming and cooling. If that is the case, it then opens up the possibility that some (or most) of the warming in the last 50 years has been natural, too. While many geologists like to point to the much larger temperature changes that are believed to have occurred over millions of years, I am unconvinced that this tells us anything of use for understanding how humans might influence climate on time scales of 10 to 100 years.

4) But Didn’t the “Hockey Stick” Show Recent Warming to be Unprecedented? The “hockey stick” reconstructions of temperature variations over the last 1 to 2 thousand years have been a huge source of controversy. The hockey stick was previously used by the IPCC as a veritable poster child for anthropogenic warming, since it seemed to indicate there have been no substantial temperature changes over the last 1,000 to 2,000 years until humans got involved in the 20th Century. The various versions of the hockey stick were based upon limited amounts of temperature proxy evidence — primarily tree rings — and involved questionable statistical methods. In contrast, I think the bulk of the proxy evidence supports the view that it was at least as warm during the Medieval Warm Period, around 1000 AD. The very fact that recent tree ring data erroneously suggests cooling in the last 50 years, when in fact there has been warming, should be a warning flag about using tree ring data for figuring out how warm it was 1,000 years ago. But without actual thermometer data, we will never know for sure.

5) Isn’t the Melting of Arctic Sea Ice Evidence of Warming? Warming, yes…manmade warming, no. Arctic sea ice naturally melts back every summer, but that meltback was observed to reach a peak in 2007. But we have relatively accurate, satellite-based measurements of Arctic (and Antarctic) sea ice only since 1979. It is entirely possible that late summer Arctic Sea ice cover was just as low in the 1920s or 1930s, a period when Arctic thermometer data suggests it was just as warm. Unfortunately, there is no way to know, because we did not have satellites back then. Interestingly, Antarctic sea ice has been growing nearly as fast as Arctic ice has been melting over the last 30+ years.

6) What about rising sea levels? I must confess, I don’t pay much attention to the sea level issue. I will say that, to the extent that warming occurs, sea levels can be expected to also rise to some extent. The rise is partly due to thermal expansion of the water, and partly due to melting or shedding of land-locked ice (the Greenland and Antarctic ice sheets, and glaciers). But this says nothing about whether or not humans are the cause of that warming. Since there is evidence that glacier retreat and sea level rise started well before humans can be blamed, causation is — once again — a major source of uncertainty.

7) Is Increasing CO2 Even Capable of Causing Warming? There are some very intelligent people out there who claim that adding more carbon dioxide to the atmosphere can’t cause warming anyway. They claim things like, “the atmospheric CO2 absorption bands are already saturated”, or something else very technical. [And for those more technically-minded persons, yes, I agree that the effective radiating temperature of the Earth in the infrared is determined by how much sunlight is absorbed by the Earth. But that doesn’t mean the lower atmosphere cannot warm from adding more greenhouse gases, because at the same time they also cool the upper atmosphere]. While it is true that most of the CO2-caused warming in the atmosphere was there before humans ever started burning coal and driving SUVs, this is all taken into account by computerized climate models that predict global warming. Adding more “should” cause warming, with the magnitude of that warming being the real question. But I’m still open to the possibility that a major error has been made on this fundamental point. Stranger things have happened in science before.

8) Is Atmospheric CO2 Increasing? Yes, and most strongly in the last 50 years…which is why “most” climate researchers think the CO2 rise is the cause of the warming. Our site measurements of CO2 increase from around the world are possibly the most accurate long-term, climate-related measurements in existence.

9) Are Humans Responsible for the CO2 Rise? While there are short-term (year-to-year) fluctuations in the atmospheric CO2 concentration due to natural causes, especially El Nino and La Nina, I currently believe that most of the long-term increase is probably due to our use of fossil fuels. But from what I can tell, the supposed “proof” of humans being the source of increasing CO2 — a change in the atmospheric concentration of the carbon isotope C13 — would also be consistent with a natural, biological source. The current atmospheric CO2 level is about 390 parts per million by volume, up from a pre-industrial level estimated to be around 270 ppm…maybe less. CO2 levels can be much higher in cities, and in buildings with people in them.

10) But Aren’t Natural CO2 Emissions About 20 Times the Human Emissions? Yes, but nature is believed to absorb CO2 at about the same rate it is produced. You can think of the reservoir of atmospheric CO2 as being like a giant container of water, with nature pumping in a steady stream into the bottom of the container (atmosphere) in some places, sucking out about the same amount in other places, and then humans causing a steady drip-drip-drip into the container. Significantly, about 50% of what we produce is sucked out of the atmosphere by nature, mostly through photosynthesis. Nature loves the stuff. CO2 is the elixir of life on Earth. Imagine the howls of protest there would be if we were destroying atmospheric CO2, rather than creating more of it.
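A toy version of that container analogy, with loudly illustrative numbers (the fluxes below are placeholders, not measured carbon-budget values):

```python
co2_ppm = 270.0                   # assumed pre-industrial concentration
natural_in = natural_out = 80.0   # large, nearly balanced natural fluxes (ppm/yr)
human_drip = 4.0                  # small human addition (ppm-equivalent/yr)
uptake_fraction = 0.5             # nature absorbs roughly half of what we add

for decade in range(6):
    print(f"year {10 * decade:3d}: {co2_ppm:6.1f} ppm")
    net_per_year = (natural_in - natural_out) + human_drip * (1 - uptake_fraction)
    co2_ppm += 10 * net_per_year  # concentration creeps up from the drip alone
```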

11) Is Rising CO2 the Cause of Recent Warming? While this is theoretically possible, I think it is more likely that the warming is mostly natural. At the very least, we have no way of determining what proportion is natural versus human-caused.

12) Why Do Most Scientists Believe CO2 is Responsible for the Warming? Because (as they have told me) they can’t think of anything else that might have caused it. Significantly, it’s not that there is evidence nature can’t be the cause, but a lack of sufficiently accurate measurements to determine if nature is the cause. This is a hugely important distinction, and one the public and policymakers have been misled on by the IPCC.

13) If Not Humans, What could Have Caused Recent Warming? This is one of my areas of research. I believe that natural changes in the amount of sunlight being absorbed by the Earth — due to natural changes in cloud cover — are responsible for most of the warming. Whether that is the specific mechanism or not, I advance the minority view that the climate system can change all by itself. Climate change does not require an “external” source of forcing, such as a change in the sun.

14) So, What Could Cause Natural Cloud Changes? I think small, long-term changes in atmospheric and oceanic flow patterns can cause ~1% changes in how much sunlight is let in by clouds to warm the Earth. This is all that is required to cause global warming or cooling. Unfortunately, we do not have sufficiently accurate cloud measurements to determine whether this is the primary cause of warming in the last 30 to 50 years.

15) How Significant is the Climategate Release of E-Mails? While Climategate does not, by itself, invalidate the IPCC’s case that global warming has happened, or that humans are the primary cause of that warming, it DOES illustrate something I emphasized in my first book, “Climate Confusion”: climate researchers are human, and prone to bias.

16) Why Would Bias in Climate Research be Important? I Thought Scientists Just Follow the Data Wherever It Leads. When researchers approach a problem, their pre-conceived notions often guide them. It’s not that the IPCC’s claim that humans cause global warming is somehow untenable or impossible, it’s that political and financial pressures have resulted in the IPCC almost totally ignoring alternative explanations for that warming.

17) How Important Is “Scientific Consensus” in Climate Research? In the case of global warming, it is nearly worthless. The climate system is so complex that the vast majority of climate scientists — usually experts in a variety of specialized fields — assume there are more knowledgeable scientists, and they are just supporting the opinions of their colleagues. And among that small group of most knowledgeable experts, there is a considerable element of groupthink, herd mentality, peer pressure, political pressure, support of certain energy policies, and desire to Save the Earth — whether it needs to be saved or not.

18) How Important are Computerized Climate Models? I consider climate models as being our best way of exploring cause and effect in the climate system. It is really easy to be wrong in this business, and unless you can demonstrate causation with numbers in equations, you are stuck with scientists trying to persuade one another by waving their hands. Unfortunately, there is no guarantee that climate models will ever produce a useful prediction of the future. Nevertheless, we must use them, and we learn a lot from them. My biggest concern is that models have been used almost exclusively for supporting the claim that humans cause global warming, rather than for exploring alternative hypotheses — e.g. natural climate variations — as possible causes of that warming.

19) What Do I Predict for Global Temperature Changes in the Future? I tend to shy away from long-term predictions, because there are still so many uncertainties. When pressed, though, I tend to say that I think cooling in our future is just as real a possibility as warming. Of course, a third possibility is relatively steady temperatures, without significant long-term warming or cooling. Keep in mind that, while you will find out tomorrow whether your favorite weather forecaster is right or wrong, no one will remember 50 years from now a scientist today wrongly predicting we will all die from heat stroke by 2060.

Concluding Remarks

Climate researchers do not know nearly as much about the causes of climate change as they profess. We have a pretty good understanding of how the climate system works on average…but the reasons for small, long-term changes in the climate system are still extremely uncertain.

The total amount of CO2 humans have added to the atmosphere in the last 100 years has upset the radiative energy budget of the Earth by only 1%. How the climate system responds to that small “poke” is very uncertain. The IPCC says there will be strong warming, with cloud changes making the warming worse. I claim there will be weak warming, with cloud changes acting to reduce the influence of that 1% change. The difference between these two outcomes is whether cloud feedbacks are positive (the IPCC view), or negative (the view I and a minority of others have).

So far, neither side has been able to prove their case. That uncertainty even exists on this core issue is not appreciated by many scientists!

Again I will emphasize, some very smart people who consider themselves skeptics will disagree with some of my views stated above, particularly when it involves explanations for what has caused warming, and what has caused atmospheric CO2 to increase.

Unlike the global marching army of climate researchers the IPCC has enlisted, we do not walk in lockstep. We are willing to admit, “we don’t really know”, rather than mislead people with phrases like, “the warming we see is consistent with an increase in CO2”, and then have the public think that means, “we have determined, through our extensive research into all the possibilities, that the warming cannot be due to anything but CO2”.

Skeptics advancing alternative explanations (hypotheses) for climate variability represent the way the researcher community used to operate, before politics, policy outcomes, and billions of dollars got involved.


June 2010 UAH Global Temperature Update: +0.44 deg. C

July 1st, 2010


YR MON GLOBE NH SH TROPICS (lower-tropospheric temperature anomalies, deg. C)
2009 1 0.251 0.472 0.030 -0.068
2009 2 0.247 0.564 -0.071 -0.045
2009 3 0.191 0.324 0.058 -0.159
2009 4 0.162 0.316 0.008 0.012
2009 5 0.140 0.161 0.119 -0.059
2009 6 0.043 -0.017 0.103 0.110
2009 7 0.429 0.189 0.668 0.506
2009 8 0.242 0.235 0.248 0.406
2009 9 0.505 0.597 0.413 0.594
2009 10 0.362 0.332 0.393 0.383
2009 11 0.498 0.453 0.543 0.479
2009 12 0.284 0.358 0.211 0.506
2010 1 0.648 0.860 0.436 0.681
2010 2 0.603 0.720 0.486 0.791
2010 3 0.653 0.850 0.455 0.726
2010 4 0.501 0.799 0.203 0.633
2010 5 0.534 0.775 0.292 0.708
2010 6 0.436 0.552 0.321 0.475

[Figure: UAH lower-tropospheric temperature anomalies, 1979 through June 2010 (UAH_LT_1979_thru_June_10)]

The global-average lower tropospheric temperature remains warm, +0.44 deg. C for June, 2010, but it appears the El Nino warmth is waning as a La Nina approaches.

For those keeping track of whether 2010 ends up being a record warm year, 1998 still leads with the daily average for 1 Jan to 30 June being +0.64 C in 1998 compared with +0.56 C for 2010. (John Christy says that the difference is not statistically significant.) As of 30 June 2010, there have been 181 days in the year. From our calibrated daily data, we find that 1998 was warmer than 2010 on 122 (two-thirds) of them.

As a reminder, four months ago we changed to Version 5.3 of our dataset, which accounts for the mismatch between the average seasonal cycle produced by the older MSU and the newer AMSU instruments. This affects the value of the individual monthly departures, but does not affect the year to year variations, and thus the overall trend remains the same as in Version 5.2. ALSO…we have added the NOAA-18 AMSU to the data processing in v5.3, which provides data since June of 2005. The local observation time of NOAA-18 (now close to 2 p.m., ascending node) is similar to that of NASA’s Aqua satellite (about 1:30 p.m.). The temperature anomalies listed above have changed somewhat as a result of adding NOAA-18.

[NOTE: These satellite measurements are not calibrated to surface thermometer data in any way, but instead use on-board redundant precision platinum resistance thermometers (PRTs) carried on the satellite radiometers. The PRTs are individually calibrated in a laboratory before being installed in the instruments.]


Revisiting the Pinatubo Eruption as a Test of Climate Sensitivity

June 23rd, 2010


The eruption of Mt. Pinatubo in the Philippines on June 15, 1991 provided a natural test of the climate system’s response to radiative forcing, producing substantial cooling of global average temperatures over a period of 1 to 2 years. There have been many papers which have studied the event in an attempt to determine the sensitivity of the climate system, so that we might reduce the (currently large) uncertainty in the future magnitude of anthropogenic global warming.

In perusing some of these papers, I find that the issue has been made unnecessarily complicated and obscure. I think part of the problem is that too many investigators have tried to approach the problem from the paradigm most of us have been misled by: believing that sensitivity can be estimated from the difference between two equilibrium climate states, say before the Pinatubo eruption, and then as the climate system responds to the Pinatubo aerosols. The trouble is that this is not possible unless the forcing remains constant, which clearly is not the case since most of the Pinatubo aerosols are gone after about 2 years.

Here I will briefly address the pertinent issues, and show what I believe to be the simplest explanation of what can — and cannot — be gleaned from the post-eruption response of the climate system. And, in the process, we will find that the climate system’s response to Pinatubo might not support the relatively high climate sensitivity that many investigators claim.

Radiative Forcing Versus Feedback
I will once again return to the simple model of the climate system’s average change in temperature from an equilibrium state. Some call it the “heat balance equation”, and it is concise, elegant, and powerful. To my knowledge, no one has shown why such a simple model can not capture the essence of the climate system’s response to an event like the Pinatubo eruption. Increased complexity does not necessarily ensure increased accuracy.

The simple model can be expressed in words as:

[system heat capacity] x [temperature change with time] = [Radiative Forcing] – [Radiative Feedback],

or with mathematical symbols as:

Cp*[dT/dt] = F – lambda*T

Basically, this equation says that the temperature change with time [dT/dt] of a climate system with a certain heat capacity [Cp, dominated by the ocean depth over which heat is mixed] is equal to the radiative forcing [F] imposed upon the system minus any radiative feedback [lambda*T] upon the resulting temperature change. (The left side is also equivalent to the change in the heat content of the system with time.)

The feedback parameter (lambda, which is always a positive number as the equation is written above, with the feedback term carrying a minus sign) is what we are interested in determining, because its reciprocal is the climate sensitivity. The net radiative feedback is what “tries” to restore the system temperature back to an equilibrium state.

Lambda represents the combined effect of all feedbacks PLUS the dominating, direct infrared (Planck) response to increasing temperature. This Planck response is estimated to be 3.3 Watts per sq. meter per degree C for the average effective radiating temperature of the Earth, 255K. Clouds, water vapor, and other feedbacks either reduce the total “restoring force” to below 3.3 (positive feedbacks dominate), or increase it above 3.3 (negative feedbacks dominate).

Note that even though the Planck effect behaves like a strong negative feedback, and is even included in the net feedback parameter, for some reason it is not included in the list of climate feedbacks. This is probably just to further confuse us.

If positive feedbacks were strong enough to cause the net feedback parameter to go negative, the climate system would potentially be unstable to temperature changes forced upon it. For reference, all 21 IPCC climate models exhibit modest positive feedbacks, with lambda ranging from 0.8 to 1.8 Watts per sq. meter per degree C, so none of them are inherently unstable.

This simple model captures the two most important processes in global-average temperature variability: (1) through energy conservation, it translates a global, top-of-atmosphere radiative energy imbalance into a temperature change of a uniformly mixed layer of water; and (2) it includes a radiative feedback that acts to restore the temperature toward equilibrium, with a strength that depends upon the sum of all feedbacks in the climate system.
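Before turning to the data, here is a minimal sketch of this model stepped forward in time with simple Euler integration. The 40-meter mixed layer anticipates the fit shown below, the lambda value is the one estimated later in this post, and the forcing history is a made-up decaying pulse standing in for a Pinatubo-like event; all of it is illustrative rather than a fit to the actual measurements.

```python
import math

RHO_W, C_W = 1025.0, 4186.0  # sea water density (kg/m^3), specific heat (J/kg/K)
DEPTH = 40.0                 # assumed ocean mixed-layer depth, m
CP = RHO_W * C_W * DEPTH     # system heat capacity, J m^-2 K^-1
LAMBDA = 3.66                # net feedback parameter, W m^-2 K^-1
DT = 72 * 86400.0            # 72-day time step, s (matching the ERBE averaging)

def forcing(t_days):
    """Hypothetical aerosol forcing: -3 W/m^2 peak decaying over ~1 year."""
    return -3.0 * math.exp(-t_days / 365.0)

T = 0.0  # global temperature anomaly, deg. C
for step in range(16):  # roughly three years
    t_days = step * 72
    F = forcing(t_days)
    net_imbalance = F - LAMBDA * T  # what an ERBE-like instrument would see
    print(f"day {t_days:4d}: forcing={F:6.2f}  feedback={-LAMBDA*T:6.2f}  "
          f"net={net_imbalance:6.2f} W/m^2  T={T:6.2f} deg. C")
    T += DT * net_imbalance / CP  # 1st Law: imbalance warms/cools the layer
```

Note how, just as described below for the real event, the net imbalance shrinks toward zero as the growing feedback term catches up with the decaying forcing.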


Modeling the Post-Pinatubo Temperature Response

So how do we use the above equation together with measurements of the climate system to estimate the feedback parameter, lambda? Well, let’s start with 2 important global measurements we have from satellites during that period:

1) ERBE (Earth Radiation Budget Experiment) measurements of the variations in the Earth’s radiative energy balance, and

2) the change in global average temperature with time [dT/dt] of the lower troposphere from the satellite MSU (Microwave Sounding Unit) instruments.

Importantly — and contrary to common beliefs – the ERBE measurements of radiative imbalance do NOT represent radiative forcing. They instead represent the entire right hand side of the above equation: a sum of radiative forcing AND radiative feedback, in unknown proportions.

In fact, this net radiative imbalance (forcing + feedback) is all we need to know to estimate one of the unknowns: the system net heat capacity, Cp. The following two plots show for the pre- and post-Pinatubo period (a) the ERBE radiative balance variations; and (b) the MSU tropospheric temperature variations, along with 3 model simulations using the above equation. [The ERBE radiative flux measurements are necessarily 72-day averages to match the satellite’s orbit precession rate, so I have also computed 72-day temperature averages from the MSU, and run the model with a 72-day time step].

As can be seen in panel b, the MSU-observed temperature variations are consistent with a heat capacity equivalent to an ocean mixed layer depth of about 40 meters.

So, What is the Climate Model’s Sensitivity, Roy?
I think this is where confusion usually enters the picture. In running the above model, note that it was not necessary to assume a value for lambda, the net feedback parameter. In other words, the above model simulation did not depend upon climate sensitivity at all!

Again, I will emphasize: Modeling the observed temperature response of the climate system based only upon ERBE-measured radiative imbalances does not require any assumption regarding climate sensitivity. All we need to know is how much extra radiant energy the Earth was losing [or gaining], which is what the ERBE measurements represent.

Conceptually, the global-average ERBE-measured radiative imbalances measured after the Pinatubo eruption are some combination of (1) radiative forcing from the Pinatubo aerosols, and (2) net radiative feedback opposing the temperature changes resulting from that forcing – but we do not know how much of each. There are an infinite number of combinations of forcing and feedback that would be able to explain the satellite observations.

Nevertheless, we do know ONE difference in how forcing and feedback are expressed over time: Temperature changes lag the radiative forcing, but radiative feedback is simultaneous with temperature change.

What we need to separate the two is another source of information to sort out how much forcing versus feedback is involved, for instance something related to the time history of the radiative forcing from the volcanic aerosols. Otherwise, we can not use satellite measurements to determine net feedback in response to radiative forcing.

Fortunately, there is a totally independent satellite estimate of the radiative forcing from Pinatubo.

SAGE Estimates of the Pinatubo Aerosols
For anyone paying attention back then, the 1991 eruption of Pinatubo produced over one year of milky skies just before sunrise and just after sunset, as the sun lit up the stratospheric aerosols, composed mainly of sulfuric acid. The following photo was taken from the Space Shuttle during this time:

There are monthly stratospheric aerosol optical depth (tau) estimates archived at GISS, which during the Pinatubo period of time come from the SAGE (Stratospheric Aerosol and Gas Experiment). The following plot shows these monthly optical depth estimates for the same period of time we have been examining.

Note in the upper panel that the aerosols dissipated to about 50% of their peak concentration by the end of 1992, which is 18 months after the eruption. But look at the ERBE radiative imbalances in the bottom panel – the radiative imbalances at the end of 1992 are close to zero.

But how could the radiative imbalance of the Earth be close to zero at the end of 1992, when the aerosol optical depth is still at 50% of its peak?

The answer is that net radiative feedback is approximately canceling out the radiative forcing by the end of 1992. Persistent forcing of the climate system leads to a lagged – and growing – temperature response. Then, the larger the temperature response, the greater the radiative feedback which is opposing the radiative forcing as the system tries to restore equilibrium. (The climate system never actually reaches equilibrium, because it is always being perturbed by internal and external forcings…but, through feedback, it is always trying).

A Simple and Direct Feedback Estimate
Previous workers (e.g. Hansen et al., 2002) have shown that the radiative forcing from the Pinatubo aerosols can be estimated directly from the aerosol optical depths measured by SAGE: the forcing in Watts per sq. meter is simply 21 times the optical depth.

Now we have sufficient information to estimate the net feedback. We simply subtract the SAGE-based estimates of the Pinatubo radiative forcing from the ERBE net radiation variations (which are a sum of forcing and feedback), which should yield estimates of the radiative feedback alone. We then compare those to the MSU lower-tropospheric temperature variations to get an estimate of the feedback parameter, lambda. The data (after I converted the SAGE monthly data to 72-day averages) look like this:

The slope of 3.66 Watts per sq. meter per deg. C corresponds to weakly negative net feedback. If this corresponded to the feedback operating in response to increasing carbon dioxide concentrations, then a doubling of atmospheric CO2 (2XCO2) would cause only about 1 deg. C of warming. This is below the 1.5 deg. C lower limit the IPCC is 90% sure the climate sensitivity will not fall below.
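For those who want to see the bookkeeping, here is a minimal sketch of the three-step calculation just described. The arrays are made-up placeholder values standing in for the real SAGE, ERBE, and MSU data, and I adopt one consistent sign convention (flux anomalies positive when the Earth is losing extra radiant energy, which is why the forcing is +21 times tau):

```python
import numpy as np

# Placeholder 72-day averages; the real analysis uses SAGE optical depths,
# ERBE net radiation anomalies, and MSU lower-tropospheric temperatures.
tau      = np.array([0.002, 0.10, 0.14, 0.12, 0.09, 0.07, 0.05])  # optical depth
erbe_net = np.array([0.04, 1.73, 1.84, 0.87, 0.06, 0.01, -0.05])  # W/m^2 (extra loss)
t_anom   = np.array([0.0, -0.1, -0.3, -0.45, -0.5, -0.4, -0.3])   # deg C

# 1) Direct forcing estimate from aerosol optical depth (Hansen et al., 2002):
forcing = 21.0 * tau

# 2) ERBE sees forcing + feedback, so subtracting the SAGE-based forcing
#    leaves an estimate of the radiative feedback alone:
feedback = erbe_net - forcing

# 3) The net feedback parameter is the regression slope of feedback vs. temperature:
lam = np.polyfit(t_anom, feedback, 1)[0]
print(f"lambda = {lam:.2f} W/m^2/K; 2XCO2 sensitivity = {3.7/lam:.1f} deg C")
```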

The Time History of Forcing and Feedback from Pinatubo
It is useful to see what two different estimates of the Pinatubo forcing look like: (1) the direct estimate from SAGE, and (2) an indirect estimate from ERBE minus the MSU-estimated feedbacks, using our estimate of lambda = 3.66 Watts per sq. meter per deg. C. This is shown in the next plot:

Note that at the end of 1992, the Pinatubo aerosol forcing, which has decreased to about 50% of its peak value, almost exactly offsets the feedback, which has grown in proportion to the temperature anomaly. This is why the ERBE-measured radiative imbalance is close to zero…radiative feedback is canceling out the radiative forcing.

The reason why the ‘indirect’ forcing estimate looks different from the more direct SAGE-deduced forcing in the above figure is that there are other, internally generated radiative “forcings” in the climate system measured by ERBE, probably due to natural cloud variations. In contrast, SAGE is a limb occultation instrument: it measures the aerosol loading of the cloud-free stratosphere as it looks at the sun just above the Earth’s limb.

Discussion
I have shown that Earth radiation budget measurements together with global average temperatures cannot, by themselves, be used to infer climate sensitivity (net feedback) in response to radiative forcing of the climate system. The only exception is the difference between two equilibrium climate states, where the radiative forcing is instantaneously imposed and then remains constant over time. Only in this instance is all of the radiative variability due to feedback, not forcing.

Unfortunately, even though this hypothetical case has formed the basis for many investigations of climate sensitivity, this exception never happens in the real climate system.

In the real world, some additional information is required regarding the time history of the forcing — preferably the forcing history itself. Otherwise, there are an infinite number of combinations of forcing and feedback which can explain a given set of satellite measurements of radiative flux variations and global temperature variations.

I currently believe the above methodology, or something similar, is the most direct way to estimate net feedback from satellite measurements of the climate system as it responds to a radiative forcing event like the Pinatubo eruption. The method is not new, as it is basically the same one used by Forster and Taylor (2006 J. of Climate) to estimate feedbacks in the IPCC AR4 climate models. Forster and Taylor took the global radiative imbalances the models produced over time (analogous to our ERBE measurements of the Earth), subtracted the radiative forcings that were imposed upon the models (usually increasing CO2), and then compared the resulting radiative feedback estimates to the corresponding temperature variations, just as I did in the scatter diagram above.

All I have done is apply the same methodology to the Pinatubo event. In fact, Forster and Gregory (also 2006 J. Climate) performed a similar analysis of the Pinatubo period, but for some reason got a feedback estimate closer to the IPCC climate models. I am using tropospheric temperatures rather than surface temperatures as they did, but the 30+ year satellite record shows that year-to-year variations in tropospheric temperatures are larger than the surface temperature variations. This means the feedback parameter estimated here (3.66) would be even larger if scaled to surface temperature. So, other than the fact that the ERBE data have relatively recently been recalibrated, I do not know why their results should differ so much from mine.


The Global Warming Inquisition Has Begun

June 22nd, 2010


A new “study” has been published in the Proceedings of the National Academy of Sciences (PNAS) which has examined the credentials and publication records of climate scientists who are global warming skeptics versus those who accept the “tenets of anthropogenic climate change”.

Not surprisingly, the study finds that the skeptical scientists have fewer publications or are less credentialed than the marching army of scientists who have been paid hundreds of millions of dollars over the last 20 years to find every potential connection between fossil fuel use and changes in nature.

After all, nature does not cause change by itself, you know.

The study lends a pseudo-scientific air of respectability to what amounts to a black list of the minority of scientists who do not accept the premise that global warming is mostly the result of you driving your SUV and using incandescent light bulbs.

There is no question that there are very many more scientific papers which accept the mainstream view of global warming being caused by humans. And that might count for something if those papers had actually independently investigated alternative, natural mechanisms that might explain most global warming in the last 30 to 50 years, and found that those natural mechanisms could not explain it.

As just one of many alternative explanations, most of the warming we have measured in the last 30 years could have been caused by a natural, 2% decrease in cloud cover. Unfortunately, our measurements of global cloud cover over that time are nowhere near accurate enough to document such a change.

But those scientific studies did not address all of the alternative explanations. They couldn’t, because we do not have the data to investigate them. The vast majority of them simply assumed global warming was manmade.

I’m sorry, but in science a presupposition is not “evidence”.

Instead, anthropogenic climate change has become a scientific faith. The fact that the very first sentence in the PNAS article uses the phrase “tenets of anthropogenic climate change” hints at this, since the term “tenet” is most often used when referring to religious doctrine, or beliefs which cannot be proved to be true.

So, since we have no other evidence to go on, let’s pin the rap on humanity. It just so happens that’s the position politicians want, which is why politics played such a key role in the formation of the IPCC two decades ago.

The growing backlash against us skeptics makes me think of the Roman Catholic Inquisition, which started in the 12th Century. Of course, no one (I hope no one) will be tried and executed for not believing in anthropogenic climate change. But the fact that one of the five keywords or phrases attached to the new PNAS study is “climate denier” means that such divisive rhetoric is now considered to be part of our mainstream scientific lexicon by our country’s premier scientific organization, the National Academy of Sciences.

Surely, equating a belief in natural climate change to the belief that the Holocaust slaughter of millions of Jews and others by the Nazis never occurred is a new low for science as a discipline.

The new paper also implicitly adds most of the public to the black list, since surveys have shown dwindling public belief in the consensus view of climate change.

At least I have lots of company.


Global Average Sea Surface Temperatures Continue their Plunge

June 18th, 2010

Sea Surface Temperatures (SSTs) measured by the AMSR-E instrument on NASA’s Aqua satellite continue their plunge as a predicted La Nina approaches. The following plot, updated through yesterday (June 17, 2010), shows that the cooling in the Nino34 region in the tropical east Pacific is well ahead of the cooling in the global average SST, something we did not see during the 2007-08 La Nina event:

The rate at which the Nino34 SSTs are falling is particularly striking, as seen in this plot of the SST change rate for that region:

To give some idea of what is causing the global-average SST to fall so rapidly, I came up with an estimate of the change in reflected sunlight (shortwave, or SW flux) using our AMSR-E total integrated cloud water amounts. This was done with a 7+ year comparison of those cloud water estimates to daily global-ocean SW anomalies computed from the CERES radiation budget instrument, also on Aqua:

What this shows is an unusually large increase in reflected sunlight over the last several months, probably due to an increase in low cloud cover.
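For those curious about the mechanics, here is a minimal sketch of that calibrate-then-estimate procedure. The arrays are synthetic placeholders, and the regression coefficient they produce (about 50 Watts per sq. meter per kg per sq. meter of cloud water) is an invented illustration, not the real AMSR-E/CERES value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily global-ocean anomalies over a 7-year overlap period,
# standing in for AMSR-E cloud water and CERES reflected-SW observations.
n_days   = 7 * 365
clw_anom = rng.normal(0.0, 0.01, n_days)                   # cloud water, kg/m^2
sw_anom  = 50.0 * clw_anom + rng.normal(0.0, 0.5, n_days)  # reflected SW, W/m^2

# Calibrate: regress the CERES SW anomalies on the cloud water anomalies...
slope, intercept = np.polyfit(clw_anom, sw_anom, 1)

# ...then use that relationship to estimate reflected-sunlight anomalies
# from cloud water alone, outside the direct CERES comparison.
def estimate_sw(clw_anomaly):
    return slope * clw_anomaly + intercept

print(f"{slope:.1f} W/m^2 per kg/m^2 of cloud water")
```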

At this pace of cooling, I suspect that the second half of 2010 could ruin the chances of getting a record high global temperature for this year. Oh, darn.


FAQ #271: If Greenhouse Gases are such a Small Part of the Atmosphere, How Do They Change Its Temperature?

June 17th, 2010

NOTE posted 9:20 a.m. CDT 21 June, 2010: Upon reading the comments here, it’s obvious some have misinterpreted what I am discussing. It’s NOT why greenhouse gases act to warm the lower atmosphere; it’s how a given parcel of air containing a very small fraction of greenhouse gases can be thoroughly warmed (or cooled, if in the upper atmosphere) by them.

Some of the questions I receive from the public tend to show up repeatedly. One of the more common ones arrived once again yesterday, from an airplane pilot, who asked: “If greenhouse gases are such a small proportion of the atmosphere” (only 39 out of every 100,000 molecules are CO2), “how can they heat or cool all the rest of the air?”

The answer comes from the “kinetic theory of gases”. In effect, each CO2 molecule is a tiny heater (or air conditioner) depending on whether it is absorbing more infrared photons than it is emitting, or vice versa.

When the radiatively active molecules in the atmosphere — mainly water vapor, CO2, and methane — are heated by infrared radiation, even though they are a very small fraction of the total, they are moving very fast and do not have to travel very far before they collide with other molecules of air…that’s when they transfer part of their thermal energy to another molecule. That transfer is one of kinetic energy, which depends upon the molecule’s mass and its speed.

That molecule then bumps into others, those bump into still more, and on and on ad infinitum.

To give some idea of how fast all this happens, consider:

1) there are 26,900,000,000,000,000,000,000,000 molecules in 1 cubic meter of air at sea level.

2) at room temperature, each molecule is traveling at a very high speed, averaging 1,000 mph for heavier molecules like nitrogen, over 3,000 mph for the lightest molecule, hydrogen, etc.

3) the average distance a molecule travels before hitting another molecule (called the “mean free path”) is only 0.000067 of a millimeter.

So, there are so many molecules, traveling so fast and so close to one another, that the radiatively active molecules almost instantly transfer any extra thermal energy to other molecules (a molecule’s average speed is proportional to the square root of its temperature). Or, if they happen to be cooling the air, they absorb extra kinetic energy from the other air molecules.

From the above numbers we can compute that a single nitrogen molecule (air is mostly nitrogen) undergoes over 7 billion collisions every second.
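As a quick sanity check, the collision rate is just the mean molecular speed divided by the mean free path. Here is that arithmetic using the rounded numbers above:

```python
# Collision rate = mean speed / mean free path, with the rounded values above.
mph_to_ms = 0.44704

mean_speed     = 1000.0 * mph_to_ms   # ~1,000 mph for nitrogen -> ~447 m/s
mean_free_path = 0.000067e-3          # 0.000067 mm -> 6.7e-8 m

print(f"{mean_speed / mean_free_path:.2e} collisions per second")
# ~6.7e9 with the rounded 1,000 mph; nitrogen's actual mean speed (~475 m/s)
# gives ~7.1e9 -- the "over 7 billion" figure quoted above.
```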

All of this happens on extremely small scales, with gazillions of the radiatively active molecules scattered through a very small volume of air.

It is rather amazing that these relatively few “greenhouse” gases are largely responsible for the temperature structure of the atmosphere. Without them, the atmosphere would have no way of losing the heat energy that it gains from the Earth’s surface in response to solar heating.

Such an atmosphere would eventually become the same temperature throughout its depth, called an “isothermal” atmosphere. All vertical air motions would stop in such an atmosphere, which means there would be no weather either.

Now, I will have to endure the rash of e-mails I always get from those who do not believe that greenhouse gases do all of this. But that issue will have to be the subject of a later FAQ.


Update on the Role of the Pacific Decadal Oscillation in Global Warming

June 17th, 2010

UPDATE: more edits & enhancements for clarity made at 3:35 CDT, June 17, 2010.

I’ve returned to the issue of determining to what extent the Pacific Decadal Oscillation (PDO) can at least partly explain global average temperature variations, including warming, during the 20th Century. We tried publishing a paper on this over a year ago and were immediately and swiftly rejected in a matter of days by a single (!) reviewer.

Here I use a simple forcing-feedback model, combined with satellite estimates of cloud changes caused by the PDO, to demonstrate the model’s ability to explain the temperature variations. This time, though, I am going to use Jim Hansen’s (GISS) record of yearly radiative forcings of the global climate system since 1900 to demonstrate more convincingly the importance of the PDO…not only for explaining the global temperature record of the past, but also for estimating the sensitivity of the climate system and thus projecting the amount of future global warming (er, I mean climate change).

What follows is not meant to be publishable in a peer-reviewed paper. It is to keep the public informed, to stimulate discussion, to provide additional support for the claims in my latest book, and to help me better understand what I know at this point in my research, what I don’t know, and what direction I should go next.

The Simple Climate Model
I’m still using a simple forcing-feedback model of temperature variations, but I have found that more than a single ocean layer is required to mimic the faster (e.g. 5-year) temperature fluctuations while still allowing a slower temperature response on multi-decadal time scales, as heat diffuses from the upper ocean to the deeper ocean. The following diagram shows the main components of the model.

For forcing, I am assuming the GISS record of yearly-average forcing, the values of which I have plotted for the period since 1900 in the following graph:

I will simply assume these forcings are correct, and will show what happens in the model when I use: (1) all the GISS forcings together; (2) all GISS forcings except tropospheric aerosols, and (3) all the GISS forcings, but replacing the tropospheric aerosols with the satellite-derived PDO forcings.

Internal Radiative Forcing from the PDO
As readers here are well aware, I believe that there are internal modes of climate variability which can cause “internal radiative forcing” of the climate system. These would most easily be explained as circulation-induced changes in cloud cover. My leading candidate for this mechanism continues to be the Pacific Decadal Oscillation.

We have estimated the radiative forcing associated with the PDO by comparing yearly global averages of the PDO index to similar averages of CERES radiative flux variations over the Terra CERES period of record, 2000-2009. But since the CERES-measured radiative imbalances are a combination of forcing and feedback, we must remove an estimate of the feedback to get at the PDO forcing. [This step is completely consistent with, and analogous to, previous investigators removing known radiative forcings from climate model output in order to estimate feedbacks in those models.]

Our new JGR paper (still awaiting publication) shows evidence that, for year-to-year climate variability at least, the net feedback is about 6 Watts per sq. meter per deg. C. After removing the feedback component with our AMSU-based tropospheric temperature anomalies, the resulting relationship between the yearly-running 3-year average PDO index and the radiative forcing looks like this:

This internally-generated radiative forcing is most likely due to changes in global average cloud cover associated with the PDO. If we apply this relationship to yearly estimates of the PDO index, we get the following estimate of “internal radiative forcing” from the PDO since 1900:

As can be seen, these radiative forcings – if they existed during the 20th Century– are comparable to the magnitude of the GISS forcings.
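To make the procedure explicit, here is a minimal sketch of the two steps just described. The arrays are synthetic placeholders for the real CERES, AMSU, and PDO data, and radiative-flux sign conventions are glossed over:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder yearly data for the 2000-2009 Terra CERES period; the real
# analysis uses observed CERES fluxes, AMSU temperatures, and the PDO index.
pdo_index = rng.normal(0.0, 1.0, 10)   # yearly-average PDO index
t_anom    = rng.normal(0.0, 0.2, 10)   # tropospheric temperature anomalies, deg C
ceres_net = rng.normal(0.0, 1.0, 10)   # CERES radiative flux anomalies, W/m^2

LAMBDA = 6.0  # W/m^2 per deg C: year-to-year net feedback from our JGR paper

# Step 1: CERES fluxes contain forcing + feedback, so subtract the feedback
# term (LAMBDA times temperature) to isolate internally generated forcing.
internal_forcing = ceres_net - LAMBDA * t_anom

# Step 2: regress that forcing against the PDO index; the text quotes a
# slope of ~0.6 W/m^2 per unit PDO index.
slope = np.polyfit(pdo_index, internal_forcing, 1)[0]

# That slope converts the historical PDO record into a forcing history:
pdo_since_1900 = rng.normal(0.0, 1.0, 108)   # placeholder for the real index
pdo_forcing_history = slope * pdo_since_1900
```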

Model Simulations

The model has 7 free parameters that must be estimated, not only to make a model run, but also to meaningfully compare that run’s temperature “predictions” to the observed record of surface temperature variations. We are especially interested in which feedback parameter, when inserted in the model, best explains past temperature variations, since this determines the climate system’s sensitivity to increasing greenhouse gas concentrations.

Given some assumed history of radiative forcings like those shown above, these 7 model free parameters include:
1) An assumed feedback parameter
2) Total ocean depth that heat is stored in and lost from
3) Fraction of ocean depth contained in the upper ocean layer
4) Ocean diffusion coefficient (same units as the feedback parameter)
5) Initial temperature for the 1st ocean layer
6) Initial temperature for the 2nd ocean layer
7) Temperature offset for the observed temperature record

While the net feedback in the real climate system is likely dominated by changes in the atmosphere (clouds, water vapor, temperature profile), the model does not have an atmospheric layer per se. On the time scales we are considering here (1 to 5 years and longer), atmospheric temperature variations can be assumed to vary in virtual lock-step with the upper ocean temperature variations. So, the atmosphere can simply be considered to be a small (2 meter) part of the first ocean layer, since that is the depth of water with the same heat capacity as the entire atmosphere.

The last parameter, a temperature offset for the observed temperature record, is necessary because the model assumes some equilibrium temperature state of the climate system, a “preferred” temperature state that the model “tries” to relax to through the temperature feedback term in the model equations. This zero-point might be different from the zero-point chosen for display of observed global temperature anomalies, which the thermometer data analysts have chosen somewhat arbitrarily when compiling the HadCRUT3 dataset.
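To make the model structure concrete, here is a minimal sketch of how such a two-layer model steps forward in time. The parameter values are the Case #1 best-fit values described below; the actual fitting sweeps them over wide ranges:

```python
import numpy as np

# Minimal two-layer forcing-feedback model with a 1-year time step.
RHO_CP = 4.19e6            # volumetric heat capacity of water, J per m^3 per deg C
SEC_PER_YEAR = 3.15e7

TOTAL_DEPTH = 550.0        # meters of model-equivalent ocean depth (see text)
UPPER_FRAC  = 0.10         # fraction of that depth in the upper layer
LAMBDA      = 1.25         # net feedback parameter, W/m^2 per deg C
DIFF        = 7.0          # upper-to-deep diffusion coefficient, W/m^2 per deg C

C1 = RHO_CP * TOTAL_DEPTH * UPPER_FRAC          # upper-layer heat capacity
C2 = RHO_CP * TOTAL_DEPTH * (1.0 - UPPER_FRAC)  # deep-layer heat capacity

def run_model(forcing, t1=-0.41, t2=-0.48):
    """Integrate layer temperature anomalies for a yearly forcing series (W/m^2)."""
    history = []
    for f in forcing:
        down = DIFF * (t1 - t2)                 # heat diffusing to the deep layer
        t1 += (f - LAMBDA * t1 - down) * SEC_PER_YEAR / C1
        t2 += down * SEC_PER_YEAR / C2
        history.append(t1)
    return np.array(history)

# Example: a steady +1 W/m^2 forcing; the upper layer relaxes toward the
# equilibrium warming of 1/LAMBDA = 0.8 deg C as the deep ocean fills up.
print(run_model(np.ones(200))[-1])
```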

In order to sweep at least 10 values for every parameter, and run the model for all possible combinations of those parameters, there must be millions of computer simulations performed. Each simulation’s reconstructed history of temperatures can then be automatically compared to the observed temperature record to see how closely it matches.

So far, I have only run the model manually in an Excel spreadsheet, one run at a time, and have found what I believe to be the ranges over which the model free parameters provide the best match to global temperature variations since 1900. I expect that the following model fits to the observed temperature record will improve only slightly when we do a full “Monte Carlo” set of millions of simulations.

All of the following simulation results use yearly running 5-year averages for the forcings for the period 1902 through 2007, with a model time step of 1 year.

CASE #1: All GISS Forcings
First let’s examine the best fit I found when I included all of the GISS forcings in the model runs. The following model best fit has a yearly RMS error of 0.0763 deg. C:

The above “best” model simulation preferred a total ocean depth of 550 meters, 10% of which (55 meters) was contained in the upper layer. (Note that since the Earth is 70% ocean, and land has negligible heat capacity, this corresponds to a real-Earth ocean depth of 550/0.7 = 786 meters).

The offset added to the HadCRUT3 temperature anomalies was very small, only -0.01 deg. C. The heat diffusion coefficient was 7 Watts per sq. meter per deg. C difference between the upper and lower ocean layers. The best initial temperatures of the first and second ocean layers at the start of the model integration were the same as the temperature observations for the first layer (0.41 deg. C below normal), and 0.48 deg. C below normal for the deeper layer.

What we are REALLY interested in, though, is the optimum net feedback parameter for the model run. In this case, it was 1.25 Watts per sq. meter per deg. C. This corresponds to about 3 deg. C of warming for a doubling of atmospheric carbon dioxide (2XCO2, based upon an assumed radiative forcing of 3.7 Watts per sq. meter for 2XCO2). This is in approximate agreement with the IPCC’s best estimate for warming from 2XCO2, and supports the realism of the simple forcing-feedback model for determining climate sensitivity.
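(The conversion is simply the assumed 2XCO2 forcing divided by the feedback parameter: 3.7 / 1.25 ≈ 3.0 deg. C. The same arithmetic yields the sensitivities quoted for the other cases below.)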

But note that the above simulation has 2 shortcomings: (1) it does not do a very good job of mimicking the warming up to 1940 and the subsequent slight cooling into the 1970s; and (2) other than the major volcanic eruptions (e.g. Pinatubo in 1991), it does not mimic the sub-decadal temperature variations.

CASE #2: All GISS Forcings except Tropospheric Aerosols
Since the tropospheric aerosols have the largest uncertainty, it is instructive to see what the previous simulation would look like if we remove all 3 tropospheric aerosol components (aerosol reflection, black carbon, and aerosol indirect effect on clouds).

In that case an extremely similar fit to Case #1 is obtained, which has only a slightly degraded RMS error of 0.0788 deg. C.

This reveals that the addition of the tropospheric aerosols in the first run improved the model fit by only 3.2% compared to the run without them. Yet, what is particularly important is that the best-fit feedback has now increased from 1.25 to 3.5 Watts per sq. meter per deg. C, which reduces the 2XCO2 climate sensitivity from 3.0 deg. C to about 1.1 deg. C! This is below the 1.5 deg. C lower limit the IPCC has “very confidently” placed on that warming.

This illustrates the importance of assumed tropospheric aerosol pollution to the IPCC’s global warming arguments. Since the warming during the 20th Century was not as strong as some would have expected from increasing greenhouse gases, an offsetting source of cooling had to be found – which, of course, was also manmade.

But even with those aerosols, the model fit to the observations was not very good. That’s where the PDO comes in.

CASE #3: PDO plus all GISS Forcings except Tropospheric Aerosols
For our third and final case, let’s see what happens when we replace the GISS tropospheric aerosol forcings – which are highly uncertain – with our satellite-inferred record of internal radiative forcing from the PDO.

The following plot shows that more of the previously unresolved temperature variability during the 20th Century is now captured; I have also included the “all GISS forcings” model fit for comparison:

Using the satellite observed PDO forcing of 0.6 Watts per sq. meter per unit change in the PDO index, the RMS error of the model fit improves by 25.4%, to 0.0588 deg. C; this can be compared to the much smaller 3.2% improvement from adding the GISS tropospheric aerosols.

If we ask what PDO-related forcing the model “prefers” to get a best fit, the satellite-inferred value of 0.6 is bumped up to around 1 Watt per sq. meter per unit change in the PDO index, with an RMS fit improvement of over 30% (not shown).

In this last model simulation, note that the smaller temperature fluctuations in the HadCRUT3 surface temperature record are now better captured during the 20th Century. This is evidence that the PDO causes its own radiative forcing of the climate system.

And of particular interest, the substitution of the PDO forcing for the tropospheric aerosols restores the low climate sensitivity, with a preferred feedback parameter of 3.6, which corresponds to a 2XCO2 climate sensitivity of only 1.0 deg. C.

If you are wondering, including BOTH the GISS tropospheric aerosols and the PDO forcing made it difficult to get the model to come close to the observed temperature record. The best fit for this combination of forcings will have to wait until the full set of Monte Carlo computer simulations is run.

Conclusions

It is clear (to me, at least) that the IPCC’s claim that the sensitivity of the climate is quite high is critically dependent upon (1) the inclusion of very uncertain aerosol cooling effects in the last half of the 20th Century, and (2) the neglect of any sources of internal radiative forcing on long time scales, such as the 30-60 year time scale of the PDO.

Since we now have satellite evidence that such natural forcings do indeed exist, it would be advisable for the IPCC to revisit the issue of climate sensitivity, taking these uncertainties into account.

It would be difficult for the IPCC to fault this model because of its simplicity. For global average temperature changes on these time scales, the surface temperature variations are controlled by (1) radiative forcings, (2) net feedbacks, and (3) heat diffusion to the deeper ocean. In addition, the simple model’s assumption of a preferred average temperature is exactly what the IPCC implicitly claims! After all, they are the ones who say climate change did not occur until humans started polluting. Think hockey stick.

Remember, in the big picture, a given amount of global warming can be explained with either (1) weak forcing of a sensitive climate system, or (2) strong forcing of an insensitive climate system. By ignoring natural sources of warming – which are understandably less well known than anthropogenic sources — the IPCC biases its conclusions toward high climate sensitivity. I have addressed only ONE potential natural source of radiative forcing — the PDO. Of course, there could be others as well. But the 3rd Case presented above is already getting pretty close to the observed temperature record, which has its own uncertainties anyway.

This source of uncertainty — and bias — regarding the role of past, natural climate variations in the magnitude of future anthropogenic global warming (arghh! I mean climate change) is something that most climate scientists (let alone policymakers) do not yet understand.


Evidence of Elevated Sea Surface Temperatures Under the BP Oil Slick

June 15th, 2010

(NOTE: minor edits made at 10:00 a.m. CDT, June 15, 2010)

As summer approaches, sea surface temperatures (SSTs) in the Gulf of Mexico increase in response to increasing solar insolation (the intensity of sunlight). Limiting the SST increase is evaporation, which increases nonlinearly with SST and approximately linearly with wind speed. It is important to realize that evaporation is by far the primary heat loss mechanism for water bodies.
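Those dependencies come through clearly in the standard bulk formula for evaporative heat loss, sketched below. The transfer coefficient and the 80% relative humidity are typical illustrative values, not measurements from these buoys:

```python
import numpy as np

# Bulk formula for evaporative (latent) heat loss: roughly linear in wind
# speed, and nonlinear in SST through the saturation vapor pressure.
RHO_AIR = 1.2      # air density, kg/m^3
LV      = 2.45e6   # latent heat of vaporization, J/kg
CE      = 1.3e-3   # bulk transfer coefficient (typical open-ocean value)

def sat_specific_humidity(temp_c, pressure_hpa=1013.0):
    """Saturation specific humidity (kg/kg) via the Magnus approximation."""
    es = 6.112 * np.exp(17.67 * temp_c / (temp_c + 243.5))  # hPa
    return 0.622 * es / pressure_hpa

def latent_heat_flux(sst_c, wind_ms, rel_hum=0.8):
    """Evaporative heat loss, W/m^2; air assumed at the SST with 80% RH."""
    qs = sat_specific_humidity(sst_c)
    return RHO_AIR * LV * CE * wind_ms * (qs - rel_hum * qs)

# Doubling the wind roughly doubles the heat loss, while each 5 deg C of
# extra SST raises it faster than linearly:
for sst in (25.0, 30.0, 35.0):
    print(sst, round(latent_heat_flux(sst, wind_ms=5.0), 1))
```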

By late summer, SSTs in the Gulf peak near 86 or 87 deg. F as these various energy gain and energy loss mechanisms approximately balance one another.

But yesterday, buoy 42040, moored about 64 nautical miles south of Dauphin Island, AL, reported a peak SST of 96 deg. F during very low wind conditions. Since the SST measurement is made about 1 meter below the sea surface, it is likely that even higher temperatures existed right at the surface…possibly in excess of 100 deg. F.

A nice global analysis of the day-night cycle in SSTs was published in 2003 by members of our NASA AMSR-E Science Team; it showed the normal range of this daytime warming, which increases at very low wind speeds. But 96 deg. F is truly exceptional, especially for a measurement at 1 meter depth.

The following graph shows the last 45 days of SST measurements from this buoy, as well as buoy 42039 which is situated about 120 nautical miles to the east of buoy 42040.

The approximate locations of these buoys are shown in the following MODIS image from the Aqua satellite from 3 days ago (June 12, 2010); the oil slick areas are lighter colored patches, swirls and filaments, and can only be seen on days when the MODIS angle of view is near the point of sun glint (direct reflection of the sun’s image off the surface):

The day-night cycle in SSTs can be clearly seen on most days in the SST plot above, and it becomes stronger at lower wind speeds, as can be seen by comparing those SSTs to the measured wind speeds at these two buoys seen in the next plot:

Since buoy 42040 has been near the most persistent area of oil slick coverage seen by the MODIS instruments on NASA’s Terra and Aqua satellites, I think it is a fair supposition that these very high water temperatures are due to reduced evaporation from the oil film coverage on the sea surface.

OIL SLICK IMPACT ON GULF HURRICANES?
Despite the localized high SSTs, I do not believe that the oil slick will enhance the strength of hurricanes. The layer of water affected is probably quite shallow, and restricted to areas with persistent oil sheen or slick that has not been disrupted by wind and wave activity.

As any hurricane approaches, higher winds will rapidly break up the oil on the surface, and mix the warmer surface layer with cooler, deeper layers. (Contrary to popular perception, the oil does not make the surface of the ocean darker and thereby absorb more sunlight…the ocean surface is already very dark and absorbs most of the sunlight that falls upon it — over 90%.)

Also, in order for any extra thermal energy to be available for a hurricane to use as fuel, it must be “converted” into more water vapor. Yes, hurricanes are on average strengthened over waters with higher SSTs, but only to the extent that the overlying atmosphere has its humidity enhanced by those higher SSTs. Evidence of reduced evaporation at buoy 42040 is seen in the following plots, which show the atmospheric temperature and dewpoint, as well as SST, for buoys 42040 (first plot) and 42039 (second plot).


Despite the elevated SSTs at buoy 42040 versus buoy 42039 in recent days, the dewpoint has not risen above what is being measured at buoy 42039 — if anything, it has remained lower.

Nevertheless, I suspect the issue of enhanced sea surface temperatures will be the subject of considerable future research, probably with computer modeling of the impact of such oil slicks on tropical cyclone intensity. I predict the effect will be very small.