The Version 6.0 global average lower tropospheric temperature (LT) anomaly for May, 2019 was +0.32 deg. C, down from the April, 2019 value of +0.44 deg. C:
Various regional LT departures from the 30-year (1981-2010) average for the last 17 months are:
I had an op-ed published at Foxnews.com yesterday describing why we have had so many tornadoes this year. The answer is the continuing cold weather stretching from Michigan through Colorado to California. A persistent cold air mass situated north and west of the usual placement of warm, humid Gulf air in the East creates the strong wind shear environment in which rotating thunderstorms can be embedded.
The temperature departures from normal so far this month show evidence of this cold:
In fact, in terms of departures from normal, so far this year the Northern Plains has been the “coldest place on Earth”, averaging 5-10 deg. F below normal:
The strong wind shear and warm advection provided at the “tightened” boundary between the warm and cold air masses are the usual missing ingredients in tornado formation, contrary to Alexandria Ocasio-Cortez’s claim that a New Jersey tornado warning was somehow tied to global warming.
As has been pointed out elsewhere, a trend line fit to the number of strong to violent U.S. tornadoes has gone down from 60 in 1954 to 30 in 2018. In other words, the number of most damaging tornadoes has, on average, been cut in half since U.S. statistics started to be compiled:
Or, phrased another way, the last half of the 65-year U.S. tornado record had 40% fewer strong to violent tornadoes than the first half.
To claim that global warming is causing more tornadoes is worse than speculative; it is directly opposite to the clear observational evidence.
A major uncertainty in figuring out how much of recent warming has been human-caused is knowing how much nature has caused. The IPCC is quite sure that nature is responsible for less than half of the warming since the mid-1900s, but politicians, activists, and various green energy pundits go even further, behaving as if warming is 100% human-caused.
The fact is we really don’t understand the causes of natural climate change on the time scale of an individual lifetime, although theories abound. For example, there is plenty of evidence that the Little Ice Age was real, and so some of the warming over the last 150 years (especially prior to 1940) was natural — but how much?
The answer makes a huge difference to energy policy. If global warming is only 50% as large as the IPCC predicts (which would make it only 20% of the problem portrayed by the media and politicians), then the immense cost of renewable energy can be avoided until we have new cost-competitive energy technologies.
The recently published paper Recent Global Warming as Confirmed by AIRS used 15 years of infrared satellite data to obtain a rather strong global surface warming trend of +0.24 C/decade. I (e.g. here) and others have objected to that study, not least because the 2003-2017 period it addressed had a record warm El Nino near its end (2015-16), which means the computed warming trend over that period is not entirely human-caused.
If we look at the warming over the 19-year period 2000-2018, we see the record El Nino event during 2015-16 (all monthly anomalies are relative to the 2001-2017 average seasonal cycle):
We also see that the average of all of the CMIP5 models’ surface temperature trend projections (in which natural variability in the many models is averaged out) has a warmer trend than the observations, despite the trend-enhancing effect of the 2015-16 El Nino event.
So, how much of an influence did that warm event have on the computed trends? The simplest way to address that is to use only the data before that event. To be somewhat objective about it, we can take the period over which there is no trend in El Nino (and La Nina) activity, which happens to be 2000 through June, 2015 (15.5 years):
Note that the observed trend in HadCRUT4 surface temperatures is nearly cut in half compared to the CMIP5 model average warming over the same period, and the UAH tropospheric temperature trend is almost zero.
One might wonder why the UAH LT trend is so low for this period, even though in Fig. 1 it is not that far below the surface temperature observations (+0.12 C/decade versus +0.16 C/decade for the full period through 2018). So, I examined the RSS version of LT for 2000 through June 2015, which had a +0.10 C/decade trend. For a more apples-to-apples comparison, the CMIP5 surface-to-500 hPa layer average temperature averaged across all models is +0.20 C/decade, so even RSS LT (which usually has a warmer trend than UAH LT) has only one-half the warming trend as the average CMIP5 model during this period.
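For readers who want to reproduce this kind of number, the trends quoted above are just ordinary least-squares fits to monthly anomalies, scaled to deg. C per decade. Here is a minimal sketch (the anomaly values are random placeholders, not the actual HadCRUT4, UAH, RSS, or CMIP5 data):

```python
# Minimal sketch: least-squares warming trend of a monthly anomaly series,
# expressed in deg. C per decade. The anomaly values below are placeholders.
import numpy as np

def trend_c_per_decade(anomalies_c):
    """Ordinary least-squares slope of monthly anomalies, in deg. C/decade."""
    months = np.arange(len(anomalies_c))
    slope_per_month = np.polyfit(months, anomalies_c, 1)[0]
    return slope_per_month * 120.0          # 120 months per decade

# Example: 186 months covers January 2000 through June 2015
rng = np.random.default_rng(0)
anoms = 0.001 * np.arange(186) + rng.normal(0.0, 0.1, 186)   # placeholder series
print(f"Trend: {trend_c_per_decade(anoms):+.2f} C/decade")
```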
So, once again, we see that the observed rate of warming — when we ignore the natural fluctuations in the climate system (which, along with severe weather events, dominate “climate change” news) — is only about one-half of that projected by climate models at this point in the 21st Century. This fraction is consistent with the global energy budget study of Lewis & Curry (2018), which analyzed 100 years of global temperatures and ocean heat content changes and also found that the climate system is only about 1/2 as sensitive to increasing CO2 as climate models assume.
It will be interesting to see if the new climate model assessment (CMIP6) produces warming more in line with the observations. From what I have heard so far, this appears unlikely. If history is any guide, this means the observations will continue to need adjustments to fit the models, rather than the other way around.
I present comparisons between both the UAH and RSS global lower troposphere (LT) temperature variations and LT computed from the vertical temperature profiles retrieved from the NASA AIRS instrument flying on the Aqua satellite. This follows up on the recent newsworthy announcement by NASA researchers, Recent Global Warming as Confirmed by AIRS, published in Environmental Research Letters, in which it was claimed that the AIRS surface skin temperature retrievals validated the GISTEMP record of surface air temperatures during 2003-2017.
The data I use are the AIRS Version 6 monthly average gridpoint retrievals covering September 2002 through March 2019 (16.6 years, NASA registration required). To compute LT from the AIRS profiles I have taken into account the somewhat different vertical profiles of sensitivity in the UAH and RSS LT weighting functions, as well as the different southern extent of the “global” domains (UAH extends to 82.5 deg. S, while RSS is to 70 deg. S) in the global averages.
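To illustrate what “computing LT from the AIRS profiles” involves, here is a minimal sketch of a weighted vertical average. The pressure levels and weights below are illustrative placeholders only, not the actual UAH or RSS LT weighting functions:

```python
# Minimal sketch: a layer-average "LT" temperature from a retrieved vertical
# profile, using a normalized weighting function. The pressure levels and
# weights are illustrative placeholders, not the actual UAH/RSS LT weights.
import numpy as np

p_levels = np.array([1000, 925, 850, 700, 600, 500, 400, 300, 250])   # hPa
weights  = np.array([0.05, 0.10, 0.15, 0.20, 0.18, 0.14, 0.10, 0.06, 0.02])
weights  = weights / weights.sum()        # normalize so the weights sum to 1

def lt_from_profile(temps_k):
    """Weighted vertical average of a temperature profile on the levels above."""
    return np.dot(weights, temps_k)

profile = np.array([288, 284, 280, 272, 265, 258, 248, 235, 228])     # placeholder K
print(f"LT estimate: {lt_from_profile(profile):.1f} K")
```

In practice the same weighted average would be applied at every gridpoint and month before forming global anomalies.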
First up is the comparison of UAH LT versus LT computed from the AIRS profiles:
Note that El Nino and La Nina variations dominate this short period of record, and much of the warming trend (+0.15 C/decade) is due to this activity. The agreement is very good, with nearly identical trends in UAH and AIRS and an explained variance of 88.9%.
The agreement is rather remarkable given that the AIRS is an infrared instrument with much more serious cloud contamination effects than the UAH LT which is totally microwave-based from the Advanced Microwave Sounding Units (AMSUs) flying on 5 different satellites during this period. The AIRS cloud effects are removed in processing through a “cloud clearing” algorithm when scattered clouds are present, but temperatures in the lower troposphere cannot be measured in extensive cloud regions.
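The “explained variance” quoted here is simply the squared correlation between the two monthly anomaly series. A minimal sketch, using random placeholder numbers rather than the actual UAH and AIRS anomalies:

```python
# Minimal sketch: explained variance (squared correlation) between two
# monthly anomaly series. The series below are random placeholders.
import numpy as np

def explained_variance_pct(a, b):
    r = np.corrcoef(a, b)[0, 1]
    return 100.0 * r**2

rng = np.random.default_rng(0)
uah_lt  = rng.normal(0.2, 0.2, 199)              # placeholder anomalies
airs_lt = uah_lt + rng.normal(0.0, 0.07, 199)    # placeholder anomalies
print(f"Explained variance: {explained_variance_pct(uah_lt, airs_lt):.1f}%")
```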
Next let’s look at a similar comparison for the RSS LT product. It also shows very good agreement with AIRS (87.3% explained variance), but with a somewhat greater trend compared to AIRS (by 0.07 C/decade). Again, the record is rather short (16.6 years) and so this trend difference should not be assumed to apply to the whole 40-year satellite record of RSS LT:
Again, the LT layers measured by UAH and RSS are somewhat different. The UAH LT layer is deeper, and so it picks up more of the enhanced warming in the 250-300 mb layer, as seen in the vertical profile of AIRS global temperature trends:
Given the rather high level of agreement between the microwave and infrared measures of global-average tropospheric temperatures, I see no reason why the AIRS data should not be used as a way to do periodic checks on the UAH and RSS LT global temperature variations.
Finally, since J. Susskind and Gavin Schmidt have proclaimed AIRS as confirming the GISTEMP record of substantial surface warming (“Recent Global Warming as Confirmed by AIRS”), I am similarly going to proclaim Fig. 1 as evidence that AIRS also validates the UAH LT record of only modest tropospheric warming.
The Version 6.0 global average lower tropospheric temperature (LT) anomaly for April, 2019 was +0.44 deg. C, up from the March, 2019 value of +0.34 deg. C:
Various regional LT departures from the 30-year (1981-2010) average for the last 16 months are:
I have previously addressed the NASA study that concluded the AIRS satellite temperatures “verified global warming trends”. The AIRS is an infrared temperature sounding instrument on the NASA Aqua satellite, providing data since late 2002 (over 16 years). All results in that study, and presented here, are based upon infrared measurements alone, with no microwave temperature sounder data being used in these products.
That reported study addressed only the surface “skin” temperature measurements, but the AIRS is also used to retrieve temperature profiles throughout the troposphere and stratosphere — that’s 99.9% of the total mass of the atmosphere.
Since AIRS data are also used to retrieve a 2 meter temperature (the traditional surface air temperature measurement height), I was curious why that wasn’t used instead of the surface skin temperature. Also, AIRS allows me to compare to our UAH tropospheric deep-layer temperature products.
So, I downloaded the entire archive of monthly average AIRS temperature retrievals on a 1 deg. lat/lon grid (85 GB of data). I’ve been analyzing those data over various regions (global, tropical, land, ocean). While there are a lot of interesting results I could show, today I’m going to focus just on the United States.
Because the Aqua satellite observes at nominal local times of 1:30 a.m. and 1:30 p.m., this allows separation of data into “day” and “night”. It is well known that recent warming of surface air temperatures (both in the U.S. and globally) has been stronger at night than during the day, but the AIRS data shows just how dramatic the day-night difference is… keeping in mind this is only the most recent 16.6 years (since September 2002):
The AIRS surface skin temperature trend at night (1:30 a.m.) is a whopping +0.57 C/decade, while the daytime (1:30 p.m.) trend is only +0.15 C/decade. This is a bigger diurnal difference than indicated by the NOAA Tmax and Tmin trends (triangles in the above plot). Admittedly, 1:30 a.m. and 1:30 p.m. are not when the lowest and highest temperatures of the day occur, but I wouldn’t expect as large a difference in trends as is seen here, at least at night.
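For what it is worth, here is a sketch of how such day/night trends could be computed from gridded monthly retrievals: split by overpass time, average over a region with cosine-latitude weighting, then fit a trend. The array names, the rough CONUS box, and the placeholder data are all assumptions for illustration, not the actual AIRS files:

```python
# Minimal sketch: area-weighted regional means and trends from 1-deg gridded
# monthly retrievals, split by the ~1:30 a.m. and ~1:30 p.m. overpasses.
# All data below are placeholders; the CONUS box is a rough assumption.
import numpy as np

lats = np.arange(-89.5, 90.0, 1.0)                # 180 gridpoint latitudes
lons = np.arange(-179.5, 180.0, 1.0)              # 360 gridpoint longitudes
n_months = 199                                    # Sep 2002 - Mar 2019

rng = np.random.default_rng(0)
t_night = rng.normal(285.0, 10.0, (n_months, lats.size, lons.size))  # placeholder
t_day   = t_night + 8.0                           # placeholder diurnal offset

def regional_mean(cube, lat_min, lat_max, lon_min, lon_max):
    """Cosine-latitude weighted mean over a lat/lon box, one value per month."""
    lat_idx = (lats >= lat_min) & (lats <= lat_max)
    lon_idx = (lons >= lon_min) & (lons <= lon_max)
    sub = cube[:, lat_idx][:, :, lon_idx]
    w = np.cos(np.deg2rad(lats[lat_idx]))[None, :, None]
    return (sub * w).sum(axis=(1, 2)) / (w.sum() * lon_idx.sum())

def trend_c_per_decade(series):
    return np.polyfit(np.arange(series.size), series, 1)[0] * 120.0

# Rough CONUS box (an assumption, for illustration only)
us_night = regional_mean(t_night, 25, 50, -125, -65)
us_day   = regional_mean(t_day,   25, 50, -125, -65)
print(f"Night trend: {trend_c_per_decade(us_night):+.2f} C/decade")
print(f"Day trend:   {trend_c_per_decade(us_day):+.2f} C/decade")
```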
Furthermore, these day-night differences extend up through the lower troposphere, to higher than 850 mb (about 5,000 ft altitude), even showing up at 700 mb (about 12,000 ft. altitude).
This behavior also shows up in globally-averaged land areas, and reverses over the ocean (but with a much weaker day-night difference). I will report on this at some point in the future.
If real, these large day-night differences in temperature trends are fascinating behavior. My first suspicion is that it has something to do with a change in moist convection and cloud activity during warming. For instance, more clouds would reduce daytime warming but increase nighttime warming. But I looked at the seasonal variations in these signatures and (unexpectedly) the day-night difference is greatest in winter (DJF), when there is the least convective activity, and weakest in summer (JJA), when there is the most convective activity.
One possibility is that there is a problem with the AIRS temperature retrievals (now at Version 6). But it seems unlikely that this problem would extend through such a large depth of the lower troposphere. I can’t think of any reason why there would be such a large bias between day and night retrievals when it can be seen in the above figure that there is essentially no difference from the 500 mb level upward.
It should be kept in mind that the lower tropospheric and surface temperatures can only be measured by AIRS in the absence of clouds (or in between clouds). I have no idea how much of an effect this sampling bias would have on the results.
Finally, note how well the AIRS low- to mid-troposphere temperature trends match the bulk trend in our UAH LT product. I will be examining this further for larger areas as well.
NOTE: See the update from John Christy below, addressing the use of RATPAC radiosonde data.
This post has two related parts. The first has to do with the recently published study of AIRS satellite-based surface skin temperature trends. The second is our response to a rather nasty Twitter comment maligning our UAH global temperature dataset that was a response to that study.
The AIRS Study
NASA’s Atmospheric InfraRed Sounder (AIRS) has thousands of infrared channels and has provided a large quantity of new remote sensing information since the launch of the Aqua satellite in early 2002. AIRS has even demonstrated how increasing CO2 in the last 15+ years has reduced the infrared cooling to outer space at the wavelengths impacted by CO2 emission and absorption, the first observational evidence I am aware of that increasing CO2 can alter — however minimally — the global energy budget.
The challenge for AIRS as a global warming monitoring instrument is that it is cloud-limited, a problem that worsens as one gets closer to the surface of the Earth. It can only measure surface skin temperatures when there are essentially no clouds present. The skin temperature is still “retrieved” in partly- (and even mostly-) cloudy conditions from other channels higher up in the atmosphere, and with “cloud clearing” algorithms, but these exotic numerical exercises can never get around the fact that the surface skin temperature can only be observed with satellite infrared measurements when no clouds are present.
Then there is the additional problem of comparing surface skin temperatures to traditional 2 meter air temperatures, especially over land. There will be large biases at the 1:30 a.m./p.m. observation times of AIRS. But I would think that climate trends in skin temperature should be reasonably close to trends in air temperature, so this is not a serious concern for me (although Roger Pielke, Sr. disagrees with me on this).
The new paper by Susskind et al. describes a 15-year dataset of global surface skin temperatures from the AIRS instrument on NASA’s Aqua satellite. ScienceDaily proclaimed that the study “verified global warming trends”, even though the period addressed (15 years) is too short to say anything of much value about global warming trends, especially since there was a record-setting warm El Nino near the end of that period.
Furthermore, that period (January 2003 through December 2017) shows significant warming even in our UAH lower tropospheric temperature (LT) data, with a trend 0.01 warmer than the “gold standard” HadCRUT4 surface temperature dataset (all deg. C/decade):
I’m pretty sure the Susskind et al. paper was meant to prop up Gavin Schmidt’s GISTEMP dataset, which generally shows greater warming trends than the HadCRUT4 dataset that the IPCC tends to favor more. It remains to be seen whether the AIRS skin temperature dataset, with its “clear sky bias”, will be accepted as a way to monitor global temperature trends into the future.
What Satellite Dataset Should We Believe?
Of course, the short period of record of the AIRS dataset means that it really can’t address the pre-2003 adjustments made to the various global temperature datasets which significantly impact temperature trends computed with 40+ years of data.
What I want to specifically address here is a public comment made by Dr. Scott Denning on Twitter, maligning our (UAH) satellite dataset. He was responding to someone who objected to the new study, claiming our UAH satellite data shows minimal warming. While the person posting this objection didn’t have his numbers right (and as seen above, our trend even agrees with HadCRUT4 over the 2003-2017 period), Denning took it upon himself to take a swipe at us (see his large-font response, below):
First of all, I have no idea what Scott is talking about when he lists “towers” and “aircraft”… there have been no comprehensive comparisons of such data sources to global satellite data, mainly because there isn’t nearly enough geographic coverage by towers and aircraft.
Secondly, in the 25+ years that John Christy and I have pioneered the methods that others now use, we made only one “error” (found by RSS, which we promptly fixed; it had to do with an early diurnal drift adjustment). The additional finding by RSS of the orbit decay effect was not an “error” on our part any more than our finding of the “instrument body temperature effect” was an error on their part. All satellite datasets now include adjustments for both of these effects.
Nevertheless, as many of you know, our UAH dataset is now considered the “outlier” among the satellite datasets (which also include RSS, NOAA, and U. of Washington), with the least amount of global-average warming since 1979 (although we agree better in the tropics, where little warming has occurred). So let’s address the remaining claim of Scott Denning’s: that we disagree with independent data.
The only direct comparisons to satellite-based deep-layer temperatures are from radiosondes and global reanalysis datasets (which include all meteorological observations in a physically consistent fashion). What we will find is that RSS, NOAA, and UW have remaining errors in their datasets which they refuse to make adjustments for.
From late 1998 through 2004, there were two satellites operating: NOAA-14, with the last of the old MSU series of instruments on it, and NOAA-15, with the first new AMSU instrument on it. In the latter half of this overlap period, considerable disagreement developed between the two satellites. Since the older MSU was known to have a substantial measurement dependence on the physical temperature of the instrument (a problem fixed on the AMSU), and the NOAA-14 satellite carrying that MSU had drifted much farther in local observation time than any of the previous satellites, we chose to cut off the NOAA-14 processing when it started disagreeing substantially with AMSU. (Engineer James Shiue at NASA/Goddard once described the new AMSU as the “Cadillac” of well-calibrated microwave temperature sounders.)
Despite the most obvious explanation that the NOAA-14 MSU was no longer usable, RSS, NOAA, and UW continue to use all of the NOAA-14 data through its entire lifetime and treat it as just as accurate as NOAA-15 AMSU data. Since NOAA-14 was warming significantly relative to NOAA-15, this puts a stronger warming trend into their satellite datasets, raising the temperature of all subsequent satellites’ measurements after about 2000.
But rather than just asserting the new AMSU should be believed over the old (drifting) MSU, let’s look at some data. Since Scott Denning mentions weather balloon (radiosonde) data, let’s look at our published comparisons between the 4 satellite datasets and radiosondes (as well as global reanalysis datasets) and see who agrees with independent data the best:
Clearly, the RSS, NOAA, and UW satellite datasets are the outliers when it comes to comparisons to radiosondes and reanalyses, having too much warming compared to independent data.
But you might ask, why do those 3 satellite datasets agree so well with each other? Mainly because UW and NOAA have largely followed the RSS lead… using NOAA-14 data even when its calibration was drifting, and using similar strategies for diurnal drift adjustments. Thus, NOAA and UW are, to a first approximation, slightly altered versions of the RSS dataset.
Maybe Scott Denning was just having a bad day. In the past, he has been reasonable, being the only climate “alarmist” willing to speak at a Heartland climate conference. Or maybe he has since been pressured into toeing the alarmist line, and not being allowed to wander off the reservation.
In any event, I felt compelled to defend our work in response to what I consider (and the evidence shows) to be an unfair and inaccurate social media attack on our UAH dataset.
UPDATE from John Christy (11:10 CDT April 26, 2019):
In response to comments about the RATPAC radiosonde data having more warming, John Christy provides the following:
The comparison with RATPAC-A referred to in the comments below is unclear (no area mentioned, no time frame). But be that as it may, if you read our paper, RATPAC-A2 was one of the radiosonde datasets we used. RATPAC-A2 has virtually no adjustments after 1998, so it contains warming shifts known to have occurred in the Australian and U.S. VIZ sondes, for example. The IGRA dataset used in Christy et al. 2018 utilized 564 stations, whereas RATPAC uses about 85 globally, and far fewer in the tropics, where the comparison shown in this post was made. RATPAC-A warms relative to the other radiosonde/reanalysis datasets since 1998 (which use over 500 sondes), but was included anyway in the comparisons in our paper. The warming bias relative to 7 other radiosonde and reanalysis datasets can be seen in the following plot:
SUMMARY: A simple model of the CO2 concentration of the atmosphere is presented which fairly accurately reproduces the Mauna Loa observations from 1959 through 2018. The model assumes the surface removes CO2 at a rate proportional to the excess of atmospheric CO2 above some equilibrium value. It is forced with estimates of yearly CO2 emissions since 1750, as well as El Nino and La Nina effects. The residual effects of major volcanic eruptions (not included in the model) are clearly seen. Two interesting findings are that (1) the natural equilibrium level of CO2 in the atmosphere implied by the model is about 295 ppm, rather than 265 or 270 ppm as is often assumed, and (2) if CO2 emissions were stabilized and kept constant at 2018 levels, the atmospheric CO2 concentration would eventually stabilize at close to 500 ppm, even with continued emissions.
A recent e-mail discussion regarding sources of CO2 other than anthropogenic led me to revisit a simple model to explain the history of CO2 observations at Mauna Loa since 1959. My intent here isn’t to try to prove there is some natural source of CO2 causing the recent rise, as I think it is mostly anthropogenic. Instead, I’m trying to see how well a simple model can explain the rise in CO2, and what useful insight can be deduced from such a model.
The model uses the Boden et al. (2017) estimates of yearly anthropogenic CO2 production rates since 1750, updated through 2018. The model assumes that the rate at which CO2 is removed from the atmosphere is proportional to the atmospheric excess above some natural “equilibrium level” of CO2 concentration. A spreadsheet with the model is here.
Here’s the assumed yearly CO2 inputs into the model:
I also added in the effects of El Nino and La Nina, which I calculate cause a 0.47 ppm yearly change in CO2 per unit Multivariate ENSO Index (MEI) value (May to April average). This helps to capture some of the wiggles in the Mauna Loa CO2 observations.
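Putting the pieces together, the model is just a yearly budget: the concentration rises by that year’s emissions (converted from GtC to ppm), falls in proportion to the excess above the equilibrium level, and gets a small ENSO nudge. Here is a minimal sketch; the removal coefficient and the GtC-to-ppm conversion factor are assumed round numbers for illustration, not the values fitted in the spreadsheet, and the input series are placeholders:

```python
# Minimal sketch of the simple yearly CO2 budget model described above.
# K_REMOVAL and GTC_PER_PPM are illustrative assumptions, not fitted values.
GTC_PER_PPM = 2.13          # approx. GtC per ppm of atmospheric CO2
K_REMOVAL   = 0.02          # fraction of the excess removed per year (assumed)
C_EQ        = 295.0         # assumed natural equilibrium concentration (ppm)
ENSO_COEF   = 0.47          # ppm per unit MEI (from the post)

def run_model(emissions_gtc, mei, c0=C_EQ):
    """Step the CO2 concentration forward one year at a time."""
    conc = [c0]
    for e_gtc, enso in zip(emissions_gtc, mei):
        c = conc[-1]
        dc = e_gtc / GTC_PER_PPM - K_REMOVAL * (c - C_EQ) + ENSO_COEF * enso
        conc.append(c + dc)
    return conc

# Toy example: constant 10 GtC/yr emissions, neutral ENSO, 300 years
trajectory = run_model([10.0] * 300, [0.0] * 300)
print(f"Concentration after 300 yr: {trajectory[-1]:.0f} ppm")
```

With the actual Boden et al. emissions history and the MEI record as inputs, and the removal coefficient fitted to the Mauna Loa data, this is the calculation behind the figures that follow.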
The resulting fit to the Mauna Loa data required an assumed “natural equilibrium” CO2 concentration of 295 ppm, which is higher than the usually assumed 265 or 270 ppm pre-industrial value:
Click on the above plot and notice just how well even the little El Nino- and La Nina-induced changes are captured. I’ll address the role of volcanoes later.
The next figure shows the full model period since 1750, extended out to the year 2200:
Interestingly, note that despite continued CO2 emissions, the atmospheric concentration stabilizes just short of 500 ppm. This is the direct result of the fact that the Mauna Loa observations support the assumption that the rate at which CO2 is removed from the atmosphere is directly proportional to the amount of “excess” CO2 in the atmosphere above a “natural equilibrium” level. As the CO2 content increases, the rate of removal increases until it matches the rate of anthropogenic input.
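To see why the curve levels off, set the yearly change to zero. With constant emissions E (expressed in ppm-equivalent per year) and a removal rate equal to k times the excess over the 295 ppm equilibrium, the model’s own assumptions give

$$\frac{dC}{dt} = E - k\,(C - 295) = 0 \quad\Longrightarrow\quad C_{\text{stable}} = 295 + \frac{E}{k},$$

so constant emissions imply a finite stabilization level rather than unlimited growth; the exact value depends on the fitted removal coefficient k.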
We can also examine the removal rate of CO2 as a fraction of the anthropogenic source. We have long known that only about half of what is emitted “shows up” in the atmosphere (which isn’t what’s really going on), and decades ago the IPCC assumed that the biosphere and ocean couldn’t keep removing excess CO2 at such a high rate. But, in fact, the fractional rate of removal has actually been increasing, not decreasing. And the simple model captures this:
The up-and-down variations in Fig. 4 are due to El Nino and La Nina events (and volcanoes, discussed next).
Finally, a plot of the difference between the model and Mauna Loa observations reveals the effects of volcanoes. After a major eruption, the amount of CO2 in the atmosphere is depressed, either because of a decrease in natural surface emissions or an increase in surface uptake of atmospheric CO2:
What is amazing to me is that a model with such simple but physically reasonable assumptions can so accurately reproduce the Mauna Loa record of CO2 concentrations. I’ll admit I am no expert in the global carbon cycle, but the Mauna Loa data seem to support the assumption that for global, yearly averages, the surface removes a net amount of CO2 from the atmosphere that is directly proportional to how high the CO2 concentration goes above 295 ppm. The biological and physical oceanographic reasons for this might be complex, but the net result seems to follow a simple relationship.
A near-repeat of March’s “bomb cyclone” will bring up to 30 inches of snow this week to portions of Minnesota and South Dakota, with blizzard conditions and a threat of severe thunderstorms.
Roughly the same area that experienced flooding rains in March — and is still trying to dry out enough to plant corn and soybeans — will see another round of heavy rain and heavy snow. The forecast location of the intense cyclone as of Thursday morning, April 11, shows it taking a similar path to the record-setting March storm:
Forecast snowfall totals by midday Friday April 12 indicate the heaviest snowfall (up to 30 inches) over southern Minnesota, with 12-16 inches for Minneapolis:
The European (ECMWF) forecast model shows similarly heavy (~30 inches) snow totals in eastern South Dakota. Much of Wisconsin and northern Michigan are forecast to receive 6 to 12 inches.
The energy for such intense cyclones comes from the strong temperature contrast between two air masses. For example, by late Wednesday the temperatures in Nebraska will range from the 70s in the southeast to the 20s in the northwest, simultaneously feeding both blizzard conditions and a severe thunderstorm threat within the state.
Famed Arctic and aurora photographer Ole C. Salomonsen has reported strange lights over Tromso, Norway, within the last hour. Ole says the sight is the “weirdest stuff I’ve seen”.
I’ve taken the liberty of increasing the brightness of two of the images he posted:
I can’t imagine what this is, though I suspect it’s related to some sort of rocket-borne experiment. The spatial distribution of the lights is very strange. I assume Ole will update us with time lapse photography in the near future.
UPDATE: Frank Olsen, also in Norway, posted the following photo and said that these were indeed rocket-borne experiments containing special chemicals: