Archive for the ‘Blog Article’ Category

Climate Extremism in the Age of Disinformation

Monday, November 18th, 2019

Do the global warming wars ever change anyone’s mind?

I suppose there are a few people whose minds have been changed. As I recall, Judith Curry has said Climategate (now “celebrating” its 10-year anniversary) was her wake-up call that institutionalized climate science might not be all it claims to be. She is now a well-informed and unabashed skeptic of the modern tendency to blame every bad weather event on humans.

While I’m sure there are other examples, the unfortunate truth is that fewer and fewer people actually care about the truth.

The journalist who broke the Climategate story, James Delingpole, yesterday posted an article entitled The Bastards Have Got Away with It! He concludes with,

“Climategate was the event when, just for a moment, it seemed we’d got the climate scamsters bang to rights, that the world’s biggest scientific (and economic) con trick had been exposed and that the Climate Industrial Complex would be dismantled before it could do any more damage to our freedom and our prosperity. But the truth, it would seem, is no match for big money, dirty politics and madness-of-crowds groupthink. We’ve lost this one, I think, my friends. And the fact that all those involved in this scam will one day burn in Hell is something, I’m afraid, which gives me all too little consolation.”

You see, it does not really matter whether a few bad actors (even if they are leaders of the climate movement) conspired to hide data and methods, and strong-arm scientific journal editors into not publishing papers that might stand in the way of the United Nations Intergovernmental Panel on Climate Change (IPCC) mission to pin climate change on humans, inflate its seriousness, and lay the groundwork for worldwide governmental efforts to reduce humanity’s access to affordable energy.

The folks were simply trying to Save the Earth™, and we all know that the ends justify the means, right? So what if they cheated? Boys will be boys, you know. The science is sound, and besides, 97% of all scientists agree that… something.

The Roots of Polarization

One would think that the practice of science would be objective. I once believed this, too. As a fresh post-doc at the University of Wisconsin, when I discovered something new in satellite data, I was surprised to encounter NASA employees who tried to keep my work from being published because they feared it would interfere with a new satellite mission they were working toward. I eventually got it published as a cover article in the prestigious journal, Nature.

But the subject I was dealing with did not have the profound financial, political, policy, and even religious import that climate change would end up having. Furthermore, 35 years ago things were different than today. People were less tribal. There is an old saying that one should not discuss politics or religion in polite company, but it turns out that social media is far from polite company.

From a practical standpoint, what we do (or don’t do) about human-caused climate change supports either (1) a statist, top-down governmental control over human affairs that involves a more socialist political framework, or (2) an unconstrained individual-freedom framework where capitalism reigns supreme. So, one could easily be a believer (or non-believer) in the ‘climate emergency’ based upon their political leanings. While I know a few socialists who are skeptical of human-caused climate change being a serious issue, this is the exception rather than the rule. The same is true of capitalists who think that we must transition away from fossil fuels to wind and solar energy (unless they stand to make money off the transition through subsidies, in which case they are financially rather than ideologically driven).

Or, on a spiritual level, a human who desires to worship something must ultimately choose between the Creation and the Creator. There is no third option. I find that most Earth scientists are nature worshipers (showing various levels of fervor) and consider the Earth to be fragile. In contrast, those who believe the Earth was created for the purpose of serving humanity tend to view nature as being resilient and less sensitive to lasting damage. Both of these views have equally religious underpinnings, since “fragile” and “resilient” are emotive and qualitative, rather than scientific, terms.

So, I would argue it really does not matter that much to most alarmists or skeptics what the evidence shows. As long as 8 billion people on the planet have some, non-zero effect on climate — no matter how small or unmeasurable — the alarmist can still claim that ‘we shouldn’t be interfering with the climate system’. As a counterexample, the skeptical environmentalist Bjorn Lomborg actually believes the alarmist science from the IPCC, but claims that economics tells us it’s better to live in and adapt to a warmer world until we have more cost-effective substitutes for fossil fuels. For this stance regarding policy, he is labeled a global warming denier despite fully believing in human-caused climate change.

The Role of the Disinformation Superhighway

Baylor Professor Alan Jacobs has an interesting essay entitled On Lost Causes regarding the tendency for people to believe anything they see on the internet if it supports their biases.

He mentions a recent novel in which a high-tech billionaire, fed up with the disinformation he sees on the Web, concocts an elaborate online story that Moab, Utah has been obliterated by a nuclear explosion. He has CGI video, actors, witnesses, and an elaborate (but fake) social media presence to support the story.

The plan is to then show the world how easily they were duped, so that people would become less credulous when digesting information.

But instead, people cling to their beliefs. Even after many years, the ‘Moab truthers’ claim that anyone who disputes that Moab was destroyed is a troll or paid shill. People could actually travel to Moab to see for themselves, but virtually no one does.

In the climate wars, I see this behavior from both skeptics and alarmists. The alarmists point to increasing storms, heat waves, wildfires, etc. as evidence that humans are making weather worse. When they are shown evidence from a century or more of data that, no, things are not getting worse, these ‘storm truthers’ still bitterly cling to their beliefs while calling us skeptics “deniers”.

On the flip side, I routinely engage skeptics who claim that there is no such thing as the greenhouse effect, and that it is physically impossible for the cold atmosphere to make the surface warmer by increasing its CO2 content, anyway. No matter how many different ways I try to show how they are wrong, they never change their stance.

As a result, despite being a skeptic on the subject of humans having a serious effect on global climate, I’ve had to block more fellow skeptics from commenting on my blog than I have blocked alarmists. So, I get attacked from people on both sides of the issue.

I partly blame the public education system for the current state of affairs. Students are increasingly taught what to think, rather than how to think. Also to blame is the (probably unavoidable) funding of science by government, which President Eisenhower warned would cause science to become corrupted by a handful of powerful elites who did not have the advancement of scientific knowledge as the central goal.

When politicians have control over the purse strings, is it any wonder that politicians would preferentially fund the science which benefits certain policy outcomes, usually involving more government control over the lives of citizens? There have been innumerable funding programs to explore the human influence on climate (spoiler alert: every change we see is human-caused), yet almost no money goes to understanding natural sources of climate change.

Both Delingpole (describing the failure of Climategate to change attitudes) and Jacobs (describing the tendency of people to believe anything that supports their tribal beliefs) end their articles on a sour note. I have already quoted Delingpole’s conclusion, above. Here’s how Jacobs ends his essay:

“..if at this stage of the game, given what we know about how social media work and about the incentives of the people who make TV, you’re still getting your dopamine rush by recycling TV-news clips and shouting at people on the Internet, you’re about as close to beyond hope as a human being gets. There is no point talking to you, trying to reason with you, giving you facts and the sources of those facts. You have made yourself invulnerable to reason and evidence. You’re a Moab truther in the making. So, though I do not in theory write anyone off, in practice I do. It’s time to give you up as a lost cause and start figuring out how to prevent the next generation from becoming like you.”

Delingpole and Jacobs come to sobering — even depressing — conclusions. Unfortunately, like these two authors I do not have much reason to be hopeful that things will get better anytime soon.

Rare frost flowers bloom in Deep South’s deep freeze

Wednesday, November 13th, 2019

With temperatures in the Deep South dipping as low as the 20s in Florida this morning, the early deep freeze provided the best opportunity in several years for frostweed (Verbesina virginica) to bloom. Here in north Alabama the 17 deg. F temperatures so early in the season provided the necessary chill to cause water from the still-warm soil to wick up through the stems and then freeze into cotton-candy shapes. Most people who live here are not even aware of these cold weather creations because they form so rarely.

Here are several photos I took after dawn this morning as these frosty blooms continued to grow.

Comments by Ross McKitrick on the Continuation of Climate Model Failure

Tuesday, November 12th, 2019

The following is a re-posting of an article by Dr. Ross McKitrick, University of Guelph, published yesterday, November 11, 2019. I have a comment that follows his post.

Climate Models vs Observations: 2019 Update

Back around 2014 many people, me included, were commenting on the discrepancy between climate models and observations. In a report for the Fraser Institute I showed the following graph:

The HadCRUT4 series (black) was then dipping below the 95% lower bound of the model distribution. The IPCC itself in the 5th Assessment Report (2013) noted that out of 114 model runs, 111 had overstated observed warming since the late 1990s. That same year, Hans von Storch told Der Spiegel that:

“If things continue as they have been, in five years, at the latest, we will need to acknowledge that something is fundamentally wrong with our climate models. A 20-year pause in global warming does not occur in a single modeled scenario. But even today, we are finding it very difficult to reconcile actual temperature trends with our expectations.”

But before 2018 came along, the modelers were saved by the El.

El Nino, that is. The powerful 2015-16 El Nino caused temperatures to surge, apparently erasing the discrepancy. It was just in the nick of time. In 2018 the US National Assessment came out, using data sets ending in 2017, as did the Canadian counterpart, and they were able to declare that a lot of warming had occurred, more or less in line with model projections. Blog articles about the 30th anniversary of James Hansen’s predictions did the same.

Well it’s a couple of years later and the El Nino heat has mostly gone from the climate system. What does the model-observational comparison look like now?

This graph, like the earlier one above, compares the HadCRUT4 surface temperature average (black line) against the CMIP5 mean (red line). The pink band shows the 1-sigma (67%) distribution and the tan band extends out to the 2-sigma (95%) distribution. The outer yellow bands show the lower and upper 2.5th percentiles. The lines are positioned so all models and observations are centered on a 1961-1990 zero mean. The model runs follow the RCP4.5 scenario and extend out to 2050.
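(A note not in Ross’s original post: for readers who want to reproduce this style of comparison themselves, here is a minimal Python sketch of the steps described above. The arrays are placeholders; you would substitute the actual HadCRUT4 annual means and the CMIP5 RCP4.5 run ensemble, and the 1-sigma/2-sigma bands are approximated here with percentiles of the runs.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder inputs: years, HadCRUT4 annual means, and CMIP5 RCP4.5 runs
# (one row per model run), all in deg. C. Substitute the real data here.
years = np.arange(1900, 2051)
obs   = rng.standard_normal(len(years)) * 0.1          # placeholder observations
runs  = rng.standard_normal((100, len(years))) * 0.1   # placeholder model runs

# Center everything on a 1961-1990 zero mean, as in the figure.
base = (years >= 1961) & (years <= 1990)
obs  = obs - obs[base].mean()
runs = runs - runs[:, base].mean(axis=1, keepdims=True)

# Ensemble mean plus approximate 1-sigma (67%) and 2-sigma (95%) envelopes.
ens_mean = runs.mean(axis=0)
p2_5, p16, p84, p97_5 = np.percentile(runs, [2.5, 16, 84, 97.5], axis=0)
```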

Let’s zoom in on the post-1950 interval.

The HadCRUT4 series ends in 2018, which is the last complete year. Temperatures in 2018 (+0.60C) are back down to about where they were in 2014 (+0.58C). We’ll know in February or March where 2019 ends up.

The worry back in 2014 was that the Hadley (black) line had dropped below the lower bound of the 95 percent envelope of the CMIP5 model runs. The El Nino pushed it almost all the way up to the mean, but only temporarily. It’s now back to the edge of the yellow band, meaning it’s skirting the bottom of the 95 percent confidence interval.

The big issue is not whether warming has “paused” or not, it’s how it compares to model projections. RCP4.5 is considered a medium, plausible projection. But it’s already pulling away from the observations.

I have indicated 2030 on the graph. That’s the year we all die, or something. But I think it’s more likely that will be the year by which the HadCRUT4 line drops out below the bottom of the CMIP5 RCP4.5 ensemble once and for all. The El Nino disguised the model-observational discrepancy for a few years, but it’s coming back.

There are other versions of this graph that don’t show such a discrepancy. Zeke Hausfather, for example, prefers to use a different set of CMIP5 outputs in which water surface temperatures rather than air temperatures from the (modeled) oceans are used to correspond to the sampling method
in HadCRUT4. The result is that the model temperatures tilt down a bit towards observations. That’s fine, but when governments draw scary charts of future warming, those aren’t the model runs they show us; they show charts like the one I’ve drawn, so I’m more interested in seeing how that one compares to observations.

I referred above to the Der Spiegel interview with Hans von Storch back in 2013. I very much appreciate another of his comments:

“Unfortunately, some scientists behave like preachers, delivering sermons to people. What this approach ignores is the fact that there are many threats in our world that must be weighed against one another. If I’m driving my car and find myself speeding toward an obstacle, I can’t simply yank the wheel to the side without first checking to see if I’ll instead be driving straight into a crowd of people. Climate researchers cannot and should not take this process of weighing different factors out of the hands of politics and society.”

That is very well put.

Roy W. Spencer comment: With the new CMIP6 models coming out suggesting even more warming than the CMIP5 models did, I fear we will see continuing “adjustments” of the instrumental temperature record to produce even more warming. This is the only way that the models can retain credibility in the face of real-world evidence that warming has been modest, at best.

On the 1998 Apparent Step-Up in UAH Land-minus-Ocean Lower Tropospheric Temperatures

Thursday, November 7th, 2019

A follower of our UAH global lower tropospheric temperature (LT) dataset named “JJ” emailed me asking about what might be considered a spurious feature in the dataset.

The feature is most easily seen if you plot the monthly global time series of Land-minus-Ocean (hereafter “L-O”) temperature anomalies. The result seems to show a step-up of about 0.16 deg. C in May of 1998.

Fig. 1. Difference between the UAH lower tropospheric (LT) land and ocean temperature anomalies between January 1979 and August 2019, showing an apparent step-up in the difference occurring in May 1998. The dashed lines show the average values before and after that date, while the curve is a 5th order polynomial fit to the data.

The year 1998 is key for our dataset because that is when the first (NOAA-15) Advanced Microwave Sounding Unit (AMSU) came online, which initiated the transition from the older Microwave Sounding Units (MSU, the last of which was on the NOAA-14 satellite).

AMSU did not have exactly the same channel frequency selection as the MSU, so the nominal layers of the atmosphere sensed were slightly different. Most importantly, the AMSU channel 5 has a weighting function that senses somewhat more of the surface and lower troposphere than MSU channel 2. If one did not account for this fact, the AMSU’s greater surface sensitivity would produce higher temperatures over land and lower temperatures over the ocean (after a global-average intercalibration between MSU and AMSU was performed). [The reason why is that these channel frequencies are not sensitive to changes in sea surface temperature, because the microwave emissivity decreases as SST increases. The effect is small, but measurable.]

But since these are through-nadir scanners, each view angle relative to the local vertical measures a slightly different layer anyway, which allows us to match the AMSU and MSU measurements. When we developed Version 6 of the dataset, we found that when the 50-60 GHz oxygen absorption theory was used to find the view angle from AMSU channel 5 that best matches MSU channel 2, the resulting temperature anomalies over land were still too warm relative to the oceans. This meant that we had to perform an empirical (data-dependent) rather than theoretical matching of the AMSU and MSU view angles.

The way we gauged the match between MSU and AMSU was by how the temperature anomaly patterns transition across coastlines: we required that there should be little discernible change in that pattern. Before our optimized matching, the land anomalies were noticeably warmer than the ocean anomalies as features crossed coastlines. But after optimization in our Version 6 dataset, here’s the LT anomaly map for last month (October 2019), which shows no evidence for land-vs-ocean artifacts.

Fig. 2. October 2019 LT temperature anomalies relative to the 1981-2010 average annual cycle. Note the anomalies have a smooth transition between land and ocean, as would be expected for deep-layer tropospheric temperatures (but not necessarily surface temperatures).

Nevertheless, adjustments like these are never perfect. So, the question remains: Is there a spurious change in the L-O temperature difference occurring in 1998?

Evidence that the L-O change in 1998 is real

There are a few lines of evidence that suggest the May 1998 step-up in L-O temperatures is real.

First, if the effect were due to the introduction of AMSU in 1998, it would have occurred in August, not in May (3 months earlier). Also, the effect should have been gradual, since for almost 4 years after August 1998 the LT dataset is half MSU (NOAA-14) and half AMSU (NOAA-15), after which it became 100% AMSU.

But a more important piece of evidence is the effect of El Nino and La Nina on L-O. During El Nino, the ocean airmasses warm more than the land airmasses (especially in the tropics), so that L-O tends to be more negative. Up until the 1997-98 super El Nino a period of greater El Nino activity existed, after which a shift to more La Nina activity occurred. (This is probably also what caused the extended global warming ‘hiatus’ after that El Nino event.)

I statistically regressed the L-O values in Fig. 1 against 3-month running averages of the Multivariate ENSO Index (MEI), and removed that estimate of the ENSO influence from the data. The resulting ENSO-adjusted time series is shown in Fig. 3.

Fig. 3. As in Fig. 1, but with the average influence of El Nino and La Nina (ENSO) subtracted out. Note the evidence for a “break” in 1998 is much weaker.

Note the step-up in mid-1998 is much less evident, and the 5th order polynomial fit to the data is smoother with a more gradual transition in L-O over the 41-year satellite record.
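For those curious about the mechanics, the ENSO adjustment just described amounts to a simple linear regression and subtraction. A rough sketch follows, with placeholder arrays standing in for the actual L-O and MEI series (this is illustrative, not the code used for Fig. 3):

```python
import numpy as np

rng = np.random.default_rng(0)
LO  = rng.standard_normal(489) * 0.2   # placeholder for monthly land-minus-ocean anomalies
MEI = rng.standard_normal(489)         # placeholder for the Multivariate ENSO Index

# 3-month centered running average of the MEI.
mei3 = np.convolve(MEI, np.ones(3) / 3.0, mode="same")

# Regress L-O on the smoothed MEI, then remove that estimate of the ENSO influence.
slope, intercept = np.polyfit(mei3, LO, 1)
LO_enso_adjusted = LO - slope * mei3
```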

But that’s not the only thing going on during this period that affects the L-O values. There were two major volcanic eruptions (El Chichon in early 1982, and especially Pinatubo in mid-1991) that caused more cooling over land than over the ocean, producing temporarily enhanced negative values in L-O. Since these events are not as easily tied to an index (the way ENSO is to the MEI), I simply removed the data from 1982-83 and 1992-93 in Fig. 3 and replotted the results in Fig. 4.

Fig. 4. As in Fig. 3, but with the data influenced by major volcanoes El Chichon and Pinatubo removed.

Now we see that the 5th order polynomial fit to the data comes quite close to the linear trend (dashed gray line), which suggests that the step-up in 1998 in L-O was real, related mostly to a change in ENSO activity before versus after the 1997-98 super El Nino, with the major volcanic eruptions in 1982 and 1991 also contributing to the seemingly spurious feature.

The remaining upward trend in L-O is simply the land airmasses warming faster than the ocean, as would be expected for any warming trend, whether natural or human-caused.

There remains what might be a spurious feature during 1980-81 in Fig. 4, which would most likely be related to our ad hoc correction for MSU channel 3 drift during that time. This, however, should have little influence on the land and ocean trends as evidenced by the trend line fit (dashed gray line) in Fig. 4.

California Wildfires, Climate Change, and the Hot-Dry-Windy Fire Weather Index

Friday, November 1st, 2019

Summer and early Fall are fire season in California. It has always been this way. Most summers experience virtually no precipitation over much of California, which means that the vegetation that grows during the cool, wet Winter becomes fuel for wildfires in Summer.

When you add the increasing population, risky forest management practices, and lack of maintenance of power lines, it should be little wonder that wildfire activity there has increased.

Few news reports of wildfires can avoid mentioning some nebulous connection of wildfires to human-caused climate change. This is a little odd from a meteorological perspective, however.

First of all, most of the historically significant wildfire events occur when COOL and DRY Canadian high pressure areas move south over the Great Basin region, causing strong downslope easterly winds (Santa Ana winds, Diablo winds). Global warming, in contrast, is supposed to result in WARMER and MOISTER air.

Secondly, the argument I’ve seen that excessive vegetation growth from a previous winter with abundant precipitation produces more fuel runs counter to the observation that fewer wildfires typically follow an unusually wet winter in California. They can’t have it both ways.

You might ask, why do SoCal temperatures sometimes rise so high before wildfire events if the source of the air is “cool” high pressure? It’s because the cooler high-altitude air over the Great Basin warms by compression as the air descends down the mountain slopes. Almost without exception (the exception being a super-adiabatic lapse rate), air at a higher altitude that is forced to descend to a lower altitude will have a warmer temperature (and lower humidity) than the air it is displacing at low altitude. (While the warmth and dryness are widespread during these events, the high winds tend to be more localized to canyons and downslope areas.)

The dryness of this sinking air can be seen in this plot of the dewpoint temperature at LAX airport (Los Angeles) as dry air moved in from the east on December 4 with strong high pressure positioned over Nevada, and seven major wildfires developed and spread from the hot, dry, and locally windy conditions.

Hourly dewpoint temperatures at LAX airport from November 1 through December 31, 2017. Rapid drying is seen late on December 4, which is when the first of seven major wildfires (the Thomas fire) ignited.

But have such fire-enhancing weather events increased in, say, the last 50 years or more? And even if they have, was the cause due to greenhouse gas emissions from fossil fuels? While blaming some portion of recent global average warming on increasing CO2 is somewhat easier, blaming a change in regional or local weather patterns on it is much more difficult.

In the process of looking around for an answer to this question, I found some interesting recent work that would allow someone to analyze the appropriate meteorological station data, if it hasn’t already been done.

The Hot-Dry-Windy (HDW) Fire Weather Index

In 2018, a paper was published by a university research meteorologist and U.S. Forest Service (USFS) employees from three different USFS offices that describes a simple meteorological index related to wildfire risk. They call it the Hot-Dry-Windy (HDW) index, which is simply the product of (1) the surface wind speed and (2) the water vapor pressure deficit. The vapor pressure deficit uses the same information as relative humidity (temperature and dewpoint temperature), but it is a difference rather than a ratio, which better measures the potential of air to rapidly remove moisture from dead vegetation. For example, a 10% relative humidity at 40 deg. F will have low drying potential, while 10% RH at 100 deg. F will have very high drying potential.
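As a rough illustration of the idea (not the authors’ code), an HDW-style number can be sketched from ordinary surface observations. The saturation vapor pressure here uses a common Magnus-type approximation; the constants, units, and the atmospheric layer searched by the published index may differ from this surface-only simplification:

```python
import math

def sat_vapor_pressure_hpa(t_c):
    # Magnus/Bolton-type approximation: temperature in deg. C, result in hPa.
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def hdw_like_index(temp_c, dewpoint_c, wind_ms):
    # Hot-Dry-Windy-style index: wind speed times vapor pressure deficit.
    vpd_hpa = sat_vapor_pressure_hpa(temp_c) - sat_vapor_pressure_hpa(dewpoint_c)
    return wind_ms * max(vpd_hpa, 0.0)

# Hot, dry, and windy conditions score far higher than a cool, humid, calm day:
print(hdw_like_index(38.0, -5.0, 15.0))
print(hdw_like_index(10.0, 8.0, 2.0))
```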

What is especially useful is that they used 30 years of weather forecast model (GFS) data to build a website that gives daily-updated forecasts of the HDW index across the United States. For example, here’s today’s forecast.

Importantly, the HDW index does not measure the actual fire danger, which must include how dry the vegetation currently is. It only shows whether the current weather will be conducive to the rapid spread of fire if a fire is started.

If you go to that website and click on a specific location, you get a time series plot of the HDW index values from 10 days ago up through the forecast for the coming days.

Unfortunately, the website does not provide any time series of the data over the last 30 years. But I can see the technique being applied to weather station data that goes back 50 years or more, for instance the formatted weather station data available here (which is where I got the Los Angeles airport data plotted above).

Until someone does this (if they haven’t already), I think it is a mistake to blame increased wildfire activity on “climate change”, when we don’t even know if there has been a change in the meteorological events most associated with major California wildfires: the intrusion of cool Canadian high pressure areas into the U.S. Southwest.

UAH Global Temperature Update for October 2019: +0.46 deg. C

Friday, November 1st, 2019

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for October, 2019 was +0.46 deg. C, down from the September value of +0.61 deg. C.

The linear warming trend since January, 1979 remains at +0.13 C/decade (+0.11 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).
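(For anyone wanting to check trend numbers like these, they are ordinary least-squares fits to the monthly anomalies, converted to deg. C per decade. A minimal sketch, with a placeholder series standing in for the actual UAH anomalies:)

```python
import numpy as np

rng = np.random.default_rng(0)
anoms = rng.standard_normal(490) * 0.2     # placeholder for monthly LT anomalies, Jan 1979 onward
t_years = np.arange(len(anoms)) / 12.0     # time in years since January 1979

slope_per_year = np.polyfit(t_years, anoms, 1)[0]
print(f"linear trend: {10.0 * slope_per_year:+.2f} C/decade")
```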

Various regional LT departures from the 30-year (1981-2010) average for the last 22 months are:

YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST
2018 01 +0.29 +0.52 +0.06 -0.10 +0.70 +1.39 +0.52
2018 02 +0.25 +0.28 +0.21 +0.05 +0.99 +1.22 +0.35
2018 03 +0.28 +0.43 +0.12 +0.08 -0.19 -0.32 +0.76
2018 04 +0.21 +0.32 +0.09 -0.14 +0.06 +1.02 +0.84
2018 05 +0.16 +0.38 -0.05 +0.01 +1.90 +0.14 -0.24
2018 06 +0.20 +0.33 +0.06 +0.12 +1.11 +0.77 -0.41
2018 07 +0.30 +0.38 +0.22 +0.28 +0.41 +0.24 +1.49
2018 08 +0.18 +0.21 +0.16 +0.11 +0.02 +0.11 +0.37
2018 09 +0.13 +0.14 +0.13 +0.22 +0.89 +0.23 +0.27
2018 10 +0.20 +0.27 +0.12 +0.30 +0.20 +1.08 +0.43
2018 11 +0.26 +0.24 +0.28 +0.45 -1.16 +0.68 +0.55
2018 12 +0.25 +0.35 +0.15 +0.30 +0.25 +0.69 +1.20
2019 01 +0.38 +0.35 +0.41 +0.36 +0.53 -0.15 +1.15
2019 02 +0.37 +0.47 +0.28 +0.43 -0.02 +1.04 +0.05
2019 03 +0.34 +0.44 +0.25 +0.41 -0.55 +0.97 +0.58
2019 04 +0.44 +0.38 +0.51 +0.54 +0.50 +0.92 +0.91
2019 05 +0.32 +0.30 +0.35 +0.39 -0.61 +0.98 +0.38
2019 06 +0.47 +0.42 +0.52 +0.64 -0.64 +0.91 +0.35
2019 07 +0.38 +0.33 +0.44 +0.45 +0.11 +0.33 +0.87
2019 08 +0.38 +0.38 +0.39 +0.42 +0.17 +0.44 +0.24
2019 09 +0.61 +0.64 +0.59 +0.60 +1.14 +0.75 +0.57
2019 10 +0.46 +0.64 +0.27 +0.30 -0.03 +0.99 +0.50

The UAH LT global anomaly image for October, 2019 should be available in the next few days here.

The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt

Does the Climate System Have a Preferred Average State? Chaos and the Forcing-Feedback Paradigm

Friday, October 25th, 2019

NOTE: I have written on this subject before, but it is important enough that we need to keep thinking about it. It is also related to the forcing-feedback paradigm of climate change, which I usually defend — but which I will here take a skeptical view toward in the context of long-term climate change.

1575 Winter Landscape with Snowfall near Antwerp by Lucas van Valckenborch. Städel Museum/Wikimedia Commons

The UN IPCC scientists who write the reports which guide international energy policy on fossil fuel use operate under the assumption that the climate system has a preferred, natural and constant average state which is only deviated from through the meddling of humans. They construct their climate models so that the models do not produce any warming or cooling unless they are forced to through increasing anthropogenic greenhouse gases, aerosols, or volcanic eruptions.

This imposed behavior of their “control runs” is admittedly necessary because various physical processes in the models are not known well enough from observations and first principles, and so the models must be tinkered with until they produce what might be considered to be the “null hypothesis” behavior, which in their worldview means no long-term warming or cooling.

What I’d like to discuss here is NOT whether there are other ‘external’ forcing agents of climate change, such as the sun. That is a valuable discussion, but not what I’m going to address. I’d like to address the question of whether there really is an average state that the climate system is constantly re-adjusting itself toward, even if it is constantly nudged in different directions by the sun.

If there is such a preferred average state, then the forcing-feedback paradigm of climate change is valid. In that system of thought, any departure of the global average temperature from the Nature-preferred state is resisted by radiative “feedback”, that is, changes in the radiative energy balance of the Earth in response to the too-warm or too-cool conditions. Those radiative changes would constantly be pushing the system back to its preferred temperature state.

But what if there isn’t only one preferred state?

I am of the opinion that the F-F paradigm does indeed apply for at least year-to-year fluctuations, because phase space diagrams of the co-variations between temperature and radiative flux look just like what we would expect from a F-F perspective. I touched on this in yesterday’s post.

Where the F-F paradigm might be inapplicable is in the context of long-term climate changes which are the result of internal fluctuations.

Chaos in the Climate System

Everyone agrees that the ocean-atmosphere fluid flows represent a non-linear dynamical system. Such systems, although deterministic (that is, describable with known physical equations), are difficult to predict far into the future because of their sensitive dependence on the current state. This is called “sensitive dependence on initial conditions”, and it is why weather cannot be forecast more than a week or so in advance.

The reason why most climate researchers do not think this is important for climate forecasting is that they are dealing with how the future climate might differ from today’s climate in a time-averaged sense... due not to changes in initial conditions, but in the “boundary conditions”, that is, increasing CO2 in the atmosphere. Humans are slightly changing the rules by which the climate system operates — that is, the estimated ~1-2% change in the rate of cooling of the climate system to outer space as a result of increasing CO2.

There are still chaotic variations in the climate system, which is why any given climate model forced with the same amount of increasing CO2 but initialized with different initial conditions in 1760 will produce a different globally-averaged temperature in, say, 2050 or 2060.

But what if the climate system undergoes its own, substantial chaotic changes on long time scales, say 100 to 1,000 years? The IPCC assumes this does not happen. But the ocean has inherently long time scales — decades to millennia. An unusually large amount of cold bottom water formed at the surface in the Arctic in one century might take hundreds or even thousands of years before it re-emerges at the surface, say in the tropics. This time lag can introduce a wide range of complex behaviors in the climate system, and is capable of producing climate change all by itself.

Even the sun, which we view as a constantly burning ball of gas, produces an 11-year cycle in sunspot activity, and even that cycle changes in strength over hundreds of years. It would seem that every process in nature organizes itself on preferred time scales, with some amount of cyclic behavior.

This chaotic climate change behavior would impact the validity of the forcing-feedback paradigm as well as our ability to determine future climate states and the sensitivity of the climate system to increasing CO2. If the climate system has different, but stable and energy-balanced, states, it could mean that climate change is too complex to predict with any useful level of accuracy.

El Nino / La Nina as an Example of a Chaotic Cycle

Most climate researchers view the warm El Nino and cool La Nina episodes conceptually as departures from an average climate state. But I believe that they are more accurately viewed as a bifurcation in the chaotic climate system. In other words, during Northern Hemisphere winter, there are two different climate states (El Nino or La Nina) that the climate system tends toward. Each has its own relatively stable configuration of Pacific trade winds, sea surface temperature patterns, cloudiness, and global-average temperature.

So, in a sense, El Nino and La Nina are different climate states which Earth has difficulty choosing between each year. One is a globally warm state, the other globally cool. This chaotic “bifurcation” behavior has been described in the context of even extremely simple systems of nonlinear equations, vastly simpler than the equations describing the time-evolving real climate system.

The Medieval Warm Period and Little Ice Age

Most historical records and temperature proxy evidence point to the Medieval Warm Period and Little Ice Age as real, historical events. I know that most people try to explain these events as the response to some sort of external forcing agent, say indirect solar effects from long-term changes in sunspot activity. This is a natural human tendency… we see a change, and we assume there must be a cause external to the change.

But a nonlinear dynamical system needs no external forcing to experience change. I’m not saying that the MWP and LIA were not externally forced, only that their explanation does not necessarily require external forcing.

There could be internal modes of chaotic fluctuations in the ocean circulation which produce their own stable climate states which differ in global-average temperature by, say, 1 deg. C. One possibility is that they would have slightly different sea surface temperature patterns or oceanic wind speeds, which can cause slightly different average cloud amounts, thus altering the planetary albedo and so the amount of sunlight the climate system has to work with. Or, the precipitation systems produced by the different climate states could have slightly different precipitation efficiencies, which then would affect the average amount of the atmosphere’s main greenhouse gas, water vapor.

Chaotic Climate Change and the Forcing-Feedback Paradigm

If the climate system has multiple, stable climate states, each with its own set of slightly different energy flows that still produce global energy balance and relatively constant temperatures (whether warmer or cooler), then the “forcing-feedback framework” (FFF, as my Australian friend Christopher Game likes to call it) would not apply to these climate variations, because there is no normal, average climate state to which ‘feedback’ is constantly nudging the system back toward.

Part of the reason for this post is the ongoing discussion I have had over the years with Christopher on this issue, and I want him to know that I am not totally deaf to his concerns about the FFF. As I described yesterday, we do see forcing-feedback type behavior in short-term climate fluctuations, but I agree that the FFF might not be applicable to longer-term fluctuations. In this sense, I believe Christopher Game is correct.

The UN IPCC Will Not Address This Issue

It is clear that the UN IPCC, by its very charter, is primarily focused on human-caused climate change. As a result of political influence (related to the desire for governmental regulation of the private sector) it will never seriously address the possibility that long-term climate change might be part of nature. Only those scientists who are supportive of this anthropocentric climate view are allowed to play in the IPCC sandbox.

Substantial chaos in the climate system injects a large component of uncertainty into all predictions of future climate change, including our ability to determine climate sensitivity. It reduces the practical value of climate modelling efforts, which cost billions of dollars and support the careers of thousands of researchers. While I am generally supportive of climate modeling, I am appropriately skeptical of the ability of current climate models to provide enough confidence to make high-cost energy policy decisions.

Comments on the Gregory et al. Climate Sensitivity Paper and Nic Lewis’s Criticism

Thursday, October 24th, 2019

NOTE: Comments for this post have all been flagged as pending for some reason. I’m testing the spam blocker to see what the problem might be. Until it is fixed, I might have to manually approve comments as I have time during the day.

A recent paper by Jonathan Gregory and co-authors in Climate Dynamics entitled How accurately can the climate sensitivity to CO2 be estimated from historical climate change? addresses in considerable detail the issues which limit our ability to determine that global warming holy grail, “equilibrium climate sensitivity” (ECS, the eventual global average surface warming response to a doubling of atmospheric CO2). Despite decades of research, climate models still exhibit climate sensitivities that range over a factor of three (about 1.5 to 4.5 deg. C for 2XCO2), and a minority of us believe the true sensitivity could be less than 1.5 deg. C.

Obviously, if one could confidently determine the climate sensitivity from observations, then the climate modelers could focus their attention on adjusting their models to reproduce that known sensitivity. But so far, there is no accepted way to determine climate sensitivity from observations. So, instead the climate modeling groups around the world try different approaches to modeling the various physical processes affecting climate change and get a rather wide range of answers for how much warming occurs in response to increasing atmospheric CO2.

One of the problems is that increasing CO2 as a climate forcing is unique in the modern instrumental record. Even if we can measure radiative feedbacks in specific situations (e.g., month to month changes in tropical convection) there is no guarantee that these are the same feedbacks that determine long-term sensitivity to increasing CO2. [If you are one of those who believe the feedback paradigm should not be applied to climate change — you know who you are — you might want to stop reading now to avoid being triggered.]

The Lewis Criticism

The new paper uses climate models as a surrogate for the real climate system to demonstrate the difficulty in measuring the “net feedback parameter” which in turn determines climate sensitivity. While I believe this is a worthwhile exercise, Nic Lewis has objected (originally here, then reposted here and here) to one of the paper’s claims regarding errors in estimating feedbacks through statistical regression techniques. It is a rather obscure point buried in the very long and detailed Gregory et al. paper, but it is nonetheless important to the validity of the Lewis and Curry (2018) published estimates of climate sensitivity based upon energy budget considerations. Theirs is not really a statistical technique (which the new paper criticizes), but a physically-based technique applied to the IPCC’s own estimates of the century time scale changes in global radiative forcing, ocean heat storage, and surface temperature change.

From what I can tell, Nic’s objection is valid. Even though it applies to only a tiny portion of the paper, it has significant consequences because the new paper appears to be an effort to de-legitimize any observational estimates of climate sensitivity. I am not questioning the difficulty and uncertainty in making such estimates with current techniques, and I agree with much of what the paper says on the issue (as far as it goes, see the Supplement section, below).

But the authors appear to have conflated those difficulties with the very specific and more physics-based (not statistics-based) climate sensitivity estimates of the Lewis and Curry (2018) paper. Based upon the history of the UN IPCC process of writing its reports, the Gregory et al. paper could now be invoked to claim that the Lewis & Curry estimates are untrustworthy. The fact is that L&C assumes the same radiative forcing as the IPCC does and basically says, the century time scale warming that has occurred (even if it is assumed to be 100% CO2-caused) does not support high climate sensitivity. Rather than getting climate sensitivity from a model that produces too much warming, L&C instead attempt to answer the question, “What is the climate sensitivity based upon our best estimates of global average temperature change, radiative forcing, and ocean heat storage over the last century?”
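In simplified form, that kind of energy-budget estimate looks something like the following sketch. The numbers here are made-up placeholders for illustration only, not the actual Lewis & Curry inputs or results:

```python
# Energy-budget form of the sensitivity estimate:
#   ECS ~ F_2xCO2 * dT / (dF - dQ)
# where dT is the observed warming between two base periods, dF the change in
# radiative forcing, and dQ the change in the rate of system (ocean) heat uptake.
F_2xCO2 = 3.7                # W/m2, assumed radiative forcing for doubled CO2
dT, dF, dQ = 0.8, 2.3, 0.4   # deg. C, W/m2, W/m2 -- illustrative placeholders
print(f"ECS ~ {F_2xCO2 * dT / (dF - dQ):.1f} deg. C")
```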

Vindication for the Spencer and Braswell Studies

I feel a certain amount of vindication upon reading the Gregory et al. paper. It’s been close to 10 years now since Danny Braswell and I published a series of papers pointing out that time-varying radiative forcing generated naturally in the climate system obscures the diagnosis of radiative feedback. Probably the best summary of our points was provided in our paper On the diagnosis of radiative feedback in the presence of unknown radiative forcing (2010). Choi and Lindzen later followed up with papers that further explored the problem.

The bottom line of our work is that standard ordinary least-squares (OLS) regression techniques applied to observed co-variations between top-of-atmosphere radiative flux (from ERBE or CERES satellites) and temperature will produce a low bias in the feedback parameter, and so a high bias in climate sensitivity. [I provide a simple demonstration at the end of this post]. The reason why is that time-varying internal radiative forcing (say, from changing cloud patterns reflecting more or less sunlight to outer space) de-correlates the data (example below). We were objecting to the use of such measurements to justify high climate sensitivity estimates from observations.

Our papers were, of course, widely criticized, with even the editor of Remote Sensing being forced to resign for allowing one of the papers to be published (even though the paper was never retracted). Andrew Dessler objected to our conclusions, claiming that all cloud variations must ultimately be due to feedback from some surface temperature change somewhere at some time (an odd assertion from someone who presumably knows some meteorology and cloud physics).

So, even though the new Gregory et al. paper does not explicitly list our papers as references, it does heavily reference Proistosescu et al. (2018) which directly addresses the issues we raised. These newer papers show that our points were valid, and they come to the same conclusions we did — that high climate sensitivity estimates from the observed co-variations in temperature and radiative flux were not trustworthy.

The Importance of the New Study

The new Gregory et al. paper is extensive and makes many good conceptual points which I agree with. Jonathan Gregory has a long history of pioneering work in feedback diagnosis, and his published research cannot be ignored. The paper will no doubt figure prominently in future IPCC report writing.

But I am still trying to understand the significance of CMIP5 model results to our efforts to measure climate sensitivity from observations, especially the model results in their Fig. 5. It turns out what they are doing with the model data differs substantially from what we try to do with radiative budget observations from our limited (~20 year) satellite record.

First of all, they don’t actually regress top of atmosphere total radiative fluxes from the models against temperature; they first subtract out their best estimate of the radiative forcing applied to those models. This helps isolate the radiative feedback signal responding to the radiative forcing imposed upon the models. Furthermore, they beat down the noise of natural internal radiative and non-radiative variability by using only annual averages. Even El Nino and La Nina events in the models will have trouble surviving annual averaging. Almost all that will remain after these manipulations is the radiative feedback to just the CO2 forcing-induced warming. This also explains why they do not de-trend the 30-year periods they analyze — that would remove most of the temperature change and thus radiative feedback response to temperature change. They also combine model runs together before feedback diagnosis in some of their calculations, further reducing “noise” from internal fluctuations in the climate system.

In other words, their methodology would seem to have little to do with determination of climate sensitivity from natural variations in the climate system, because they have largely removed the natural variations from the climate model runs. The question they seem to be addressing is a very special case: How well can the climate sensitivity in models be diagnosed from 30-year periods of model data when the radiative forcing causing the temperature change is already known and can be subtracted from the data? (Maybe this is why they term theirs a “perfect model” approach.) If I am correct, then they really haven’t fully addressed the more general question posed by their paper’s title: How accurately can the climate sensitivity to CO2 be estimated from historical climate change? The “historical climate change” in the title has nothing to do with natural climate variations.

Unfortunately — and this is me reading between the lines — these newer papers appear to be building a narrative that observations of the climate system cannot be used to determine the sensitivity of the climate system; instead, climate model experiments should be used. Of course, since climate models must ultimately agree with observations, any model estimate of climate sensitivity must still be observations-based. We at UAH continue to work on other observational techniques, not addressed in the new papers, to tease out the signature of feedback from the observations in a simpler and more straightforward manner, from natural year-to-year variations in the climate system. While there is no guarantee of success, the importance of the climate sensitivity issue requires this.

And, again, Nic Lewis is right to object to their implicitly lumping the Lewis & Curry observational determination of climate sensitivity from energy budget calculations in with statistical diagnoses of climate sensitivity, the latter of which I agree cannot yet be reliably used to diagnose ECS.

Supplement: A Simple Demonstration of the Feedback Diagnosis Problem

Whether you like the term “feedback” or not (many engineering types object to the terminology), feedback in the climate sense quantifies the level to which the climate system adjusts radiatively to resist any imposed temperature change. This radiative resistance (dominated by the “Planck effect”, the T^4 dependence of outgoing IR radiation on temperature) is what stabilizes every planetary system against runaway temperature change (yes, even on Venus).

The strength of that resistance (e.g., in Watts per square meter of extra radiative loss per deg. C of surface warming) is the “net feedback parameter”, which I will call λ. If that number is large (high radiative resistance to an imposed temperature change), climate sensitivity (proportional to the reciprocal of the net feedback parameter) is low. If the number is small (weak radiative resistance to an imposed temperature change) then climate sensitivity is high.
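To make that reciprocal relationship concrete, here is a tiny example using the roughly 3.7 W/m2 of radiative forcing for doubled CO2 that is assumed later in this post:

```python
F_2xCO2 = 3.7                      # W/m2 of radiative forcing for 2XCO2 (assumed)
for lam in (1.0, 2.0, 4.0):        # net feedback parameter, W m-2 K-1
    print(f"lambda = {lam:.1f} W m-2 K-1  ->  ECS = {F_2xCO2 / lam:.2f} deg. C")
```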

[If you object to calling it a “feedback”, fine. Call it something else. The physics doesn’t care what you call it.]

I first saw the evidence of the different signatures of radiative forcing and radiative feedback when looking at the global temperature response to the 1991 eruption of Mt. Pinatubo. When the monthly, globally averaged ERBE radiative flux data were plotted against temperature changes, and the data dots connected in chronological order, it traced out a spiral pattern. This is the expected result of a radiative forcing (in this case, reduced sunlight) causing a change in temperature (cooling) that lags the forcing due to the heat capacity of the oceans. Importantly, this involves a direction of causation opposite to that of feedback (a temperature change causing a radiative change).

The newer CERES instruments provide the longest and most accurate record of changes in top-of-atmosphere radiative balance. Here’s the latest plot for 19 years of monthly Net (reflected shortwave SW plus emitted longwave LW) radiative fluxes versus our UAH lower tropospheric temperatures.

Fig. 1. Observed monthly global average anomalies in UAH lower tropospheric temperatures (LT) versus anomalies in CERES Net radiative flux at the top-of-atmosphere, March 2000 through April 2019.

Note I have connected the data dots in chronological order. We see that “on average” (from the regression line) there appears to be about 2 W/m2 of energy lost per degree of warming of the lower troposphere. I say “appears” because some of the radiative variability in that plot is not due to feedback, and it decorrelates the data, leading to uncertainty in the slope of the regression line, which we would like to be an estimate of the net feedback parameter.

This contaminating effect of internal radiative forcing can be demonstrated with a simple zero-dimensional time-dependent forcing-feedback model of temperature change of a swamp ocean:

Cp[dT(t)/dt] = F(t) – λ [dT(t)]

where the left side is the change in heat content of the swamp ocean with time, and on the right side F is all of the radiative and non-radiative forcings of temperature change (in W/m2) and λ is the net feedback parameter, which multiplies the temperature change (dT) from an assumed energy equilibrium state.

While this is probably the simplest time-dependent model you can create of the climate system, it shows behavior that we see in the climate system. For example, if I make time series of low-pass filtered random numbers about zero to represent the known time scales of intraseasonal oscillations and El Nino/La Nina, and add in another time series of low-pass filtered “internal radiative forcing”, I can roughly mimic the behavior seen in Fig. 1.

Fig. 2. As in Fig. 1, but produced by a simple time-dependent forcing feedback model with a “swamp” ocean of assumed 15 m depth, and low-pass filtered random forcings which are approximately 60% radiative (e.g. random cloud variations) and 40% non-radiative (e.g. intraseasonal oscillations and ENSO). The model time step is one day, and the model output is averaged to 30 days, and run for the same period of time (230 months) as in Fig. 1.

Now, the key issue for feedback diagnosis is that even though the regression line in Fig. 2 has a slope of 1.8 W m-2 K-1, the feedback I specified in the model run was 4 W m-2 K-1. Thus, if I had interpreted that slope as indicating the sensitivity of the simple model climate system, I would have gotten 2.1 deg. C, when in fact the true specified sensitivity was only 0.9 deg. C (assuming 2XCO2 causes 3.7 W m-2 of radiative forcing).
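Here is a rough, self-contained sketch of that kind of demonstration. It is not the actual code used for Fig. 2; the forcing amplitudes and smoothing time scales are guesses, but the diagnosed regression slope typically comes out well below the specified feedback parameter, which is the point:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed constants for this sketch
depth   = 15.0                      # swamp ocean depth (m)
cp      = 4.2e6 * depth             # heat capacity (J m-2 K-1)
lam     = 4.0                       # specified net feedback parameter (W m-2 K-1)
dt      = 86400.0                   # one-day time step (s)
nmonths = 230
ndays   = nmonths * 30

def lowpass(x, n):
    # simple running-mean low-pass filter over n days
    return np.convolve(x, np.ones(n) / n, mode="same")

# Low-pass filtered random forcings: radiative (cloud-like) and
# non-radiative (intraseasonal/ENSO-like); relative amplitudes are guesses.
rad_forcing    = 1.2 * lowpass(rng.standard_normal(ndays), 90)
nonrad_forcing = 0.8 * lowpass(rng.standard_normal(ndays), 90)

# Integrate Cp dT/dt = F(t) - lam*T for the swamp ocean temperature anomaly T
T = np.zeros(ndays)
for i in range(1, ndays):
    F = rad_forcing[i] + nonrad_forcing[i]
    T[i] = T[i - 1] + dt * (F - lam * T[i - 1]) / cp

# The "measured" anomalous radiative loss to space is the feedback response
# minus the internal radiative forcing (which is not separately observable).
net_rad_loss = lam * T - rad_forcing

# Average to 30-day "months" and regress flux on temperature, as in Fig. 2
Tm = T.reshape(nmonths, 30).mean(axis=1)
Rm = net_rad_loss.reshape(nmonths, 30).mean(axis=1)
slope = np.polyfit(Tm, Rm, 1)[0]

print(f"specified feedback: {lam:.1f} W m-2 K-1,  regression slope: {slope:.2f}")
print(f"implied sensitivity: {3.7 / slope:.1f} deg. C  (true value: {3.7 / lam:.1f} deg. C)")
```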

This is just meant to demonstrate how internal radiative variability in the climate system corrupts the diagnosis of feedback from observational data, which is also a conclusion of the newer published studies referenced above.

And, as I have mentioned above, even if we can diagnose feedbacks from such short term variations in the climate system, we have no guarantee that they also determine (or are even related to) the long-term sensitivity to increasing CO2.

So (with the exception of studies like L&C) be prepared for increased reliance on climate models to tell us how sensitive the climate system is.

Record Antarctic Stratospheric Warming Causes Sept. 2019 Global Temperature Update Confusion

Friday, October 4th, 2019

While the vast majority of our monthly global temperature updates are pretty routine, September 2019 is proving to be a unique exception. The bottom line is that there is nothing wrong with the UAH temperatures we originally reported. But what I discovered about last month is pretty unusual.

It all started when our global lower tropospheric (LT) temperature came in at an unexpectedly high +0.61 deg. C above the 1981-2010 average. I say “unexpected” because, as WeatherBell’s Joe Bastardi has pointed out, the global average surface temperature from NOAA’s CFS model had been running about 0.3 C above normal, and our numbers are usually not that different from that model product.

[By way of review, the three basic layers for which we compute average temperatures from the satellites are, in increasing altitude, the mid-troposphere (MT), tropopause region (TP), and lower stratosphere (LS). From these three deep-layer temperatures, we compute the lower tropospheric (LT) product using a linear combination of the three main channels, LT = 1.548MT – 0.538TP + 0.01LS.]
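(Using the coefficients quoted above, a tiny example shows how an unusually cold TP anomaly, by itself, pushes the computed LT value up. The anomaly numbers here are made up for illustration.)

```python
def lt_from_channels(mt, tp, ls):
    # UAH v6 LT as the linear combination of the three deep-layer channels
    return 1.548 * mt - 0.538 * tp + 0.01 * ls

print(lt_from_channels(0.40,  0.10, -0.20))   # ~ +0.56 deg. C
print(lt_from_channels(0.40, -0.20, -0.20))   # ~ +0.72 deg. C (colder TP -> warmer LT)
```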

Yesterday, John Christy noticed that the Southern Hemisphere was unusually warm in our lower stratosphere (LS) temperature product, while the Northern Hemisphere was unusually cool. This led me to look at the tropical results for our mid-troposphere (MT) and ‘tropopause’ (TP) products, which in the tropics usually track each other. A scatterplot of them revealed September 2019 to be a clear outlier, that is, the TP temperature anomaly was too cool for the MT temperature anomaly.

So, John put a notice on his monthly global temperature update report, and I added a notice to the top of my monthly blog post, that we suspected maybe one of the two satellites we are currently using (NOAA-19 and Metop-B) had problems.

As it turns out, there were no problems with the data. Just an unusual regional weather event that produced an unusual global response.

Blame it on Antarctica

Some of you might have seen news reports several weeks ago that a strong sudden stratospheric warming (SSW) event was expected to form over Antarctica, potentially impacting weather in Australia. These SSW events are more frequent over the Arctic, and occur in winter when (put very simply) winds in the stratosphere flow inward and force air within the cold circumpolar vortex to sink (that’s called subsidence). Since the stratosphere is statically stable (its temperature lapse rate is nearly isothermal), any sinking leads to a strong temperature increase. CIRES in Colorado has provided a nice description of the current SSW event, from which I copied this graphic showing the vertical profile of temperature normally (black line) compared to that for September (red line).

By mass continuity, the air required for this large-scale subsidence must come from lower latitudes, and similarly, all sinking air over Antarctica must be matched by an equal mass of rising air, with temperatures falling. This is part of what is called the global Brewer-Dobson circulation in the stratosphere. (Note that because all of this occurs in a stable environment, it is not ‘convection’, but must be forced by dynamical processes).

As can be seen in this GFS model temperature field for today at the 30 mb level (about 22 km altitude) the SSW is still in play over Antarctica.

GFS model temperature departures from normal at about 22 km altitude in the region around Antarctica, 12 UTC 4 October 2019. Graphic from WeatherBell.com.

The following plot of both Arctic and Antarctic UAH LS temperature anomalies shows just how strong the September SSW event was, with a +13.7 deg. C anomaly averaged over the area poleward of 60 deg. S latitude. The LS product covers the layer from about 15 to 20 km altitude.

As mentioned above, when one of these warm events happens, there is cooling that occurs from the rising air at the same altitudes, even very far away. Because the Brewer-Dobson circulation connects the tropical stratosphere to the mid-latitudes and the poles, a change in one region is mirrored with opposite changes elsewhere.

As evidence of this, if I compute the month-to-month changes in lower stratospheric temperatures for a few different regions, I find the following correlations between regions (January 1979 through September 2019). These negative correlations are evidence of this see-saw effect in stratospheric temperature between different latitudes (and even hemispheres).

Tropics vs. Extratropics: -0.78

Arctic vs. S. Hemisphere: -0.70

Antarctic vs. N. Hemisphere: -0.50

N. Hemis. vs. S. Hemis.: -0.75
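(These are ordinary correlations of the month-to-month changes. With placeholder arrays standing in for the actual regional LS anomaly series, the calculation is simply:)

```python
import numpy as np

rng = np.random.default_rng(0)
arctic  = rng.standard_normal(489)   # placeholder for Arctic LS anomalies, Jan 1979-Sep 2019
s_hemis = rng.standard_normal(489)   # placeholder for S. Hemisphere LS anomalies

r = np.corrcoef(np.diff(arctic), np.diff(s_hemis))[0, 1]
print(f"correlation of month-to-month changes: {r:+.2f}")
```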

The intense stratospheric warming over Antarctica caused an unusually large difference between the NH and SH anomalies, which raised a red flag for John Christy.

Next I can show that the SSW event extended to lower altitudes, influencing the TP channel which we use to compute the LT product. This is important because sinking and warming at the altitudes of the TP product (roughly 8-14 km altitude) can cause cooling at those same altitudes very far away. This appears to be why the tropics had their lowest-ever TP temperature anomaly relative to the MT anomaly in September, which raised a red flag for me.

In this plot of the difference between those two channels [TP-MT] over the Antarctic, we again see that September 2019 was a clear outlier.

Conceptually, that plot shows that the SSW subsidence warming extends down into altitudes normally considered to be the upper troposphere (consistent with the CIRES plot above). I am assuming that this led to unusual cooling in the tropical upper troposphere, leading to what I thought was anomalous data. It was indeed anomalous, but the reason wasn’t an instrument problem; it was Mother Nature.

Finally, Danny Braswell ran our software, leaving out either NOAA-19 or Metop-B, to see if there was an unusual difference between the two satellites we combine together. The global LT anomaly using only NOAA-19 was +0.63 deg. C, while that using only Metop-B was +0.60 deg. C, which is pretty close. This essentially rules out an instrument problem for the unusually warm LT value in September, 2019.

UAH Global Temperature Update for September, 2019: +0.61 deg. C (see update, below)

Tuesday, October 1st, 2019

UPDATE: (10/3/2019, 4:55 p.m. CDT): We have discovered that the last 1-2 months of LT data could be biased high. This is based upon a quick analysis of tropical temperatures, where our mid-tropospheric (MT) and upper-tropospheric (TP) product anomalies are usually in good agreement. September 2019 is a clear outlier, with TP much too cold compared to MT. MT was cooler in the tropics than in August, but because TP fell so much more, their weighted difference produced a spuriously warm result for LT. Furthermore, the LS (lower stratospheric temperature) is at a record low in the tropics, a result which I do not believe. I will provide an update when we figure out the problem.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for September, 2019 was +0.61 deg. C, up considerably from the August value of +0.38 deg. C.

The linear warming trend since January, 1979 remains at +0.13 C/decade (+0.11 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).

Various regional LT departures from the 30-year (1981-2010) average for the last 21 months are:

YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST
2018 01 +0.29 +0.51 +0.06 -0.10 +0.70 +1.39 +0.52
2018 02 +0.24 +0.28 +0.21 +0.05 +0.99 +1.22 +0.35
2018 03 +0.28 +0.43 +0.12 +0.08 -0.19 -0.32 +0.76
2018 04 +0.21 +0.32 +0.09 -0.14 +0.06 +1.02 +0.84
2018 05 +0.16 +0.38 -0.05 +0.01 +1.90 +0.14 -0.24
2018 06 +0.20 +0.33 +0.06 +0.11 +1.11 +0.76 -0.42
2018 07 +0.30 +0.38 +0.22 +0.28 +0.41 +0.24 +1.48
2018 08 +0.18 +0.21 +0.16 +0.11 +0.02 +0.11 +0.37
2018 09 +0.13 +0.14 +0.13 +0.22 +0.89 +0.23 +0.27
2018 10 +0.19 +0.27 +0.12 +0.30 +0.20 +1.08 +0.43
2018 11 +0.26 +0.24 +0.27 +0.45 -1.16 +0.68 +0.55
2018 12 +0.25 +0.35 +0.15 +0.30 +0.25 +0.69 +1.20
2019 01 +0.38 +0.35 +0.41 +0.35 +0.53 -0.15 +1.15
2019 02 +0.37 +0.47 +0.28 +0.43 -0.02 +1.04 +0.05
2019 03 +0.34 +0.44 +0.25 +0.41 -0.55 +0.96 +0.58
2019 04 +0.44 +0.38 +0.51 +0.53 +0.50 +0.92 +0.91
2019 05 +0.32 +0.29 +0.35 +0.39 -0.61 +0.98 +0.38
2019 06 +0.47 +0.42 +0.52 +0.64 -0.64 +0.90 +0.35
2019 07 +0.38 +0.33 +0.44 +0.45 +0.11 +0.33 +0.87
2019 08 +0.38 +0.38 +0.39 +0.42 +0.17 +0.44 +0.23
2019 09 +0.61 +0.64 +0.58 +0.60 +1.21 +0.75 +0.57

This makes September, 2019 the warmest September in the 41 year satellite record.

The UAH LT global anomaly image for September, 2019 should be available in the next few days here.

The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt