U.S. chance of Tiangong-1 sighting now less than 2%

March 31st, 2018

The latest Aerospace Corp. prediction of the reentry time for the Chinese space station Tiangong-1 is now 3:30 p.m. CDT (plus or minus 8 hours) on Sunday, April 1. As reentry approaches, the predictions will improve and the potential paths of the satellite will narrow.

The latest potential paths of reentry look like this:

Potential Tiangong-1 reentry orbital paths on April 1 2018 (Aerospace Corp.)

The paths over the U.S. are morning paths, and would be quite early in the time window of reentry. The total time these orbits are visible from the contiguous U.S. is only about 25 minutes (you could see the satellite burning up as far as 400 miles away from these paths, assuming no clouds are in the way). That is only 2.6 percent of the total time of the reentry window (16 hours), so given the fact that the U.S. paths are quite early in the window (and thus lower probability), I’d say the chances of anyone in the U.S. getting to see the fireworks show are less than 2%. Once you factor in cloud cover, it’s probably more like 1%.

Of course, we always knew the probability was very small.

And I think Michigan can now deactivate their Emergency Operations Center.

But, if you are feeling lucky and live within a few hundred miles of one of the paths shown in the above graphic, I suggest visiting heavens-above.com: (1) enter your location (or nearest city), (2) click on “Tiangong-1”, and (3) change from “Visible only” to “All” to see exactly what time(s) the satellite will be passing near you. Click on one of those times to see the path it will make across the sky.

Lord Monckton Responds

March 23rd, 2018

NOTE: In fairness to Lord Monckton, I have accepted his request to respond to my post where I criticized his claim that an “elementary error of physics” could be demonstrated on the part of climate modelers. While Christopher & I are in agreement that the models produce too much warming, we disagree on the reasons why. From what I can tell, Christopher claims that the theoretical 255 K average global surface temperature in the absence of the greenhouse effect would actually induce a feedback response; I disagree… 255 K is the theoretical, global average temperature of the Earth without greenhouse gases but assuming the same solar insolation and albedo. It has no feedback response because it is a pure radiative equilibrium calculation. Besides, the climate models do not depend upon that theoretical construct anyway; it has little practical value, and virtually no quantitative value, other than in conceptual discussions (how could one have clouds without water vapor? How could a much colder Earth have no more ice cover than today?). But I will let the reader decide whether his arguments have merit. I do think the common assumption that the climate system was in equilibrium in the mid-1800s is a dubious one, and I wish we could attack that instead, because if some of the warming since the 1800s was natural (which I believe is likely) it would reduce estimates of climate sensitivity to increasing carbon dioxide even further.

Of ZOD and NOGs

By Christopher Monckton of Brenchley

Roy Spencer has very kindly allowed me to post up this reply to his interesting posting about my team’s discussion of a large error we say we have found in climatological physics.

The error arises from the fact that climate models are calibrated by reference to past climate. They have to explain why the world in, say, 1850, was 32 K warmer than the 255 K that would have prevailed that year (assuming today’s insolation and an albedo of about 0.3) in the absence of the naturally-occurring, non-condensing greenhouse gases (NOGs).

Till now, it has generally been assumed that between a third and a quarter of that 32 K warming is directly forced by the presence of the NOGs, and that between two-thirds and three-quarters is a feedback response to the directly-forced warming from the NOGs.

That gives a feedback fraction of 2/3 to 3/4, or 0.67 to 0.75. The feedback fraction is simply the fraction of final or equilibrium temperature that constitutes the feedback response to the directly-forced warming.

Roy is quite right to point out that the general-circulation models do not use the concept of feedback directly. However, there is a handy equation, with the clunky name zero-dimensional-model equation (let’s call it ZOD), that allows us to diagnose what equilibrium temperature the models would predict.

All we need to know to diagnose the equilibrium temperature the models would be expected to predict is the reference temperature, here the 255 K emission temperature, and the feedback fraction.
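
For reference, the ZOD can be stated compactly, in the notation implied by the worked numbers below:

Teq = Tref/(1 – f),

where Teq is the equilibrium temperature, Tref the reference temperature, and f the feedback fraction.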

ZOD also works for changes in temperature rather than entire temperatures. The reason is that a temperature feedback is a temperature response induced by a temperature or a temperature change.

If a feedback is present in a dynamical system (that’s a mathematically-describable object that changes its state over time, such as the climate), that feedback does not distinguish between the initial entire temperature (known to feedback-analysis geeks as the input signal) and any change in that temperature (the direct gain), such as a directly-forced increase owing to the presence of NOGS.

We say that climatology errs in assuming that the input signal (the 255 K emission temperature that would prevail at the surface in the absence of greenhouse gases) does not induce a feedback response, but that the additional 8 Kelvin of warming directly forced by the presence of the NOGs somehow, as if by magic, induces not just any old feedback response but a whopping 24 K of feedback warming, three times the direct warming that induced it.

Now, here’s the question for anyone who thinks climatology has gotten this right. By what magical process (waving a wand, scattering stardust, casting runes, reading tea-leaves, pick a card, any card) do the temperature feedbacks in the climate distinguish between the input signal of 255 K and the direct gain of 8 K in deciding whether to respond?

Do the feedbacks gather around, have a beer and take a vote? “OK, boys, let’s go on strike until the surface temperature exceeds 255 K, and then let’s go to work in a big way, but only in response to the extra 8 K of temperature from our good mates the NOGs”?

Of course not. If a feedback process subsists in a dynamical object, it will respond not only to what the feedback geeks call the direct gain but also to the input signal. Why on Earth would feedbacks refuse to deliver any response at all to 255 K of emission temperature but then suddenly deliver a whopper of a 24 K response to just 8 K of further temperature?

Roy’s difficulty in accepting that the emission temperature induces a feedback response is that it is not a forcing. Of course it isn’t. Emission temperature, as its name suggests, is a temperature, denominated in Kelvin, not a forcing (a change in radiative flux density denominated in Watts per square meter).

But what is a temperature feedback? The clue is in the name on the tin. A temperature feedback is a feedback to temperature, not to a forcing. It is itself a forcing, this time denominated in Watts per square meter per Kelvin of the temperature (or temperature change) that induced it.

A temperature feedback just doesn’t care whether it is responding to an initial temperature, or to a subsequent change in temperature driven by a forcing such as that from the presence of the NOGs.

Take the Earth in 1850, but without greenhouse gases, and yet preserving today’s insolation and albedo. The reason for this rather artificial construct is that that’s the way climatology determines the influence of feedbacks, by comparing like with like. The ice, clouds and sea have much the same extents as today, so the thought experiment says.

And that means there are feedbacks. Specifically, the water-vapor feedback somewhat offset by the lapse-rate feedback, the surface albedo feedback, and the cloud feedbacks.

Those feedbacks respond to temperature. Is there one? Yes. There is a temperature of 255 K. At this stage in the calculation, we don’t have much of an idea of how much the feedback response to 255 K of temperature would be.

Let’s press ahead and bring on the NOGs. Up goes the temperature by a directly-forced 8 K, from 255 K to 263 K, or thereabouts.

What’s the equilibrium temperature in this experiment? It’s simply the actual, measured temperature in 1850: namely, around 287 K. The climate is presumed to have been in equilibrium then.

Now we have all we need to deploy the ZOD to diagnose approximately what the feedback fraction would be in the models, provided that, as in this experiment, they took account of the fact that the emission temperature as well as the NOGs induces a feedback response.

The ZOD is a really simple equation. If, as here, we have some idea of the reference temperature (in this case, 263 K) and the equilibrium temperature (287 K), the feedback fraction is simply 1 minus the ratio of emission temperature to equilibrium temperature, thus: 1 – 263/287. That works out at 0.08, and not, as now, 0.67 or 0.75.

Armed with the approximate value of the feedback fraction, we can use the ZOD to work out the Charney sensitivity (i.e., equilibrium sensitivity to doubled CO2) if the models were to take account of the fact that feedbacks will respond just as enthusiastically to the emission temperature as to the small change in that temperature forced by the presence of the NOGs.

The models’ current estimate of reference sensitivity to doubled CO2 is 1.1 K. Using their current estimate of the feedback fraction, 0.67, the ZOD tells us Charney sensitivity would be 1.1/(1 – 0.67), or a heftyish 3.3 K. That’s the official mid-range estimate.

But with our corrected approximation to the feedback fraction, Charney sensitivity would be 1.1/(1 – 0.08), or only 1.2 K. End of global warming problem.
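
For readers who want to check the arithmetic, here is a minimal Python sketch of the ZOD calculations quoted above (it merely restates the numbers in this post; nothing here comes from an actual climate model):

```python
def zod_equilibrium(t_ref, f):
    """ZOD: equilibrium temperature from reference temperature t_ref (K)
    and feedback fraction f."""
    return t_ref / (1.0 - f)

def feedback_fraction(t_ref, t_eq):
    """Inverted ZOD: f = 1 - t_ref/t_eq."""
    return 1.0 - t_ref / t_eq

# Pre-industrial experiment: 255 K emission temperature plus ~8 K of
# directly-forced warming from the NOGs gives a 263 K reference value;
# the measured 1850 temperature is ~287 K.
f_pre = feedback_fraction(263.0, 287.0)
print(f"pre-industrial feedback fraction: {f_pre:.2f}")             # ~0.08

# Charney sensitivity from 1.1 K of reference sensitivity to doubled CO2:
print(f"official (f = 0.67):  {zod_equilibrium(1.1, 0.67):.1f} K")  # ~3.3 K
print(f"corrected (f = 0.08): {zod_equilibrium(1.1, f_pre):.1f} K") # ~1.2 K

# Industrial era (1850-2011): 0.72 K of reference sensitivity versus
# 0.76 K of observed (assumed near-equilibrium) warming:
print(f"industrial-era feedback fraction: {feedback_fraction(0.72, 0.76):.2f}")  # ~0.05
```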

What of Roy’s point that the models don’t explicitly use the ZOD? The models have been tuned to assume that two-thirds to three-quarters of the 32 K difference between emission temperature and real-world temperature in 1850 is accounted for by feedback responses to the 8 K directly forced warming from the NOGs.

The models are also told that there is no feedback response to the 255 K emission temperature, even though it is 32 times bigger than the 8 K warming from the NOGs.

So they imagine, incorrectly, that Charney sensitivity is almost three times the value that they would find if the processes by which they represent what we are here calling feedbacks had been adjusted to take account of the fact that feedbacks respond to any temperature, whether it be the entire original temperature or some small addition to it.

Mainstream climate science thus appeared to us to be inconsistent with mainstream science. So we went to a government laboratory and said, “Build us an electronic model of the climate, and do the following experiment. Assume that the input signal is 255 K. Assume that there are no greenhouse gases, so that the value of the direct-gain factor in the gain block is unity [feedback geek-speak, but they knew what we meant]. Assume that the feedback fraction is 0.1. And tell us what the output signal would be.”

Now, climatology would say that, in the absence of any forcings from the greenhouse gases, the output signal would be exactly the same as the input signal: 255 K. But we said to the government lab, “We think the answer will be 283 K.”

So the lab built the test circuit, fed in the numbers, and simply measured the output, and behold, it was 283 K. They weren’t at all surprised, and nor were we. For ZOD said 255/(1 – 0.1) = 283.

That’s it, really. But our paper is 7500 words long, because we have had to work so hard to nail shut the various rat-holes by which climatologists will be likely to try to scurry away.

Will it pass peer review? Time will tell. But we have the world’s foremost expert in optical physics and the world’s foremost expert in the application of feedback math to climate on our side.

Above all, we have ZOD on our side. ZOD gives us a very simple way of working out what warming the models would predict if they did things right. We calibrated ZOD by feeding in the official CMIP5 models’ values of the reference temperature and of the feedback fraction, and we obtained the official interval of Charney sensitivities that the current models actually predict. ZOD works.

We went one better. We took IPCC’s mid-range estimate of the net forcing from all anthropogenic sources from 1850-2011 and worked out that that implied a reference sensitivity over that period of 0.72 K. But the actual warming was 0.76 K, and that’s near enough the equilibrium warming (it might be a little higher, owing to delays caused by the vast heat-sink that is the ocean).

And ZOD said that the industrial-era feedback fraction was 1 – 0.72/0.76, or 0.05. That was very close to the pre-industrial feedback fraction 0.08, but an order of magnitude smaller than the official estimates, 0.67-0.75.

Or ZOD can do it the other way about. If the feedback fraction is really 0.67, as the CMIP5 models think, then the equilibrium warming from 1850-2011 would not be the measured 0.76 K: it would be 0.72/(1 – 0.67) = 2.2 K, almost thrice what was observed.

Does ocean overturning explain that discrepancy? Well, we know from the pre-industrial experiment, in which ocean overturning is inapplicable, that the feedback fraction is about 0.08. And there’s not likely to be all that much difference between the pre-industrial and industrial-era values of the feedback fraction.

ZOD, therefore, works as a diagnostic tool. And ZOD tells us Charney sensitivity to doubled CO2 will be only 1.2 K, plus or minus not a lot. Game over.

Or so we say.

Climate F-Words

March 22nd, 2018

President Trump explaining climate change terminology.


A recent article by Lord Christopher Monckton over at WUWT argues that there has been an “elementary error of physics” that has led to climate sensitivity being overestimated by about a factor of 2.

I agree with the conclusion but not the reason why. It is already known from the work of Otto et al. (2013), Lewis & Curry (2015) and others that the climate system (including the deep oceans) has warmed by an amount that suggests a climate sensitivity only about half of what the models produce (AR5 models warm by an average of 3.4 deg. C in response to a doubling of CO2).

But the potential reasons why are many, and as far as I can tell not dependent upon Christopher’s arguments. For those who don’t know, Lord Monckton is a pretty talented mathematician. However, like others I have encountered over the years, I believe he errs in his assumptions about how the climate research community uses — and does or does not depend upon — the concept of feedback in climate modeling.

You Don’t Have to Use F-Words

I’ve been told that the feedback concept used by climate researchers is a very poor analog for feedbacks in electrical circuit design. Fine. It doesn’t matter. How modern 3D coupled ocean-atmosphere climate models work does not depend upon the feedback concept.

What they DO depend upon is energy conservation: If the system is in energy equilibrium, its average temperature will not change (that’s not precisely true, because it makes little sense energetically to average the temperature of all ocean water with the atmosphere, and there can be energy exchanges between these two reservoirs which have vastly different heat capacities. Chris Essex has written on this). The point is that the total heat content of the system in Joules stays the same unless an energy imbalance occurs. (Temperature is focused on so intensely because it determines the rate at which the Earth sheds energy to outer space. Temperature stabilizes the climate system.)

The amount of surface temperature change in response to that energy imbalance is, by definition, the climate sensitivity, which in turn depends upon feedback components. You can call the feedbacks anything… maybe “temperature sensitivity parameters” if you wish. Feedback is just a convenient term that quantifies the proportionality between an imposed energy imbalance and the resulting temperature change response, whether it’s for a pot of water on the stove, the climate system, or anything that is initially at a constant temperature but then is forced to change its temperature. Christopher’s claim that the Earth’s effective radiating temperature (ERT) to outer space (around 255 K) itself causes a “feedback” makes no sense to me, because it isn’t (nor does it represent) a “forcing”. Feedbacks, by the climate definition, are only in response to forced departures from energy equilibrium.

The proportionality factor between a forcing (another f-word) and temperature response in climate parlance is called the net feedback parameter, and has units of Watts per sq. meter per deg. C, usually referenced to a surface temperature change. You could come up with a sensitivity parameter for a pot of water on the stove, too. In the climate system the net feedback parameter has components from temperature-dependent changes in clouds, water vapor, etc., as well as the sigma-T^4 “Planck” effect that ultimately stabilizes the climate system from experiencing large temperature fluctuations.
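
To make the proportionality concrete, here is a minimal sketch (the 3.7 W/m2 value for 2XCO2 forcing is the standard estimate quoted later in this document; the two feedback parameter values are round illustrative numbers, not model output):

```python
F_2XCO2 = 3.7  # radiative forcing from doubled CO2, W/m2 (standard estimate)

def equilibrium_warming(forcing, lam):
    """Equilibrium temperature response (deg. C) to an imposed energy
    imbalance 'forcing' (W/m2), given a net feedback parameter lam
    (W/m2 per deg. C). Warming stops when lam*dT offsets the forcing."""
    return forcing / lam

# The Planck (sigma-T^4) response alone gives lam of about 3.2 W/m2/K:
print(equilibrium_warming(F_2XCO2, 3.2))  # ~1.2 deg. C, the no-feedback case

# Net positive feedbacks reduce lam below 3.2, increasing sensitivity:
print(equilibrium_warming(F_2XCO2, 1.1))  # ~3.4 deg. C, near the AR5 model average
```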

Now, in the process of describing climate change in simple terms with such proportionalities between imposed energy imbalance and temperature response, various feedback equations have been published. But NONE of the IPCC models depend upon any formulation of any feedback equation you wish to devise. Neither do they depend upon whether the Earth’s natural greenhouse effect on surface temperature is estimated to be 33 deg. C, or 75 deg. C (Manabe & Strickler, 1964), or any other value. Nor do they depend upon how that 33 deg or 75 deg is apportioned from different components. These are all conceptual constructs which help us understand and discuss the climate system, but the climate models do not depend upon them.

Modern 3D climate models are basically weather forecast models (with an ocean model added) that are run for a hundred years or more of model run time (rather than 3-14 days, which is pretty common for weather forecast models). One of the biggest differences is that climate models have been tuned so that they keep a relatively constant temperature over a long integration, which also means their rates of energy gain (from the sun) and energy loss to outer space are, in the long term, equal. (I question whether they actually conserve energy, but that’s a different discussion).

Once you have a model whose temperature does not drift over time, then you can impose a forcing upon it. All that means is impose an energy imbalance. Once again, it doesn’t matter to the physics what you call it. To change the energy balance, you could increase the solar input. Or, you could reduce the rate of radiative cooling to outer space, e.g. from increasing atmospheric CO2. The point is that forcing is just an imposed energy imbalance, while feedback quantifies how much of a temperature response you will get for a given amount of forcing.

As the climate system warms from an energy imbalance, a wide variety of changes can take place (clouds, water vapor, etc.) which affect how much warming will occur before energy balance is once again restored, and the system stops warming. Those component changes, for better or worse, are called “feedbacks” (e.g. cloud feedback, water vapor feedback). Again, you don’t have to use the f-word. Call it anything you want. It’s just a proportionality constant (or not a constant?) that quantitatively relates an energy imbalance to a temperature response.

Nowhere do the IPCC models invoke, use, assume, or otherwise depend upon any feedback equations. Those equations are just greatly simplified approximations that allow us to discuss how the climate system responds to an imposed energy imbalance. If somebody has published a paper that incorrectly explains the climate system with a feedback equation, that does not invalidate the models. There might be many errors in models that cause them to be too sensitive, but how someone misrepresents the model behavior with their favorite feedback equation is that person’s problem… not the model’s problem.

Feedbacks in the IPCC models are diagnosed after the model is run; they are not specified before it is run. Now, it IS true that how some uncertain model processes such as cloud parameterizations are specified will affect the feedbacks, and therefore affect the climate sensitivity of the model. So, I suppose you can say that feedbacks are indirectly imposed upon the models. But there isn’t a feedback factor or feedback equation input into the model.

The ultimate climate sensitivity of the models to an energy imbalance (say, increasing CO2) depends upon how clouds, water vapor, etc., all change with warming in the model in such a way to make the warming either large or small. The equations in the models governing this involve energy and mass conservation, moisture, thermodynamics, dynamics, radiation, etc., along with some crucial approximations for processes which the models cannot resolve (e.g. cloud parameterizations, which will affect cloud feedback) or which we do not even understand well enough to put in the models (e.g. temperature-dependent changes in precipitation efficiency, which will affect water vapor feedback).

But nowhere does the sensitivity of modern 3D climate models depend upon any feedback equations.

Now, if I have misrepresented Lord Monckton’s argument, I apologize. But I am having difficulty determining exactly what his argument is, and how it affects the processes specified in climate models. Maybe someone can help me. We can agree that the models are too sensitive, but we must make sure our arguments for their excessive sensitivity make sense, or we will continue to be dismissed out of hand by the mainstream climate community.

Chinese satellite filled with corrosive fuel will probably hit… the ocean

March 11th, 2018

Oh, boy. If only reporters checked with anyone who knows orbital mechanics before writing stories like this:

Chinese satellite filled with corrosive fuel could hit lower Michigan

The orbital decay of the Chinese space station Tiangong-1 will lead to its uncontrolled reentry around April. The green and yellow areas on the following map show where the satellite might hit…somewhere:

Now, because of the inclination of the orbit (the highest latitude it reaches), the yellow areas have a higher probability of being hit than the green area…per square mile. But the green area is a whole lot bigger than the yellow area.

As a result, past experience has shown that these satellites usually reenter over the ocean…usually the Pacific. It’s a really big area.

As the satellite falls, it encounters more atmospheric drag (anyone see the movie Gravity?). The resulting enhanced orbital decay then becomes very rapid, and the satellite burns up. But the point at which this happens is unpredictable. If the reentry prediction is off by, say, 50 minutes (a half orbit), the satellite will reenter on the opposite side of the Earth (!)

Here’s a recent reentry window forecast from the Aerospace Corporation…note the window is about 6 days wide. And, again… a 50 minute error in the prediction means the other side of the world:

So, to do a news story that the satellite might hit Lower Michigan… well… that takes an extra dose of either moxie or idiocy.

UAH Global Temperature Update for February, 2018: +0.20 deg. C

March 1st, 2018

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for February, 2018 was +0.20 deg. C, down a little from the January value of +0.26 deg. C:

Global area-averaged lower tropospheric temperature anomalies (departures from 30-year calendar monthly means, 1981-2010). The 13-month centered average is meant to give an indication of the lower frequency variations in the data; the choice of 13 months is somewhat arbitrary… an odd number of months allows centered plotting on months with no time lag between the two plotted time series. The inclusion of two of the same calendar months on the ends of the 13 month averaging period causes no issues with interpretation because the seasonal temperature cycle has been removed, and so has the distinction between calendar months.

The global, hemispheric, and tropical LT anomalies from the 30-year (1981-2010) average for the last 14 months are:

YEAR MO GLOBE NHEM. SHEM. TROPICS
2017 01 +0.33 +0.31 +0.34 +0.10
2017 02 +0.38 +0.57 +0.19 +0.08
2017 03 +0.23 +0.36 +0.09 +0.06
2017 04 +0.27 +0.28 +0.26 +0.21
2017 05 +0.44 +0.39 +0.49 +0.41
2017 06 +0.21 +0.33 +0.10 +0.39
2017 07 +0.29 +0.30 +0.27 +0.51
2017 08 +0.41 +0.40 +0.42 +0.46
2017 09 +0.54 +0.51 +0.57 +0.54
2017 10 +0.63 +0.66 +0.59 +0.47
2017 11 +0.36 +0.33 +0.38 +0.26
2017 12 +0.41 +0.50 +0.33 +0.26
2018 01 +0.26 +0.46 +0.06 -0.12
2018 02 +0.20 +0.24 +0.15 +0.03

The linear temperature trend of the global average lower tropospheric temperature anomalies from January 1979 through February 2018 remains at +0.13 C/decade.

The UAH LT global anomaly image for February, 2018 should be available in the next few days here.

The new Version 6 files should also be updated in the coming days, and are located here:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt
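
For anyone who wants to reproduce the trend number, here is a Python sketch that reads the lower troposphere file above and fits a linear trend. The column layout (year, month, then the global anomaly in the third column) is my assumption about the file as published; inspect the header rows before relying on it:

```python
import urllib.request
import numpy as np

url = "http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt"
rows = []
with urllib.request.urlopen(url) as f:
    for line in f.read().decode().splitlines():
        parts = line.split()
        # keep only data rows: a numeric year and month, then anomalies
        if len(parts) > 3 and parts[0].isdigit() and parts[1].isdigit():
            year, month = int(parts[0]), int(parts[1])
            rows.append((year + (month - 0.5) / 12.0, float(parts[2])))

t, anom = np.array(rows).T
slope_per_year = np.polyfit(t, anom, 1)[0]
print(f"linear trend: {slope_per_year * 10:+.2f} C/decade")  # ~ +0.13 C/decade
```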

Warming to 2100: A Lukewarmer Scenario

February 28th, 2018

My previous post dealt with a 1D model of ocean temperature changes to 2,000m depth, optimized to match various observed quantities: deep-ocean heat storage, surface temperature warming, the observed lagged-variations between CERES satellite radiative flux and surface temperature, and warming/cooling associated with El Nino/La Nina.

While that model was meant to match global average (land+ocean) conditions, I more recently did one for oceans-only (60N-60S). I changed a few things, so the models are not directly comparable. For example, I used all of the RCP6.0 radiative forcings, but with the land use and snow albedo changes removed (since the model is ocean-only). For SST observations, I used the ERSSTv5 data.

The resulting equilibrium climate sensitivity (ECS) is 1.54 deg. C (coincidentally the same as the previous, global model).

What I thought would be fun, though, would be to run the model out to 2100. This requires an estimate of future ENSO activity (I used the MEI index). After examining the history of the MEI, including its low-frequency variations (which are somewhat related to the Pacific Decadal Oscillation, PDO), I set the MEI values from February 2018 onward equal to the historical values beginning in February 1929.

The resulting forecast shows global average SST almost reaching 1.5 C above pre-industrial times by the end of this century:

2-Layer ocean model sea surface temperature variations. See the figure inset for model assumptions and how it was tuned.

Because I used past MEI data for the future, the lack of significant warming until the late 2040s is due to reduced El Nino activity that was observed from about 1940 to the late 1970s. The enhanced warming after 2040 is analogous to the enhanced warming from stronger El Nino activity that existed from the late 1970s to the late 1990s.

Of course, this whole exercise assumes that, without humans, the climate system would have had no temperature trend between 1765 and 2100. That is basically the IPCC assumption — that the climate system is in long-term energy equilibrium, not only at the top-of-atmosphere, but in terms of changes in ocean vertical circulation which can warm the surface and atmosphere without any TOA radiative forcing.

I don’t really believe the “climate stasis” assumption, because I believe the Medieval Warm Period and the Little Ice Age were real, and that some portion of recent warming has been natural. In that case, the model climate sensitivity would be lower, and the model warming by 2100 would be even less.

What would cause warming as we came out of the Little Ice Age? You don’t need any external forcing (e.g. the Sun) to accomplish it, although I know that’s a popular theory. My bet (but who knows?) is a change in ocean circulation, possibly accompanied by a somewhat different cloud regime. We already know that El Nino/La Nina represents a bifurcation in how the climate system wants to behave on interannual time scales. Why not multi-century time scale bifurcations in the deep ocean circulation? This possibility is simply swept under the rug by the IPCC.

A 1D Model of Global Temperature Changes, 1880-2017: Low Climate Sensitivity (and More)

February 22nd, 2018

UPDATE(2/23/18): The previous version of this post had improper latitude bounds for the HadCRUT4 Tsfc data. I’ve rerun the results… the conclusions remain the same. I have also added proof that ENSO is accompanied by its own radiative forcing, a controversial claim, which allows it to cause multi-decadal climate change. In simple terms, this is clear evidence the climate system can cause its own, natural, internally-generated climate changes. This is partly what has caused recent warming, and the climate modelling community has assumed it was all human-caused.

Executive Summary
A 1D forcing-feedback model with two equivalent-ocean layers is used to model monthly global average surface temperatures from 1880 through 2017. Reflected shortwave (SW) and thermally emitted longwave (LW) forcings and feedbacks are included in an attempt to obtain the closest match between the model and HadCRUT4 surface temperatures based upon correlation and long-term trends.

The traditional radiative forcings included are RCP estimates of volcanic (SW), anthropogenic greenhouse gas (LW), and anthropogenic direct aerosol (SW) forcing. The non-traditional forcings are: (1) an ENSO-driven SW radiative forcing, based upon the observed lagged relationship between CERES satellite SW radiative flux and the Multivariate ENSO Index during 2000-2017, which shows radiative accumulation (loss) during El Nino warming (La Nina cooling); and (2) a non-radiative forcing of surface temperature proportional to ENSO activity since 1871 (MEI “ext” index).

Heat is pumped into the deep ocean in proportion to how far the surface layer temperature deviates from energy equilibrium, with the proportionality constant chosen to match the observed average rate of heat accumulation in the 0-2000m layer between 1990 and 2017 from NODC data.

LW and SW feedbacks are adjusted in the model to optimize model agreement with observations, as are the model surface layer depth and the ENSO non-radiative forcing strength. By incrementally changing the adjustable parameters, the model and observed surface temperature trends are matched and (using monthly running 12-month averages) a correlation of 0.88 is achieved from 1880-2017. The optimum effective depth of the surface mixed layer is 38 meters (equivalent to 54 m ocean, 0 m land in the global average), and the resulting model equilibrium climate sensitivity is 1.54 deg. C, which is less than half the average IPCC AR5 model sensitivity of 3.4 deg. C.

Curiously, the model surface temperature trend during 1979-2017 (+0.113 C/decade) is a much closer match to our UAH LT data (+0.128 C/decade) than it is to the HadCRUT4 data (+0.180 C/decade), despite the fact the model was optimized to match HadCRUT4 during 1880-2017.

It is also demonstrated that using either the model-generated, or the CERES-observed, radiative fluxes during 2000-2017 to diagnose feedbacks results in a climate sensitivity that is far too high, consistent with the published papers of Spencer & Braswell on this subject. Thus, CERES-derived radiative feedback, while useful for model comparison, should not be used to diagnose feedbacks in the climate system.

Background: CERES Radiative Fluxes Cannot be Used to Diagnose Global Feedbacks

I recently revisited the CERES-EBAF dataset of top-of-atmosphere (TOA) radiative fluxes, a multi-satellite best-estimate of those fluxes updated for the period March 2000 through September 2017. When I examined the feedback parameters (regression coefficients) diagnosed from the new, longer data record, the result for the Net (thermally emitted longwave LW + reflected shortwave SW) flux was clearly unrealistic. Plots of the monthly global radiative flux variations are shown in Fig. 1 for the LW, SW, and Net (LW+SW) fluxes compared to global average surface temperature variations from HadCRUT4.

FIG. 1. Scatterplots of monthly global average anomalies in CERES SW, LW, and Net (LW+SW) radiative fluxes versus HadCRUT4 surface temperatures, March 2000 through September 2017. The negative sign of the regression result in the bottom plot is physically impossible if interpreted as a net feedback parameter in the climate system.

Significantly, the Net flux regression result in Fig. 1 (-0.12 W/m2 K) is physically impossible as a feedback interpretation, with the wrong sign. It would suggest that as the climate system warms, it traps even more radiative energy, which would produce an unstable climate system with runaway warming (or cooling).

The SW and LW regression results in Fig. 1 are at least possible in terms of their signs… at face value suggesting positive SW feedback, and for the longwave (compared to a temperature-only “Planck effect” value of 3.2 W/m2 K), the 1.72 W/m2 K value would suggest positive LW feedback, probably from water vapor (maybe high clouds).

As I will demonstrate, however, the regression coefficients themselves are not well related to feedback, and thus climate sensitivity. (The equilibrium climate sensitivity is computed by dividing the theoretically-expected radiative forcing from a doubling of atmospheric CO2 [2XCO2], 3.7 W/m2, by the Net feedback parameter, which must be positive for the climate system to be stable [all IPCC models have Net feedback parameters that are positive]).

We have published a few papers on this subject before, and it was the theme of my book, The Great Global Warming Blunder. I have, quite frankly, been disappointed that the climate research establishment (with the exception of Dick Lindzen) has largely ignored this issue. I hope that the work (in progress) I post here will lead to some renewed interest in the subject.

After spending some time (once again) trying to come up with some way to convincingly explain why the regression coefficients like those in Fig. 1 aren’t really a measure of feedback (without gnashing of teeth in the climate community, or journal editors resigning after publishing our paper), I decided to code up a simple 1D forcing-feedback model that would allow me to (1) explain the temperature variations since 1880 in a physically consistent way, and then (2) use the radiative output from the model during the CERES period (2000-2017) to show that the model-diagnosed feedback parameters indicate a much higher climate sensitivity than was actually specified in the model run.

In the rest of the post below, I believe I will convincingly demonstrate what I am saying… while also providing both an estimate of climate sensitivity from the last 137 years of climate variability, and explaining features like the pre-1940 warming trend, the post-1940 warming hiatus, and the post-1997 warming hiatus.

The 1D Energy Balance Forcing-Feedback Model

In striving for maximum simplicity while still explaining the observed data, I finally realized that the 20-layer ocean used in the model of Spencer & Braswell (2014) was needlessly complex, and the resulting criticism of our ocean heat diffusion scheme was a distraction from the core conclusions of the paper.

So, I’ve now convinced myself that all that is required is a 2-layer model, where the rate of deep ocean storage is simply proportional to how warm the surface layer gets compared to energy equilibrium. While not necessarily totally representative of how the ocean works, it does meet the IPCC expectation that as global temperatures warm, the deep ocean also warms, and allows a sink for a portion of the energy that accumulates in the surface layer. The proportionality constant for this is set to produce the same amount of average 0-2000m warming as the NODC ocean heat content (OHC) data show during 1990-2017. We couldn’t do this in our original work because estimates of 0-2000m OHC were not yet published (I contacted Sid Levitus at the time, and he said they were working on it).

The depth of the model top layer is an adjustable parameter that can be tuned to provide the best agreement with the HadCRUT4 observations; it is assumed to represent a global average over an ocean mixed layer of constant depth, with no net storage (or loss) of energy by land during warming (or cooling).

The model is based upon the commonly used forcing-feedback energy budget equation for the climate system, assuming temperature deviations are from some state of energy equilibrium (I know, that’s debatable… bear with me here):

ΔT/Δt = [F(t) – λ ΔTsfc]/Cp

This equation simply says that the temperature change with time of a system with heat capacity Cp is related to the time-varying forcings F (say, excess radiative energy forced into the system from anthropogenic GHG accumulation) minus the net radiative feedback (radiative loss by the system proportional to how warm it gets, with λ being the net feedback parameter with units W/m2 K). The net feedback parameter λ implicitly includes all fast surface and atmospheric feedbacks in the system: clouds, water vapor, lapse rate changes, etc.
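
Discretized with monthly time steps, the equation is easy to integrate. Here is a minimal single-layer sketch, before the second layer and the various forcings described below are added (the feedback parameter is a round number chosen so that 3.7/λ matches this post’s 1.54 deg. C ECS; the forcing ramp is purely illustrative):

```python
import numpy as np

lam = 2.4            # net feedback parameter, W/m2 per deg. C (3.7/2.4 ~ 1.54 C ECS)
depth = 38.0         # surface layer depth, m (the post's optimized value)
cp = 4.19e6 * depth  # heat capacity of the layer, J per m2 per deg. C
dt = 86400.0 * 30    # one-month time step, s

n_months = 12 * 100
forcing = np.linspace(0.0, 3.7, n_months)  # illustrative forcing ramp, W/m2

T = np.zeros(n_months)  # temperature departure from energy equilibrium, deg. C
for i in range(1, n_months):
    # dT/dt = [F(t) - lam*T]/Cp, stepped forward in time
    T[i] = T[i - 1] + (forcing[i - 1] - lam * T[i - 1]) / cp * dt

print(f"warming after a 100-year ramp to 3.7 W/m2: {T[-1]:.2f} deg. C")  # ~1.5
```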

In our case, there are two model layers, the forcings are several, and there is a transfer of energy between the two ocean layers. Importantly, I also separate out the LW and SW forcings (and feedbacks) so we can ultimately compare the model results during 2000-2017 with the CERES satellite measurements during the same period of time.

The model radiative forcings include the RCP6.0 anthropogenic GHGs (assumed LW), volcanic aerosols (assumed SW), and anthropogenic aerosol direct forcing (assumed SW). The indirect aerosol forcing is excluded since there is recent evidence aerosol forcing is not as strong as previously believed, so I retain only the direct forcing as a simple way to reduce the total (direct+indirect) anthropogenic aerosol forcing.

As Spencer and Braswell (2014) did, I include an ENSO-related SW radiative (and a little LW) forcing, proportional to the MEI extended index (1871-2017). I use a total value of 0.23 W/m2 per MEI index value, initially calculated as 0.20 by regressing the average CERES SW energy accumulation (loss) during the 1 to 3 months preceding El Nino (La Nina) over the updated CERES data record (March 2000-September 2017). The SW and LW forcing values were adjusted slightly as the model was run, until the model lag regression coefficients of MEI versus radiative flux matched the same metrics from the CERES observations. I have added the following intermediate figure to demonstrate this controversial claim: that ENSO involves not only a change in the vertical temperature structure of the ocean (non-radiative forcing of surface temperature), but that radiative changes precede ENSO; that is, ENSO provides its own radiative forcing of the climate system:

Intermediate Plot A: The CERES observed relationship between radiative flux and ENSO activity can ONLY be explained by invoking radiative forcing prior to ENSO. This significantly impacts the “feedback” interpretation of CERES radiative fluxes, decorrelating their relationship to temperature, thus giving the illusion of an excessively sensitive climate system if one interprets the regression slopes as only due to feedback.

The ENSO non-radiative forcing (e.g. warming of the surface layer during El Nino, with an energy-equivalent cooling of the deeper layer, due to a global-average reduction in the rate of ocean overturning) is directly proportional to the MEI index value, with no time lag. It is tuned to help maximize the match between modeled and observed ENSO warming and cooling episodes in surface temperatures.

Significantly, I have adjusted the MEI values by a constant so that their sum during 1871-2017 is zero. This is to avoid the expected criticism that the MEI index could be inadvertently driving a net gain or loss of energy by the model climate system over this time because it has a net high bias. (This is indeed a possibility in nature; I note that even with the mean removed, there is a small upward linear trend in the MEI, corresponding to a radiative forcing of -0.08 W/m2 in 1871, linearly increasing to +0.08 W/m2 in 2017 using my CERES-derived coefficient; I have not looked at how much this trend affects the results, and it might well be that La Nina activity was more prevalent in the late 1800s and El Nino activity more prevalent in the late 20th Century). Here is what the MEI time series looks like, on an expanded scale so you can see how the 10-year trailing averages of the MEI reveal interdecadal variations, which are an important component of global temperature variability:

Intermediate Plot B. The merged and bias-adjusted extended MEI time series, 1871 through 2017, revealing decadal time scale variability in the trailing 10-year averages. This decadal variability, combined with both radiative and non-radiative forcing of surface temperatures related to the MEI, causes much of the multidecadal temperature variation we have experienced in the instrumental record.

As mentioned above, the rate of deep-ocean heat storage is simply assumed to be proportional to how far the surface layer temperature departs from energy equilibrium… the warmer the surface layer gets, the faster heat is pumped into the model deep ocean. The proportionality constant is tuned until the model produces an average deep-ocean (0-2000m) heating rate of 0.51 W/m2 over the period 1990 through 2017, matching the NODC data after adjusting for the fractional global coverage of the oceans (71%) and assuming the land does not store (or lose) appreciable energy.

The model is entered into an Excel spreadsheet with each row being a one-month time step. It is initialized in the year 1765, which is when the RCP radiative forcing is initialized to zero. Correspondingly, the model temperature is initialized at zero departure from energy equilibrium in 1765 (this is not necessary if one believes the climate system was in the Little Ice Age at that time, but for now I want to make assumptions as similar to IPCC climate model assumptions as possible).

The adjustable parameters of the model are changed to improve the model fit to the HadCRUT4 data in real time in the Excel spreadsheet. For example, one parameter (say, the surface layer thickness) is adjusted until maximum agreement is reached. Then another parameter is adjusted (say, the LW feedback parameter) in the same way until further improvement is achieved. But then the other parameters must be re-adjusted. This iterative process is rather brute-force, but within a few hours one converges on a set of adjustable parameter values which produce the best results in terms of correlation and matching temperature trends between the model and HadCRUT4 observations.
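
The same one-parameter-at-a-time search is easy to automate outside of Excel. Here is a hypothetical sketch of the iteration using a stand-in model and synthetic “observations” (not the actual spreadsheet, parameters, or data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1200) / 12.0  # 100 years of monthly time steps
obs = 0.013 * t + 0.1 * np.sin(2 * np.pi * t / 3.7) + 0.05 * rng.standard_normal(t.size)

def model(params):
    """Stand-in 'model' with three adjustable parameters."""
    trend, amp, period = params
    return trend * t + amp * np.sin(2 * np.pi * t / period)

def score(params):
    """Reward correlation with the observations, penalize trend mismatch."""
    sim = model(params)
    r = np.corrcoef(sim, obs)[0, 1]
    trend_error = abs(np.polyfit(t, sim - obs, 1)[0])
    return r - 10.0 * trend_error

params = np.array([0.0, 0.05, 3.0])   # initial guesses
steps = np.array([0.002, 0.02, 0.2])  # adjustment increments
for sweep in range(50):               # repeated one-at-a-time adjustment
    for j in range(len(params)):
        for direction in (+1.0, -1.0):
            trial = params.copy()
            trial[j] += direction * steps[j]
            if score(trial) > score(params):
                params = trial

print("tuned parameters:", params)
```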

Model Results

Fig. 2 shows one of many model realizations which come close to the data, in terms of correlation (here about 0.88) and matching temperature trends. Note that the observed temperature time series has a 12-month smoother applied (click for large version).

Fig. 2. One-dimensional time-dependent model of global average equivalent-ocean surface layer temperature departures from energy equilibrium (dark blue), using RCP6 radiative forcings, ENSO-related radiative and non-radiative forcing, and deep ocean storage of heat proportional to the surface layer temperature departure from equilibrium. HadCRUT4 surface temperature anomalies (12-month smoothed, red) are adjusted vertically on the graph to have the same average values as the model. The temperature trend lines (1880-2017, dashed) of the model and observations coincide, since part of the feedback tuning is to force the trends to match. The UAH LT temperature variations are shown in light blue.

Following are several significant findings from this modeling exercise:

1. The specified model feedback parameters correspond to an equilibrium climate sensitivity of only 1.54 deg. C. This is less than half of the IPCC AR5 model average of 3.4 deg. C, and in close agreement with the best estimate of 1.6 deg. C of Lewis and Curry (2015). As we already know, the IPCC models tend to overestimate warming compared to what has been observed, and the current study suggests their excess warming is due to the models’ climate sensitivity being too high.

2. Note that the ENSO activity during the 20th Century largely explains the anomalous warmth around the 1940s. In fact, this feature exists even with the anthropogenic aerosol forcing removed, in which case a warming hiatus exists from the 1940s to the 1980s. This is the result of the ENSO radiative forcing term (0.23 W/m2 per MEI index value) combined with stronger El Ninos before the 1940s and weaker ones from the 1940s until the late 1970s.

3. The warming hiatus from 1997 to 2016 is evident in the model.

4. The model trend during the satellite temperature record (1979-2017) shows much better agreement with the UAH LT (lower troposphere) temperatures than with HadCRUT4, even though HadCRUT4 was used to optimize the model (!):

Here are the 1979-2017 trends, and correlation with model:

Model: +0.113 C/decade

UAH LT: +0.128 C/decade (r=0.81)

HadCRUT4: +0.180 C/decade (r=0.85)

Compared to the model, the UAH LT trend is only 0.015 C/decade higher, but the HadCRUT4 trend is 0.067 C/decade higher.

5. We can take the model output radiative fluxes, which include both forcing and feedback, during the CERES satellite period of record (March 2000 through September 2017) to see if the “feedbacks” diagnosed from regression are consistent with the actual feedbacks specified in the model. What we find (Fig. 3) is that, just as Spencer & Braswell have been arguing, the feedback parameters diagnosed from the radiative flux and temperature variations lead to regression coefficients quite far from those specified:

Fig. 3. Model-diagnosed feedback parameters for the same period as the CERES satellite radiative flux record (March 2000 through September 2017) shown in Fig. 1. Significantly, the model-diagnosed feedback parameters (regression slopes) are far from those specified in the model, leading to a gross overestimation of climate sensitivity if interpreted as feedback parameters.

The ECS thus (incorrectly) diagnosed from the model radiative fluxes is 3.25 deg. C, even though the feedbacks specified in the model have an ECS of 1.54 deg. C! This supports our contention that use of CERES radiative fluxes to estimate ECS will lead to overestimation of climate sensitivity (e.g. Spencer & Braswell, 2011). The cause of the problem is time-varying radiative forcing internal to the climate system contaminating the radiative feedback signal.

Note that there is less scatter in the model plots (Fig. 3) than in the observations (Fig. 1). This is mainly due to the observations in Fig. 1 having far more sources of internal radiative forcing than the one specified in the model (only ENSO-related). Contrary to what the IPCC seems to believe (and what Andy Dessler has argued to me before), there are all kinds of non-feedback radiative variations in the climate system, internally generated by chaotic variability not caused by temperature changes. Cloud (and thus SW radiative flux) variations are NOT simply a response to surface temperature changes; some of those temperature changes are due to cloud variations caused by any number of atmospheric circulation-related changes.

Put more simply, causation works in both directions between temperature and radiative flux; if causation is assumed in only one direction (temperature change => radiative flux change), then diagnosing feedback parameters from the data will lead to a bias toward high climate sensitivity.
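
That decorrelation is easy to demonstrate with a toy version of the forcing-feedback equation used above (the noise amplitudes are made up for illustration; only the qualitative behavior matters):

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 2.4      # specified net feedback parameter, W/m2/K (ECS = 3.7/2.4)
cp = 4.19e6 * 38.0  # heat capacity of a 38 m mixed layer, J/m2/K
dt = 86400.0 * 30   # monthly time step, s
n = 12 * 100

# Internal radiative forcing: red-noise cloud variations NOT caused by T.
N = np.zeros(n)
for i in range(1, n):
    N[i] = 0.9 * N[i - 1] + 0.3 * rng.standard_normal()

# Non-radiative forcing: random ocean-mixing nudges to surface temperature.
S = 0.04 * rng.standard_normal(n)  # deg. C per month, made-up amplitude

T = np.zeros(n)
for i in range(1, n):
    T[i] = T[i - 1] + (N[i - 1] - lam_true * T[i - 1]) / cp * dt + S[i - 1]

# A satellite sees the net TOA flux anomaly: feedback response minus forcing.
measured_flux = lam_true * T - N

slope = np.polyfit(T, measured_flux, 1)[0]
print(f"true feedback parameter:    {lam_true:.2f} W/m2/K")
print(f"regression-diagnosed slope: {slope:.2f} W/m2/K (biased low)")
print(f"implied ECS: true {3.7 / lam_true:.1f} K, diagnosed {3.7 / slope:.1f} K (too high)")
# With a larger share of internal radiative forcing, the diagnosed slope can
# even go negative, as in the observed Net regression of Fig. 1.
```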

Conclusions

The 1D model fit to the HadCRUT4 data is quite good, despite the simplicity of the model. The model climate sensitivity of only 1.54 deg. C is just within the IPCC’s likely ECS range of 1.5 to 4.5 deg. C, and well below the AR5 model average ECS of 3.4 deg. C.

I believe this is some of the strongest evidence yet that (1) the real climate system is relatively insensitive, and (2) regressions of TOA radiative fluxes against surface temperature cannot be used to diagnose feedbacks, and thus climate sensitivity.

The above must be considered as a work in progress. Publication (if it is ever allowed by the IPCC gatekeepers) will require demonstration of the sensitivity of the model results to changes in the adjustable parameters. I do posts like this partly to help guide and organize my thinking on the problem.

It is also worth noting that one can do all kinds of experiments with such a simple model, such as exploring the effect of the inclusion or exclusion of various forcings on the model results. Some of this was done by Spencer and Braswell (2014) who found that inclusion of ENSO effects substantially reduced the model’s climate sensitivity.

References

Lewis, N., and J.A. Curry, 2015: The implications for climate sensitivity of AR5 forcing and heat uptake estimates. Climate Dynamics, 45 (3-4), 1009-1023.

Spencer, R. W., and W. D. Braswell, 2011: On the misdiagnosis of surface temperature feedbacks from variations in Earth’s radiant energy balance. Remote Sensing, 3, 1603-1613, doi:10.3390/rs3081603.

Spencer, R.W., and W.D. Braswell, 2014: The role of ENSO in global ocean temperature changes during 1955-2011 simulated with a 1D climate model. Asia-Pacific Journal of Atmospheric Sciences, 50(2), 229-237.

Diagnosing Climate Sensitivity Assuming Some Natural Warming

February 16th, 2018

Climate sensitivity has been diagnosed based upon energy budget considerations by several authors in recent years, using observational data combined with estimates of anthropogenic radiative forcing (e.g. Otto et al., 2013; Lewis & Curry, 2015).

Significantly, they generally calculate a lower equilibrium climate sensitivity (ECS) than the average of the IPCC AR5 climate models. Whereas the IPCC models average about 3.4 deg. C of warming from a doubling of atmospheric CO2 (2XCO2), these diagnostic studies get ECS from about 1.6 to 2.0 deg. C. Nic Lewis has provided detailed analysis over at Judith Curry’s blog about what goes into these estimates and the uncertainties of each observational variable.

The ECS estimate is based upon conservation of energy, using four variables in a single equation that involves differences in the climate system between two different times (say, two different decades) sufficiently separated in time that there has been a large climate response in surface temperature to an assumed radiative forcing. Note that the climate response is assumed to be a response to anthropogenic radiative forcing, plus volcanoes, and the analysis is usually restricted to the oceans (where heat storage can be more accurately estimated):

ECS = F2XCO2[ ΔT/(ΔF – ΔQ)],

where:

F2XCO2 = 3.7 W/m2, the assumed radiative forcing from a doubling of atmospheric CO2,

ΔT = the change in global average surface temperature between two periods (deg. C);

ΔF = the change in radiative forcing (imposed energy imbalance on the climate system at top of atmosphere) between two time periods (W/m2);

ΔQ = the change in ocean heat storage between two time periods (W/m2).

In the aforementioned papers, the earlier time period has been chosen to be in the mid- to late- 1800s, while the second has been some subset of the period 1970-2011.

I have verified the above equation using a time-dependent energy balance model of a 2-layer ocean extending to 2,000m depth, driven by either the RCP6.0 radiative forcing history or an instantaneously imposed doubling of CO2 back in the 1800s, and I get the same ECS from the model output as I prescribed as input to the model. The equation works.

What if a Portion of Recent Warming Was Natural?

As you might recall, the IPCC is quite certain that the dominant cause of warming since the mid-20th Century was due to anthropogenic forcings.

What does “dominant” mean? Well, I’m sure it means over 50%. This implies that they are leaving the door open to the possibility that some of the recent warming has been natural, right?

Well, we can use the above equation to do a first-cut estimate of what the diagnosed climate sensitivity would be if some fraction of the surface and deep-ocean warming was natural.

All we have to do is replace ΔQ with fΔQ, where f is the fraction of ocean warming which is human-caused. We also do the same thing for the surface warming term: fΔT.
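
Here is a minimal sketch of that calculation (the ΔT, ΔF and ΔQ values are round numbers in the general range used by the energy-budget studies cited above, not the exact published values):

```python
import numpy as np

F_2XCO2 = 3.7  # W/m2, assumed forcing from doubled CO2

# Illustrative observed changes between the two periods (round numbers):
dT = 0.75  # deg. C, change in surface temperature
dF = 2.0   # W/m2, change in radiative forcing
dQ = 0.6   # W/m2, change in ocean heat storage rate

for f in np.arange(0.5, 1.01, 0.1):
    # Attribute only the fraction f of warming and heat storage to humans.
    ecs = F_2XCO2 * (f * dT) / (dF - f * dQ)
    print(f"human-caused fraction {f:.0%}: diagnosed ECS = {ecs:.2f} deg. C")
```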

When we do this for anthropogenic fractions from 0% to 100%, here’s what we get:

How the data-diagnosed equilibrium climate sensitivity changes assuming different fractions of the warming due to humans (and the rest natural).

Note that even assuming 70% of recent ocean warming is due to humans (consistent with their claim that humans “dominate” warming), the diagnosed climate sensitivity is only 1.3 deg. C, which is below even the range the IPCC (AR5) considers likely (1.5 to 4.5 deg. C).

Now, this raises an interesting issue… almost a dichotomy. I have heard some IPCC-type folks claim that recent anthropogenic warming could have been damped by some natural cooling mechanism. After all, the models are warming (on average) about twice as fast as the measurements of the lower troposphere. If they really believe the models, and also believe there has been some natural cooling mechanism going on suppressing anthropogenic warming, why doesn’t the IPCC simply claim ALL recent warming was due to human causation? That would be the logical conclusion.

But the way the AR5 was written, they are suggesting that a portion of recent warming could be natural, which is the basis for my analysis, above, which produces a very low climate sensitivity number.

They can’t have it both ways.

UAH Global Temperature Update for January, 2018: +0.26 deg. C

February 1st, 2018

Coolest tropics since June, 2012 at -0.12 deg. C.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for January, 2018 was +0.26 deg. C, down from the December, 2017 value of +0.41 deg. C:

Global area-averaged lower tropospheric temperature anomalies (departures from 30-year calendar monthly means, 1981-2010). The 13-month centered average is meant to give an indication of the lower frequency variations in the data; the choice of 13 months is somewhat arbitrary… an odd number of months allows centered plotting on months with no time lag between the two plotted time series. The inclusion of two of the same calendar months on the ends of the 13 month averaging period causes no issues with interpretation because the seasonal temperature cycle has been removed as has the distinction between calendar months.

The global, hemispheric, and tropical LT anomalies from the 30-year (1981-2010) average for the last 13 months are:

YEAR MO GLOBE NHEM. SHEM. TROPICS
2017 01 +0.33 +0.31 +0.34 +0.10
2017 02 +0.38 +0.57 +0.20 +0.08
2017 03 +0.23 +0.36 +0.09 +0.06
2017 04 +0.27 +0.28 +0.26 +0.21
2017 05 +0.44 +0.39 +0.49 +0.41
2017 06 +0.21 +0.33 +0.10 +0.39
2017 07 +0.29 +0.30 +0.27 +0.51
2017 08 +0.41 +0.40 +0.42 +0.46
2017 09 +0.54 +0.51 +0.57 +0.54
2017 10 +0.63 +0.66 +0.59 +0.47
2017 11 +0.36 +0.33 +0.38 +0.26
2017 12 +0.41 +0.50 +0.33 +0.26
2018 01 +0.26 +0.46 +0.06 -0.12

Note that La Nina cooling in the tropics has finally penetrated the troposphere, with a -0.12 deg. C departure from average. The last time the tropics were cooler than this was June, 2012 (-0.15 deg. C). Out of the 470 month satellite record, the 0.38 deg. C one-month drop in January tropical temperatures was tied for the 3rd largest, beaten only by October 1991 (0.51 deg. C drop) and August, 2014 (0.41 deg. C drop).

The last time the Southern Hemisphere was this cool (+0.06 deg. C) was July, 2015 (+0.04 deg. C).

The linear temperature trend of the global average lower tropospheric temperature anomalies from January 1979 through January 2018 remains at +0.13 C/decade.

The UAH LT global anomaly image for January, 2018 should be available in the next few days here.

The new Version 6 files should also be updated in the coming days, and are located here:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt

U.S. Corn Yield a New Record – Again

January 29th, 2018

Global warming be damned — full speed ahead on the Maize Train.

Kentucky Corn Growers Association

The numbers are in from USDA, and 2017 saw a new record in average corn yield, with 176.6 bushels per acre.

In fact, the last four growing seasons (2014, 2015, 2016, 2017) had higher yields than any previous years. The last time that happened was in 1964.

And compared to 1964, the U.S. is producing nearly three times as much corn per acre as we did back then.

There is no indication of a slowdown in the long-term upward trends in corn yields. While the 176.6 bpa U.S. average for 2017 is a huge increase compared to just 50 years ago, the latest winner for the highest yield produced by a single farmer has risen again to over 542 bpa, which is fully three times the U.S. average yield.

While the global warmmongers continue to wring their hands over rising temperatures hurting yields (the Corn Belt growing season has indeed warmed slightly since 1960), improved varieties and the “global greening” benefits of more atmospheric CO2 have more than offset any negative weather effects — if those even exist.

Globally, upward trends in all grain yields have been experienced in recent decades. Of course, droughts and floods cause regional crop failures almost every year. That is normal and expected. But there has been no global average increase in these events over the last century.

In his latest movie, Al Gore claimed just the opposite for wheat yields in China. While I hesitate to call him a liar, since I don’t know where he got his information — Gore was just plain wrong.

The sky is not falling. Life on Earth depends upon CO2, even though there is so little of it — now 4 parts per 10,000 of the atmosphere, compared to 3 parts a century ago. No matter how much we emit, nature gobbles up 50% of it.

Most of the evidence suggests that life is now breathing more freely than any time in human history, thanks to our CO2 emissions.