Aliens Cause Global Warming

January 31st, 2019

Yesterday I was reminded of this brilliant lecture by the late Dr. Michael Crichton, American author, screenwriter, director, and producer. Some of his more notable works include The Andromeda Strain (1969), Jurassic Park (1990), State of Fear (2004), The Great Train Robbery (1979), Twister (1996), and ER (1994-2009). John Christy and I were the basis for one of the characters in his book State of Fear.

Although I never met Dr. Crichton, he was immensely cordial and supportive of my first book when I had an email conversation with him, not long before his death in 2008. As I recall, he said he was dismayed that his 2005 congressional testimony led to so much criticism, and he was trying to avoid the subject going forward.

The themes in his 2003 lecture are just as relevant today as they were 16 years ago. I am told that some of his works have been removed from the internet, possibly due to his controversial (non-PC) views on environmental matters. The lecture is lucid and concise, and echoes the warning President Eisenhower gave in his 1961 Farewell Address about the government being in control of scientific research. I encourage you to spend 15 minutes reading it… there are gems throughout. (I have made only very slight edits.)

Aliens Cause Global Warming

By Michael Crichton

Caltech Michelin Lecture January 17, 2003

My topic today sounds humorous but unfortunately I am serious. I am going to argue that extraterrestrials lie behind global warming. Or to speak more precisely, I will argue that a belief in extraterrestrials has paved the way, in a progression of steps, to a belief in global warming. Charting this progression of belief will be my task today.

Let me say at once that I have no desire to discourage anyone from believing in either extraterrestrials or global warming. That would be quite impossible to do. Rather, I want to discuss the history of several widely-publicized beliefs and to point to what I consider an emerging crisis in the whole enterprise of science — namely the increasingly uneasy relationship between hard science and public policy.

I have a special interest in this because of my own upbringing. I was born in the midst of World War II, and passed my formative years at the height of the Cold War. In school drills, I dutifully crawled under my desk in preparation for a nuclear attack.

It was a time of widespread fear and uncertainty, but even as a child I believed that science represented the best and greatest hope for mankind. Even to a child, the contrast was clear between the world of politics — a world of hate and danger, of irrational beliefs and fears, of mass manipulation and disgraceful blots on human history. In contrast, science held different values — international in scope, forging friendships and working relationships across national boundaries and political systems, encouraging a dispassionate habit of thought, and ultimately leading to fresh knowledge and technology that would benefit all mankind. The world might not be a very good place, but science would make it better. And it did.

In my lifetime, science has largely fulfilled its promise. Science has been the great intellectual adventure of our age, and a great hope for our troubled and restless world. But I did not expect science merely to extend lifespan, feed the hungry, cure disease, and shrink the world with jets and cell phones. I also expected science to banish the evils of human thought — prejudice and superstition, irrational beliefs and false fears. I expected science to be, in Carl Sagan’s memorable phrase, “a candle in a demon haunted world.”

And here, I am not so pleased with the impact of science. Rather than serving as a cleansing force, science has in some instances been seduced by the more ancient lures of politics and publicity. Some of the demons that haunt our world in recent years are invented by scientists. The world has not benefited from permitting these demons to escape free.

But let’s look at how it came to pass.

Cast your minds back to 1960. John F. Kennedy is president, commercial jet airplanes are just appearing, the biggest university mainframes have 12K of memory. And in Green Bank, West Virginia at the new National Radio Astronomy Observatory, a young astrophysicist named Frank Drake runs a two-week project called Ozma, to search for extraterrestrial signals. A signal is received, to great excitement. It turns out to be false, but the excitement remains. In 1960, Drake organizes the first SETI conference, and comes up with the now-famous Drake equation:

N = R × fp × ne × fl × fi × fc × fL

[where R is the number of stars in the Milky Way galaxy; fp is the fraction with planets; ne is the number of planets per star capable of supporting life; fl is the fraction of planets where life evolves; fi is the fraction where intelligent life evolves; fc is the fraction that communicates; and fL is the fraction of the planet’s life during which the communicating civilizations live.]

This serious-looking equation gave SETI a serious footing as a legitimate intellectual inquiry. The problem, of course, is that none of the terms can be known, and most cannot even be estimated. The only way to work the equation is to fill in with guesses. And guesses — just so we’re clear — are merely expressions of prejudice. Nor can there be “informed guesses.” If you need to state how many planets with life choose to communicate, there is simply no way to make an informed guess. It’s simply prejudice.
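To illustrate how completely the result depends on the guesses, here is a minimal sketch (mine, not Crichton's) that plugs two equally unconstrained sets of parameter values into the equation; every number in it is an arbitrary guess chosen only for illustration:

```python
# Illustrative sketch: the Drake equation with two sets of guesses.
# All parameter values below are arbitrary; none are knowable.

def drake(R, fp, ne, fl, fi, fc, fL):
    """N = R * fp * ne * fl * fi * fc * fL, with R = number of stars
    in the galaxy (Crichton's paraphrase of the equation)."""
    return R * fp * ne * fl * fi * fc * fL

# Optimistic prejudices: most terms near their maximum.
optimistic = drake(R=4e11, fp=0.5, ne=2, fl=1.0, fi=0.5, fc=0.5, fL=1e-4)

# Pessimistic prejudices: the same equation, different guesses.
pessimistic = drake(R=4e11, fp=0.1, ne=0.1, fl=1e-6, fi=1e-3, fc=1e-3, fL=1e-7)

print(f"optimistic : {optimistic:.3g}")   # ~1e7 communicating civilizations
print(f"pessimistic: {pessimistic:.3g}")  # ~4e-10, i.e. effectively zero
```

The same formula yields ten million civilizations or essentially none, depending entirely on which prejudices are fed into it.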

As a result, the Drake equation can have any value from “billions and billions” to zero. An expression that can mean anything means nothing. Speaking precisely, the Drake equation is literally meaningless, and has nothing to do with science. I take the hard view that science involves the creation of testable hypotheses. The Drake equation cannot be tested and therefore SETI is not science. SETI is unquestionably a religion. Faith is defined as the firm belief in something for which there is no proof. The belief that the Koran is the word of God is a matter of faith. The belief that God created the universe in seven days is a matter of faith. The belief that there are other life forms in the universe is a matter of faith. There is not a single shred of evidence for any other life forms, and in forty years of searching, none has been discovered. There is absolutely no evidentiary reason to maintain this belief. SETI is a religion.

One way to chart the cooling of enthusiasm is to review popular works on the subject. In 1964, at the height of SETI enthusiasm, Walter Sullivan of the NY Times wrote an exciting book about life in the universe entitled WE ARE NOT ALONE. By 1995, when Paul Davies wrote a book on the same subject, he titled it ARE WE ALONE? (Since 1981, there have in fact been four books titled ARE WE ALONE.) More recently we have seen the rise of the so-called “Rare Earth” theory which suggests that we may, in fact, be all alone. Again, there is no evidence either way.

Back in the sixties, SETI had its critics, although not among astrophysicists and astronomers. The biologists and paleontologists were harshest. George Gaylord Simpson of Harvard sneered that SETI was a “study without a subject,” and it remains so to the present day. But scientists in general have been indulgent toward SETI, viewing it either with bemused tolerance, or with indifference. After all, what’s the big deal? It’s kind of fun. If people want to look, let them. Only a curmudgeon would speak harshly of SETI. It wasn’t worth the bother.

And of course, it is true that untestable theories may have heuristic value. Of course, extraterrestrials are a good way to teach science to kids. But that does not relieve us of the obligation to see the Drake equation clearly for what it is — pure speculation in quasi-scientific trappings.

The fact that the Drake equation was not greeted with screams of outrage — similar to the screams of outrage that greet each new Creationist claim, for example — meant that now there was a crack in the door, a loosening of the definition of what constituted legitimate scientific procedure. And soon enough, pernicious garbage began to squeeze through the cracks.

Now let’s jump ahead a decade to the 1970s, and Nuclear Winter.

In 1975, the National Academy of Sciences reported on “Long-Term Worldwide Effects of Multiple Nuclear Weapons Detonations” but the report estimated the effect of dust from nuclear blasts to be relatively minor. In 1979, the Office of Technology Assessment issued a report on “The Effects of Nuclear War” and stated that nuclear war could perhaps produce irreversible adverse consequences on the environment. However, because the scientific processes involved were poorly understood, the report stated it was not possible to estimate the probable magnitude of such damage.

Three years later, in 1982, the Swedish Academy of Sciences commissioned a report entitled “The Atmosphere after a Nuclear War: Twilight at Noon,” which attempted to quantify the effect of smoke from burning forests and cities. The authors speculated that there would be so much smoke that a large cloud over the northern hemisphere would reduce incoming sunlight below the level required for photosynthesis, and that this would last for weeks or even longer.

The following year, five scientists including Richard Turco and Carl Sagan published a paper in Science called “Nuclear Winter: Global Consequences of Multiple Nuclear Explosions.” This was the so-called TTAPS report, which attempted to quantify more rigorously the atmospheric effects, with the added credibility to be gained from an actual computer model of climate.

At the heart of the TTAPS undertaking was another equation, never specifically expressed, but one that could be paraphrased as follows:

Ds = Wn × Ws × Wh × T × Tb × Pt × Pr × Pe, etc.

(The amount of tropospheric dust = number of warheads × size of warheads × warhead detonation height × flammability of targets × target burn duration × particles entering the troposphere × particle reflectivity × particle endurance, and so on.)

The similarity to the Drake equation is striking. As with the Drake equation, none of the variables can be determined. None at all. The TTAPS study addressed this problem in part by mapping out different wartime scenarios and assigning numbers to some of the variables, but even so, the remaining variables were — and are — simply unknowable. Nobody knows how much smoke will be generated when cities burn, creating particles of what kind, and for how long. No one knows the effect of local weather conditions on the amount of particles that will be injected into the troposphere. No one knows how long the particles will remain in the troposphere. And so on.

And remember, this is only four years after the OTA study concluded that the underlying scientific processes were so poorly known that no estimates could be reliably made. Nevertheless, the TTAPS study not only made those estimates, but concluded they were catastrophic.

According to Sagan and his coworkers, even a limited 5,000 megaton nuclear exchange would cause a global temperature drop of more than 35 degrees Centigrade, and this change would last for three months. The greatest volcanic eruptions that we know of changed world temperatures somewhere between 0.5 and 2 degrees Centigrade. Ice ages changed global temperatures by 10 degrees. Here we have an estimated change three times greater than any ice age. One might expect it to be the subject of some dispute.

But Sagan and his coworkers were prepared, for nuclear winter was from the outset the subject of a well-orchestrated media campaign. The first announcement of nuclear winter appeared in an article by Sagan in the Sunday supplement, Parade. The very next day, a highly-publicized, high-profile conference on the long-term consequences of nuclear war was held in Washington, chaired by Carl Sagan and Paul Ehrlich, the most famous and media-savvy scientists of their generation. Sagan appeared on the Johnny Carson show 40 times. Ehrlich was on 25 times. Following the conference, there were press conferences, meetings with congressmen, and so on. The formal papers in Science came months later.

This is not the way science is done, it is the way products are sold.

The real nature of the conference is indicated by these artists’ renderings of the effect of nuclear winter.

I cannot help but quote the caption for figure 5: “Shown here is a tranquil scene in the north woods. A beaver has just completed its dam, two black bears forage for food, a swallow-tailed butterfly flutters in the foreground, a loon swims quietly by, and a kingfisher searches for a tasty fish.” Hard science if ever there was.

At the conference in Washington, during the question period, Ehrlich was reminded that after Hiroshima and Nagasaki, scientists were quoted as saying nothing would grow there for 75 years, but in fact melons were growing the next year. So, he was asked, how accurate were these findings now?

Ehrlich answered by saying “I think they are extremely robust. Scientists may have made statements like that, although I cannot imagine what their basis would have been, even with the state of science at that time, but scientists are always making absurd statements, individually, in various places. What we are doing here, however, is presenting a consensus of a very large group of scientists.”

I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled.

Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had.

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.

In addition, let me remind you that the track record of the consensus is nothing to be proud of. Let’s review a few cases.

In past centuries, the greatest killer of women was fever following childbirth. One woman in six died of this fever. In 1795, Alexander Gordon of Aberdeen suggested that the fevers were infectious processes, and he was able to cure them. The consensus said no. In 1843, Oliver Wendell Holmes claimed puerperal fever was contagious, and presented compelling evidence. The consensus said no. In 1849, Semmelweiss demonstrated that sanitary techniques virtually eliminated puerperal fever in hospitals under his management. The consensus said he was a Jew, ignored him, and dismissed him from his post. There was in fact no agreement on puerperal fever until the start of the twentieth century. Thus the consensus took one hundred and twenty-five years to arrive at the right conclusion despite the efforts of the prominent “skeptics” around the world, skeptics who were demeaned and ignored. And despite the constant ongoing deaths of women.

There is no shortage of other examples. In the 1920s in America, tens of thousands of people, mostly poor, were dying of a disease called pellagra. The consensus of scientists said it was infectious, and what was necessary was to find the “pellagra germ.” The US government asked a brilliant young investigator, Dr. Joseph Goldberger, to find the cause. Goldberger concluded that diet was the crucial factor. The consensus remained wedded to the germ theory. Goldberger demonstrated that he could induce the disease through diet. He demonstrated that the disease was not infectious by injecting the blood of a pellagra patient into himself, and his assistant. They and other volunteers swabbed their noses with swabs from pellagra patients, and swallowed capsules containing scabs from pellagra rashes in what were called “Goldberger’s filth parties.” Nobody contracted pellagra. The consensus continued to disagree with him. There was, in addition, a social factor — southern States disliked the idea of poor diet as the cause, because it meant that social reform was required. They continued to deny it until the 1920s. Result — despite a twentieth century epidemic, the consensus took years to see the light.

Probably every schoolchild notices that South America and Africa seem to fit together rather snugly, and Alfred Wegener proposed, in 1912, that the continents had in fact drifted apart. The consensus sneered at continental drift for fifty years. The theory was most vigorously denied by the great names of geology — until 1961, when it began to seem as if the sea floors were spreading. The result: it took the consensus fifty years to acknowledge what any schoolchild sees.

And shall we go on? The examples can be multiplied endlessly. Jenner and smallpox, Pasteur and germ theory. Saccharine, margarine, repressed memory, fiber and colon cancer, hormone replacement therapy. The list of consensus errors goes on and on.

Finally, I would remind you to notice where the claim of consensus is invoked. Consensus is invoked only in situations where the science is not solid enough. Nobody says the consensus of scientists agrees that E = mc². Nobody says the consensus is that the sun is 93 million miles away. It would never occur to anyone to speak that way.

But back to our main subject.

What I have been suggesting to you is that nuclear winter was a meaningless formula, tricked out with bad science, for policy ends. It was political from the beginning, promoted in a well-orchestrated media campaign that had to be planned weeks or months in advance.

Further evidence of the political nature of the whole project can be found in the response to criticism. Although Richard Feynman was characteristically blunt, saying, “I really don’t think these guys know what they’re talking about,” other prominent scientists were noticeably reticent. Freeman Dyson was quoted as saying “It’s an absolutely atrocious piece of science but who wants to be accused of being in favor of nuclear war?” And Victor Weisskopf said, “The science is terrible but — perhaps the psychology is good.” The nuclear winter team followed up the publication of such comments with letters to the editors denying that these statements were ever made, though the scientists have subsequently confirmed their views.

At the time, there was a concerted desire on the part of lots of people to avoid nuclear war. If nuclear winter looked awful, why investigate too closely? Who wanted to disagree? Only people like Edward Teller, the “father of the H bomb.”

Teller said, “While it is generally recognized that details are still uncertain and deserve much more study, Dr. Sagan nevertheless has taken the position that the whole scenario is so robust that there can be little doubt about its main conclusions.” Yet for most people, the fact that nuclear winter was a scenario riddled with uncertainties did not seem to be relevant.

I say it is hugely relevant. Once you abandon strict adherence to what science tells us, once you start arranging the truth in a press conference, then anything is possible. In one context, maybe you will get some mobilization against nuclear war. But in another context, you get Lysenkoism. In another, you get Nazi euthanasia. The danger is always there, if you subvert science to political ends.

That is why it is so important for the future of science that the line between what science can say with certainty, and what it cannot, be drawn clearly — and defended.

What happened to Nuclear Winter? As the media glare faded, its robust scenario appeared less persuasive; John Maddox, editor of Nature, repeatedly criticized its claims; within a year, Stephen Schneider, one of the leading figures in climate modeling, began to speak of “nuclear autumn.” It just didn’t have the same ring.

A final media embarrassment came in 1991, when Carl Sagan predicted on Nightline that Kuwaiti oil fires would produce a nuclear winter effect, causing a “year without a summer,” and endangering crops around the world. Sagan stressed this outcome was so likely that “it should affect the war plans.” None of it happened.

What, then, can we say were the lessons of Nuclear Winter? I believe the lesson was that with a catchy name, a strong policy position and an aggressive media campaign, nobody will dare to criticize the science, and in short order, a terminally weak thesis will be established as fact. After that, any criticism becomes beside the point. The war is already over without a shot being fired. That was the lesson, and we had a textbook application soon afterward, with second hand smoke.

In 1993, the EPA announced that second-hand smoke was “responsible for approximately 3,000 lung cancer deaths each year in nonsmoking adults,” and that it “impairs the respiratory health of hundreds of thousands of people.” In a 1994 pamphlet the EPA said that the eleven studies it based its decision on were not by themselves conclusive, and that they collectively assigned second-hand smoke a risk factor of 1.19. (For reference, a risk factor below 3.0 is too small for action by the EPA, or for publication in the New England Journal of Medicine, for example.) Furthermore, since there was no statistical association at the 95% confidence limits, the EPA lowered the limit to 90%. They then classified second-hand smoke as a Group A carcinogen.
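To make concrete what lowering the confidence level from 95% to 90% does, here is a small numerical sketch of my own; the relative risk of 1.19 is taken from the text above, but the standard error is an assumed, purely illustrative value rather than the EPA's actual meta-analysis number:

```python
# Illustrative only: the standard error below is assumed, not the EPA's figure.
import math

rr = 1.19            # relative risk cited above
se_log_rr = 0.10     # ASSUMED standard error of ln(RR), for illustration

for label, z in [("95% CI", 1.960), ("90% CI", 1.645)]:
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    print(f"{label}: ({lo:.2f}, {hi:.2f})  excludes 1.0? {lo > 1.0}")

# With these numbers the 95% interval spans 1.0 (no statistically
# significant association), while the narrower 90% interval just excludes it.
```

The point is not the particular numbers but the mechanism: a weak association that fails the conventional test can be made to "pass" simply by relaxing the confidence requirement.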

This was openly fraudulent science, but it formed the basis for bans on smoking in restaurants, offices, and airports. California banned public smoking in 1995. Soon, no claim was too extreme. By 1998, the Christian Science Monitor was saying that “Second-hand smoke is the nation’s third-leading preventable cause of death.” The American Cancer Society announced that 53,000 people died each year of second-hand smoke. The evidence for this claim is nonexistent.

In 1998, a Federal judge held that the EPA had acted improperly, had “committed to a conclusion before research had begun,” and had “disregarded information and made findings on selective information.” The reaction of Carol Browner, head of the EPA was: “We stand by our science; there’s wide agreement. The American people certainly recognize that exposure to second hand smoke brings a whole host of health problems.” Again, note how the claim of consensus trumps science. In this case, it isn’t even a consensus of scientists that Browner evokes! It’s the consensus of the American people.

Meanwhile, ever-larger studies failed to confirm any association. A large, seven-country WHO study in 1998 found no association. Nor have well-controlled subsequent studies, to my knowledge. Yet we now read, for example, that second-hand smoke is a cause of breast cancer. At this point you can say pretty much anything you want about second-hand smoke.

As with nuclear winter, bad science is used to promote what most people would consider good policy. I certainly think it is. I don’t want people smoking around me. So who will speak out against banning second-hand smoke? Nobody, and if you do, you’ll be branded a shill of RJ Reynolds. A big tobacco flunky. But the truth is that we now have a social policy supported by the grossest of superstitions. And we’ve given the EPA a bad lesson in how to behave in the future. We’ve told them that cheating is the way to succeed.

As the twentieth century drew to a close, the connection between hard scientific fact and public policy became increasingly elastic. In part this was possible because of the complacency of the scientific profession; in part because of the lack of good science education among the public; in part, because of the rise of specialized advocacy groups which have been enormously effective in getting publicity and shaping policy; and in great part because of the decline of the media as an independent assessor of fact. The deterioration of the American media is a dire loss for our country. When distinguished institutions like the New York Times can no longer differentiate between factual content and editorial opinion, but rather mix both freely on their front page, then who will hold anyone to a higher standard?

And so, in this elastic anything-goes world where science — or non-science — is the handmaiden of questionable public policy, we arrive at last at global warming. It is not my purpose here to rehash the details of this most magnificent of the demons haunting the world. I would just remind you of the now-familiar pattern by which these things are established. Evidentiary uncertainties are glossed over in the unseemly rush for an overarching policy, and for grants to support the policy by delivering findings that are desired by the patron. Next comes the isolation of those scientists who won’t get with the program, and the characterization of those scientists as outsiders and “skeptics” in quotation marks — suspect individuals with suspect motives, industry flunkies, reactionaries, or simply anti-environmental nut-cases. In short order, debate ends, even though prominent scientists are uncomfortable about how things are being done.

When did “skeptic” become a dirty word in science? When did a skeptic require quotation marks around it?

To an outsider, the most significant innovation in the global warming controversy is the overt reliance that is being placed on models. Back in the days of nuclear winter, computer models were invoked to add weight to a conclusion: “These results are derived with the help of a computer model.” But now, large-scale computer models are seen as generating data in themselves. No longer are models judged by how well they reproduce data from the real world — increasingly, models provide the data. As if they were themselves a reality. And indeed they are, when we are projecting forward. There can be no observational data about the year 2100. There are only model runs.

This fascination with computer models is something I understand very well. Richard Feynman called it a disease. I fear he is right. Because only if you spend a lot of time looking at a computer screen can you arrive at the complex point where the global warming debate now stands.

Nobody believes a weather prediction twelve hours ahead. Now we’re asked to believe a prediction that goes out 100 years into the future? And make financial investments based on that prediction? Has everybody lost their minds?

Stepping back, I have to say the arrogance of the model-makers is breathtaking. There have been, in every century, scientists who say they know it all. Since climate may be a chaotic system — no one is sure — these predictions are inherently doubtful, to be polite. But more to the point, even if the models get the science spot-on, they can never get the sociology. To predict anything about the world a hundred years from now is simply absurd.

Look: If I were selling stock in a company that I told you would be profitable in 2100, would you buy it? Or would you think the idea was so crazy that it must be a scam?

Let’s think back to people in 1900 in, say, New York. If they worried about people in 2000, what would they worry about? Probably: Where would people get enough horses? And what would they do about all the horseshit? Horse pollution was bad in 1900; think how much worse it would be a century later, with so many more people riding horses!

But of course, within a few years, nobody rode horses except for sport. And in 2000, France was getting 80% of its power from an energy source that was unknown in 1900. Germany, Switzerland, Belgium and Japan were getting more than 30% from this source, unknown in 1900. Remember, people in 1900 didn’t know what an atom was. They didn’t know its structure. They also didn’t know what a radio was, or an airport, or a movie, or a television, or a computer, or a cell phone, or a jet, an antibiotic, a rocket, a satellite, an MRI, ICU, IUD, IBM, IRA, ERA, EEG, EPA, IRS, DOD, PCP, HTML, internet, interferon, instant replay, remote sensing, remote control, speed dialing, gene therapy, gene splicing, genes, spot welding, heat-seeking, bipolar, prozac, leotards, lap dancing, email, tape recorder, CDs, airbags, plastic explosive, plastic, robots, cars, liposuction, transduction, superconduction, dish antennas, step aerobics, smoothies, twelve-step, ultrasound, nylon, rayon, teflon, fiber optics, carpal tunnel, laser surgery, laparoscopy, corneal transplant, kidney transplant, AIDS. None of this would have meant anything to a person in the year 1900. They wouldn’t have known what you were talking about.

Now. You tell me you can predict the world of 2100. Tell me it’s even worth thinking about. Our models just carry the present into the future. They’re bound to be wrong. Everybody who gives a moment’s thought knows it.

I remind you that in the lifetime of most scientists now living, we have already had an example of dire predictions set aside by new technology. I refer to the green revolution. In 1960, Paul Ehrlich said, “The battle to feed humanity is over. In the 1970s the world will undergo famines — hundreds of millions of people are going to starve to death.” Ten years later, he predicted four billion people would die during the 1980s, including 65 million Americans. The mass starvation that was predicted never occurred, and it now seems it isn’t ever going to happen. Nor is the population explosion going to reach the numbers predicted even ten years ago. In 1990, climate modelers anticipated a world population of 11 billion by 2100. Today, some people think the correct number will be 7 billion and falling. But nobody knows for sure.

But it is impossible to ignore how closely the history of global warming fits on the previous template for nuclear winter. Just as the earliest studies of nuclear winter stated that the uncertainties were so great that probabilities could never be known, so, too, the first pronouncements on global warming argued strong limits on what could be determined with certainty about climate change. The 1995 IPCC draft report said, “Any claims of positive detection of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced.” It also said, “No study to date has positively attributed all or part of observed climate changes to anthropogenic causes.” Those statements were removed, and in their place appeared: “The balance of evidence suggests a discernible human influence on climate.”

What is clear, however, is that on this issue, science and policy have become inextricably mixed to the point where it will be difficult, if not impossible, to separate them out. It is possible for an outside observer to ask serious questions about the conduct of investigations into global warming, such as whether we are taking appropriate steps to improve the quality of our observational data records, whether we are systematically obtaining the information that will clarify existing uncertainties, whether we have any organized disinterested mechanism to direct research in this contentious area.

The answer to all these questions is no. We don’t.

In trying to think about how these questions can be resolved, it occurs to me that in the progression from SETI to nuclear winter to second-hand smoke to global warming, we have one clear message, and that is that we can expect more and more problems of public policy dealing with technical issues in the future — problems of ever greater seriousness, where people care passionately on all sides.

And at the moment we have no mechanism to get good answers. So I will propose one.

Just as we have established a tradition of double-blinded research to determine drug efficacy, we must institute double-blinded research in other policy areas as well. Certainly the increased use of computer models, such as GCMs, cries out for the separation of those who make the models from those who verify them. The fact is that the present structure of science is entrepreneurial, with individual investigative teams vying for funding from organizations that all too often have a clear stake in the outcome of the research — or appear to, which may be just as bad. This is not healthy for science.

Sooner or later, we must form an independent research institute in this country. It must be funded by industry, by government, and by private philanthropy, both individuals and trusts. The money must be pooled, so that investigators do not know who is paying them. The institute must fund more than one team to do research in a particular area, and the verification of results will be a foregone requirement: teams will know their results will be checked by other groups. In many cases, those who decide how to gather the data will not gather it, and those who gather the data will not analyze it. If we were to address the land temperature records with such rigor, we would be well on our way to an understanding of exactly how much faith we can place in global warming, and therefore with what seriousness we must address this.

I believe that as we come to the end of this litany, some of you may be saying, well what is the big deal, really. So we made a few mistakes. So a few scientists have overstated their cases and have egg on their faces. So what?

Well, I’ll tell you.

In recent years, much has been said about the post-modernist claims about science to the effect that science is just another form of raw power, tricked out in special claims for truth-seeking and objectivity that really have no basis in fact. Science, we are told, is no better than any other undertaking. These ideas anger many scientists, and they anger me. But recent events have made me wonder if they are correct. We can take as an example the scientific reception accorded a Danish statistician, Bjorn Lomborg, who wrote a book called The Skeptical Environmentalist.

The scientific community responded in a way that can only be described as disgraceful. In professional literature, it was complained he had no standing because he was not an earth scientist. His publisher, Cambridge University Press, was attacked with cries that the editor should be fired, and that all right-thinking scientists should shun the press. The past president of the AAAS wondered aloud how Cambridge could have ever “published a book that so clearly could never have passed peer review.” (But of course, the manuscript did pass peer review by three earth scientists on both sides of the Atlantic, and all recommended publication.) But what are scientists doing attacking a press? Is this the new McCarthyism — coming from scientists?

Worst of all was the behavior of the Scientific American, which seemed intent on proving the post-modernist point that it was all about power, not facts. The Scientific American attacked Lomborg for eleven pages, yet only came up with nine factual errors despite their assertion that the book was “rife with careless mistakes.” It was a poor display, featuring vicious ad hominem attacks, including comparing him to a Holocaust denier. The issue was captioned: “Science defends itself against the Skeptical Environmentalist.” Really. Science has to defend itself? Is this what we have come to?

When Lomborg asked for space to rebut his critics, he was given only a page and a half. When he said it wasn’t enough, he put the critics’ essays on his web page and answered them in detail. Scientific American threatened copyright infringement and made him take the pages down.

Further attacks since have made it clear what is going on. Lomborg is charged with heresy. That’s why none of his critics needs to substantiate their attacks in any detail. That’s why the facts don’t matter. That’s why they can attack him in the most vicious personal terms. He’s a heretic.

Of course, any scientist can be charged as Galileo was charged. I just never thought I’d see the Scientific American in the role of Mother Church.

Is this what science has become? I hope not. But it is what it will become, unless there is a concerted effort by leading scientists to aggressively separate science from policy. The late Philip Handler, former president of the National Academy of Sciences, said that “Scientists best serve public policy by living within the ethics of science, not those of politics. If the scientific community will not unfrock the charlatans, the public will not discern the difference — science and the nation will suffer.”

Personally, I don’t worry about the nation. But I do worry about science.

Dangerous, Record-Breaking Cold to Invade Midwest, Chicago

January 24th, 2019

Lake Michigan ice as temperatures plunged to -16 deg. F in Chicago, IL on Jan. 6, 2014. The low temperature on Wednesday, January 30, 2019 could approach -30 deg F in the Chicago suburbs. (Getty Images)

A “Siberian Express” weather disturbance currently crossing the Arctic Ocean will meet up with the semi-permanent winter “polar vortex” over Canada, pushing a record-breaking cold air mass into the Upper Plains and Midwest U.S. by Wednesday.

Chicago All-Time Record Low?

Both the European (ECMWF) and U.S. (GFS) weather forecast models are in agreement that by Wednesday morning temperatures in the Chicago suburbs will be approaching -30 deg. F. The all-time official record low for the Chicago metro area was -27 deg. F (O’Hare) on January 20, 1985, and that 34-year-old record could fall, as the ECMWF model is forecasting -32 deg. F for Thursday morning while the GFS model is bottoming out at -26 deg. F on Wednesday morning. Of course, these forecasts will change somewhat in the coming days as the cold wave approaches.

Dangerous Wind Chills

Like the record-breaking event of January 1985, the frigid temperatures will be accompanied by strong winds — gusting to 20 to 30 mph — with wind chills plunging to -60 deg. F at times. This is dangerously cold, and I suspect schools will close, water lines will freeze, and travel will be discouraged. Again, this event is still several days away, but the public should be aware of the potential severity of this cold wave.
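As a rough check on those numbers, here is a short sketch (my own, not from the post) applying the standard NWS wind chill formula to temperatures and wind speeds in the ranges mentioned above:

```python
# NWS (2001) wind chill formula: T in deg F, wind speed V in mph
# (intended for T <= 50 F and V >= 3 mph).
def wind_chill_f(T, V):
    return 35.74 + 0.6215 * T - 35.75 * V**0.16 + 0.4275 * T * V**0.16

for T in (-25, -30):
    for V in (20, 25, 30):
        print(f"T = {T} F, wind = {V} mph -> wind chill {wind_chill_f(T, V):.0f} F")

# Around -30 F with 25-30 mph gusts, the formula gives wind chills in the
# low -60s F, consistent with the values quoted above.
```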

Not Just Chicago

The GFS forecast temperatures for Wednesday morning show that most of the upper Midwest will be well below zero, and temperatures might not get above -20 deg. F even at midday on Wednesday as far south as northern Indiana. Again, the strong northwest winds will be pushing this air southeast, and Thursday morning will also bring record-breaking cold into the Ohio River Valley.

Forecast temperatures Wednesday morning, January 30, 2019 from the GFS model. (Graphic courtesy of WeatherBell.com).

Total Lunar Eclipse Time Lapse, January 20-21, 2019

January 21st, 2019

I’ve been wanting to do this kind of time lapse of a lunar eclipse to show just how much change in brightness occurs when the full moon suddenly becomes nearly dark.

Most presentations of a lunar eclipse don’t really capture the darkening, just the change in color as the moon transitions from being illuminated by direct sunlight to the weak sunset glow from the annulus of scattered sunlight through Earth’s atmosphere.

I took three hours of photos, one every 23 seconds, at Little River Canyon, Alabama to make this. The camera settings were constant throughout (ISO 640, f/5.6, 20 sec exposures). The temperature was unusually cold, 26 deg. F, with low humidity, and there was a moderate wind out of the north. The video is best appreciated full-screen.

Tomorrow’s Total Lunar Eclipse, and a Mystery

January 19th, 2019

Tomorrow night (January 20-21) will present the whole U.S. with a total lunar eclipse, the best one until May 15, 2022.

Totality here in Alabama will occur approximately from 10:40 to 11:40 p.m. CST. Clear weather will be restricted mostly to the southeastern U.S., and portions of the Northern Plains and Great Lakes:

A Mystery (to me, anyway)

There’s one aspect of the eclipse I cannot figure out. I’m sure the explanation will be simple, and when someone explains it to me, my response will be, “DOH!”.

The illumination of the moon during totality is due to light scattered through Earth’s atmosphere. Just as we see red sunsets, that red light will be shining on the moon from an annulus of red sunset light circling the Earth.

What I don’t understand, though, is the role of sunlight refraction (bending of sunlight) as it passes through the atmosphere at an oblique angle. The refraction occurs whether it is the moon or the sun being viewed through the limb, and I will use the example of moonlight shining through the limb.

My understanding is that light (from either the moon or sun) bends as I crudely show in the following cartoon. The “mystery” arises from the fact that we know that the appearance of the moon is that it is flattened due to refraction (this is NOT a diagram of what is happening during the eclipse… it’s a general question about how either sunlight or moonlight is refracted as it passes close to Earth’s limb):

The moon composite photo is from the ISS, so it is exactly analogous to the situation shown in the drawing.

So, the mystery: Why is the moon flattened rather than elongated? I simply don’t know. But I’m sure the explanation is simple.

Update: Mystery Solved

As I suspected, the problem was in the way I was looking at it. As Brent Auvermann suggests in the comments, here’s the proper way to look at it. The eye sees the top and bottom of the moon at 2 slightly different angles, which are normally separated by 0.5 deg. But when the view in the direction of the bottom of the moon (0.5 deg. below the top of the moon) goes through a lot of atmosphere, it gets refracted downward, and the view from that direction comes from below the moon. In other words, what is a 0.5 degree subtended angle viewed by the eye actually originates from a bigger angle than that on the other side of the Earth’s limb. That causes the bottom of the moon to be compressed into a smaller angle (flattened):
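The same compression can be put in rough numbers. The sketch below is my own illustration, using Bennett's standard refraction approximation (an assumption, not something from the post) to show that the lower limb is lifted several arcminutes more than the upper limb, squashing the roughly half-degree disk:

```python
import math

def refraction_arcmin(h_deg):
    """Bennett's (1982) approximation for atmospheric refraction at
    apparent altitude h (degrees); returns refraction in arcminutes."""
    return 1.0 / math.tan(math.radians(h_deg + 7.31 / (h_deg + 4.4)))

r_top = refraction_arcmin(0.5)     # upper limb ~0.5 deg up: ~29 arcmin of lift
r_bottom = refraction_arcmin(0.0)  # lower limb on the horizon: ~34-35 arcmin of lift

compression = r_bottom - r_top
print(f"top limb lifted    {r_top:.1f}'")
print(f"bottom limb lifted {r_bottom:.1f}'")
print(f"differential refraction squashes the ~30' disk by about {compression:.1f}'")
```

Because the bottom of the disk is raised more than the top, the Moon's roughly 30 arcminute vertical extent appears several arcminutes shorter: flattened, not elongated.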

Ocean Warming in Climate Models Varies Far More than Recent Study Suggests

January 17th, 2019

I wanted to expand upon something that was mentioned in yesterday’s blog post about the recent Cheng et al. paper, which was widely reported with headlines suggesting a newer estimate of the rate of ocean warming is 40% higher than old estimates from the IPCC AR5 report in 2013. I demonstrated that the new dataset was only 11% warmer when compared to the AR5 best estimate of ocean warming during 1971-2010.

The point I want to reemphasize today is the huge range in ocean warming between the 33 models included in that study. Here’s a plot based upon data from Cheng’s website which, for the period in question (1971-2010), shows a factor-of-8 range between the model with the least ocean warming and the model with the most warming, based upon linear trends fitted to the model curves:

Yearly ocean heat content (OHC) changes since 1971 in 33 models versus the recent Cheng reanalysis of XBT and Argo ocean temperature data for the surface to 2,000m layer. The vertical scale is in both ZettaJoules (10^21 Joules) and in deg. C (assuming an ocean area of 3.6 x 10^14 m^2). The Cheng et al. confidence interval has been inflated by 1.43 to account for the difference between the surface area of the Earth (Cheng et al. usage) and the actual ocean surface area.

I have also included Cheng’s reanalysis of ocean heat content (OHC) data over the same period of time, showing how well it fits the *average* of all 33 models included in the study. Cheng’s OHC dataset is now the warmest of all reanalyzed OHC datasets, which means (mark my words) it will gain the greatest favor in the next IPCC report.

Mark. My. Words.

What is disconcerting is the huge (8x) range in ocean warming between models for the period 1971-2010. This is partly due to continuing uncertainty in climate sensitivity (ranging over a factor of ~3 according to the IPCC), but also due to uncertainties in how much aerosol forcing has occurred, especially in the first half of the period in question. The amount of climate system warming in models or in nature is a function of both forcing and the system response to that forcing.

If models are based upon fundamental physical principles, as we are often told, how can they give such a wide range of results? The answer, of course, is that there are some physical processes which are not well known, for example how clouds and upper tropospheric water vapor change with warming. The devil is in the details.

Dodgy Statistics

One of the problems with the results in the Cheng et al. study is how the 90% confidence intervals are computed. Most people will simply assume they are related to how well the stated numbers are known. That is, how good the various observational and model estimates of ocean warming are.

But they would be wrong.

The confidence intervals given in the paper (we are told at the end of the Supplementary Materials section) simply refer to how well each time series of OHC (whether observations or models) is fit by a regression line.

They have nothing to do with how good a certain OHC dataset is. In fact, they assume (as John Christy pointed out to me) each dataset is perfect!

In the above plot I show the difference between the quoted 90% confidence interval in the paper for the models, and the 90% confidence interval I computed, which represents the variability between the models’ warming trends and is much more informative to most readers. The difference is huge.
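To see why the two intervals differ so much, here is a synthetic-data sketch of my own (the numbers are invented, not Cheng et al.'s): it contrasts the regression error of a trend fitted to the 33-model average series with the spread of the 33 individual model trends:

```python
# Synthetic illustration: 33 fake "model" OHC series with widely differing
# trends. Compare (a) the regression error of the trend fitted to the
# 33-model MEAN series with (b) the spread of the individual model trends.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1971, 2011)
trends = rng.uniform(0.11, 0.92, size=33)         # invented spread of trends
series = np.array([t * (years - 1971) + rng.normal(0, 1.0, years.size)
                   for t in trends])

x = years - years.mean()

# (a) trend of the ensemble mean and its regression standard error
mean_series = series.mean(axis=0)
slope = (x @ (mean_series - mean_series.mean())) / (x @ x)
resid = mean_series - mean_series.mean() - slope * x
se_slope = np.sqrt((resid @ resid) / (years.size - 2) / (x @ x))

# (b) spread of the 33 individual model trends
indiv = np.array([(x @ (s - s.mean())) / (x @ x) for s in series])

print(f"ensemble-mean trend: {slope:.3f} +/- {1.645 * se_slope:.3f}  (90% regression CI)")
print(f"individual trends:   {indiv.mean():.3f} +/- {1.645 * indiv.std(ddof=1):.3f}  (90% inter-model spread)")
```

The first interval only says how straight the averaged curve is; the second says how much the models actually disagree, which is the number most readers care about.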

What Cheng et al. provided for confidence intervals isn’t “wrong”. It’s simply misleading for most readers who are trying to figure out how good these various observational OHC trends are, or how uncertain the climate model OHC trends are.

Is the Average of the Climate Models Better than the Individual Models?

Cheng et al. only deal with the 33-model average, and don’t mention the huge inter-model differences. One might claim that the average of the 33 models is probably better than the individual models, anyway.

But I’m not so sure one can make such an argument.

The various climate models cannot be viewed as some sort of “truth model” about which the various modeling groups around the world have added noise. If that were the case then, yes, the average of all the models would give the best result.

Instead, each modeling group makes their own best estimate of a wide variety of forcings and physical processes in the models, and they get a wide variety of results. It is not clear which of them is closest to the truth. It could be that an outlier model is best. For example, the model with the closest agreement with our (UAH) satellite tropospheric temperatures since 1979 is the Russian model, which wasn’t even included in the new study. That model has the lowest rate of tropospheric warming since 1979 out of over 100 models we have checked.

The new OHC dataset might reduce uncertainty somewhat (although we still don’t know how accurate it is), but one also has to evaluate surface temperature trends, tropospheric temperature trends (which I believe are telling us water vapor feedback isn’t as strong as in the models), as well as uncertainties in forcings which, even if the models contained perfect physics, would still lead to different projected rates of warming.

Given all of the uncertainties, I think we are still far from understanding just how much future warming will occur from increasing CO2 concentrations in the atmosphere.

Media Reports of +40% Adjustment in Ocean Warming Were Greatly Exaggerated

January 16th, 2019

Summary: The recently reported upward adjustment in the 1971-2010 Ocean Heat Content (OHC) increase compared to the last official estimate from the IPCC is actually 11%, not 40%. The 40% increase turns out to be relative to the average of various OHC estimates the IPCC addressed in their 2013 report, most of which were rejected. Curiously, the new estimate is almost identical to the average of 33 CMIP climate models, yet the models themselves range over a factor of 8 in their rates of ocean warming. Also curious is the warmth-enhancing nature of temperature adjustments over the years from surface thermometers, radiosondes, satellites, and now ocean heat content, with virtually all data adjustments leading to more warming rather than less.

I’ve been trying to make sense out of the recent Science paper by Cheng et al. entitled How Fast are the Oceans Warming? The news headlines I saw which jumped out at me (and several others who asked me about them) were:

World’s Oceans Warming 40% Faster than Previously Thought (EcoWatch.com),

The oceans are heating up 40% faster than scientists realized which means we should prepare for more disastrous flooding and storms (businessinsider.com)

For those who read the paper, let me warn you: The paper itself does not have enough information to figure out what the authors did, but the Supplementary Materials for the paper provide some of what is needed. I suspect this is due to editorial requirements by Science to make articles interesting without excessive fact mongering.

One of the conclusions of the paper is that Ocean Heat Content (OHC) has been rising more rapidly in the last couple decades than in previous decades, but this is not a new finding, and I will not discuss it further here.

Of more concern is the implication that this paper introduces some new OHC dataset that significantly increases our previous estimates of how much the oceans have been warming.

As far as I can tell, this is not the case.

Dazed and Confused

Most of the paper deals with just how much the global oceans from the surface to 2,000 m depth warmed during the period 1971-2010 (40 years) which was also a key period in the IPCC 5th Assessment Report (AR5).

And here’s where things get confusing, and I wasted hours figuring out how they got their numbers because the authors did not provide sufficient information.

Part of the confusion comes from the insistence of the climate community on reporting ocean warming in energy content units of zettajoules (a zettajoule is 1,000,000,000,000,000,000,000 Joules, which is a billion trillion Joules… also a sextillion Joules, but male authors fear calling it that), rather than in what is actually measured (degrees). This leads to confusion because almost nowhere is it ever stated what assumed area of ocean was used in the computation of OHC (which is proportional to both temperature change and the volume of seawater involved in that temperature change). I’ve looked in this paper and other papers (including Levitus), and only in the 2013 IPCC report (AR5) did I find the value 3.6 x 10^14 square meters given for ocean area. (Just because we know the area of the global oceans doesn’t mean that is what is monitored, or what was used in the computation of OHC).

Causing still further confusion is that Cheng et al. then (apparently) take the ocean area, and normalize it by the entire area of the Earth, scaling all of their computed heat fluxes by 0.7. I have no idea why, since their paper does not deal with the small increase in heat content of the land areas. This is just plain sloppy, because it complicates and adds uncertainty when others try to replicate their work.

It also raises the question: why energy content? We don’t do that for the atmosphere. Instead, we use what is measured — degrees. The only reason I can think of is that the ocean temperature changes involved are exceedingly tiny, either hundredths or thousandths of a degree C, depending upon what ocean layer is involved and over what time period. Such tiny changes would not generate the alarm that a billion-trillion Joules would (or the even scarier Hiroshima bomb-equivalents).
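A back-of-envelope conversion shows just how tiny. The sketch below is my own, using the AR5 ocean area quoted below and standard seawater properties (all assumptions are marked in the code) to translate the AR5-era heating rate into a 0-2000 m layer temperature change:

```python
# Back-of-envelope sketch; assumed values are marked.
OCEAN_AREA = 3.6e14          # m^2, ocean area cited from the IPCC AR5 report
DEPTH = 2000.0               # m, the 0-2000 m layer discussed in the paper
RHO, CP = 1025.0, 4000.0     # kg/m^3 and J/(kg K), assumed seawater properties
SECONDS_PER_YEAR = 3.156e7

heat_capacity = OCEAN_AREA * DEPTH * RHO * CP     # J per deg C, roughly 3e24

flux = 0.50                  # W/m^2 over the ocean (the AR5-era rate, see below)
years = 40                   # 1971-2010

joules = flux * OCEAN_AREA * SECONDS_PER_YEAR * years
warming = joules / heat_capacity
print(f"energy gain: ~{joules / 1e21:.0f} ZJ over {years} years")
print(f"0-2000 m layer warming: ~{warming:.2f} deg C total, "
      f"or ~{warming / years:.4f} deg C per year")
```

About 230 zettajoules sounds enormous; roughly 0.08 deg C over four decades, or two thousandths of a degree per year, does not.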

But I digress.

The Results

I think I finally figured out what Cheng et al. did (thanks mostly to finding the supporting data posted at Cheng’s website).

The “40%” headlines derive from this portion of the single figure in their paper, where I have added in red information which is either contained in the Supplementary Materials (3-letter dataset IDs from the authors’ names) or are my own annotations:

The five different estimates of 40-year average ocean heating rates from the AR5 report (gray bars) are around 40% below the newer estimates (blue bars), but the AR5 report did not actually use these five in their estimation — they ended up using only the highest of these (Domingues et al., 2008). As Cheng mentions, the pertinent section of the IPCC report is the “Observations: Oceans” section of Working Group 1, specifically Box 3.1, which contains the numerical facts one can factmonger with.

From the discussion in Box 3.1, one can compute that the AR5-estimated energy accumulation rate in the 0-2000 m ocean layer (NOT adjusted for total area of the Earth) during 1971-2010 corresponds to an energy flux of 0.50 Watts per sq. meter. This can then be compared to newer estimates computed from Cheng’s website data (which is stated to be the data used in the Science study) of 0.52 W/m2 (DOM), 0.51 W/m2 (ISH), and 0.555 W/m2 (CHG).

Significantly, even if we use the highest of these estimates (Cheng’s own dataset) we only get an 11% increase above what the IPCC claimed in 2013 — not 40%.
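The arithmetic is simple enough to check directly from the flux values quoted above:

```python
# Sanity check of the "+11%" figure using the fluxes quoted above (W/m^2,
# not scaled by 0.7 to a whole-Earth value).
ar5 = 0.50                                   # AR5 Box 3.1 estimate, 1971-2010
newer = {"DOM": 0.52, "ISH": 0.51, "CHG": 0.555}

for name, val in newer.items():
    print(f"{name}: {100 * (val / ar5 - 1):+.0f}% relative to AR5")
# CHG (Cheng's own dataset) comes out about +11% above AR5, not +40%.
```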

Agreement Between Models and Observations

Cheng’s website also contains the yearly 0-2000m OHC data from 33 CMIP5 models, from which I calculated the average warming rate, getting 0.549 W/m2 (again, not scaled by 0.7 to get a whole-Earth value). This is amazingly close to Cheng’s 0.555 W/m2 he gets from reanalysis of the deep-ocean temperature data.

This is pointed to as evidence that observations support the climate models which, in turn, are of course the basis for proposed energy policy changes and CO2 emissions reduction.

How good is that multi-model warming rate? Let me quote the Science article (again, these number are scaled by 0.7):

“The ensemble average of the models has a linear ocean warming trend of 0.39 +/- 0.07 W/m2 for the upper 2000 m from 1971-2010 compared with recent observations ranging from 0.36 to 0.39 W/m2.”

See that +/- 0.07 error bar on the model warming rate? That is not a confidence interval on the warming rate. It’s the estimated error in the fit of a regression line to the 33-model average warming trace during 1971-2010. It says nothing about how confident we are in the warming rate, or even the range of warming rates BETWEEN models.

And that variation between the models is where things REALLY get interesting. Here’s what those 33 models’ OHC warming profiles look like, relative to the beginning of the period (1971), which shows they range over a factor of 8 (from 0.11 W/m2 to 0.92 W/m2) for the period 1971-2010!

What do we make of a near-perfect level of agreement (between Cheng’s reanalysis of OHC warming from observational data, and the average of 33 climate models), when those models themselves disagree with each other by up to a factor of 8 (700%)?

That is a remarkable stroke of luck.

It’s Always Worse than We Thought

It is also remarkable how virtually every observational dataset — whether (1) surface temperature from thermometers, (2) deep-ocean temperature measurements, or atmospheric temperature from (3) satellites and (4) radiosondes — when reanalyzed for the same period, always ends up with more (not less) warming. What are the chances of this? It’s like flipping a coin and almost always getting heads.

Again, a remarkable stroke of luck.

Chuck Todd Devotes an Hour to Attacking a Strawman

January 3rd, 2019

or, All Credentialed Journalists are Sex Abusers

Meet the Depressed host Chuck Todd, sans brain.

Chuck Todd, on a recent episode of Meet the Press, highlighted the issue of global warming and climate change. He unapologetically made it clear that he wasn’t interested in hearing from people on the opposing side of the scientific issue, stating:

“We’re not going to debate climate change, the existence of it. The Earth is getting hotter. And human activity is a major cause, period. We’re not going to give time to climate deniers. The science is settled, even if political opinion is not.”

This is what’s called a “strawman” argument, where you argue against something your opponent never even claimed.

I cannot think of a single credentialed, published skeptical climate scientist who doesn’t believe in the “existence” of climate change, or that “the Earth is getting hotter”, or even that human activity is likely a “major cause”. Pat Michaels, Richard Lindzen, Judith Curry, John Christy, and myself (to name a few) all believe these things. That journalists continue to characterize us as having extremist views shows just how far journalism has fallen as a (somewhat) respectable profession.

What if I claimed that all journalists are sex abusers? Of course, no reasonable person would believe that. Yet, I would wager that up to half of the U.S. population has been led to believe that climate change skeptics are “deniers” (as in, Holocaust deniers), about whom journalist Ellen Goodman said 12 years ago,

“Let’s just say that global warming deniers are now on a par with Holocaust deniers”

At least my hypothetical claim that “journalists are sex abusers” is statistically more accurate than journalists’ claims that we skeptical scientists “deny” this, that, and the other thing (for those allegations, see Mark Halperin, Matt Lauer, Tom Brokaw, Charlie Rose, Tavis Smiley, Michael Oreskes, and others).

The fact is that even if humans are, say, 60% responsible for the warming of the global ocean and atmosphere over the last 60 years (which would be consistent with both the UN IPCC’s and Todd’s phrasing), the latest analyses (Lewis & Curry, 2018) of what this would mean lead to an eventual warming of only 1 deg. C from a doubling of atmospheric CO2 (we are currently about halfway to that doubling). That’s only 1/3 of what the IPCC claims is going to happen, and an even smaller fraction of what the ratings-boosting extremists whom journalists like to trot out will claim.
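To show the shape of that arithmetic, here is a hedged sketch of a simple energy-budget estimate of the kind Lewis & Curry use. The numerical inputs are round, illustrative values of my own choosing, not their published ones; the point is only that scaling the attributed warming down to 60% scales the implied sensitivity down in proportion.

# Hypothetical energy-budget arithmetic (illustrative values, not Lewis & Curry's):
F_2XCO2 = 3.7      # W/m2 forcing from a CO2 doubling (standard value)
dT_obs  = 0.8      # observed warming over the analysis period, deg C (illustrative)
dF      = 2.5      # change in total forcing over the period, W/m2 (illustrative)
dQ      = 0.6      # change in ocean heat uptake, W/m2 (illustrative)

def sensitivity(attribution_fraction):
    """Warming per CO2 doubling if only this fraction of the observed warming is human-caused."""
    return F_2XCO2 * (attribution_fraction * dT_obs) / (dF - dQ)

print(sensitivity(1.0))   # ~1.6 C if ALL of the warming is human-caused
print(sensitivity(0.6))   # ~0.9 C if 60% is human-caused, roughly the "1 deg. C" in the text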

A Nuance Chuck Todd is Ill-Prepared to Discuss

Journalists are notoriously under-informed on science issues. For example, let’s look at the claim that recent warming has been human-caused. It is easy to show that such attribution is more faith-based than science-based.

Between 2005 and 2017, the global network of thousands of Argo floats has measured an average temperature increase of the upper half of the ocean of 0.04 deg. C. That’s less than 0.004 C/year, an inconceivably small number.

Significantly, it represents an imbalance in energy flows in and out of the climate system of only 1 part in 260. That’s less than 0.5%, and climate science does not know any of the NATURAL flows of energy to that level of accuracy. The tiny energy imbalance causing the warming is simply ASSUMED to be the fault of humans and not part of some natural cycle in the climate system. Climate models are adjusted in a rather ad hoc manner until their natural energy flows balance; then increasing CO2 from fossil fuels is used as the forcing (imposed energy imbalance) that causes the warming.
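A rough back-of-the-envelope version of that arithmetic, using round textbook constants of my own choosing (so the exact ratio will differ somewhat from the figure quoted above), looks like this:

# Convert 0.04 deg. C of warming in the upper half of the ocean (2005-2017) into a
# global-average energy imbalance, then compare it to the ~240 W/m2 of natural
# energy flow in and out of the climate system. All constants are rounded estimates.
UPPER_OCEAN_MASS_KG = 0.7e21        # roughly half of the ocean's ~1.4e21 kg
CP_SEAWATER = 4000.0                # J/kg/K
EARTH_AREA_M2 = 5.1e14
SECONDS = 12 * 3.156e7              # 2005-2017

energy_joules = UPPER_OCEAN_MASS_KG * CP_SEAWATER * 0.04
imbalance_wm2 = energy_joules / SECONDS / EARTH_AREA_M2
print(imbalance_wm2)                 # ~0.6 W/m2
print(imbalance_wm2 / 240.0)         # a few tenths of one percent of the natural flows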

That’s circular reasoning. Or, some might say, garbage in, garbage out.

The belief in human-caused warming exceeding a level that would be relatively benign, and maybe even beneficial, is just that — a belief. It is not based upon known, established, and quantified scientific principles. It is based upon the assumption that natural climate change does not exist.

So, journalists do a lot of talking about things of which they know nothing. As Scarecrow from the Wizard of Oz said in 1939, “Some people without brains do an awful lot of talking, don’t they?”

UAH Global Temperature Update for December 2018: +0.25 deg. C

January 2nd, 2019

2018 was 6th Warmest Year Globally of Last 40 Years

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for December, 2018 was +0.25 deg. C, down a little from +0.28 deg. C in November:

Global area-averaged lower tropospheric temperature anomalies (departures from 30-year calendar monthly means, 1981-2010). The 13-month centered average is meant to give an indication of the lower frequency variations in the data; the choice of 13 months is somewhat arbitrary… an odd number of months allows centered plotting on months with no time lag between the two plotted time series. The inclusion of two of the same calendar months on the ends of the 13 month averaging period causes no issues with interpretation because the seasonal temperature cycle has been removed, and so has the distinction between calendar months.
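For clarity, a minimal sketch of what that 13-month centered average amounts to (my own illustrative code, not the processing actually used for the plot):

import numpy as np

def centered_13mo(anomalies):
    """13-month centered running mean of a monthly anomaly series (a sketch).
    An odd window centers cleanly on a month; because these are anomalies
    (seasonal cycle removed), having the same calendar month at both ends
    of the window does not bias the average."""
    out = np.full(len(anomalies), np.nan)
    for i in range(6, len(anomalies) - 6):
        out[i] = np.mean(anomalies[i - 6:i + 7])
    return out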

Various regional LT departures from the 30-year (1981-2010) average for the last 24 months are:

YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST
2017 01 +0.33 +0.32 +0.35 +0.11 +0.28 +0.95 +1.22
2017 02 +0.39 +0.58 +0.20 +0.08 +2.16 +1.33 +0.22
2017 03 +0.23 +0.37 +0.10 +0.06 +1.22 +1.24 +0.98
2017 04 +0.28 +0.29 +0.27 +0.22 +0.90 +0.23 +0.40
2017 05 +0.45 +0.40 +0.50 +0.42 +0.11 +0.21 +0.06
2017 06 +0.22 +0.34 +0.10 +0.40 +0.51 +0.10 +0.34
2017 07 +0.29 +0.31 +0.28 +0.51 +0.61 -0.27 +1.03
2017 08 +0.41 +0.41 +0.42 +0.47 -0.54 +0.49 +0.78
2017 09 +0.55 +0.52 +0.58 +0.54 +0.30 +1.06 +0.60
2017 10 +0.64 +0.67 +0.60 +0.48 +1.22 +0.83 +0.86
2017 11 +0.36 +0.34 +0.39 +0.27 +1.36 +0.68 -0.12
2017 12 +0.42 +0.50 +0.33 +0.26 +0.45 +1.37 +0.36
2018 01 +0.26 +0.46 +0.06 -0.11 +0.59 +1.36 +0.43
2018 02 +0.20 +0.25 +0.16 +0.04 +0.92 +1.19 +0.18
2018 03 +0.25 +0.40 +0.10 +0.07 -0.32 -0.33 +0.60
2018 04 +0.21 +0.32 +0.11 -0.12 -0.00 +1.02 +0.69
2018 05 +0.18 +0.41 -0.05 +0.03 +1.93 +0.18 -0.39
2018 06 +0.21 +0.38 +0.04 +0.12 +1.20 +0.83 -0.55
2018 07 +0.32 +0.43 +0.22 +0.29 +0.51 +0.29 +1.37
2018 08 +0.19 +0.22 +0.17 +0.13 +0.07 +0.09 +0.26
2018 09 +0.15 +0.15 +0.14 +0.24 +0.88 +0.21 +0.19
2018 10 +0.22 +0.31 +0.13 +0.34 +0.25 +1.11 +0.39
2018 11 +0.28 +0.27 +0.30 +0.50 -1.13 +0.69 +0.53
2018 12 +0.25 +0.32 +0.19 +0.32 +0.20 +0.65 +1.19

The 2018 globally averaged temperature anomaly, adjusted for the number of days in each month, is +0.23 deg. C, making 2018 the 6th warmest year in the now-40 year satellite record of global lower tropospheric temperature variations.

The linear temperature trend of the global average lower tropospheric temperature anomalies from January 1979 through December 2018 remains at +0.13 C/decade.
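As a small worked example of the two summary numbers above, here is the day-weighted 2018 average computed from the GLOBE column of the table; the trend calculation is only sketched in a comment, since it needs the full January 1979 through December 2018 monthly series.

import numpy as np

days_2018 = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
globe_2018 = np.array([0.26, 0.20, 0.25, 0.21, 0.18, 0.21,
                       0.32, 0.19, 0.15, 0.22, 0.28, 0.25])
annual_2018 = np.average(globe_2018, weights=days_2018)   # ~ +0.23 deg. C

# The +0.13 C/decade figure is a least-squares trend of the full monthly series
# (call it `anoms`), e.g.:
#   months = np.arange(len(anoms))
#   trend_per_decade = np.polyfit(months, anoms, 1)[0] * 120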

The UAH LT global anomaly image for December, 2018 should be available in the next few days here.

The new Version 6 files should also be updated at that time, and are located here:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt

Government Shutdown Delays UAH Global Temperature Update

January 1st, 2019

The NOAA CLASS system, from which we obtain our satellite orbit files (raw data), has been taken offline until the government shutdown ends. As a result, our UAH monthly global temperature update is delayed.

UPDATE: We have a separate data feed and so I’ll be able to post results tomorrow, Jan. 2.

Giving Credit to Willis Eschenbach

December 31st, 2018

The non-greenhouse theory of Nikolov (and now Zeller-Nikolov) continues to live on, most recently in this article I’ve been asked about on social media.

In short, it is the theory that there really isn’t a so-called “greenhouse effect”, and that the excess of planetary surface temperatures on Earth, Venus, and other planets above the Stefan-Boltzmann (SB) temperature calculated from the rate of absorbed solar radiation is due to compressional heating by the atmosphere.

This is a popular alternative explanation that I am often asked about. Of course, if there is no “greenhouse effect”, we don’t have to worry about increasing CO2 in the atmosphere and all of the global warmmongers can go home.

I have posted on this blog many times over the years all of the evidence I can think of to show there really is a greenhouse effect, but it is never enough to change the minds of those who have already convinced themselves that planetary surface temperatures are only a function of (1) absorbed sunlight and (2) atmospheric pressure, as Zeller and Nikolov claim.

I’ve always had the nagging suspicion there was a simpler proof that the Zeller-Nikolov theory was wrong, but I could never put my finger on it. My co-worker, Danny Braswell (a PhD computational physicist) and I have joked over the years that we tend to make problems too difficult… we’ve spent days working a problem when the simple solution was staring us in the face all along.

Enter citizen scientist Willis Eschenbach, a frequent contributor at Wattsupwiththat.com, who back in 2012 posted there a “proof” that Nikolov was wrong. The simplicity of the proof makes it powerful, indeed. I don’t know why I did not notice it at the time. My apologies to Willis.

Basically, the proof starts with the simplified case of the average planetary temperature without an atmosphere, which can be calculated using a single equation (the Stefan-Boltzmann equation). Conceptually, in the absence of an atmosphere, sunlight will heat the surface and the temperature will rise until the rate of emitted infrared radiation from the surface to outer space equals the rate of absorbed solar energy. (To be accurate, one needs to take into account the fact that the planet is rotating and spherical, the rate of heat conduction into the sub-surface, and the planet’s albedo (solar reflectivity) and infrared emissivity.)
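A minimal sketch of that Stefan-Boltzmann calculation, using Earth-like round numbers for illustration and ignoring the rotation and conduction caveats just mentioned:

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m2/K^4

def sb_equilibrium_temp(solar_const_wm2, albedo, emissivity=1.0):
    """Global-average equilibrium temperature of an airless planet:
    absorbed sunlight (spread over the sphere, hence the /4) = emitted IR."""
    absorbed = solar_const_wm2 * (1.0 - albedo) / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(sb_equilibrium_temp(1361.0, 0.3))   # ~255 K, well below Earth's observed ~288 K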

The SB equation always results in a surface temperature that is too cold compared to surface temperatures when an atmosphere is present, and greenhouse theory is traditionally invoked to explain the difference.

Significantly, Willis pointed out that if atmospheric pressure is instead what raises the temperature above the S-B value, as the Zeller-Nikolov theory claims, the rate of energy loss by infrared radiation will then go up (for the same reason a hotter fire feels hotter on your skin at a distance). But now the energy loss by the surface is greater than the energy gained, and energy is no longer conserved. Thus, warming cannot occur from increasing pressure alone.

In other words, without the inclusion of the greenhouse effect (which has downward IR emission by the atmosphere reducing the net loss of IR by the surface), the atmospheric pressure hypothesis of Zeller-Nikolov cannot explain surface temperatures above the Stefan-Boltzmann value without violation of the fundamental 1st Law of Thermodynamics: Conservation of Energy.
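Numerically, the contradiction is easy to see; here is a tiny illustration with round numbers of my own choosing:

SIGMA = 5.67e-8
absorbed = 238.0                        # W/m2 of absorbed sunlight (illustrative)
T_sb = (absorbed / SIGMA) ** 0.25       # ~255 K, the no-atmosphere equilibrium

# Suppose pressure alone (with no downward IR from the atmosphere) raised the
# surface to 288 K, as the no-greenhouse theory requires:
emitted_at_288 = SIGMA * 288.0 ** 4     # ~390 W/m2 leaving the surface
# 390 W/m2 out versus 238 W/m2 in: the surface loses energy and must cool back
# toward T_sb, so sustained warming above the S-B value requires the atmosphere's
# downward IR emission.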

This is a simple and elegant proof that radiation from the atmosphere does indeed warm the surface above the S-B value. This will be my first go-to argument from now on when asked about the no-greenhouse theory.

I like to give credit where credit is due, and Willis provided a valuable contribution here.

(For those who are not so scientifically inclined, I still like the use of a simple hand-held IR thermometer to demonstrate that the cold atmosphere can actually cause a warmer surface to become warmer still [and, no, the 2nd Law of Thermodynamics is not violated]).