
Analysis: Can Computer Models Predict Climate?

By Christopher Essex

It is well known that daytime winter temperatures on Earth can fall well below -4°F (-20°C) in some places, even in midlatitudes, despite warming worries. Sometimes the surface can even drop below -40°F (-40°C), which is comparable to the surface of Mars. What is not so well known is that such cold winter days are colder than they would be with no atmosphere at all!

How can that be if the atmosphere is like a blanket, according to the standard greenhouse analogy? If the greenhouse analogy fails, what is climate?

Climate computer models in the 1960s could not account for this non-greenhouse-like picture. Modern computer models are better than those old models, but the climate implications of an atmosphere that cools as well as warms have not been embraced. Will computer models be able to predict climate once those implications are? The meteorological program for climate has been underway for more than 40 years. How has it done?

Feynman, Experiment and Climate Models
“Model” is used in a peculiar manner in the climate field. In other fields, models are usually formulated so that they can be found false in the face of evidence. From fundamental physics (the Standard Model) to star formation, a model is meant to be put to the test, no matter how meritorious.

Climate models do not have this character. No observation from Nature can cause them to be replaced by some new form of model. Instead, climate models are seen by some as the implementation of perfect established classical physics expressed on oracular computers, and as such must be regarded as fully understood and beyond falsification. In terms of normal science, this is fantasy.

Modern critics of climate models cite a famous remark of the physicist Richard Feynman: “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” Those critics treat models as theory and observations as experiment. No knowledgeable model builder believes that climate models capture all features of the system well, and as such the models do disagree with observations. However, they do not violate Feynman’s edict, because climate models are not a theory of climate, and observations of an uncontrolled system are not an experiment. Feynman was speaking in the context of controlled physical experiments, which cannot be done for climate.

If a climate model disagrees with data, in principle the ad hoc sub-grid-scale components of the model (more on these below) can be adjusted to make it agree. Fortunately, good model builders resist the temptation to overdo such tuning. However, they may do things inadvertently, like tune models to be more like each other than like the atmosphere and oceans. [reference 1]

Extreme Computing in Search of Climate
Extreme conditions can compromise any computer calculation, despite popular faith otherwise. Sharp transitions on boundaries, extreme gradients, and extremes in density are examples. There are also extremes that are often overlooked, e.g., an extreme of time. Direct computation of the meteorological physics over long timescales is an extreme in time. Integrations of classical physics on computers over climatological timescales are unique and unprecedented. As with other forms of extreme computation, there are consequences.

Numerical analysis on computers must contend with the finite representation (i.e., only a finite number of numbers) available on all computers. Three types of error result:

  1. Round-off error: the computer must chop off (truncate) numbers because of space limitations (a minimal demonstration follows this list).
  2. Truncation error: to put an equation onto a computer, you must usually chop off (truncate) parts of the physical equations you aim to compute.
  3. Symmetry error: how you chop up the equations affects the symmetries (Lie symmetries) of the equations you plan to integrate. This is realized in the violation of conservation laws, which are uniquely important for extreme climate timescales. [reference 2]
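
Here is a minimal sketch of the first error type, using ordinary double-precision arithmetic in Python (any language behaves similarly); the values shown are standard floating-point behavior, not anything specific to climate codes:

```python
# Round-off error: 0.1 has no exact binary representation, so repeatedly
# adding it does not reproduce the exact mathematical sum.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)         # 0.9999999999999999, not 1.0
print(total == 1.0)  # False: stored numbers are finite approximations
```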

The first two on this list are routine numerical analysis issues that everyone doing computer calculations must face. Mostly they are not a problem, but in serious computing they come up much more frequently than one might like, and measures must be taken. The third type of error tells us that the actual computer-model equations that take us into the future will usually conserve different things than the original equations. The conservation laws of the original mathematics are broken and replaced with something artifactual. For example, consider a simple numerical treatment of a pendulum, as sketched below. Typically, such numerical treatments do not conserve energy, even though the original equations do. Over long times the amplitude of the pendulum can grow (unphysically), because energy grows instead of remaining constant in the numerical system. Note that there are conservation laws, due to symmetries, in dissipative systems too. [reference 2]
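
The pendulum case can be made concrete in a few lines. This is a minimal sketch, assuming unit mass, length, and gravity, and the simplest explicit Euler discretization (one of many possible schemes):

```python
# Symmetry error: the exact frictionless pendulum conserves
# E = v**2/2 + (1 - cos(theta)); its explicit Euler discretization does not.
from math import sin, cos

def energy(theta, v):
    return 0.5 * v * v + (1.0 - cos(theta))

theta, v, dt = 0.5, 0.0, 0.01
e0 = energy(theta, v)
for _ in range(50_000):  # integrate for a long time (t = 500)
    theta, v = theta + dt * v, v - dt * sin(theta)
print(f"initial energy {e0:.3f}, final energy {energy(theta, v):.3f}")
# The final energy is far larger than the initial one: the discrete equations
# conserve something artifactual, and the swing amplitude grows unphysically.
```

Swapping in a structure-preserving (symplectic) update would hold a quantity close to the true energy, which is precisely the point: what the computed system conserves is a property of the numerical method, not of the original physics.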

The significance for long-term forecasting is clear. The only tie the present has to the future, through fundamental equations, is in terms of change relative to those properties that are preserved over time. Change those properties; change the prescribed future. Such changes can accrue over long timescales.

Computation for climate regimes has another claim to extremity: the range of space scales is extraordinary, and few other scientific problems compare. The finite representation enters here too, inducing something like pixels on a computer screen. Between pixels, nothing is captured. For proper computing, grid spacings must be smaller than anything you hope to capture. All the wiggles in the equations’ solutions must be larger than the grid spacing. Everything else is lost.
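
A tiny sketch of that loss, assuming a one-dimensional field on a coarse grid (purely illustrative): a wave with fifty full cycles across the domain registers as essentially zero on an eleven-point grid.

```python
# Structure finer than the grid spacing simply is not there on the grid.
import numpy as np

x_coarse = np.linspace(0.0, 1.0, 11)             # grid spacing 0.1
fine_wiggle = np.sin(2 * np.pi * 50 * x_coarse)  # wavelength 0.02 << 0.1
print(np.max(np.abs(fine_wiggle)))               # ~1e-13: the wave vanishes
```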

But the enormous scales and complexity in climate mean that the wiggles are much smaller than the grid spacings. Not even thunderstorms show up at resolutions of hundreds of kilometers! If you put together a grid that could capture all turbulence, say, you’d need a spacing of about 1 mm – air’s Kolmogorov cutoff (the smallest turbulent eddy size). Given the scale of the Earth, a proper computation of a ten-year forecast for the atmosphere and oceans on modern computers can be estimated to take in excess of the age of the Universe, squared.
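
A back-of-envelope sketch, using assumed round numbers (not the author’s calculation), shows why a 1 mm grid is hopeless even before time stepping is considered:

```python
# Count the 1 mm grid cells needed for just the lowest ~10 km of atmosphere.
earth_radius_m = 6.371e6
atmosphere_depth_m = 1.0e4                        # roughly the troposphere
surface_area_m2 = 4 * 3.14159 * earth_radius_m**2
volume_m3 = surface_area_m2 * atmosphere_depth_m
cell_volume_m3 = (1e-3) ** 3                      # (1 mm) cubed
print(f"{volume_m3 / cell_volume_m3:.1e} cells")  # ~5e27 grid cells
```

Updating on the order of 10^27 cells, millions of times per simulated day, leaves even the fastest existing machines short by astronomical factors.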

The climate problem is much too big, and computers remain far too small and slow, for proper computation of this problem. We can proceed no further unless we compromise and accept improper computation. Important processes between the grid points must still be treated, but with timesaving, empirically based replacements for the proper physics. These are the sub-grid-scale “parameterizations.” All climate models are improper in this sense, employing mathematical cartoons instead of the advertised physics. Any basis for unalloyed faith in climate models is thus dispensed with. Released from the strictures of specific mathematics and physics, models can always be tuned to approximate any observations one wants. If we had future data, we could tune the models to that too. But we cannot adjust for conditions we have not encountered yet, and that is a key property of real climate change: conditions we have not encountered yet. So, for climate change, empiricism fails. Only extrapolation remains, making the exercise fundamentally not predictive.
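
The failure mode can be caricatured in a few lines. This is a hypothetical sketch, assuming a one-parameter empirical stand-in tuned to conditions already seen and then applied outside them; none of the numbers come from any real model:

```python
# Tune a linear "parameterization" y = k*x to noisy past "observations"
# generated by a saturating process, then extrapolate beyond the fitted range.
import numpy as np

rng = np.random.default_rng(0)
x_seen = np.linspace(0.0, 1.0, 50)             # conditions we have seen
obs = np.tanh(2.0 * x_seen) + 0.02 * rng.standard_normal(50)

k = np.sum(x_seen * obs) / np.sum(x_seen**2)   # least-squares tuning
err_seen = np.max(np.abs(k * x_seen - np.tanh(2.0 * x_seen)))
err_unseen = abs(k * 3.0 - np.tanh(2.0 * 3.0)) # x = 3: never encountered
print(f"tuned k = {k:.2f}")
print(f"worst error, seen conditions: {err_seen:.2f}")    # modest
print(f"error at unseen conditions:   {err_unseen:.2f}")  # far larger
```

The tuned parameter reproduces the conditions it was fitted on tolerably and fails badly outside them; no amount of tuning to the past repairs that.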

There is yet another issue. Nonlinear equations, distorted into discrete representations on grid points, fed faux physics, and integrated for extremely long times, are notoriously computationally unstable. There has been a long struggle to get these algorithms to settle down and stop wandering off into fantasyland – gradual loss of system mass, negative densities, and other wonders. To get these problems under control, non-physical energy flows were injected into the models to keep them stable. These were called flux adjustments in the AR4. They were like reins for a bronco. [reference 3 – see bottom of this page]
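
A toy sketch of the idea, with assumed numbers and nothing taken from any actual model: a discretization that spuriously leaks a conserved quantity, reined in by a prescribed compensating flux.

```python
# An artifactual drift (stand-in for numerical mass loss) and the non-physical
# "flux adjustment" injected to cancel it.
steps, dt = 10_000, 0.1
drift_rate = 1e-4        # spurious loss per unit time
adjustment = 1e-4        # prescribed, non-physical compensating flux

raw, adjusted = 1.0, 1.0
for _ in range(steps):
    raw += dt * (-drift_rate * raw)                         # mass leaks away
    adjusted += dt * (-drift_rate * adjusted + adjustment)  # budget held closed
print(f"raw mass {raw:.3f}, adjusted mass {adjusted:.3f}")  # ~0.905 vs ~1.000
# The adjustment closes the budget, but it is bookkeeping, not physics.
```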

In contrast, modern versions are so stable that nothing happens unless they are pushed from the outside. The models exhibit no natural variability over long times (white spectra). But instability is also a real-world property. Are the computational stabilization schemes too aggressive, throwing the baby out with the bath water? Has computational over-stabilization crept in? [reference 4] Is the models’ long-term stability a bug or a feature? Some modelers believe the latter. They believe the models have discovered what climate is, and thus contend that climate is a “boundary value problem,” since startup conditions no longer matter in the long term. If that were true, an observer living on climate timescales would experience no variability – nothing analogous to weather. Every moment would be like the last, and change would strictly be a matter of external causes. However, there is no known way to deduce this from first principles, and long-term internal variability is evident in observations. [reference 5 & 5a]

Closure and the Climate Snipe Hunt
Barry Saltzman worked on finding climate from first principles (directly from the fundamental equations), seeking a natural separation between the meteorological and climate regimes. [reference 6] One seeks averaged (climate) equations that are physically consistent with the meteorological regime while also being able to “ignore” it. Nature does separate itself into such regimes elsewhere; for example, we can ignore quantum mechanics on our trips to the grocery store. Climate would find a coherent definition and meaning in a theory that could “ignore” in this way. This property is called “closure.” It would give otherwise unmoored computer models something to aim for.

But Saltzman and his contemporaries chose a tough path. The closure problem of turbulence was known to be, and remains, one of the fundamental unsolved problems in science, and climate contains turbulence. One of Saltzman’s efforts along this line led directly to Lorenz’s work, which revolutionized modern science. While that is quite an accomplishment, he gave up on his agenda in the end, ultimately deferring to a version of the aforesaid meteorological model program for discovering what climate is. [reference 3 – see bottom of this page]

Meanwhile, ironically extending in part from Lorenz, a small revolution in other fields of science emerged. Ideas like sensitivity to initial conditions, bifurcation, fractals, and complex system dynamics rose in importance. Such ideas have come late to thinking about climate and models, although sensitivity to initial conditions, known in the climate field as “natural variability,” was already in play. Few know that climate models cope with this through something called “ensemble averaging.” A single computation of the future cannot address such sensitivity, so the alternative offered is to repeat the integration with a collection (or ensemble) of slightly different initial values. The average over these is presented as the future. It sounds technical, but in terms of the future it is something like the difference between “you will meet a tall handsome stranger” and “you may or may not meet an average person.” Forecasts like that are difficult to falsify.
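
A minimal sketch of ensemble averaging, assuming the Lorenz 1963 system as a stand-in for a sensitive model (illustrative only; real models are vastly larger):

```python
# Integrate an ensemble of slightly perturbed initial states forward and
# compare the spread of individual futures with their bland average.
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

rng = np.random.default_rng(1)
ensemble = [np.array([1.0, 1.0, 1.0]) + 1e-6 * rng.standard_normal(3)
            for _ in range(50)]          # 50 slightly different initial values

for _ in range(3000):                    # march every member into the "future"
    ensemble = [lorenz_step(s) for s in ensemble]

finals = np.array(ensemble)
print("spread of individual futures (x):", float(finals[:, 0].std()))
print("ensemble-average forecast (x):   ", float(finals[:, 0].mean()))
# Members that began nearly identical now disagree wildly; the average is the
# "you may or may not meet an average person" statement.
```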

The depth of difficulty of the scientific problem is obscured by the machinery inherited from the radiative-convective-model picture originating in the 1960s [references 7 and 7a], which is peculiarly imposed on modern models. In accordance with radiative-convective-model thinking, we imagine that an integral over the temperature field (a temperature index) is proportional to an integral over the radiation field (the forcing from changes in infrared-active gas amounts). The constant of proportionality is known as the “climate sensitivity.” Much effort has gone into determining its “correct” value in the context of climate models. But such a relationship implies that these integrals can be related to each other through a function that can ignore the underlying meteorology. That is, it is a claim of closure, and tantamount to a definition of climate. There is no reason to believe Nature supports this claim. If this function does not exist, neither does climate sensitivity, and the models that conform to this picture are falsified. [reference 8]
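
The claimed closure, written out as a sketch (the numbers are assumed illustrative values, not endorsements): a single meteorology-ignoring constant S is supposed to map any forcing change onto a temperature-index change.

```python
# The simplest form of the claim: delta_T = S * delta_F, for all conditions.
import numpy as np

delta_F = np.array([0.5, 1.0, 2.0, 3.7])  # assumed forcing changes, W/m^2
S = 0.8                                   # assumed "sensitivity," K per W/m^2
print(S * delta_F)                        # the predicted index changes, K
# Everything hinges on S existing as a constant that ignores the meteorology;
# if no such function exists in Nature, "climate sensitivity" is undefined.
```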

A completely different modern approach to climate and climate change is through bifurcation. Bifurcation is a rich subject, existing long before rudimentary thoughts about “tipping points.” Complex systems can change qualitatively with very small changes of a control parameter for some family of differential equations. For climate change in this picture, one sort of chaos-inflected flow pattern would change into a different one, and persistent new weather patterns would result. This approach has little to do with temperature: practical climate change is possible without any “warming” at all!

Bifurcation was put directly into the climate context through fluid dynamics on a rotating sphere. [reference 9] Lewis and Langford generated something close to the famous three-cell Hadley circulation spontaneously from first principles! Moreover, this circulation emerged as a result of a bifurcation process in terms of the equator-pole surface temperature gradient (not temperature!). The bifurcation turned out to be a hysteresis bifurcation (cubic normal form). The familiar Hadley circulation changed into a different circulation (a different “climate”) but did not change back when the control parameter was reversed! Irreversible climate change?
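
Hysteresis in the cubic normal form can be seen in a few lines. This is an assumed toy in the spirit of that result, not the Lewis-Langford computation itself:

```python
# Sweep the control parameter mu up and back down in x' = mu + x - x**3,
# letting the state settle at each value. The branches differ.
def settle(x, mu, dt=0.01, steps=5000):
    for _ in range(steps):
        x += dt * (mu + x - x**3)
    return x

mus = [i / 50.0 for i in range(-30, 31)]   # mu from -0.6 to 0.6
up, down = {}, {}
x = -1.0                                   # start on the lower branch
for mu in mus:                             # sweep up
    x = settle(x, mu)
    up[mu] = x
for mu in reversed(mus):                   # sweep back down
    x = settle(x, mu)
    down[mu] = x
print("state at mu = 0, sweeping up:  ", round(up[0.0], 3))   # ~ -1.0
print("state at mu = 0, sweeping down:", round(down[0.0], 3)) # ~ +1.0
# The system jumps branches at different mu in each direction: reversing the
# control parameter does not reverse the "climate."
```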

Conclusions
A physical definition of climate remains scientifically elusive because it is a deep problem on which neither elegant theories nor brute-force computations have gained a foothold. Without that definition, the question posed in the title cannot be answered.

There are many paths yet to explore, but they are buried under the greenhouse mindset inherited from the models of the 1960s. That mindset makes this deep problem seem trivial, and it invites the vision of one temperature controlled solely by infrared-active gases. That is the basis of climate sensitivity, which amounts to a dubious claim of closure for the climate problem. However, the function this claim presumes need not exist in Nature.

This questionable closure invites the vision of climate as a control problem. But it would be control over something that is not actually climate through a function that exists only in the radiative-convective models. This vision is itself unfalsifiable. Following it ensures that we only fool ourselves, because as Feynman also said, “Nature can’t be fooled.”

Christopher Essex, Ph.D., is Emeritus Professor of Mathematics and Physics, University of Western Ontario.

This article originally appeared on Big Picture News and is republished with permission.

For Further Information
Essex, C., and R. McKitrick (2008): Taken by Storm: The Troubled Science, Policy, and Politics of Global Warming, Key Porter Books.

Essex, C., M. Davison, and C. Schulzky (2000): Numerical Monsters, SIGSAM Bulletin (ACM Press), 134, 16-32.

Reference 1: Essex, C., and A.A. Tsonis (2018): Model falsifiability and climate slow modes, Physica A, 502, 554-562.

Reference 2: Essex, C., S. Ilie, and R.M. Corless (2007): Broken symmetry and long-term forecasting, Journal of Geophysical Research, 112, D24S17.

Reference 3: Intergovernmental Panel on Climate Change [IPCC] (2006): Climate Change 2007 – The Physical Science Basis (Second Order Draft). Contribution of Working Group I to the Fourth Assessment Report of the IPCC.

Reference 4: Corless, R.M., C. Essex, and M.A.H. Nerenberg (1991): Numerical methods can suppress chaos, Physics Letters A, 157, 27-36.

Reference 5: Tsonis, A.A., K.L. Swanson, and S. Kravtsov (2007): A new dynamical mechanism for major climate shifts, Geophysical Research Letters, 34, L13705.

Reference 5a: Tsonis, A.A., and K.L. Swanson (2011): Climate mode covariability and climate shifts, International Journal of Bifurcation and Chaos, 21, 3549-3556.

Reference 6: Maasch, K.A., R.J. Oglesby, and A. Fournier (2005): Barry Saltzman and the theory of climate, Journal of Climate, 18, 2141-2150.

Reference 7: Manabe, S., and R.F. Strickler (1964): Thermal Equilibrium of the Atmosphere with a Convective Adjustment, Journal of the Atmospheric Sciences, 21, 361-385.

Reference 7a: Manabe, S., and R.T. Wetherald (1967): Thermal Equilibrium of the Atmosphere with a Given Distribution of Relative Humidity, Journal of the Atmospheric Sciences, 24, 241-259.

Reference 8: Essex, C. (2011): Climate theory versus a theory for climate, International Journal of Bifurcation and Chaos, 21, 3477-3487.

Reference 9: Lewis, G., and W.F. Langford (2008): Hysteresis in a differentially heated spherical shell of Boussinesq fluid, SIAM J. Applied Dynamical Systems, 7, 1421-1444.
