A model of production with energy and matter on the Planet of the Iron Giants (corrected)

This is the third in a series of posts documenting the development of a single commodity model of production with energy and matter. In the first I published what appeared to work as a model, but which had obvious scaling issues. In the second, I explained that this first model had errors that I spotted when working with Matheus and Tim: there was no formal link between the output of the energy sector and the energy consumption of all three sectors (energy mining, iron ore mining, factory turning iron ore into iron plus slag).

In this post, I can report on an error-free model. Following Matheus’s suggestion, I modelled the system as effectively vertically integrated: the output of both energy and mining was set to meet the needs of the whole economy. Energy output (E, not shown in the model below) was set equal to the energy needs of all three sectors; mining output (iron ore) was, by the conservation of matter, identical to the matter output of the factory sector (which consists of both iron and slag).

This made it possible to work with a composite of capital, rather than modelling capital and output in three separate sectors. It led to some complicated constants, but it worked. The result was the expected “Goodwin cycle”, though in this first model it occurs in the wage rate (in kilograms of iron per year) and aggregate capital (measured in kilograms of iron).

Now that we’ve completed it, one of the research outputs of our group will be a paper linking the original Goodwin model to these two extensions. They were done both for their own sake—to replace the “ad hoc” productivity in the Goodwin model with something derived from the reality that all production is based on energy—and as a foundation for integrated modelling of economics and ecology. Waste output is easily added to this model, and related to the output of iron; CO2 is easily added and linked to the consumption of energy. Feedbacks can be added from waste generation (both slag and CO2) to the productivity of the economy—the sort of realism that is notably absent from the “Integrated Assessment Models” produced by Neoclassical eCONomists like William Nordhaus and Richard Tol.
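For readers who haven’t met it, Goodwin’s original model is a two-variable predator-prey system in the wage share and the employment rate. The sketch below uses generic illustrative parameter values (they are not the constants of the Iron Giants model, which are in the attached files) purely to show the cycle:

```python
# Classic Goodwin (1967) growth cycle in wage share w and employment rate lam.
# All parameter values are illustrative, not taken from the Iron Giants model.
alpha, beta = 0.02, 0.01    # growth rates of labour productivity and labour force
nu, delta   = 3.0, 0.01     # capital-output ratio, depreciation rate
gamma, rho  = 0.5, 0.6      # linear Phillips curve: Phi(lam) = rho*lam - gamma

def f(w, lam):
    dw   = w * (rho * lam - gamma - alpha)              # wage bargaining
    dlam = lam * ((1 - w) / nu - delta - alpha - beta)  # profit-led accumulation
    return dw, dlam

def rk4_step(w, lam, dt):
    # One 4th-order Runge-Kutta step
    k1 = f(w, lam)
    k2 = f(w + dt/2*k1[0], lam + dt/2*k1[1])
    k3 = f(w + dt/2*k2[0], lam + dt/2*k2[1])
    k4 = f(w + dt*k3[0],  lam + dt*k3[1])
    w   += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    lam += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return w, lam

w, lam = 0.8, 0.9
path = [(w, lam)]
for _ in range(4000):                   # 200 years at dt = 0.05
    w, lam = rk4_step(w, lam, 0.05)
    path.append((w, lam))

# The orbit cycles around this equilibrium rather than converging to it:
w_eq, lam_eq = 1 - nu*(alpha + beta + delta), (gamma + alpha)/rho
```

The cycling-without-convergence is the point: the equilibrium is neutral, which is the flaw Blatt identified (see below) and which adding a financial sector remedies.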

I’ll close this Patron-only post with the observation that this project showed up a weakness of Minsky as it stands. Though I’m showing the simulation in Minsky, it was actually easier to do the logical work in Mathcad (see the two files attached to this post; the image below is an excerpt from one of those Mathcad files).

The flowchart paradigm is very useful when you are modelling a causal sequence, but a lot of algebraic logic was needed to get this model right, and there the direct equation-entry capabilities of Mathcad were far easier to use than Minsky’s flowchart.

This is one of the reasons that we plan to add the capability to enter parameter and variable definitions on new tabs within Minsky (currently labelled “parameters” and “variables”). It will take some time and programming nous (supplied by Russell Standish and Wynand Dednam rather than me), but I want Minsky to have the same capability to write equations just as you would on paper—and then simulate them there.

It’s also much easier to read equations—sometimes—than it is to read a flowchart. The first yellow highlighted equation in the figure above is:

    

If you’re into reading math, then that’s pretty easy to get your head around. Not so the flowchart version of the same, which also takes up much more space:

Anyway, I’m posting these files for the interest of those of you who are into mathematical modelling, and as a courtesy to young students, who will often think, when they see a completed work by an academic, that they couldn’t do the same thing. In fact, a lot of research involves making mistakes that are gradually corrected over time; what you see in a journal paper is the polished end product of that process. So don’t ever be discouraged by reading a refereed paper.

More is Different—the Class Economists Failed to Attend

An Australian friend of mine tweeted that he was seeing economists criticizing #MMT for its lack of microfoundations:

This typical display of ignorance—by economists, not Con!—masquerading as wisdom prompted me to reply:

Looking for an accessible PDF of Anderson’s 1972 paper (Anderson 1972) led me to the one linked in the tweet above: an open access reprint of the article in a 2014 issue of the journal Emergence: Complexity & Organization. I’ve attached that reprint to this post, and encourage you to read both the original article and the fascinating commentary on how it came to be.

The introduction to Anderson’s paper, written by Jeffrey Goldstein, explains that Anderson was involved in an acrimonious debate within physics over the role of reductionism, and it had echoes of the obsession with microfoundations that Con had experienced in his Twitter feed:

It is worthwhile to recognize that Anderson’s paper was written within the context of an ongoing, and at the time vituperative debate, between particle physicists, on the one hand, with their highly effective Standard Model of the so-called fundamental forces (such as weak, strong, electro-magnetic on up to their final unified “theory of everything”) and mostly negative attitude towards emergence in the past, and solid state or condensed matter physics, on the other hand, whose investigations into phenomena such as phase transitions, superconductivity, ferromagnetism and so on required the introduction of constructs and methods pertaining to higher scale dynamics, organizing principles, and emergent collectivities. Two of the chief antagonists in this conceptual battle have been the Nobel Laureate particle physicist Steven Weinberg known for his work on the unification of the electro-magnetic and the weak forces and Anderson who of course is another Nobel Prize winning physicist (on this dispute see Silberstein, 2009). This clash shows itself in this classic paper through Anderson’s attack on strident reductionism, of which Weinberg has long been a vigorous proponent, along with Victor Weiskopf whose reductionist stance involving extensive and intensive explanatory strategies Anderson takes on in his paper. (Goldstein, pp. 118-19)

The key point in Anderson’s paper was not a rejection of reductionism per se, but of its obverse, which he termed “constructionism”:

In his classic paper, Anderson did not then, nor does he now, completely renounce reductionism as such as if he were calling for an embrace of some kind of “holism”. Instead his criticism is of the totalizing type which he describes through his notion of the “constructionist hypothesis”: “The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe“. (Goldstein, p. 119)

This attitude is the essence of the microfoundations obsession: the belief that everything should be constructed from its lower level foundations: that macro should be constructed from micro. Leaving aside that Neoclassical micro itself is a logical and empirical travesty, Anderson’s key point was that, though a hierarchy of sciences can be constructed:

according to the idea: The elementary entities of science X obey the laws of science Y. But this hierarchy does not imply that science X is “just applied Y.” At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry. (Anderson, p. 393)

Likewise, macroeconomics is not applied microeconomics—but that is what economists have been trying to do ever since Muth developed the fantasy of “rational expectations” (Muth 1961).

It would be better to start with Anderson’s paper rather than the introduction—ironically, the introduction is more difficult and more specialized than the original paper (though it is still worth reading if you are, like me, obsessive about these things).

Answering some questions on the evolution of my approach to economics

1. What was the driving force behind writing the paper Finance and Economic Breakdown: Modeling Minsky’s Financial Instability Hypothesis?


I had become a critic of mainstream economics way back in 1971, in my first year at University, courtesy of being exposed to the “Theory of the Second Best” (Lipsey and Lancaster 1956) by our brilliant lecturer Frank Stilwell (Butler, Jones et al. 2009), and then discovering the “Cambridge Controversies” (Harcourt 1972) on my own in the economics department’s library. I was particularly struck by Samuelson’s concession of defeat in 1966 (Samuelson 1966), since we were using Samuelson’s textbook at the time, and there was no sign of this debate in it, let alone of the fact that he had conceded that the rebels were right.

I was also doing first year mathematics at the time, and loved differential equations, but they simply did not turn up in my economics courses, where the mathematical techniques used seemed arcane by comparison.

Like all critics, I looked for alternatives to Neoclassical economics. None of the alternative literature really inspired me until I read Minsky. Marxian economics had an obvious flaw, which I spotted when I first read Capital in 1973: Marx’s dialectical philosophy contradicted the Labour Theory of Value (Keen 1993; Keen 1993). Austrian economics was watered-down Neoclassicism. Post Keynesian economics was realistic, but felt arbitrary: there was no unifying theory of value.

Most of the critiques of capitalism argued that it had a tendency to stagnation (Baran 1968), something that I couldn’t see in my own experience. My Arts degree coincided with a huge boom, driven largely by real estate in Sydney (Daly 1982), and it seemed to me there was a tendency to euphoria rather than depression. Then the boom went bust spectacularly in 1975, when I was completing my Law degree. There were financial failures everywhere, and unemployment trebled. I wanted a theory that would explain that process.


I finally found this theory when I went back to university in my mid-30s to do a Masters degree as a prelude to undertaking my PhD. Bill Junor, an excellent Keynesian lecturer there, set us the task of reviewing a major book in the Macroeconomics module in 1987. Minsky’s John Maynard Keynes (Minsky 1975) was one of the books on the list. I’d heard his name before, but I hadn’t read anything by him. So I chose to review it.

It was a revelatory experience. Finally, I had found a critic of capitalism who appeared to understand its tendency to credit-driven booms and busts, and who was free of the proclivity to reason in equilibrium terms:

“we are dealing with a system that is inherently unstable, and the fundamental instability is “upward.”” (Minsky 1975, p. 162)

However, to me there was one obvious weakness in Minsky’s analysis: the mathematical model he had tried to build of his Financial Instability Hypothesis was based on Hicks’s second-order difference equation model of the trade cycle:

    

I already knew that this was a bad model in itself, because I had applied my mathematics training to analyzing it in another course, and showed that it was economically invalid: it was derived from adding together, as Hicks defined them, actual savings and desired investment. There is no economic theory, Keynesian or otherwise, that says that this addition makes any sense (Keen 2020). But at the same time, Minsky’s verbal model was eminently suited to treatment in differential-equation terms. It considered an economy in historical time, as opposed to the faux equilibrium “time” of Neoclassical models, and even the logical time of Robinson’s “Golden Age” analysis. It had the “initial condition” that it began after a preceding economic crisis—because we are dealing with the real world where such a history exists, rather than the stale blackboard abstractions of Neoclassicism.
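For reference, the textbook multiplier-accelerator form of this class of model (I’m using the standard formulation here; Minsky’s exact rendering differed in detail) can be iterated in a few lines, and shows why Hicks had to bolt ceilings and floors onto it—with an accelerator coefficient above one, the cycles explode:

```python
# Multiplier-accelerator trade cycle as a 2nd-order difference equation:
#   Y_t = A + (1 - s)*Y_{t-1} + v*(Y_{t-1} - Y_{t-2})
# Illustrative parameters, not Hicks's or Minsky's calibration.
s, v, A = 0.2, 1.5, 100.0   # savings rate, accelerator, autonomous spending

Y = [500.0, 510.0]          # two initial output levels (equilibrium is A/s = 500)
for t in range(200):
    Y.append(A + (1 - s) * Y[-1] + v * (Y[-1] - Y[-2]))

# The characteristic roots are complex with modulus sqrt(v) > 1 when v > 1,
# so the oscillations around Y* = A/s grow without bound.
```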

While I did my Masters, I also sat in on undergraduate mathematics courses at the University of New South Wales, since it was by then more than 15 years since I had last studied mathematics. The staff there alerted me to the fact that the department’s by-then-deceased founder, John Blatt, had turned his formidable mathematical skills to critiquing economics in a brilliant book entitled Dynamic Economic Systems: A Post Keynesian Approach (Blatt 1983). In it, Blatt praised Richard Goodwin’s “A growth cycle” model (Goodwin 1967), and explained it far more clearly than Goodwin himself had done. He concluded that one flaw of the model—that its equilibrium was “not unstable”!—could be remedied by introducing a financial sector:

Of course, the model is far from perfect. In particular, we feel that the existence of an equilibrium which is not unstable (it is neutral) is a flaw in this model; so is the possibility of arbitrarily large cycles. The first flaw can be remedied in several ways: (1) introduction of more disaggregation in the nature of the output, e.g., separate outputs of consumer goods and investment goods; (2) introduction of a financial sector, including money and credit as well as some index of business confidence. Either or both of these changes are likely to make the equilibrium point locally unstable, as is desirable. (Blatt 1983, p. 210)

I took that as my lead, and added the private debt ratio as a third system state to the two in Goodwin’s model, the employment rate and the wages share of GDP. I was astonished to see that the resulting model not only reproduced the prediction Minsky made—that capitalism could fall into a debt-deflationary trap after a series of credit-driven boom and bust cycles—but also generated an unexpected result, that the cycles in the economic growth rate would diminish at first and then rise.

This discovery forced me to take seriously the nascent area of chaos and complexity studies, since it turned out that this phenomenon was first identified in studies of fluid dynamics (Pomeau and Manneville 1980) inspired by Lorenz’s work on turbulence (Lorenz 1963).
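The structure of that three-state extension can be sketched as follows. To keep the sketch self-contained I use linear behavioural functions and made-up parameter values; my published versions use nonlinear Phillips and investment curves, so treat this as an illustration of the structure, not the calibrated model:

```python
# Goodwin-plus-debt sketch: wage share w, employment rate lam, debt ratio d.
# Linear behavioural functions and illustrative parameters only.
alpha, beta = 0.02, 0.01   # productivity and labour force growth
nu, delta   = 3.0, 0.01    # capital-output ratio, depreciation
r           = 0.03         # real interest rate
gamma, rho  = 0.5, 0.6     # Phillips curve: Phi(lam) = rho*lam - gamma
k0, k1      = 0.04, 1.0    # investment share: kappa(pi) = k0 + k1*pi

def f(w, lam, d):
    pi = 1 - w - r * d                  # profit share net of interest
    g  = (k0 + k1 * pi) / nu - delta    # real growth rate
    dw   = w * (rho * lam - gamma - alpha)
    dlam = lam * (g - alpha - beta)
    dd   = d * (r - g) + (k0 + k1 * pi) - (1 - w)  # borrowing = investment - profits
    return dw, dlam, dd

# Euler integration from a point near the model's equilibrium
w, lam, d = 0.87, 0.87, 1.3
dt = 0.01
traj = [(w, lam, d)]
for _ in range(5000):                   # 50 years
    dw, dlam, dd = f(w, lam, d)
    w, lam, d = w + dt*dw, lam + dt*dlam, d + dt*dd
    traj.append((w, lam, d))
```

With the nonlinear functions of the published model, some initial conditions converge, while others spiral into the debt-deflationary trap: d grows without limit while the employment rate and wage share collapse.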

Because I have primarily explored the private-sector-only model since then, it is worth noting that I also developed a model of a counter-cyclical government, which generated another model displaying far from equilibrium dynamics. This replicated Minsky’s observation that “Big Government” could stabilize an unstable economy.

2. What prompted your interest in data from the United States, given that you had previously analyzed Australia?

I first became aware that my Minsky model was becoming an empirical reality when I was asked to write an Expert Witness opinion in a court case over predatory lending, in December 2005 (Keen 2005). Looking at the data through both Minsky’s eyes and the lens of my model, it was apparent to me that this was “It”: the crisis about which Minsky asked “Can ‘It’ Happen Again?”. Minsky had argued that a combination of government deficits and lender-of-last-resort interventions by the Central Bank could prevent a deep crisis (Minsky 1963; Minsky 1982), but Neoliberal ideology had weakened both of these since he wrote. It seemed to me that a repeat of “the Big One”—the bust that led to the Great Depression—was imminent, though its severity would be attenuated by an inevitable rise in net government spending once it occurred. Someone had to warn about it, and, at least in Australia, that someone was me.

I established the blog http://www.debtdeflation.com/blogs/, went on the media as much as I could, and successfully raised the alarm in Australia—only to see its government enact policies that re-started the Australian housing bubble (see FHB Boost is Australia’s “Sub-prime Lite”) and thus prevent a deep crisis in Australia, while one occurred in the USA.

I copped a lot of flak in Australia for being wrong—“there wasn’t a crisis, so what were you worried about?”—while the fact that this was a global crisis was ignored. That alone encouraged me to focus less on my own country and more on the rest of the world.

Also, one of the original difficulties in doing this work was that the core data—the level and rate of change of private debt—wasn’t recorded in a systematic way by any international authority. But then the Bank for International Settlements, following the lead of its superb research director Bill White (Bill was the only person in an official capacity to have read and understood Minsky, and was thus aware of the potential for a crisis, and warning of it via official BIS publications), started publishing an excellent database on private and government debt, with data from over 40 countries. This made it easy to analyse debt at the global level.

Despite its problems, America is still the biggest economy on the planet, and the most self-contained. So extraneous factors—like, for example, Chinese demand for housing, or export price and exchange rate volatility—don’t wash out the inherent dynamics as much in the USA as they can in Australia, Canada, the UK, etc. So it made sense to focus on the USA, where the general principles of monetary macroeconomics are more obvious.

3. Why do you think so few economists predicted the 2008 crisis? What should we learn from this lesson?

It’s the same reason that no Ptolemaic astronomer predicted the return of Halley’s comet: they were using the wrong paradigm. In Ptolemy’s model of astronomy, the Earth was at the centre of the Universe, the Heavens were perfect and unchanging, and the Earth was where change and decay occurred. In that paradigm, comets were atmospheric phenomena and therefore couldn’t be predicted.

Likewise, according to the Neoclassical paradigm, credit (which I define as the annual change in private debt) is simply a transfer of money from one non-bank agent to another: it redistributes spending power without changing its magnitude. From this perspective, rising or falling levels of credit can’t be predicted to have any significant impact on the economy, unless you know in advance which agents have a higher propensity to spend: it’s quite possible that a fall in credit could cause an increase in aggregate demand, because savers had a higher propensity to spend than borrowers. This is exactly why Ben Bernanke, who was in a position to see the data on private debt, and to do something about it rising too quickly, completely ignored it instead. He asked the right questions about the Great Depression—what caused aggregate demand to plunge and stay so low:

Because the Depression was characterized by sharp declines in both output and prices, the premise of this essay is that declines in aggregate demand were the dominant factor in the onset of the Depression. This starting point leads naturally to two questions: First, what caused the worldwide collapse in aggregate demand in the late 1920s and early 1930s (the “aggregate demand puzzle”)? Second, why did the Depression last so long? In particular, why didn’t the “normal” stabilizing mechanisms of the economy, such as the adjustment of wages and prices to changes in demand, limit the real economic impact of the fall in aggregate demand (the “aggregate supply puzzle”)? (Bernanke 2000, p. ix)

You can see his Neoclassical paradigm getting in the way already—the belief that capitalism has “”normal” stabilizing mechanisms of the economy, such as the adjustment of wages and prices to changes in demand”. But at least he’s starting from the right question, which is what caused aggregate demand to plunge so much.

But he then rejected the most accurate explanation for what caused that plunge—Irving Fisher’s “Debt Deflation Theory of Great Depressions” (Fisher 1932; Fisher 1933) because it relied on changes in bank lending causing changes in aggregate demand, and in his Neoclassical, “Loanable funds” paradigm, that couldn’t happen:

The idea of debt-deflation goes back to Irving Fisher (1933). Fisher envisioned a dynamic process in which falling asset and commodity prices created pressure on nominal debtors, forcing them into distress sales of assets, which in turn led to further price declines and financial difficulties. His diagnosis led him to urge President Roosevelt to subordinate exchange-rate considerations to the need for reflation, advice that (ultimately) FDR followed. Fisher’s idea was less influential in academic circles, though, because of the counterargument that debt-deflation represented no more than a redistribution from one group (debtors) to another (creditors). Absent implausibly large differences in marginal spending propensities among the groups, it was suggested, pure redistributions should have no significant macroeconomic effects. (Bernanke 2000, p. 24)

All it would have taken is a look at the data, which was available when Bernanke wrote his Essays on the Great Depression (Census 1949; Census 1975). But he didn’t even look, because his paradigm told him that data was irrelevant.

So much for that irrelevance. Yet still, over a decade after the crisis, Neoclassicals are ignoring this data, and ignoring the Central Banks that are telling them that their textbook models of banking are wrong (Krugman 2014; McLeay, Radia et al. 2014; Deutsche Bundesbank 2017).
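The series involved are trivial to compute once the debt data exists. A toy illustration, with made-up numbers rather than actual US figures, of credit as I define it (the annual change in private debt) and the debt-to-GDP ratio:

```python
# Toy illustration of Keen's definitions; all numbers are made up.
private_debt = [10.0, 10.8, 11.9, 13.3, 13.1, 12.2]  # $ trillion, yearly
gdp          = [14.0, 14.4, 14.9, 15.3, 15.0, 14.6]  # $ trillion, yearly

# Credit = annual change in private debt
credit = [d1 - d0 for d0, d1 in zip(private_debt, private_debt[1:])]

# Private debt to GDP ratio
debt_ratio = [d / y for d, y in zip(private_debt, gdp)]

# When credit turns negative (deleveraging, as in the last two years here),
# demand falls even though GDP alone looks only mildly weaker -- the dynamic
# the Loanable Funds paradigm rules out.
```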

What we should learn is that economics is dominated by a paradigm that is as wrong about capitalism as Ptolemaic astronomy was about the universe. We need a new paradigm, and there is virtually nothing in the old paradigm that will be of use in the new.

4. What do you think the consequences of the Covid-19 crisis will be for the world economy?

In the medium run I think it will end the fetish for globalisation, which has really been about the West exploiting low wages and the East using this to industrialize rapidly. The perverse outcome with Covid-19 is that southeast Asia, and China in particular, were well prepared for the pandemic: they had the manufacturing capability to supply their populations with personal protective equipment (whereas the developed nations, which had outsourced their manufacturing, did not), and the social capacity to enforce social distancing policies. Now China, most of southeast Asia, maybe Japan and South Korea, and a handful of non-Asian countries—including New Zealand and perhaps Australia, but also Norway, Switzerland, Croatia, Lithuania, Angola and Zambia, and other scattered countries—will form a virus-free bloc. America, Europe (especially the UK), Russia and South America will be in the virus zone.

It’s a division of the planet unlike anything in our history. Previous blocs have united countries with similar histories or politics; nothing of the sort applies here. But these blocs will form because it will be easy to travel between virus-free countries, difficult to travel between virus-afflicted ones, and very difficult to cross the viral border. It is a fractured planet.

Figure 1: The www.endcoronavirus.org website’s map of the fractured planet

This will have all sorts of effects on the global economy, some of them contradictory. China’s power and prestige will be enhanced, as will its dominance of its local region—unless Japan manages to overcome the virus as well. Seaborne transportation of manufactured goods will be severely compromised, since ships need crews, who would need to be quarantined at either end of their journeys, drastically increasing the costs and creating points of weakness in disease control measures. Bulk transport of oil and coal and other minerals might still work, but more likely rail transport and pipelines would dominate. This could strengthen Russia over Saudi Arabia, for example.

At the macroeconomic level, it promises a debt-deflation at the end of the measures used to contain the virus. These measures have been successful trials of many progressive ideas—such as a Universal Basic Income, or direct funding of the UK Treasury by the Bank of England—which we might have thought would never see the light of day.

However, governments are likely to revert to Neoliberal type, cut back government supports too early, and institute austerity again to try to reduce government debt levels, even though austerity is a major reason why they were unprepared for the pandemic in the first place. This could lead to a debt-deflationary crisis in the USA, UK and Europe—and even Australia, which is already imposing austerity measures.

One worrying trend here has been a huge jump in corporate indebtedness, probably due to companies borrowing to be able to sustain unavoidable outlays in the midst of a collapse in their cash flows.

There could be a wave of evictions of renters and mortgagors too, thanks to the huge rise in unemployment and collapse in “the gig economy” jobs that kept many households barely solvent before the crisis.

My fear is that the government response after the crisis will be as bad as the response during it was, generally speaking, good. So the bankruptcies and poverty that were minimized during the crisis by the large-scale government financing it required will instead occur after the crisis.

That still assumes that there will be an “after”, which depends on the development of a vaccine and its effective distribution to enough of the world’s population to drive the virus extinct. That is still a speculation rather than a certainty. It is also possible that Covid-19 is just the first in a sequence of global environmental crises caused by the excessive pressure of human industrial society on the planet.

References

Baran, P. A. and P. M. Sweezy (1968). Monopoly Capital: An Essay on the American Economic and Social Order. New York, Monthly Review Press.

Bernanke, B. S. (2000). Essays on the Great Depression. Princeton, Princeton University Press.

Blatt, J. M. (1983). Dynamic Economic Systems: A Post-Keynesian Approach. Armonk, N.Y., M.E. Sharpe.

Butler, G., E. Jones, et al. (2009). Political Economy Now!: The Struggle for Alternative Economics at the University of Sydney. Sydney, Darlington Press.

Census, Bureau of the (1949). Historical Statistics of the United States 1789-1945. Washington, United States Government.

Census, Bureau of the (1975). Historical Statistics of the United States, Colonial Times to 1970. Washington, United States Government.

Daly, M. T. (1982). Sydney Boom, Sydney Bust. Sydney, George Allen and Unwin.

Deutsche Bundesbank (2017). “The role of banks, non-banks and the central bank in the money creation process.” Deutsche Bundesbank Monthly Report: 13-33.

Fisher, I. (1932). Booms and Depressions: Some First Principles. New York, Adelphi.

Fisher, I. (1933). “The Debt-Deflation Theory of Great Depressions.” Econometrica 1(4): 337-357.

Goodwin, R. M. (1967). A growth cycle. Socialism, Capitalism and Economic Growth. C. H. Feinstein. Cambridge, Cambridge University Press: 54-58.

Harcourt, G. C. (1972). Some Cambridge Controversies in the Theory of Capital. Cambridge, Cambridge University Press.

Keen, S. (1993). “The Misinterpretation of Marx’s Theory of Value.” Journal of the History of Economic Thought 15(2): 282-300.

Keen, S. (1993). “Use-Value, Exchange Value, and the Demise of Marx’s Labor Theory of Value.” Journal of the History of Economic Thought 15(1): 107-121.

Keen, S. (2005). Expert Opinion, Permanent Mortgages vs Cooks. Sydney, Legal Aid NSW.

Keen, S. (2020). “Emergent Macroeconomics: Deriving Minsky’s Financial Instability Hypothesis Directly from Macroeconomic Definitions.” Review of Political Economy, forthcoming.

Krugman, P. (2014). “A Monetary Puzzle.” The Conscience of a Liberal, http://krugman.blogs.nytimes.com/2014/04/28/a-monetary-puzzle/.

Lancaster, K. (1966). “A New Approach to Consumer Theory.” Journal of Political Economy 74(2): 132-157.

Lipsey, R. G. and K. Lancaster (1956). “The General Theory of Second Best.” Review of Economic Studies 24(1): 11-32.

Lorenz, E. N. (1963). “Deterministic Nonperiodic Flow.” Journal of the Atmospheric Sciences 20(2): 130-141.

McLeay, M., A. Radia, et al. (2014). “Money creation in the modern economy.” Bank of England Quarterly Bulletin 2014 Q1: 14-27.

Minsky, H. P. (1963). Can “It” Happen Again? Banking and Monetary Studies. D. Carson. Homewood, Illinois, Richard D. Irwin: 101-111.

Minsky, H. P. (1975). John Maynard Keynes. New York, Columbia University Press.

Minsky, H. P. (1982). “Can ‘It’ Happen Again? A Reprise.” Challenge 25(3): 5-13.

Pomeau, Y. and P. Manneville (1980). “Intermittent transition to turbulence in dissipative dynamical systems.” Communications in Mathematical Physics 74: 189-197.

Samuelson, P. A. (1966). “A Summing Up.” Quarterly Journal of Economics 80(4): 568-583.


The Appallingly Bad Neoclassical Economics of Climate Change

This is the draft of an invited paper for the journal Globalizations on “Economics and the Climate Crisis“. As I’ve argued for some time, Neoclassical economists—especially William Nordhaus and Richard Tol—bear enormous responsibility for trivializing the dangers of climate change on intellectually spurious grounds. This paper is the most comprehensive overview I’ve done of this issue, and it includes new material on Nordhaus’s misreading of scientific literature. Word has deleted the footnotes, which you can find in the attached PDF.

Introduction

William Nordhaus was awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (Mirowski 2020) in 2018 for his work on climate change. His first major paper in this area was “World Dynamics: Measurement Without Data” (Nordhaus 1973), which attacked the pessimistic predictions in Jay Forrester’s World Dynamics (Forrester 1971; Forrester 1973) on the grounds, amongst others, that they were not firmly grounded in empirical research:

The treatment of empirical relations in World Dynamics can be summarised as measurement without data… Not a single relationship or variable is drawn from actual data or empirical studies. (Nordhaus 1973, p. 1157. Italics in original. Subsequent emphases added.)

There is no explicit or apparent reference to data or existing empirical studies. (Nordhaus 1973, p. 1182)

Whereas most scientists would require empirical validation of either the assumptions or the predictions of the model before declaring its truth content, Forrester is apparently content with subjective plausibility. (Nordhaus 1973, p. 1183)

Sixth, there is some lack of humility toward predicting the future. Can we treat seriously Forrester’s (or anybody’s) predictions in economics and social science for the next 130 years? Long-run economic forecasts have generally fared quite poorly… And now, without the scantest reference to economic theory or empirical data, Forrester predicts that the world’s material standard of living will peak in 1990 and then decline. (Nordhaus 1973, p. 1183)

After this paper, Nordhaus’s own research focused upon the economics of climate change. One could rightly expect, from his critique of Forrester, that Nordhaus was scrupulous about basing his modelling upon sound empirical data.

One’s expectations would be dashed. Whereas Nordhaus characterised Forrester’s work as “measurement without data”, Nordhaus’s own can be characterised as “making up numbers to support a pre-existing belief”: specifically, the belief that climate change could have only a trivial impact upon the economy. This practice was replicated, rather than challenged, by subsequent Neoclassical economists—with some honourable exceptions, notably Weitzman (Weitzman 2011; Weitzman 2011), de Canio (DeCanio 2003), Cline (Cline 1996), Darwin (Darwin 1999), Kaufmann (Kaufmann 1997; Kaufmann 1998), and Quiggin and Horowitz (Quiggin and Horowitz 1999).

The end product is a set of purported empirical estimates of the impact of climate change upon the economy that are utterly spurious, and yet which have been used to calibrate the “Integrated Assessment Models” (IAMs) that have largely guided the political responses to climate change. Stephen de Canio expressed both the significance and the danger of this work very well in his neglected book Economic Models of Climate Change: a Critique:

Perhaps the greatest threat from climate change is the risk it poses for large-scale catastrophic disruptions of Earth systems…

Business as usual amounts to conducting a one-time, irreversible experiment of unknown outcome with the habitability of the entire planet.

Given the magnitude of the stakes, it is perhaps surprising that much of the debate about the climate has been cast in terms of economics

Nevertheless, it is undeniably the case that economic arguments, claims, and calculations have been the dominant influence on the public political debate on climate policy in the United States and around the world… It is an open question whether the economic arguments were the cause or only an ex post justification of the decisions made by both administrations, but there is no doubt that economists have claimed that their calculations should dictate the proper course of action. (DeCanio 2003, pp. 2-4)

The impact of these economists goes beyond merely advising governments, to actually writing the economic components of the formal reports by the IPCC (“Intergovernmental Panel On Climate Change”), the main authority coordinating humanity’s response, such as it is, to climate change. The sanguine conclusions they state—such as the following from the 2014 IPCC Report (Field, Barros et al. 2014)—carry more weight with politicians, obsessed as they are with their countries’ GDP growth rates, than the far more alarming warnings in the sections of the Report written by actual scientists:

Global economic impacts from climate change are difficult to estimate. Economic impact estimates completed over the past 20 years vary in their coverage of subsets of economic sectors and depend on a large number of assumptions, many of which are disputable, and many estimates do not account for catastrophic changes, tipping points, and many other factors. With these recognized limitations, the incomplete estimates of global annual economic losses for additional temperature increases of ~2°C are between 0.2 and 2.0% of income. (Arent, Tol et al. 2014, p. 663. Emphasis added)

This is a prediction, not of a drop in the annual rate of economic growth (which would be significant, even at the lower bound of 0.2%), but a prediction that the level of GDP will be between 0.2% and 2% lower, when global temperatures are 2°C higher than pre-industrial levels, than it would have been in the complete absence of global warming. Even at the upper bound of 2%, this involves only a trivial decline in the predicted rate of economic growth between 2014 and whenever the 2°C increase occurs.
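To see how trivial, consider a back-of-the-envelope conversion of the 2% level reduction into an equivalent annual growth-rate reduction. This is a sketch: the assumption that the 2°C threshold is crossed around 2100 (86 years after the 2014 Report) is mine, purely for illustration:

```python
# Sketch: convert a 2% reduction in the LEVEL of GDP at some future date
# into the equivalent annual reduction in the GROWTH RATE of GDP.
# The 2100 horizon is an illustrative assumption, not an IPCC figure.
level_reduction = 0.02   # upper bound of the IPCC estimate: GDP 2% lower
years = 2100 - 2014      # assumed years until a 2 degree C rise

# GDP ending up at (1 - 0.02) of its no-warming level after `years` years
# corresponds to an annual growth rate lower by roughly this amount:
annual_growth_reduction = 1 - (1 - level_reduction) ** (1 / years)

print(f"{annual_growth_reduction:.5%} per year lower growth")
```

On these assumptions, the annual growth rate is lower by roughly 0.023 of a percentage point: a difference that would vanish into the noise of any GDP statistics.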

Given the impact that economists have had on public policy towards climate change, and the immediacy of the threat we now face from climate change, this work could soon be exposed as the most significant and dangerous hoax in the history of science.

Fictional Empirics

The numerical relationships that economists assert exist between global temperature change and GDP change were summarized in Figure 1 of the chapter “Key Economic Sectors and Services” (Arent, Tol et al. 2014) in the 2014 IPCC Report Climate Change 2014: Impacts, Adaptation, and Vulnerability (Field, Barros et al. 2014). It is reproduced below as Figure 1.

Figure 1: Figure 10.1 from Chapter 10 “Key Economic Sectors and Services” of the IPCC Report Climate Change 2014: Impacts, Adaptation, and Vulnerability

The sources of these numbers (as I explain below, they cannot be called “data points”) are given in Table SM10-1 from the supplement to this report (Arent 2014). The IPCC listed four classifications of the approaches used: “Enumeration” (10 studies); “Statistical” (5 studies); “CGE” (“Computable General Equilibrium”; 2 studies, one with 2 results); and “Expert Elicitation” (1 study).

Enumeration: It’s what you don’t count that counts

The bland description of what the “Enumeration” approach entails given by Tol makes it seem unobjectionable:

In this approach, estimates of the “physical effects” of climate change are obtained one by one from natural science papers, which in turn may be based on some combination of climate models, impact models, and laboratory experiments. The physical impacts must then each be given a price and added up. For agricultural products, an example of a traded good or service, agronomy papers are used to predict the effect of climate on crop yield, and then market prices or economic models are used to value the change in output. (Tol 2009, pp. 31-32)

However, this analysis commenced from the perspective, in the very first reference in this tradition (Nordhaus 1991), that climate change is a relatively trivial issue:

it must be recognised that human societies thrive in a wide variety of climatic zones. For the bulk of economic activity, non-climate variables like labour skills, access to markets, or technology swamp climatic considerations in determining economic efficiency. (Nordhaus 1991, p. 930. Emphasis added)

If there had been a decent evaluation process in place at this time for research into the economic impact of climate change, this paragraph alone should have raised alarm bells: yes, it is quite likely that climate is a less important determinant of “economic efficiency” than “labour skills, access to markets, or technology”, when one is comparing one region or country with another today. But what is the relevance of this cross-sectional comparison to assessing the impact of drastically altering the entire planet’s climate over time, via the retention of additional solar energy from additional greenhouse gases?

Nordhaus then excludes 87% of US industry from consideration, on the basis that it takes place “in carefully controlled environments that will not be directly affected by climate change”:

Table 5 shows a sectoral breakdown of United States national income, where the economy is subdivided by the sectoral sensitivity to greenhouse warming. The most sensitive sectors are likely to be those, such as agriculture and forestry, in which output depends in a significant way upon climatic variables. At the other extreme are activities, such as cardiovascular surgery or microprocessor fabrication in ‘clean rooms’, which are undertaken in carefully controlled environments that will not be directly affected by climate change. Our estimate is that approximately 3% of United States national output is produced in highly sensitive sectors, another 10% in moderately sensitive sectors, and about 87% in sectors that are negligibly affected by climate change. (Nordhaus 1991, p. 930. Emphasis added)

The examples of “cardiovascular surgery or microprocessor fabrication in ‘clean rooms’” might seem reasonable activities to describe as taking place in “carefully controlled environments”. However, Nordhaus’s list of industries that he simply assumed would be negligibly impacted by climate change is so broad, and so large, that it is obvious that what he meant by “not be directly affected by climate change” is anything that takes place indoors, or indeed underground, since he includes mining as one of the unaffected sectors. Table 1, which is an extract from Nordhaus’s Table 5 (Nordhaus 1991, p. 931), lists the subset of industries that he considered would be “negligibly affected by climate change”.

Table 1: Extract from Nordhaus’s breakdown of economic activity by vulnerability to climatic change in US 1991 $ terms (Nordhaus 1991, p. 931)

Since this was the first paper in a research tradition, one might hope that subsequent researchers challenged this assumption. However, instead of challenging it, they replicated it. The 2014 IPCC Report repeats the assertion that climate change will be a trivial determinant of future economic performance:

For most economic sectors, the impact of climate change will be small relative to the impacts of other drivers (medium evidence, high agreement). Changes in population, age, income, technology, relative prices, lifestyle, regulation, governance, and many other aspects of socioeconomic development will have an impact on the supply and demand of economic goods and services that is large relative to the impact of climate change. (Arent, Tol et al. 2014, p. 662)

It also repeats the assertion that indoor activities will be unaffected. The one change between Nordhaus in 1991 and the IPCC Report 23 years later is that it no longer lumps mining in the “not really exposed to climate change” bracket. Otherwise it repeats Nordhaus’s assumption that anything done indoors will be unaffected by climate change:

Frequently Asked Questions

FAQ 10.3 | Are other economic sectors vulnerable to climate change too?

Economic activities such as agriculture, forestry, fisheries, and mining are exposed to the weather and thus vulnerable to climate change. Other economic activities, such as manufacturing and services, largely take place in controlled environments and are not really exposed to climate change. (Arent, Tol et al. 2014, p. 688)

All the intervening papers between Nordhaus in 1991 and the IPCC in 2014 maintain this assumption: neither manufacturing, nor mining, transportation, communication, finance, insurance and non-coastal real estate, retail and wholesale trade, nor government services appear in the “enumerated” industries in the “Coverage” column in Table 3. All these studies have simply assumed that these industries, which account for on the order of 90% of GDP, will be unaffected by climate change.

There is a “poker player’s tell” in the FAQ quoted above which implies that these Neoclassical economists are on a par with Donald Trump in their understanding of what climate change really entails. This is the statement that “Economic activities such as agriculture, forestry, fisheries, and mining are exposed to the weather and thus vulnerable to climate change”. Explicitly, they are saying that if an activity is exposed to the weather, it is vulnerable to climate change, but if it is not, it is “not really exposed to climate change”. They are equating the climate to the weather.

This is a harsh judgment to pass on academics, who are supposed to have sufficient intellect to not make such mistakes. But there is no other way to make sense of their collective decision to exclude almost 90% of GDP from their enumeration of damages from climate change. Nor is there any other way to interpret the core assumption of their other dominant method of making up numbers for the models, the so-called “statistical” or “cross-sectional” method.

The “Statistical approach”

While locating the fundamental flaw in the “enumeration” approach took some additional research, the flaw in the statistical approach was obvious in the first reference I read on it, Richard Tol’s much-corrected (Tol 2014) and much-criticised paper (Gelman 2014; Gelman 2015; Nordhaus and Moffat 2017, p. 10; Gelman 2019), “The Economic Effects of Climate Change”:

An alternative approach, exemplified in Mendelsohn’s work (Mendelsohn, Morrison et al. 2000; Mendelsohn, Schlesinger et al. 2000) can be called the statistical approach. It is based on direct estimates of the welfare impacts, using observed variations (across space within a single country) in prices and expenditures to discern the effect of climate. Mendelsohn assumes that the observed variation of economic activity with climate over space holds over time as well; and uses climate models to estimate the future effect of climate change. (Tol 2009, p. 32)

If the methodological fallacy in this reasoning is not immediately apparent—bearing in mind that numerous academic referees have let pass papers making this assumption—think what it would mean if this assumption were correct.

Within the United States, it is generally true that very hot and very cold regions have a lower level of per capita income than median temperature regions. Using the States of the contiguous continental USA for those regions, Florida (average temperature 22.5°C) and North Dakota (average temperature 4.7°C), for example, have lower per capita incomes than New York (average temperature 7.4°C). But the difference in average temperatures is far from the only reason for differences in income, and in the greater scheme of things, the differences are trivial anyway: as American States, at the global level they are all in the high per capita income range (respectively $26,000, $26,700 and $43,300 per annum in 2000 US dollars). A statistical study of the relationship between “Gross State Product” (GSP) per capita and temperature will therefore find a weak, nonlinear relationship, with GSP per capita rising from low temperatures, peaking at medium ones, and falling at higher temperatures.

If you then assume that this same relationship between GDP and temperature will apply as global temperatures rise with Global Warming, you will conclude that Global Warming will have a trivial impact on global GDP. Your conclusion is your assumption.

This is illustrated by Figure 2, which shows a scatter plot of deviations from the national average temperature by State in °C, against the deviations from the national average (GDP per capita) of Gross State Product per capita in percent of GDP (the source data is in Table 4), and a quadratic fit to this data, which has a coefficient of -0.00318, and, as expected, a weak correlation coefficient of 0.31.

Figure 2: Correlation of temperature and USA Gross State Product per capita

This regression thus yields a very poor, but not entirely useless, “in-sample” model of how temperature deviations from the USA average affect deviations from average US GDP per capita today:

\[ \Delta GSP_{pc}(\Delta T) = -0.318\% \times (\Delta T)^2 \]

In words, this asserts that Gross State Product per capita falls by 0.318% (of the national average GDP per capita) for every 1°C difference in temperature (from the national average temperature) squared.
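Stated as code, this cross-sectional model is a one-liner. The sketch below uses the −0.318 coefficient from the text; the temperature deviations fed to it are made up for illustration, not taken from Table 4:

```python
# Sketch of the in-sample cross-sectional model described above:
# GSP per capita deviation (% of the national average) falls by 0.318%
# for every (degree C of deviation from the national average temperature) squared.
def gsp_deviation_pct(temp_deviation_c: float) -> float:
    """Predicted deviation of Gross State Product per capita, in percent."""
    return -0.318 * temp_deviation_c ** 2

# Illustrative temperature deviations, NOT the Table 4 data:
for dT in (-8.0, 0.0, 5.0, 10.0):
    print(f"dT = {dT:+5.1f} C -> predicted GSP deviation = {gsp_deviation_pct(dT):+7.2f}%")
```

Note that the fitted hill peaks at a deviation of zero, which is precisely what makes “move every State to the national average temperature” the model’s absurd in-sample optimum.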

An absurd “out of sample” policy recommendation from this model would be that the US’s GDP would increase if hotter and colder States could move towards the average temperature for the USA. This absurd recommendation could be “refined” by using this same data to calculate the optimum temperature for the USA’s GDP, and then proposing that all States move to that temperature. Of course, these “policies” are clearly impossible, simply because the States can’t change their location on the planet.

However, the economists doing these studies reasoned that Global Warming would achieve the same result over time (with the drawback that it would be applied equally to all regions). So they did indeed calculate optimum temperatures for each of the sectors they expected to be affected by climate change—and their calculations excluded the same list of sectors that the “enumeration” approach assumed would be unaffected (manufacturing, mining, services, etc.):

Both the reduced-form and cross-sectional response functions imply that the net productivity of sensitive economic sectors is a hill-shaped function of temperature (Mendelsohn, Schlesinger et al. 2000). Warming creates benefits for countries that are currently on the cool side of the hill and damages for countries on the warm side of the hill. The exact optimum temperature varies by sector. For example, according to the Ricardian model, the optimum temperatures for agriculture, forestry, and energy are 14.2, 14.8 and 8.6°C, respectively. With the reduced form model, the optimum temperatures for agriculture and energy are 11.7 and 10.0. (Mendelsohn, Morrison et al. 2000, p. 558)
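The “optimum temperature” of such a hill-shaped quadratic response is simply its vertex. A minimal sketch, with hypothetical coefficients chosen only so that the vertex lands at the 14.2°C Ricardian optimum for agriculture quoted above:

```python
# Sketch: a hill-shaped quadratic response  output(T) = a + b*T - c*T**2  (c > 0)
# peaks at the vertex T* = b / (2*c), which is the sector's "optimum temperature".
def optimum_temperature(b: float, c: float) -> float:
    return b / (2 * c)

# Hypothetical coefficients, chosen so the vertex matches the 14.2 C
# Ricardian optimum for agriculture cited in the text.
b, c = 0.284, 0.01
print(f"optimum temperature: {optimum_temperature(b, c):.1f} C")
```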

They then estimated the impact on GDP of increasing global temperatures, assuming that the same coefficients they found for the relationships between temperature and output today (using what Tol called “the statistical” and Mendelsohn the “cross-sectional” approach) could be used to estimate the impact of global warming. This resulted in more than one study which concluded that increasing global temperatures via global warming would be beneficial to the economy. Here, for example, is Mendelsohn, Schlesinger et al. on the impact of a 2.5°C increase in global temperatures:

Compared to the size of the economy in 2100 ($217 trillion), the market effects are small… The Cross-sectional climate-response functions imply a narrower range of impacts across GCMs: from $97 to $185 billion of benefits with an average of $145 billion of benefits a year. (Mendelsohn, Schlesinger et al. 2000, p. 41. Italics added)

Once more, the explicit assumption these economists are making is that it doesn’t matter how you alter temperature. Whether this is hypothetically done by altering a region’s location on the planet, which is impossible, or by altering the temperature of the entire planet, which is what climate change is doing, they assumed that the impact on GDP would be the same.

Expert Opinions—Real and Imagined

Nordhaus conducted the only two surveys of “expert opinions” to estimate the impact of global warming on GDP, in 1994 (Nordhaus 1994), and 2017 (Nordhaus and Moffat 2017). The former asked people from various academic backgrounds to give their estimates of the impact on GDP of three global warming scenarios: (A) a 3°C rise by 2090; (B) a 6°C rise by 2175; and (C) a 6°C rise by 2090. The numbers used by the IPCC from this study in Figure 1 were a 3°C temperature rise for a 3.6% fall in GDP.

Expert surveys are a valid procedure for aggregating knowledge in areas that require input from a large number of disparate fields, as the climate scientist Tim Lenton and co-authors explained in their paper “Tipping elements in the Earth’s climate system” (Lenton, Held et al. 2008):

formal elicitations of expert beliefs have frequently been used to bring current understanding of model studies, empirical evidence, and theoretical considerations to bear on policy-relevant variables. From a natural science perspective, a general criticism is that expert beliefs carry subjective biases and, moreover, do not add to the body of scientific knowledge unless verified by data or theory. Nonetheless, expert elicitations, based on rigorous protocols from statistics and risk analysis, have proved to be a very valuable source of information in public policymaking. It is increasingly recognized that they can also play a valuable role for informing climate policy decisions. (Lenton, Held et al. 2008, p. 1791)

I cite this paper in contrast to Nordhaus’s here for two reasons: (1) it shows how expert opinion surveys should be conducted; (2) Nordhaus later cites this survey in support of his use of a “damage function” for climate change which lacks tipping points, when this survey explicitly rejects such functions.

Lenton et al.’s survey was sent to 193 scientists, of whom 52 responded. Respondents were specifically instructed to stick to their area of knowledge, rather than to speculate more broadly: “Participants were encouraged to remain in their area of expertise” (Lenton, Held et al. 2008, p. 10). These are listed in Table 2.

Table 2: Fields of expertise for experts surveyed in (Lenton, Held et al. 2008); abridged from Table 1 in (Lenton, Held et al. 2008, p. 10)

Nordhaus’s survey began with a letter requesting 22 people to participate, 18 of whom fully complied, and one partially. Nordhaus describes them as including 10 economists, 4 “other social scientists”, and 5 “natural scientists and engineers”, but also describes eight of the economists as coming from “other subdisciplines of economics (those whose principal concerns lie outside environmental economics)” (Nordhaus 1994, p. 48)—which ipso facto should rule them out from taking part in this expert survey in the first place.

One of them was Larry Summers—who is probably the source of the choicest quotes in the paper, such as “For my answer, the existence value [of species] is irrelevant—I don’t care about ants except for drugs” (Nordhaus 1994, p. 50).

Lenton’s survey combined the expertise of its interviewees in specific fields of climate change to compile a list of large elements of the planet’s climate system (>1,000 km in extent) whose tipping could be triggered by increases in global temperature of between 0.5°C (disappearance of Arctic summer sea ice) and 6°C (amplified El Niño causing drought in Southeast Asia and elsewhere), on timescales varying from 10 years (Arctic summer sea ice) to 300 years (West Antarctic Ice Sheet disintegration) (Lenton, Held et al. 2008, p. 1788).

Nordhaus’s survey was summarised by a superficially bland pair of numbers (a 3°C temperature rise and a 3.6% fall in GDP), but that summary hides far more than it reveals. There was extensive disagreement, well documented by Nordhaus, between the relatively tiny cohort of actual scientists surveyed and the economists, in particular those “whose principal concerns lie outside environmental economics”. The quotes from the economists surveyed also reveal the source of the predisposition of economists in general to dismiss the significance of climate change.

As Nordhaus noted, “Natural scientists’ estimates [of the damages from climate change] were 20 to 30 times higher than mainstream economists'” (Nordhaus 1994, p. 49). The average estimate by “Non-environmental economists” (Nordhaus 1994, Figure 4, p. 49) of the damages to GDP from a 3°C rise by 2090 was 0.4% of GDP; the average for natural scientists was 12.3%, and this was with one of them refusing to answer Nordhaus’s key questions:

Also, although the willingness of the respondents to hazard estimates of subjective probabilities was encouraging, it should be emphasized that most respondents proffered these estimates with reservations and a recognition of the inherent difficulty of the task. One respondent (19), however, was a holdout from such guesswork, writing:

I must tell you that I marvel that economists are willing to make quantitative estimates of economic consequences of climate change where the only measures available are estimates of global surface average increases in temperature. As [one] who has spent his career worrying about the vagaries of the dynamics of the atmosphere, I marvel that they can translate a single global number, an extremely poor surrogate for a description of the climatic conditions, into quantitative estimates of impacts of global economic conditions. (Nordhaus 1994, pp. 50-51)

Comments from economists lay at the other end of the spectrum from this self-absented scientist. Because they had a strong belief in the ability of “human societies” to adapt, born of their acceptance of the Neoclassical model of capitalism, in which “the economy” always returns to equilibrium after an “exogenous shock”, they could not imagine that climate change itself could do significant damage to the economy, whatever it might do to the biosphere itself:

One respondent suggested whimsically that it was hardly surprising, given that the economists know little about the intricate web of natural ecosystems, whereas natural scientists know equally little about the incredible adaptability of human societies

There is a clear difference in outlook among the respondents, depending on their assumptions about the ability of society to adapt to climatic changes. One was concerned that society’s response to the approaching millennium would be akin to that prevalent during the Dark Ages, whereas another respondent held that the degree of adaptability of human economies is so high that for most of the scenarios the impact of global warming would be “essentially zero”.

An economist explains that in his view energy and brain power are the only limits to growth in the long run, and with sufficient quantities of these it is possible to adapt or develop new technologies so as to prevent any significant economic costs. (Nordhaus 1994, pp. 48-49. All emphases added)

Given this extreme divergence of opinion between economists and scientists, one might imagine that Nordhaus’s next survey would examine the reasons for it. In fact, the opposite applied: his methodology excluded non-economists entirely.

Rather than a survey of experts, this was a literature survey (Nordhaus and Moffat 2017), which is, ipso facto, another legitimate method to provide data on a topic that is difficult to measure and subject to high uncertainty. He and his co-author searched for relevant articles using the string “(damage OR impact) AND climate AND cost” (Nordhaus and Moffat 2017, p. 7), which is reasonable, if rather too broad (as they themselves admit in the paper).

The key flaw in this research was where they looked: they executed their search string in Google, which returned 64 million results; in Google Scholar, which returned 2.8 million; and in the economics-specific database Econlit, which returned just 1,700 studies. On the grounds that there were too many results in Google and Google Scholar, they ignored those results and simply surveyed the 1,700 articles in Econlit (Nordhaus and Moffat 2017, p. 7). These are, almost exclusively, articles written by economists.

Nordhaus and Moffat read the abstracts of these 1,700 papers to rule out all but 24 from consideration. Reading those 24 papers left just 11 that they included in their survey results. They supplemented this “systematic research synthesis (SRS)” with:

a second approach, known as a “non-systematic research summary.” In this approach, the universe of studies was selected by a combination of formal and informal methods, such as the SRS above, the results of the Tol survey, and other studies that were known to the researchers. (Nordhaus and Moffat 2017, p. 8)

Their labours resulted in the addition of just five studies which had not been used either by the IPCC or by Tol in his aggregation papers (Tol 2009; Tol 2018; Tol 2018), with an additional 6 results, and 4 additional authors (Cline, Dellink, Kemfert and Hambel) who had not already been cited in the empirical estimates literature (though Cline was one of Nordhaus’s interviewees in his 1994 survey).

Remarkably, given that Nordhaus was the lead author of this study, one of the previously unused studies was by Nordhaus himself in 2010 (Nordhaus 2010). (Nordhaus and Moffat 2017) does not provide details of this paper, or of any other paper they uncovered, but I presume it is (Nordhaus 2010), given the date, and the fact that the temperature and damages estimates given in it (a 3.4°C increase in temperature causing a 2.8% fall in GDP) are identical to those given in this paper’s Table 2.

It may seem strange that Nordhaus did not notice that a paper by himself, estimating the damages from climate change, was not included in previous studies. But in fact, there is a good reason for this omission: (Nordhaus 2010) was not an enumerative study, nor a statistical one, let alone the results of an “expert elicitation”, but the output of a run of Nordhaus’s own “Integrated Assessment Model” (IAM), DICE! Treating this as a “data point” is using an output of a model to calibrate the model itself. Nonetheless, these numbers—and the five additional pairs from the four additional studies uncovered by their survey—were added to the list of numbers from which economists like Nordhaus could calibrate what they call their “damage functions”.

Damage Functions

“Damage functions” are the way in which Neoclassical economists connect estimates from scientists of the change in global temperature to their own, as shown in previous sections, utterly unsound estimates of future GDP, given this change in temperature. They reduce GDP from what they claim it would have been in the total absence of climate change, to what they claim it will be, given different levels of temperature rise. The form these damage functions take is normally simply a quadratic:

\[ D(T) = a + b\,T + c\,T^2 \]

where T is the temperature rise over pre-industrial levels and D(T) is the proportional reduction in GDP.

Nordhaus justifies using a quadratic to describe a process as inherently discontinuous as climate change by misrepresenting the scientific literature: specifically, the careful survey of expert opinions carried out by Lenton et al. (Lenton, Held et al. 2008), contrasted earlier with Nordhaus’s survey of largely non-experts (Nordhaus 1994). Nordhaus makes the following statement in his DICE manual, and repeats it in (Nordhaus and Moffat 2017, p. 35):

The current version assumes that damages are a quadratic function of temperature change and does not include sharp thresholds or tipping points, but this is consistent with the survey by Lenton et al. (2008) (Nordhaus and Sztorc 2013, p. 11. Emphasis added)

In The Climate Casino (Nordhaus 2013), Nordhaus states that:

There have been a few systematic surveys of tipping points in earth systems. A particularly interesting one by Lenton and colleagues examined the important tipping elements and assessed their timing… Their review finds no critical tipping elements with a time horizon less than 300 years until global temperatures have increased by at least 3°C. (Nordhaus 2013, p. 60)

These claims can only be described as blatant misrepresentations of “Tipping elements in the Earth’s climate system” (Lenton, Held et al. 2008). The very first element in the summary table of their findings meets two of the three criteria that Nordhaus claimed were not met: the loss of Arctic summer sea ice could be triggered by global warming of between 0.5–2°C, and in a time span measured in decades; see Figure 3.

Figure 3: An extract from Table 1 of “Tipping elements in the Earth’s climate system” (Lenton, Held et al. 2008, p. 1788)

Nordhaus justifies his omission of Arctic summer sea ice in his table N1 (Nordhaus 2013, p. 333) via a column headed “Level of concern (most concern = ***)”, where it receives the lowest ranking (*)—thus apparently justifying his statement that there was “no critical tipping point” in less than 300 years, and with less than a 3°C temperature increase.

However, no such column exists in Table 1 of Lenton, Held et al. (2008), while their discussion of the ranking of threats puts Arctic summer sea ice first, not last:

We conclude that the greatest (and clearest) threat is to the Arctic with summer sea-ice loss likely to occur long before (and potentially contribute to) GIS melt (Lenton, Held et al. 2008, pp. 1791-92. Emphasis added).

Their treatment of time also differs substantially from that implied by Nordhaus, which is that decisions about tipping elements with time horizons of several centuries can be left to decision makers several centuries hence. While Lenton et al. do give a timeframe of more than 300 years for the complete melting of the Greenland Ice Sheet (GIS), for example, they note that they focused on tipping elements whose fate would be decided this century:

Thus, we focus on the consequences of decisions enacted within this century that trigger a qualitative change within this millennium, and we exclude tipping elements whose fate is decided after 2100. (Lenton, Held et al. 2008, p. 1787)

Thus, while the GIS might not melt completely for several centuries, the human actions that will decide whether that happens or not will be taken in this century, not in several hundred years from now.

Finally, the paper’s conclusion began with the warning that smooth functions should not be used, noted that discontinuous climate tipping points were likely to be triggered this century, and reiterated that the greatest threats were Arctic summer sea ice and Greenland:

Conclusion
Society may be lulled into a false sense of security by smooth projections of global change. Our synthesis of present knowledge suggests that a variety of tipping elements could reach their critical point within this century under anthropogenic climate change. The greatest threats are tipping the Arctic sea-ice and the Greenland ice sheet, and at least five other elements could surprise us by exhibiting a nearby tipping point. (Lenton, Held et al. 2008, p. 1792. Emphasis added)

There is thus no empirical or scientific justification for choosing a quadratic to represent damages from climate change—the opposite in fact applies. Regardless, this is the function that Nordhaus ultimately adopted. Given this assumed functional form, the only unknowns are the values of the coefficients a, b and c in Equation .

Ever since Nordhaus started using a quadratic, he has consistently reduced the value of its parameters, from an initial 0.0035 for the quadratic term—which means that global warming is assumed to reduce GDP by 0.35% times the temperature (change over pre-industrial levels) squared—to a final value of 0.00227 (see Equation ). Source documents here are (Nordhaus and Sztorc 2013, pp. 83, 86, 91 & 97 for the 1992, 1999, 2008 and 2013 versions of DICE; Nordhaus 2017, p. 1 for 2017; Nordhaus 2018, p. 345 for 2018):

Damages = 0.00227 × T² (where T is the temperature change over pre-industrial levels, and damages are a fraction of GDP)

This reduction progressively reduced his already trivial predictions of damage to GDP from global warming. For example, his prediction for the impact on GDP of a 4°C increase in temperature—the level he describes as optimal in his “Nobel Prize” lecture, since according to his model, it minimises the joint costs of damage and abatement (Nordhaus 2018, Slides 6 & 7)—was reduced from a 7% fall in 1992 to a 3.6% fall in 2018 (see Figure 4).

Figure 4: How low can you go? Nordhaus’s downward revisions to his damage function
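The arithmetic behind the final figure can be checked directly. A minimal sketch, assuming the damage function takes the pure quadratic form D(T) = c·T² with the final coefficient of 0.00227:

```python
# Back-of-envelope check of Nordhaus's final (2018) damage estimate,
# assuming the pure quadratic form D(T) = c * T^2, with D a fraction of GDP.
def damage_fraction(temp_rise_c, coeff=0.00227):
    """GDP loss as a fraction of output for a given temperature rise (degrees C)."""
    return coeff * temp_rise_c ** 2

print(f"GDP loss at +4 degrees C: {damage_fraction(4.0):.1%}")  # → 3.6%
```

With the 2018 coefficient, a 4°C rise reproduces the 3.6% fall in GDP cited above: 0.00227 × 4² ≈ 0.036.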

I now turn to doing what Nordhaus himself said a scientist should do, when deriding Forrester’s model—”require empirical validation of either the assumptions or the predictions of the model before declaring its truth content” (Nordhaus 1973, p. 1183). This is clearly something neither Nordhaus nor other Neoclassical climate change economists did themselves—apart from the honourable mentions noted earlier.

Deconstructing Neoclassical Delusions: GDP and Energy

Nordhaus justified the assumption that 87% of GDP will be unaffected by climate change on the basis that:

for the bulk of the economy—manufacturing, mining, utilities, finance, trade, and most service industries—it is difficult to find major direct impacts of the projected climate changes over the next 50 to 75 years. (Nordhaus 1991, p. 932)

In fact, a direct effect can easily be identified by surmounting the failure of economists in general—not just Neoclassicals—to appreciate the role of energy in production. Almost all economic models use production functions that assume that “Labour” and “Capital” are all that are needed to produce “Output”. However, neither Labour nor Capital can function without energy inputs: “to coin a phrase, labour without energy is a corpse, while capital without energy is a sculpture” (Keen, Ayres et al. 2019, p. 41). Energy is directly needed to produce GDP, and therefore if energy production has to fall because of global warming, then so will GDP.

The only question is how much, and the answer, given our dependence on fossil fuels, is a lot. Unlike the trivial correlation between local temperature and local GDP used by Nordhaus and colleagues in the “statistical” method, the correlation between global energy production and global GDP is overwhelmingly strong. A simple linear regression between energy production and GDP has a correlation coefficient of 0.997—see Figure 5.

Figure 5: Energy determines GDP
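The correlation coefficient quoted above comes from an ordinary least-squares fit. A minimal sketch of the calculation, using illustrative placeholder numbers rather than the actual series behind Figure 5:

```python
# Pearson correlation from a simple linear regression, as in the
# energy-to-GDP comparison. The numbers here are illustrative
# placeholders only, NOT the actual data behind Figure 5.
import numpy as np

energy = np.array([4.0, 5.0, 6.0, 7.0, 8.0])     # hypothetical energy series
gdp = np.array([30.0, 38.0, 45.0, 53.0, 60.0])   # hypothetical GDP series

slope, intercept = np.polyfit(energy, gdp, 1)    # ordinary least squares, degree 1
r = np.corrcoef(energy, gdp)[0, 1]               # Pearson correlation coefficient
print(f"GDP ~ {slope:.2f} * energy + {intercept:.2f}, r = {r:.4f}")
```

A near-linear relationship like the one plotted in Figure 5 produces an r very close to 1, which is what the 0.997 figure reports for the actual global series.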

GDP in turn determines excess CO2 in the atmosphere. A linear regression between GDP and CO2 has a correlation coefficient of 0.998—see Figure 6.

Figure 6: Without significant de-carbonization, GDP determines CO2

Lastly, CO2 very tightly determines the temperature excess over pre-industrial levels. A linear regression between CO2 and the Global Temperature Anomaly has a correlation of 0.992 using smoothed data (which excludes the effect of non-CO2 fluctuations such as the El Niño effect).

Figure 7: CO2 determines Global Warming

Working in reverse, if climatic changes caused by the increase in global temperature persuade the public and policymakers that we must stop adding CO2 to the atmosphere “now”, whenever “now” may be, then global GDP will fall roughly in proportion to the ratio of fossil-fuel energy production to total energy production at that time.

As of 2020, fossil fuels provided roughly 85% of energy production. So, if 2020 were the year humanity decided that the growth in CO2 had to stop, GDP would fall by something of the order of 85%. Even if the very high rate of growth of renewables in 2015 were maintained—when the ratio of renewables to total energy production was growing at about 3% per annum—renewables would still yield less than 40% of total energy production in 2050—see Figure 8. This implies a drop in GDP of about 50% at that time. The decision by Neoclassical climate change economists to exclude “manufacturing, mining, utilities, finance, trade, and most service industries” from any consequences from climate change is thus utterly unjustified.

Figure 8: Renewable energy as a percentage of total energy production
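The renewables projection above is a compound-growth calculation. A sketch, where the 3% per annum growth in the renewables share is the rate quoted in the text, but the 13% starting share for 2015 is an illustrative assumption of this sketch:

```python
# Compound growth of the renewables share of total energy production.
# The 3% p.a. growth rate of the share is from the text; the 13%
# starting share in 2015 is an illustrative assumption.
share_2015 = 0.13
growth_rate = 0.03
years = 2050 - 2015

share_2050 = share_2015 * (1 + growth_rate) ** years
print(f"Renewables share in 2050: {share_2050:.0%}")  # → 37%, i.e. under 40%
```

Even 35 years of sustained 3% p.a. growth roughly only trebles the share, which is why it remains below 40% of total energy production in 2050.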

Deconstructing Neoclassical Delusions: Statistics

The “cross-sectional approach” of using the coefficients from the geographic temperature:GDP relationship as a proxy for the global temperature:GDP relationship is similarly unjustified. It assumes that it doesn’t matter how one alters temperature: the effect on GDP will be the same. This belief was defended by Tol in an exchange on Twitter between myself, the Climate scientist Daniel Swain, and the Professor of Computational Astrophysics Ken Rice on June 17-18 2019:

Richard Tol:    10K is less than the temperature distance between Alaska and Maryland (about equally rich), or between Iowa and Florida (about equally rich). Climate is not a primary driver of income. https://twitter.com/RichardTol/status/1140591420144869381?s=20

Daniel Swain:    A global climate 10 degrees warmer than present is not remotely the same thing as taking the current climate and simply adding 10 degrees everywhere. This is an admittedly widespread misconception, but arguably quite a dangerous one. https://twitter.com/Weather_West/status/1140670647313584129?s=20

Richard Tol:    That’s not the point, Daniel. We observe that people thrive in very different climates, and that some thrive and others do not in the same climate. Climate determinism therefore has no empirical support. https://twitter.com/RichardTol/status/1140928458853421057?s=20

Richard Tol:    And if a relationship does not hold for climate variations over space, you cannot confidently assert that it holds over time. https://twitter.com/RichardTol/status/1140928893878263808?s=20

Steve Keen:    The cause of variations over space is utterly different to that over time. That they are comparable is the most ridiculous and dangerous “simplifying assumption” in the history of economics. https://twitter.com/ProfSteveKeen/status/1140941982082244608?s=20

Ken Rice:    Can I just clarify. Are you actually suggesting that a 10K rise in global average surface temperature would be manageable? https://twitter.com/theresphysics/status/1140661721633308673?s=20

Richard Tol:    We’d move indoors, much like the Saudis have. https://twitter.com/RichardTol/status/1140669525081415680?s=20

As with the decision to exclude ~90% of GDP from damages from climate change, Tol’s assumed equivalence of weather changes across space with climate change over time ignores the role of energy in causing climate change. This can be illustrated by annotating his third tweet above with respect to the amount of energy needed to bring about a 10°C temperature increase for the atmosphere:

And if a relationship does not hold for climate variations over space [without changing the energy level of the atmosphere], you cannot confidently assert that it holds over time [as the Solar energy retained in the atmosphere rises by more than 50,000 million Terajoules]. (Trenberth 1981)

To put this level of energy in more comprehensible terms, this is the equivalent of 860 million Hiroshima atomic bombs. That amount of additional energy in the atmosphere would lead to sustained “wet bulb” temperatures that would be fatal for humans in the Tropics and much of the sub-tropics (Raymond, Matthews et al. 2020; Xu, Kohler et al. 2020). A 10°C temperature increase is of the order of that which caused the end-Permian extinction event, the most extreme mass-extinction in Earth’s history (Penn, Deutsch et al. 2018). It is five times the level of global temperature increase that climate scientists fear could trigger “tipping cascades” that could transform the planet into a “Hothouse Earth” (Steffen, Rockström et al. 2018; Lenton, Rockström et al. 2019), which could potentially be incompatible with human existence:

Hothouse Earth is likely to be uncontrollable and dangerous to many, particularly if we transition into it in only a century or two, and it poses severe risks for health, economies, political stability (especially for the most climate vulnerable), and ultimately, the habitability of the planet for humans. (Steffen, Rockström et al. 2018, p. 8256)
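The energy figure above can be reproduced from standard physical constants: the total mass of the atmosphere (about 5.15×10¹⁸ kg, per Trenberth 1981) and the specific heat of air at constant pressure (about 1005 J kg⁻¹ K⁻¹). A back-of-envelope sketch, in which the roughly 15-kiloton Hiroshima yield is an assumption of the sketch:

```python
# Energy needed to warm the entire atmosphere by 10 degrees C, and its
# Hiroshima-bomb equivalent. Atmospheric mass and specific heat are
# standard values; the ~15 kt bomb yield is an assumption of this sketch.
M_ATM = 5.15e18       # kg, total mass of the atmosphere (Trenberth 1981)
CP_AIR = 1005.0       # J/(kg*K), specific heat of air at constant pressure
HIROSHIMA_J = 6.3e13  # J, roughly 15 kilotons of TNT

delta_t = 10.0        # K
energy_j = M_ATM * CP_AIR * delta_t
million_tj = energy_j / 1e18          # 1 million terajoules = 1e18 J
bombs = energy_j / HIROSHIMA_J

print(f"Energy: about {million_tj:,.0f} million terajoules")
print(f"Hiroshima equivalents: about {bombs:.1e}")
```

This gives a little over 50,000 million terajoules, matching the annotation above, and a Hiroshima-equivalent count of the same order as the 860 million quoted (the exact figure depends on the bomb yield assumed).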

It therefore very much does matter how one alters the temperature. At the planetary level, there are three main determinants of the temperature at any point on the globe:

  1. Variations in the solar energy reaching the Earth;
  2. Variations in the amount of this energy retained by greenhouse gases; and
  3. Differences in location on the planet—primarily differences in distance from the Equator.

What the “cross-sectional method” did was derive parameters for the third factor, and then simply assume that the same parameters applied to the second. This is comparable to carefully measuring the terrain of a mountain in the North-South direction, and then using that information to advise on the safety of traversing it East to West.

Econometrics before Ecology

This weakness of the “cross-sectional approach” has been admitted in a more recent paper in this tradition:

Firstly, the literature relies primarily on the cross-sectional approach (see, for instance, Sachs and Warner 1997, Gallup et al. 1999, Nordhaus 2006, and Dell et al. 2009), and as such does not take into account the time dimension of the data (i.e., assumes that the observed relationship across countries holds over time as well). (Kahn, Mohaddes et al. 2019, p. 2. Emphasis added)

This promising start was unfortunately neutered by their eventual simple linear extrapolation of the temperature-to-GDP relationship observed between 1960 and 2014 forward to 2100:

We start by documenting that the global average temperature has risen by 0.0181 degrees Celsius per year over the last half century… We show that an increase in average global temperature of 0.04°C per year—corresponding to the Representative Concentration Pathway (RCP) 8.5 scenario (see Figure 1), which assumes higher greenhouse gas emissions in the absence of mitigation policies—reduces world’s real GDP per capita by 7.22 percent by 2100. (Kahn, Mohaddes et al. 2019, p. 4)

Their prediction for GDP change as a function of temperature change is the shaded region in Figure 9 (which reproduces their Figure 2). The linearity of their projection is evident: it presumes no structural change in the relationship between global temperature and GDP, even as temperature rises by 3.2°C over their time horizon of 80 years (0.04°C per year from 2020 till 2100).

Figure 9: Kahn and Mohaddes’s linear extrapolation of the temperature:GDP relationship from 1960-2014 out till 2100 (Kahn, Mohaddes et al. 2019, p. 6)

The failure of this paper to account for the obvious discontinuities such a temperature increase will wreak on the planet’s climate was acknowledged by one of the authors on Twitter on October 31st 2019:

Kamiar Mohaddes:    I also want to be clear that we cannot, and do not, claim that our empirical analysis allows for rare disaster events, whether technological or climatic, which is likely to be an important consideration. From this perspective, the counterfactual outcomes that we discuss… in Section 4 of the paper (see: https://ideas.repec.org/p/cam/camdae/1965.html) should be regarded as conservative because they only consider scenarios where the climate shocks are Gaussian, without allowing for rare disasters. https://twitter.com/KamiarMohaddes/status/1189846383307694084?s=20 ; https://twitter.com/KamiarMohaddes/status/1189846648366796800?s=20

Steve Keen:    Kamiar, the whole point of #GlobalWarming is that it shifts the entire distribution. What is “rare” in our current climate—like for example the melting of Greenland—becomes a certainty at higher temperatures. https://twitter.com/ProfSteveKeen/status/1189849936290029569?s=20

What Mohaddes called “rare disaster events”—such as, for example, the complete disappearance of Arctic sea ice during summer—would indeed be rare at our current global temperature. But they become certainties as the temperature rises another 3°C (Steffen, Rockström et al. 2018, Figure 3, p. 8255). This forecast is as useful as a study of the relationship between temperature and speed skating, which concludes that it would be advantageous to increase the temperature of the ice from -2°C to +2°C.

This recent paper alerted me to one potentially promising study I had previously missed: the significant outlier in Figure 9 by Burke et al. (Burke, Hsiang et al. 2015). This was at least outside the economic ballpark, if not in that of scientists like Steffen, who expect a 4°C increase in temperature to lead to the collapse of civilisation (Moses 2020).

As its title “Global non-linear effect of temperature on economic production” implies, it did at least consider nonlinearities in the Earth’s climate. But once again, this was restricted to nonlinearities in the temperature-to-GDP relationship observed between 1960 and 2010, which was then extrapolated to a future planet with a vastly different climate:

We quantify the potential impact of warming on national and global incomes by combining our estimated non-linear response function with ‘business as usual’ scenarios of future warming and different assumptions regarding future baseline economic and population growth. This approach assumes future economies respond to temperature changes similarly to today’s economies—perhaps a reasonable assumption given the observed lack of adaptation during our 50-year sample… climate change reduces projected global output by 23% in 2100 relative to a world without climate change, although statistical uncertainty allows for positive impacts with probability 0.29 (Burke, Hsiang et al. 2015, pp. 237-38. Emphasis added)

As applies to so much of this research, these two recent papers show the authors delighting in the ecstasy of econometrics, while failing to appreciate the irrelevance of their framework to the question at hand.

GIGO: Garbage In, Garbage Out

When I began this research, I expected that the main cause of Nordhaus’s extremely low predictions of damages from climate change would be the application of a very high discount rate to climate damages estimated by scientists, and that a full critique of his work would require explaining why an equilibrium-based Neoclassical model like DICE was the wrong tool to analyse something as dynamic and far from equilibrium as climate change (DeCanio 2003). Instead, I found that the computing adage “Garbage In, Garbage Out” (GIGO) applied: it does not matter how good or how bad the actual model is, when it is fed “data” like that concocted by Nordhaus and his coterie of like-minded Neoclassical economists. The numerical estimates to which they fitted their inappropriate models are, as shown here, utterly unrelated to the phenomenon of global warming. Even an appropriate model of the relationship between climate change and GDP would return garbage predictions if it were trained on “data” like this.

This raises the key question: how did such transparently inadequate work get past academic referees?

Simplifying Assumptions and the Refereeing Process: the Poachers become the Gatekeepers

One undeniable reason why this research agenda was not drowned at birth was the proclivity of Neoclassical economists for making assumptions on which their conclusions depend, and then dismissing any objections to them on the grounds that they are merely “simplifying assumptions”.

As Paul Romer observed, the standard justification for this is “Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions” (Romer 2016, p. 5). Those who make this defence do not seem to have noted Friedman’s footnote that “The converse of the proposition does not of course hold: assumptions that are unrealistic (in this sense) do not guarantee a significant theory” (Friedman 1953, p. 14).

A simplifying assumption is something which, if it is violated, makes only a small difference to your analysis. Musgrave points out that “Galileo’s assumption that air-resistance was negligible for the phenomena he investigated was a true statement about reality, and an important part of the explanation Galileo gave of those phenomena” (Musgrave 1990, p. 380). However, the kind of assumptions that Neoclassical economists frequently make are ones where, if the assumption is false, then the theory itself is invalidated (Keen 2011, pp. 158-174).

This is clearly the case here with the core assumptions of Nordhaus and his Neoclassical colleagues. If activities that occur indoors are, in fact, subject to climate change, and if temperature-to-GDP relationships across space cannot be used as proxies for the impact of global warming on GDP over time, then their conclusions are completely false. Climate change will be at least one order of magnitude more damaging to the economy than their numbers imply—working solely from the spurious assumption that 90% of the economy will be unaffected by it. It could be far, far worse.

Unfortunately, referees who accept Friedman’s dictum that “a theory cannot be tested by the “realism” of its “assumptions”” (Friedman 1953, p. 23) are unlikely to reject a paper because of its assumptions, especially if that paper otherwise makes assumptions that Neoclassical economists accept. Thus, Nordhaus’s initial sorties in this area received a free pass.

After this, a weakness of the refereeing process took over. As any published academic knows, once you are published in an area, you will be selected by journal editors as a referee for that area. Thus, rather than peer review providing an independent check on the veracity of research, it can allow the enforcement of a hegemony. As one of the first of the very few Neoclassical economists to work on climate change, and the first to proffer empirical estimates of the damages to the economy from climate change, this put Nordhaus in the position to both frame the debate, and to play the role of gatekeeper. One can surmise that he relishes this role, given not only his attacks on Forrester and the Limits to Growth (Meadows, Randers et al. 1972; Nordhaus 1973; Nordhaus 1992), but also his attack on his fellow Neoclassical economist Nicholas Stern for using a low discount rate in The Stern Review (Nordhaus 2007; Stern 2007).

The product has been a degree of conformity in this community that even Tol acknowledged:

it is quite possible that the estimates are not independent, as there are only a relatively small number of studies, based on similar data, by authors who know each other well… although the number of researchers who published marginal damage cost estimates is larger than the number of researchers who published total impact estimates, it is still a reasonably small and close-knit community who may be subject to group-think, peer pressure, and self-censoring. (Tol 2009, pp. 37, 42-43)

Indeed.

Conclusion: Drastically underestimating damages from Global Warming

Were climate change an effectively trivial area of public policy, then the appallingly bad work done by Neoclassical economists on climate change would not matter greatly. It could be treated, like the intentional Sokal hoax (Sokal 2008), as merely a salutary tale about the foibles of the Academy.

But the impact of climate change upon the economy, human society, and the viability of the Earth’s biosphere in general is a matter of the greatest importance. That work this bad has been done, and been taken seriously, is therefore not merely an intellectual travesty like the Sokal hoax. If climate change does lead to the catastrophic outcomes that some scientists now openly contemplate (Kulp and Strauss 2019; Lenton, Rockström et al. 2019; Wang, Jiang et al. 2019; Yumashev, Hope et al. 2019; Lynas 2020; Moses 2020; Raymond, Matthews et al. 2020; Xu, Kohler et al. 2020), then these Neoclassical economists will be complicit in causing the greatest crisis, not merely in the history of capitalism, but potentially in the history of life on Earth.

Appendix

Table 3: Table SM10-1, p. SM10-4 of “Key Economic Sectors”, plus other studies by economists

Table 4: USA average temperature, GDP/GSP and Population data

References

Arent, D. J., R. S. J. Tol, E. Faust, J. P. Hella, S. Kumar, K. M. Strzepek, F. L. Tóth and D. Yan (2014). Key economic sectors and services – supplementary material. Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. C. B. Field, V. R. Barros, D. J. Dokken et al.

Arent, D. J., R. S. J. Tol, et al. (2014). Key economic sectors and services. Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. C. B. Field, V. R. Barros, D. J. Dokken et al. Cambridge, United Kingdom, Cambridge University Press: 659-708.

Bosello, F., F. Eboli, et al. (2012). “Assessing the economic impacts of climate change.” Review of Energy Environment and Economics 1(9).

Burke, M., S. M. Hsiang, et al. (2015). “Global non-linear effect of temperature on economic production.” Nature 527(7577): 235.

Cline, W. (1996). “The impact of global warming on agriculture: Comment.” The American Economic Review 86(5): 1309-1311.

Darwin, R. (1999). “The Impact of Global Warming on Agriculture: A Ricardian Analysis: Comment.” American Economic Review 89(4): 1049-1052.

DeCanio, S. J. (2003). Economic Models of Climate Change: A Critique. New York, Palgrave Macmillan.

Fankhauser, S. (1995). Valuing Climate Change: The economics of the greenhouse. London, Earthscan.

Field, C. B., V. R. Barros, et al. (2014). Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom, Cambridge University Press.

Forrester, J. W. (1971). World Dynamics. Cambridge, MA, Wright-Allen Press.

Forrester, J. W. (1973). World Dynamics. Cambridge, MA, Wright-Allen Press.

Friedman, M. (1953). The Methodology of Positive Economics. Essays in positive economics. Chicago, University of Chicago Press: 3-43.

Gelman, A. (2014). “A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, “The Economic Effects of Climate Change”.” Statistical Modeling, Causal Inference, and Social Science https://statmodeling.stat.columbia.edu/2014/05/27/whole-fleet-gremlins-looking-carefully-richard-tols-twice-corrected-paper-economic-effects-climate-change/.

Gelman, A. (2015). “More gremlins: “Instead, he simply pretended the other two estimates did not exist. That is inexcusable.”.” Statistical Modeling, Causal Inference, and Social Science https://statmodeling.stat.columbia.edu/2015/07/23/instead-he-simply-pretended-the-other-two-estimates-did-not-exist-that-is-inexcusable/.

Gelman, A. (2019). “The climate economics echo chamber: Gremlins and the people (including a Nobel prize winner) who support them.” https://statmodeling.stat.columbia.edu/2019/11/01/the-environmental-economics-echo-chamber-gremlins-and-the-people-including-a-nobel-prize-winner-who-support-them/.

Hope, C. (2006). “The marginal impact of CO2 from PAGE2002: an integrated assessment model incorporating the IPCC’s five Reasons for Concern.” Integrated Assessment 6(1): 19-56.

Kahn, M. E., K. Mohaddes, et al. (2019). “Long-Term Macroeconomic Effects of Climate Change: A Cross-Country Analysis.” DOI: https://doi.org/10.24149/gwp365.

Kaufmann, R. K. (1997). “Assessing The Dice Model: Uncertainty Associated With The Emission And Retention Of Greenhouse Gases.” Climatic Change 35(4): 435-448.

Kaufmann, R. K. (1998). “The impact of climate change on US agriculture: a response to Mendelssohn et al. (1994).” Ecological Economics 26(2): 113-119.

Keen, S. (2011). Debunking economics: The naked emperor dethroned? London, Zed Books.

Keen, S., R. U. Ayres, et al. (2019). “A Note on the Role of Energy in Production.” Ecological Economics 157: 40-46.

Kulp, S. A. and B. H. Strauss (2019). “New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding.” Nature Communications 10(1): 4844.

Lenton, T., J. Rockström, et al. (2019). “Climate tipping points – too risky to bet against.” Nature 575(7784): 592-595.

Lenton, T. M., H. Held, et al. (2008). “Supplement to Tipping elements in the Earth’s climate system.” Proceedings of the National Academy of Sciences 105(6).

Lenton, T. M., H. Held, et al. (2008). “Tipping elements in the Earth’s climate system.” Proceedings of the National Academy of Sciences 105(6): 1786-1793.

Lynas, M. (2020). Our Final Warning: Six Degrees of Climate Emergency. London, HarperCollins Publishers.

Maddison, D. (2003). “The amenity value of the climate: the household production function approach.” Resource and Energy Economics 25(2): 155-175.

Maddison, D. and K. Rehdanz (2011). “The impact of climate on life satisfaction.” Ecological Economics 70(12): 2437-2445.

Meadows, D. H., J. Randers, et al. (1972). The limits to growth. New York, Signet.

Mendelsohn, R., W. Morrison, et al. (2000). “Country-Specific Market Impacts of Climate Change.” Climatic Change 45(3): 553-569.

Mendelsohn, R., M. Schlesinger, et al. (2000). “Comparing impacts across climate models.” Integrated Assessment 1(1): 37-48.

Mirowski, P. (2020). The Neoliberal Ersatz Nobel Prize. Nine Lives of Neoliberalism. D. Plehwe, Q. Slobodian and P. Mirowski. London, Verso: 219-254.

Moses, A. (2020). ‘Collapse of civilisation is the most likely outcome’: top climate scientists. Voice of Action. Melbourne, Australia.

Musgrave, A. (1990). ‘Unreal Assumptions’ in Economic Theory: The F-Twist Untwisted. Milton Friedman: Critical assessments. Volume 3. J. C. Wood and R. N. Woods. London and New York, Routledge: 333-342.

Nordhaus, W. (2007). “Critical Assumptions in the Stern Review on Climate Change.” Science 317(5835): 201-202.

Nordhaus, W. (2008). A Question of Balance. New Haven, CT, Yale University Press.

Nordhaus, W. (2013). The Climate Casino: Risk, Uncertainty, and Economics for a Warming World. New Haven, CT, Yale University Press.

Nordhaus, W. (2018). “Nobel Lecture in Economic Sciences. Climate Change: The Ultimate Challenge for Economics.” from https://www.nobelprize.org/uploads/2018/10/nordhaus-slides.pdf.

Nordhaus, W. (2018). “Projections and Uncertainties about Climate Change in an Era of Minimal Climate Policies.” American Economic Journal: Economic Policy 10(3): 333–360.

Nordhaus, W. and J. G. Boyer (2000). Warming the World: Economic Models of Global Warming. Cambridge, Massachusetts, MIT Press.

Nordhaus, W. and P. Sztorc (2013). DICE 2013R: Introduction and User’s Manual.

Nordhaus, W. D. (1973). “World Dynamics: Measurement Without Data.” The Economic Journal 83(332): 1156-1183.

Nordhaus, W. D. (1991). “To Slow or Not to Slow: The Economics of The Greenhouse Effect.” The Economic Journal 101(407): 920-937.

Nordhaus, W. D. (1992). “Lethal Model 2: The Limits to Growth Revisited.” Brookings Papers on Economic Activity (2): 1-43.

Nordhaus, W. D. (1994). “Expert Opinion on Climatic Change.” American Scientist 82(1): 45-51.

Nordhaus, W. D. (1994). Managing the Global Commons: The Economics of Climate Change. Cambridge, MA, MIT Press.

Nordhaus, W. D. (2006). “Geography and macroeconomics: New data and new findings.” Proceedings of the National Academy of Sciences of the United States of America 103(10): 3510-3517.

Nordhaus, W. D. (2010). “Economic aspects of global warming in a post-Copenhagen environment.” Proceedings of the National Academy of Sciences of the United States of America 107(26): 11721-11726.

Nordhaus, W. D. (2017). “Revisiting the social cost of carbon Supporting Information.” Proceedings of the National Academy of Sciences 114(7): 1-2.

Nordhaus, W. D. and A. Moffat (2017). A Survey Of Global Impacts Of Climate Change: Replication, Survey Methods, And A Statistical Analysis. New Haven, Connecticut, Cowles Foundation. Discussion Paper No. 2096.

Nordhaus, W. D. and Z. Yang (1996). “A Regional Dynamic General-Equilibrium Model of Alternative Climate-Change Strategies.” The American Economic Review 86(4): 741-765.

Penn, J. L., C. Deutsch, et al. (2018). “Temperature-dependent hypoxia explains biogeography and severity of end-Permian marine mass extinction.” Science 362(6419): eaat1327.

Plambeck, E. L. and C. Hope (1996). “PAGE95: An updated valuation of the impacts of global warming.” Energy Policy 24(9): 783-793.

Quiggin, J. and J. Horowitz (1999). “The impact of global warming on agriculture: A Ricardian analysis: Comment.” The American Economic Review 89(4): 1044-1045.

Raymond, C., T. Matthews, et al. (2020). “The emergence of heat and humidity too severe for human tolerance.” Science Advances 6(19): eaaw1838.

Rehdanz, K. and D. Maddison (2005). “Climate and happiness.” Ecological Economics 52(1): 111-125.

Romer, P. (2016). “The Trouble with Macroeconomics.” https://paulromer.net/trouble-with-macroeconomics-update/WP-Trouble.pdf.

Roson, R. and D. v. d. Mensbrugghe (2012). “Climate change and economic growth: impacts and interactions.” International Journal of Sustainable Economy 4(3): 270-285.

Sokal, A. D. (2008). Beyond the Hoax: Science, Philosophy and Culture. Oxford, Oxford University Press.

Steffen, W., J. Rockström, et al. (2018). “Trajectories of the Earth System in the Anthropocene.” Proceedings of the National Academy of Sciences 115(33): 8252-8259.

Stern, N. (2007). The Economics of Climate Change: The Stern Review. Cambridge, Cambridge University Press.

Tol, R. S. J. (1995). “The damage costs of climate change toward more comprehensive calculations.” Environmental and Resource Economics 5(4): 353-374.

Tol, R. S. J. (2002). “Estimates of the Damage Costs of Climate Change. Part 1: Benchmark Estimates.” Environmental and Resource Economics 21(1): 47-73.

Tol, R. S. J. (2009). “The Economic Effects of Climate Change.” The Journal of Economic Perspectives 23(2): 29–51.

Tol, R. S. J. (2014). “Correction and Update: The Economic Effects of Climate Change.” The Journal of Economic Perspectives 28(2): 221-226.

Tol, R. S. J. (2018). “The Economic Impacts of Climate Change.” Review of Environmental Economics and Policy 12(1): 4-25.

Tol, R. S. J. (2018). “The Economic Impacts of Climate Change Appendix.” Review of Environmental Economics and Policy 12(1): 4-25.

Trenberth, K. E. (1981). “Seasonal variations in global sea level pressure and the total mass of the atmosphere.” Journal of Geophysical Research: Oceans 86(C6): 5238-5246.

Wang, X. X., D. Jiang, et al. (2019). “Extreme temperature and precipitation changes associated with four degree of global warming above pre-industrial levels.” International Journal Of Climatology 39(4): 1822-1838.

Weitzman, M. L. (2011). “Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change.” Review of Environmental Economics and Policy 5(2): 275-292.

Weitzman, M. L. (2011). “Revisiting Fat-Tailed Uncertainty in the Economics of Climate Change.” REEP Symposium on Fat Tails 5(2).

Xu, C., T. A. Kohler, et al. (2020). “Future of the human climate niche.” Proceedings of the National Academy of Sciences: 201910114.

Yumashev, D., C. Hope, et al. (2019). “Climate policy implications of nonlinear decline of Arctic land permafrost and other cryosphere elements.” Nature Communications
10(1): 1900.

The macroeconomics of degrowth: can planned economic contraction be stable?

The Limits to Growth (Meadows, Randers et al. 1972) is infamous for many things, but above all for its “Standard Run” scenario, which predicted that, if there were no changes to the direction of economic development after 1972, then by some time in the early to mid-21st century, human civilisation would undergo a serious decline.

Figure 1: The Limits to Growth standard run (Meadows, Randers et al. 1972, p. 124). The legend for all plots is provided in Figure 2

Figure 2: The legend for all the Limits to Growth plots (Meadows, Randers et al. 1972, p. 123)

Less well known is its stabilised run, in which a range of policies were hypothetically introduced in 1975 to achieve a state of “global equilibrium … so that the basic material needs of each person on earth are satisfied and each person has an equal opportunity to realize his individual human potential” (Meadows, Randers et al. 1972, p. 24). The simulation concluded that no single policy was sufficient: population control on its own was not enough, nor pollution abatement without population control, and so on. But if all the policy changes they modelled were undertaken (they are described in Figure 3), then the world could achieve a sustainable future where average living standards for the globe as a whole were three times as high as they were in 1970—and much more equitably distributed.

Figure 3: The state of global equilibrium run (Meadows, Randers et al. 1972, p. 165)

As crucial as the need for a swathe of coordinated policies was the timing: if the changes were delayed another 25 years, until 2000, they would fail. Humanity would overshoot the biosphere’s capacity to endure the pressure put upon it by industrialized society, and both output and population would peak in the mid-21st century and then decline.

Figure 4: Policies for global equilibrium introduced in 2000 instead of 1975 (Meadows, Randers et al. 1972, p. 169)

It is stating the obvious that policies to restrain humanity’s pressure on the biosphere were not put in place in 1975, nor 2000, nor even 2020. With research by Graham Turner (Turner 2008; Turner, Hoffman et al. 2011; Turner 2014) confirming that the world is still largely tracking the Standard Run of Limits to Growth, and studies like the Human Ecological Footprint (https://www.footprintnetwork.org/resources/data/) estimating that the human species alone is consuming 1.6 times the biosphere’s regenerative capacity, we are well into ecological overshoot. Meadows et al noted that there were three alternatives for the future, only two of which were actually possible:

All the evidence available to us, however, suggests that of the three alternatives—unrestricted growth, a self-imposed limitation to growth, or a nature-imposed limitation to growth—only the last two are actually possible. (Meadows, Randers et al. 1972, p. 168)

Since we have clearly failed to impose limits ourselves, we now face Nature doing that for us. Meadows et al deliberately avoided providing precise timing for their predictions, stating that:

We have deliberately omitted the vertical scales and we have made the horizontal time scale somewhat vague because we want to emphasize the general behavior modes of these computer outputs, not the numerical values, which are only approximately known. (Meadows, Randers et al. 1972, pp. 123-24)

However, it is hardly being hyperbolic, at this point in 2020—with Australia’s wildfires behind us (Dowdy, Ye et al. 2019), Covid-19 all around us (Korolev 2020), India and Bangladesh suddenly reeling under the impact of Cyclone Amphan, and the prospect of catastrophic wildfires in the approaching American Summer—to feel that the deliberately vague timing of the Limits to Growth has proven to be precisely correct. Nature may be imposing its limits now.

But just as Covid-19 has severely jolted our consciousness, and led to policy changes that were unthinkable as recently as January 2020, what if these and subsequent ecological calamities shook humanity so much that we decided, belatedly but instantly, to impose the limits that Limits to Growth recommended we should implement 45 years ago? What would happen to global GDP?

Answering this question thoroughly would require updating the Limits to Growth study with current data. This should have been happening on a regular basis since 1972, but it was prevented in large measure by the ferocious attacks on the study’s credibility by economists in general—and by William Nordhaus in particular (Nordhaus 1973; Nordhaus 1992). These attacks were based on misinformation and ignorance rather than knowledge (Forrester, Gilbert et al. 1974), but—or should I say “and”?—their impact was devastating. Though the book itself sold millions of copies, the group’s research funding evaporated. Whereas the original study was run on top-of-the-line (for the time) mainframe computers at MIT, with a budget of the order of a million dollars in 1972, today Jorgen Randers, the one surviving author of the original study, is working without pay on developing an extended version of the World3 model called MODCAP (using the PC-based system dynamics program Vensim). In 2019 he was unable to raise funds to continue employing his one assistant.

In lieu of a complete answer therefore, I will provide a partial one by focusing on one of the many weaknesses of economics: the failure of both mainstream and non-mainstream economists to properly acknowledge the essential role of energy in production. What will happen to GDP if humanity realises that it has to immediately cease adding CO2 to the atmosphere, by ceasing to use carbon-based forms of energy, for everything from the production of electricity to transportation?

This question can’t be answered by turning to an unconventional economics textbook, let alone a conventional one. To date, Post Keynesian and Neoclassical economists alike have modelled GDP as being generated by a combination of labour and capital: Labour and Machinery in → Goods and services out. Some economists have attempted to incorporate energy by, to use Neoclassical terminology, treating Energy as a third “factor of production” on an equal footing with Labour and Capital (Solow 1974; Stiglitz 1974; Stiglitz 1974). But this is ontologically false. Energy cannot be added to a production process independently of labour and capital, nor can a worker or a machine function without energy inputs. As I put it in “A Note on the Role of Energy in Production”:

Labour without energy is a corpse; capital without energy is a sculpture. (Keen, Ayres et al. 2019, p. 41)

This perspective casts the relationship between energy and GDP in a very different light from that of conventional economics, which even Larry Summers ridiculed for treating energy as no more important than its very small share of total GDP (less than 10% for most countries) implied:

There’d be a set of economists who’d sit around explaining that electricity was only four percent of the economy, and so if you lost eighty percent of electricity you couldn’t possibly have lost more than three percent of the economy… It would be stupid. And we’d understand that somehow, even if we didn’t exactly understand in the model, that when there wasn’t any electricity there wasn’t really going to be much economy. (Summers 2013)

Treating energy as inputs to both labour and machinery (in different forms of course) implies a critical relationship between energy inputs and GDP: if there is no energy input, then there is no GDP, and if energy inputs fall, then ipso facto, so will GDP.

There are only two ways in which these implications could be countered: if, over time, GDP “decoupled” from energy, so that more GDP was produced per unit of energy; or if renewable forms of energy replaced carbon-based forms rapidly enough that the overall fall in energy could be attenuated.

Both these counters are false hopes. Though there has been a trend to a falling level of energy per unit of GDP in some countries, at the global level, the relationship between energy consumption and GDP between 1971 and 2017 is stunningly linear—see Figure 5. A rising global GDP requires a rising level of energy, and since energy is the motive force, if we are forced to abandon carbon-based energy forms, then GDP has to fall by much the same fraction as carbon-based energy is of total energy.

Figure 5: The relationship between global energy and global GDP between 1971 and 2016

The remaining hope is that progress in renewable energy has been such that it makes up a much larger proportion of total energy production than in the past. But again, the data dashes this hope: though there has been a rapid increase in renewable energy as a percentage of total energy production since 2007, by 2016 it had only just pushed the renewable percentage past the peak it reached in 1983—see Figure 6. Even if the growth rate recorded in 2015 had continued, renewables would still constitute less than 16% of total energy output in 2020.

Figure 6: The disappointing news on renewables as a percent of total energy supply

That implies that, if we decided to cease using carbon-based fuels at all at the end of 2020—so that we stopped adding to the level of CO2 in the atmosphere—then GDP would fall by about 85%.

There is no way that such a decision will be made in 2020—or at least, no way it will be made voluntarily. As Covid-19 has shown, a decision of a comparable magnitude can be compelled by Nature, but the mentality amongst decision-makers and the public is still to aim for a “return to normal” when—or rather if—the Covid-19 threat is contained.

But what if the wildfires, and the virus, and the floods, and the locusts(!) of 2020 are just the beginning of a series of ecological nightmares that will finally alert humanity to how far we have overstepped the carrying capacity of this planet? What if policymakers, with not merely the support of the public but pressure from it, adopt the UK Labour target of 2030 as the year in which CO2 emissions fall to zero? Given the linear relationship between energy usage and GDP shown in Figure 5, what would it take in terms of the replacement of carbon-based energy sources by renewable ones—primarily solar and wind—to preserve 2020’s global real output level in 2030?

Extrapolating the trends in both GDP and energy from 2016 to 2020 yields an estimate of global GDP in 2020 of $86 trillion (in 2010 US$ terms), and energy production of 14 million ktoe (kilotonnes of oil equivalent, one of the standard measures of energy). If we assume that, as of 2020, renewable energy is providing 15% of total energy, then to reach 100% renewable energy by 2030—and thus achieve zero emissions of CO2—while maintaining GDP at the 2020 level, would require the installation of renewable energy sources capable of yielding the energy equivalent of roughly 12 billion tonnes of oil every year.
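As a sanity check, the arithmetic behind the “roughly 12 billion tonnes of oil” figure can be reproduced in a few lines. The inputs are the post’s own 2020 extrapolations (14 million ktoe of energy, a 15% renewable share), not measured data:

```python
# Back-of-the-envelope check of the figures above, using the post's
# own 2020 extrapolations rather than measured data.
total_energy_ktoe = 14e6    # estimated 2020 global energy production, in ktoe
renewable_share = 0.15      # assumed renewable fraction of total energy in 2020

# Carbon-based energy that renewables would have to replace by 2030:
carbon_ktoe = total_energy_ktoe * (1 - renewable_share)

# 1 ktoe = 1,000 tonnes of oil equivalent
carbon_toe = carbon_ktoe * 1000

print(f"{carbon_toe / 1e9:.1f} billion tonnes of oil equivalent per year")
# prints "11.9 billion tonnes of oil equivalent per year"
```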

To put this in perspective, to generate the same amount of power from large (1,000 Megawatt) hydro-electric or nuclear power stations in 2030 would require building 16,000 of them between now and 2030, or more than 4 such stations per day, every day. It goes without saying that we do not have the capacity to achieve either goal.
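The “16,000 power stations” figure follows from the standard conversion of 1 tonne of oil equivalent ≈ 11.63 MWh. A rough reconstruction, generously assuming each station runs at full capacity all year:

```python
# Rough reconstruction of the "16,000 stations" figure. Assumes each
# station runs at full capacity year-round, which flatters the stations.
TOE_TO_MWH = 11.63                      # 1 tonne of oil equivalent in MWh

energy_needed_mwh = 12e9 * TOE_TO_MWH   # ~12 billion toe per year to replace

station_mw = 1000                       # one large hydro or nuclear station
station_mwh_per_year = station_mw * 24 * 365

stations = energy_needed_mwh / station_mwh_per_year
per_day = stations / (10 * 365)         # if built over the decade to 2030

print(round(stations), "stations, or about", round(per_day, 1), "per day")
```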

Solar cells and wind farms are technologically far less complicated than nuclear power stations, and capable of being installed in far more locations than hydroelectric dams. However, solar only generates power during daylight hours, wind farms are more restricted in location, and both require an enormous area to generate the same power as a nuclear or coal-fired power station. Since the energy the Earth’s surface receives from the Sun peaks at about 10 Megawatts per hectare, a 1000MW solar power station would require roughly 1,000 hectares of land with optimistic assumptions about efficiency and availability (wind power is even more diffuse). To completely replace our planetary energy production with solar power would require an area of solar panels roughly equivalent to one third the area of Spain.
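The “one third of Spain” comparison can be checked under the same assumptions: 10 MW of peak insolation per hectare, and the roughly 10% overall efficiency-and-availability factor implied by the 1,000-hectare figure above. Spain’s area (about 506,000 km²) is the only number added here:

```python
# Checking the solar land-area comparison. The 10% overall factor
# (panel efficiency times availability) is the optimistic assumption
# implied by the 1,000-hectare figure in the text.
PEAK_MW_PER_HA = 10          # ~1 kW/m^2 of peak insolation
OVERALL_FACTOR = 0.10        # efficiency x availability

ha_per_station = 1000 / (PEAK_MW_PER_HA * OVERALL_FACTOR)   # 1,000 ha each
total_ha = 16_000 * ha_per_station                          # all stations replaced

SPAIN_HA = 50.6e6            # Spain is roughly 506,000 km^2
print(f"{total_ha / SPAIN_HA:.2f} of Spain's area")         # prints "0.32 ..."
```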

That amount of land could be provided by the world’s road networks and rooftops, with farms also hosting wind generation, so the goal is not unreachable. But are we getting to this goal fast enough that, by 2030, renewable energy could provide 100% of our energy needs? Unfortunately, no—and not by a long shot. The red and blue plots in Figure 7 are the same as in Figure 6, just with the time horizon pushed further out: at the current growth rate of renewable energy, it would take until 2082 to achieve 100% renewable energy at the 2020 level of GDP. To get to 100% renewable energy by 2030, and thus be able to sustain 2020’s real GDP level, we need the ratio of renewable (predominantly wind and solar) energy to total energy to grow by 20% per year—almost seven times the actual growth rate in 2015. Call me a pessimist, but I can’t see that happening. Individual countries (such as the UK) might be able to get there, but at the planetary level, we will not.
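The required growth rate is simple compound-growth arithmetic: starting from an assumed 15% renewable share in 2020, the share must grow by a factor of 1/0.15 over ten years.

```python
# Compound-growth arithmetic behind the "20% per year" figure.
start_share = 0.15    # assumed renewable share of total energy in 2020
years = 10            # 2020 to 2030

required = (1.0 / start_share) ** (1.0 / years) - 1
print(f"required growth of the renewable share: {required:.1%} per year")
# prints "required growth of the renewable share: 20.9% per year"
```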

This means that we won’t be able to maintain 2020 GDP levels in 2030 with an economy which is completely powered by renewable energy, and hence has net zero CO2 emissions. Though we should endeavour to expand renewable and non-carbon-based power in general as much as possible, if we are to have a zero carbon economy by 2030, then we have to accept that GDP will fall substantially. Even a four-fold increase in the rate of growth of renewable energy will result in energy input levels—and hence GDP output levels—that are 50% below 2020 levels in 2030.

Figure 7: Extending the optimistic extrapolation of the trend in Figure 6 till it reaches 100% of energy production

How could such a reduction in output be undertaken deliberately and, as much as possible, peacefully? We need a mechanism for GDP reduction, and for the encouragement of the shift to renewable energy, that falls primarily on the wealthy. A price for carbon, as championed by Neoclassical economists like William Nordhaus, would afflict the poor disproportionately compared to the rich. The riots with which the Gilets Jaunes movement began in France in response to carbon pricing less than two years ago should make it obvious that the burden of adjustment must fall on the rich rather than the poor—both within nations and between them.

One system that could work is a dual-price mechanism based on carbon rationing, as proposed by Total Carbon Rationing. Currently, for other reasons, many Central Banks are exploring the concept of “Central Bank Digital Currencies” (CBDCs), which would give every resident in a country an account at the Central Bank. As well as being a means to create and store the national currency, these accounts could be used to provide a “Universal Carbon Credit” (UCC) to every resident of a country on an equal per capita basis—so that billionaires would receive the same annual UCC as paupers. To buy any commodity, a consumer would need to pay both its money price, as now, and its CO2 content, using UCCs.

The ration could initially be set well above the average per capita CO2 consumption of the country, so that the vast majority of the population would never exhaust their allowance, and would therefore be able to sell their excess UCCs to the rich—who, at their current consumption levels, would definitely exhaust their allowance, and thus need to buy UCCs from the poor. As the ration was reduced over time, it would work both as a redistributive mechanism and as a means of reducing consumption, and hence GDP and CO2 output.
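A toy sketch of the redistributive flow under such a ration (the ration level and consumption figures here are purely illustrative, not taken from the proposal):

```python
# Illustrative sketch of the UCC mechanism: an equal per capita ration,
# with under-consumers selling spare credits to over-consumers.
# All numbers are hypothetical.
annual_ucc = 10.0     # hypothetical ration: tonnes of CO2 per person per year

# hypothetical annual CO2 consumption, in tonnes
consumption = {"pauper": 4.0, "median earner": 8.0, "billionaire": 60.0}

for person, used in consumption.items():
    balance = annual_ucc - used
    if balance >= 0:
        print(f"{person}: {balance:.0f} spare credits to sell")
    else:
        print(f"{person}: must buy {-balance:.0f} credits from others")
```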

An economically and politically stable route to reduced GDP is thus conceivable. Is it realistic? My expectation is that it is not. The far more likely outcome is that humanity in general and the powerful in particular will delay the decision to act, hoping instead that GDP can return to pre-Covid-19 growth rates, while ignoring the dependence of this growth rate on an increasing use of carbon-based energy that will accelerate Global Warming. We will, in effect, let Nature make the decision for us.

Dowdy, A. J., H. Ye, et al. (2019). “Future changes in extreme weather and pyroconvection risk factors for Australian wildfires.” Scientific Reports 9(1).

Forrester, J. W., W. L. Gilbert, et al. (1974). “The Debate on ‘World Dynamics’: A Response to Nordhaus.” Policy Sciences 5(2): 169-190.

Keen, S., R. U. Ayres, et al. (2019). “A Note on the Role of Energy in Production.” Ecological Economics 157: 40-46.

Korolev, I. (2020). Identification and Estimation of the SEIRD Epidemic Model for COVID-19. SUNY at Binghamton, Department of Economics.

Meadows, D. H., J. Randers, et al. (1972). The Limits to Growth. New York, Signet.

Nordhaus, W. D. (1973). “World Dynamics: Measurement Without Data.” The Economic Journal 83(332): 1156-1183.

Nordhaus, W. D. (1992). “Lethal Model 2: The Limits to Growth Revisited.” Brookings Papers on Economic Activity (2): 1-43.

Solow, R. M. (1974). “The Economics of Resources or the Resources of Economics.” The American Economic Review 64(2): 1-14.

Stiglitz, J. (1974). “Growth with Exhaustible Natural Resources: Efficient and Optimal Growth Paths.” The Review of Economic Studies 41: 123-137.

Stiglitz, J. E. (1974). “Growth with Exhaustible Natural Resources: The Competitive Economy.” Review of Economic Studies 41(5): 139.

Summers, L. (2013). Larry Summers at IMF Economic Forum, November 8th 2013.

Turner, G. M. (2008). “A comparison of The Limits to Growth with 30 years of reality.” Global Environmental Change 18(3): 397-411.

Turner, G. M. (2014). Is Global Collapse Imminent? An Updated Comparison of The Limits to Growth with Historical Data. Melbourne, Melbourne Sustainable Society Institute.

Turner, G. M., R. Hoffman, et al. (2011). “A tool for strategic biophysical assessment of a national economy – The Australian stocks and flows framework.” Environmental Modelling & Software 26: 1134-1149.

Personal #Coronavirus Update 03 May 23rd 2020

It’s been a long time—in Coronavirus days—since my last update on March 21st. At that stage, we had been in Thailand for just 2 days, and we were staying in a hotel in Trang (we stayed in the only 5-star hotel in this town of 60,000, which is about 500 miles south of Bangkok, and 300km south of Phuket, far off the beaten track for foreign tourists to Thailand).

The main motivation for the move from The Netherlands to Thailand was to buy myself time to finish the exposé I want to write on the appallingly bad garbage that has passed for mainstream climate change economics. I still fully expected that I’d get the virus; I just didn’t want it to get me before I “got”, as best I can, Nordhaus and Tol and all their fellow-travellers for their arguably criminally negligent trivialization of the dangers of climate change. I was buying time but, I thought, not health or freedom.

Now, we’re renting a 4-bedroom house in a gated community of about 500 homes on the outskirts of Trang, and the odds are that I won’t get the virus at all.

Thailand is one of about 40 countries that, to use the pandemic disease expert and complex systems theorist Yaneer Bar-Yam’s phrase, are not merely “flattening” but “crushing the curve”: eliminating the virus from within their borders. When we arrived, Thailand’s daily case count was 56 on a 3-day moving average basis, and was still rising—see the screenshot below from the excellent Coronavirus visualisation tool built by one of my Patrons, Nigel Goddard (the blue dot marks March 20, the day we arrived here).

Figure 1: Thailand is recording just 1 case a day now https://homepages.inf.ed.ac.uk/ngoddard/covid19/

Now it has fallen to 1 new case per day, and only one case in the last two weeks was of community transmission: the rest have come from Thai nationals in quarantine after returning from overseas. It is highly likely that Thailand will eliminate the virus entirely this month.

This has been due to strong personal hygiene, stringent controls from a central government that took the disease seriously from the outset and had recent experience of an epidemic with SARS in 2003, and plentiful supplies of personal protection equipment: when we arrived, every morning we’d line up at a local supermarket (with about 500 others) to buy a pack of 4 N95 surgical masks for 10 Baht—or about 10 US cents each.

Figure 2: The standard pack of 4 masks, at about 10 US cents each

The price was government controlled, but included a 20% profit for the local Thai manufacturer. This experience alone demonstrated to me the folly of the West offshoring production under “globalization”. That meant cheap goods for consumers and large profits for capitalists who no longer had to pay their local workers decent wages. But it also meant that the USA, and the UK, and most of Europe, weren’t able to produce enough masks for their own people during this pandemic, while Asian countries were able to churn them out cheaply, and make them available en masse.

Figure 3: The orderly queue to purchase the masks. People were a bit closer than the 1.5 metre rule, but literally everyone was wearing a mask to begin with

We now have a personal stock of about 25 packs of these, plus alcohol gels and sprays. This would be replicated across Thailand (though not necessarily to the same scale per household). The bottom line is that pretty much everyone in the country has the personal protection equipment (and social practices) needed to drastically reduce person to person transmission of the virus.

Figure 4: Part of our household stock of masks, gels and sprays.

It has certainly been eliminated in Trang, the province we’re living in (we are in the capital city of the same name). There were 3 cases here when we arrived in Thailand, then 4, 6 and finally 7—all, so I’m told, from the family of a 24-year-old who had been working in Phuket.

Phuket is a major tourist destination, and has had a total of 224 cases out of a population of 420,000—or about 1 case per 2,000 residents (about half as bad as The Netherlands). The province of Trang has had 7 cases amongst its 700,000 residents—or 1 case per 100,000. The last new case was over a month ago. All the most recent cases have been in Bangkok, a sprawling city of 8 million that I was sure would be a viral hotspot. Instead, it has recorded just 1,548 cases: about 19 cases per 100,000, versus 260 per 100,000 in the Netherlands and close to 400 in the UK.
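For what it’s worth, the per-capita comparisons above are straightforward to reproduce from the case counts quoted (the population figures are the post’s own round numbers):

```python
# Cases per 100,000 residents, from the counts quoted in the text.
data = {"Phuket": (224, 420_000),
        "Trang province": (7, 700_000),
        "Bangkok": (1548, 8_000_000)}

rates = {place: cases / pop * 100_000 for place, (cases, pop) in data.items()}

for place, rate in rates.items():
    print(f"{place}: {rate:.1f} cases per 100,000")
```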

Figure 5: Bangkok’s total case count at https://covidtracker.5lab.co/en?fbclid=IwAR2FoTEMKOjtADGTM7Bv1EGiqP2-mppEKtpdx9zv9ZO45FM0qi04yxxEAKk

The personal impact of this is palpable. Even though people are still practicing personal caution here, the mood is relaxed: you’re no longer afraid of your fellow human being. I noticed this at a restaurant earlier this week, when the owner came up and clinked glasses with us over a meal. Even a month ago, that was unthinkable. Now, it feels like old times—as in, like six months ago. I wouldn’t even have noted such an event back then. Now, it’s significant. I feel like someone who almost drowned, noticing the air in a way that everyone else takes for granted.

Thailand won’t let this relaxed mood lead to a resurgence of cases, however. It is still locking down provinces—you can’t travel from one to another without a health clearance, a good reason to travel (tourism doesn’t qualify!), and a clearance to travel from the provincial government; you have to scan a QR code when you enter and leave a shop, to enable case tracking; everyone everywhere wears a mask when they are in contact with people they don’t know; and a curfew still applies, but now from 11pm till 4am rather than the original 10pm till 4am.

So, I’m confident that Thailand will get to where Taiwan already is: to zero new cases for more than 2 weeks, which confirms that the virus is not within its borders. There are, according to Yaneer’s excellent EndCoronavirus.org website, 46 countries that are on their way to that state.

Figure 6: Yaneer’s personal profile from the https://www.endcoronavirus.org/ website

They include China, the original source of the virus, which is down to a mere 7 new cases a day in a population of 1.4 billion. It is very much the whale amongst minnows here, in both numerical and economic terms. The next biggest country in population terms is Vietnam, with a population of 95 million—and the incredible success story of having just 302 cases and no deaths. Next is Thailand (70 million), Australia (25 million), Taiwan (24 million), then Sri Lanka (22 million), and after that, way down to countries like Norway (5 million). Economically, China is also first at over US$13 trillion, with Australia in second place at one tenth that level.

Figure 7: The 46 countries that are on their way to eliminating the virus completely

Another 25 are “nearly there”. These include most of the countries of the EU, Japan, and South Korea. Of these, both Japan and South Korea look likely to join Yaneer’s winners circle—where they will form a welcome counterpoint to China’s economic and population dominance. Many of the others, I fear, will spark a second wave if they succumb to the pressure to re-open before the virus has been eradicated. That includes Italy for example, which is still recording almost a thousand new cases every day, and the Netherlands, which I left in order to come here, which has over 44,000 cases and about 200 new cases every day.

Figure 8: Countries that are “nearly there”

Then there are the 36 countries where the virus is still rampant—including crucially the United States of America and the UK, which both completely bungled their fight with the virus, and which seem highly likely to experience a second wave of infections after their pathetically managed lockdowns are ended as well.

Figure 9: The “virus forever?” countries

The worst country may well be Brazil, which seems to be avoiding a second wave only by completely failing to contain the first. Cases are doubling every ten days.

Figure 10: Brazil’s cases are doubling every 10 days. See https://homepages.inf.ed.ac.uk/ngoddard/covid19/

So I find myself in part of the world that is virus-free, and watching a New World Order evolve that no-one anticipated—not even Huxley or Orwell. It’s a “fractured planet”, with two enormously disparate fragments: China, Southeast Asia and Oceania in the “virus free” segment, and the rest of the world in the “virus afflicted”. I’m glad to be in the virus-free part, but I do have some trepidation about the future politics of this bloc, in which China is by far the major power economically and militarily.

Figure 11: The “winners and losers” from EndCoronavirus.org at https://www.endcoronavirus.org/map-visualization

That worry aside, I’m relaxed and working well, though enormously behind on numerous projects thanks to the time I lost in the move. Initially, getting settled here took total precedence: finding a place to rent (we rapidly located an unfurnished 4 bedroom house in a gated community on the outskirts of Trang, for US$300 a month), furnishing it, buying the essentials for mobility in a region where the temperature never drops below 24°C and frequently hits 37°C (a car, motorbike, and bicycles for exercise before the sun rises too high). That took about six weeks all up. It came after spending two weeks visiting my family in Sydney for what I was sure would be the last time for at least a year, after working with Russell Standish on Minsky for two weeks in late February.

All of March, all of April, and part of May was thus lost to the personal impact of the virus. I finally got down to solid work about two weeks ago. So far, I’ve just finished two major tasks with tight deadlines: a chapter for a book on system dynamics modelling in economics, and a paper collating my work on macroeconomics for the Review of Political Economy. With those two out of the way, there’s still a ton of work to do—summarised in part by this “to-do” list:

Figure 12: My current to-do list

My daily routine is a 10-30km bike ride starting somewhere between 5.45 and 6.30am on the very safe internal roads of this community, then a brief 5BX exercise routine at home, before breakfast and getting into work at about 8-8.30am. I work till around 6pm, when my partner and I head out for a meal (why cook, when dinner for two costs about US$4 at a local market?).

Speaking of which, it’s her birthday, so we’re off for a slightly more expensive meal with her cousin and her partner somewhere in town.

Keep safe everyone. And many, many thanks to my Patrons: your support enabled me to make this shift, from somewhere where I was worried every day about getting the virus, to somewhere where my emotional and intellectual energies can be focused on skewering the bad economics—and bad economists—that got us into this trap in the first place.

Figure 13: My office. It’s sparse, with no books, since there was no room in the luggage on our flight from Amsterdam (bar one on managing back pain, and another by Richard Tol which I intend to use in my case against him and his fellow climate change trivializers)—but a nice backdrop of the local beach, which hopefully we’ll be allowed to visit again next month

Figure 14: The bikes in the house’s vestibule. My partner uses the motorbike far more often than I do, and I use one of the bicycles far more often than she does.

Figure 15: The car and house. The car is essential with no public transport to speak of, a 2km walk to the town, and a standard daytime temperature of 34°C. Paying rent of US$300 a month for a 4-bedroom 2-storey house (which would cost maybe $120,000 to buy) makes Michael Hudson’s point that high house prices benefit rentiers and make the West uncompetitive with the East, thanks to the wages that are needed to pay exorbitant rents and mortgages.

Figure 16: The 1km long internal main road of the community on which I do 5-15 laps by bicycle every morning

Figure 17: The entrance to the community, next to a 7-11, on a busy 4-lane highway into Trang

Figure 18: Temperature scanning takes place at every shop, and you have to use LINE to scan a QR code on entry and exit, to enable tracking in case a community transmission occurs

Figure 19: Personal protection equipment is plentiful and cheap. N95 masks for $1.25, 100cc alcohol sprays for $2

Figure 20: Everyone wears masks everywhere that they’re in contact with the public

Figure 21: My cartoon collaborator and good friend Miguel Guerra’s excellent cartoon on the folly of putting the economy before health

Emergent Macroeconomics: Deriving Minsky’s Financial Instability Hypothesis Directly from Macroeconomic Definitions

Abstract

Though Minsky developed a compelling verbal model of the “Financial Instability Hypothesis” (FIH), he abandoned his early attempts to build a mathematical model of financial instability (Minsky 1957). While many mathematical models of the FIH have been developed since, the criticism that these models are “ad hoc” lingers.

In this paper I show that the essential characteristics of Minsky’s hypothesis are emergent properties of a complex systems macroeconomic model which is derived directly from macroeconomic definitions, augmented by the simplest possible assumptions for relations between system states, and the simplest possible behavioural postulates.

I also show that credit, which I define as the time derivative of private debt, is an essential component of aggregate demand and aggregate income, given that bank lending creates money (Holmes 1969; Moore 1979; McLeay, Radia et al. 2014).

Minsky’s Financial Instability Hypothesis is thus derived from sound macrofoundations. This stylized complex-systems model reproduces not only the core predictions of Minsky’s verbal hypothesis, but also empirical properties of the real world which have defied Neoclassical understanding, and which were also not predictions of Minsky’s verbal model: the occurrence of a “Great Moderation”—a period of diminishing cycles in employment, inflation, and economic growth—prior to a “Minsky Moment” crisis; and a tendency for inequality to rise over time.

The simulations in this paper use the Open Source system dynamics program Minsky, which was named in Minsky’s honour.

Keywords

Minsky, Financial Instability Hypothesis, Complexity, System Dynamics, Credit, Debt, Macroeconomics

JEL Codes

C60, C61, C62, E11, E12, E31, E40, E44, F47, G01, G12, G51, N11, N12, Y1

Introduction

Though Minsky developed a compelling verbal model of the “Financial Instability Hypothesis” (FIH), he abandoned his early attempts to build a mathematical model of financial instability (Minsky 1957). Many mathematical models of the FIH have been developed since (Taylor and O’Connell 1985; Jarsulic 1989; Keen 1995; Charles 2005; Cruz 2005; Tymoigne 2006; Charles 2008; Fazzari, Ferri et al. 2008; Santos and Macedo e Silva 2009), and Minsky collaborated in some of them (Gatti, Delli Gatti et al. 1994), but the criticism that these models are “ad hoc” lingers (Rosser 1999, p. 83).

In this paper I show that the essential characteristics of Minsky’s hypothesis are emergent properties of a complex systems macroeconomic model which is derived directly from macroeconomic definitions, augmented by the simplest possible assumptions for relations between system states, and the simplest possible behavioural postulates.

I also show that credit—which I define as the time derivative of private debt (see Appendix A)—is an essential component of aggregate demand and aggregate income, given that bank lending creates money (Holmes 1969; Moore 1979; McLeay, Radia et al. 2014).

Minsky’s Financial Instability Hypothesis is thus derived from sound macrofoundations. This stylized complex-systems model reproduces not only the core predictions of Minsky’s verbal hypothesis, but also empirical properties of the real world which have defied Neoclassical understanding, and which were also not predictions of Minsky’s verbal model: the occurrence of a “Great Moderation”—a period of diminishing cycles in employment, inflation, and economic growth—prior to a “Minsky Moment” crisis; and a tendency for inequality to rise over time (Piketty 2014).

Deriving Minsky directly from macroeconomic definitions

Minsky’s Financial Instability Hypothesis is one of the rarest things in the history of economics: a powerful and accurate intuition. Neoclassical economists from Jevons onward have portrayed capitalism as a system that tends to equilibrium—while ignoring both history, and mathematical results like the Perron-Frobenius theorem (Jorgenson 1960; Jorgenson 1961; Jorgenson 1963; McManus 1963; Blatt 1983, pp. 111-146), that establish otherwise. Marxists predict a perpetual tendency towards stagnation, via simplistic applications of Marx’s “tendency for the rate of profit to fall” (Marx 1894, Chapter 13), or its obverse, that “surplus must have a strong and persistent tendency to rise” (Baran 1968, p. 67). Starting from the still-disputed proposition—not just in Neoclassical economics (Bernanke 2000, p. 24), but in Post Keynesian economics as well (Fiebiger 2014; Keen 2014; Lavoie 2014; Palley 2014; Keen 2015)—that there was a “relation between debt and income” (Minsky 1982, p. 66), Minsky instead deduced that “the fundamental instability of a capitalist economy is upward”. Given the proclivity of economists to model an economist’s theory without having understood it (Hicks 1937; Hicks 1981), this pivotal passage is worth quoting at length:

The natural starting place for analyzing the relation between debt and income is to take an economy with a cyclical past that is now doing well. The inherited debt reflects the history of the economy, which includes a period in the not too distant past in which the economy did not do well… As the period over which the economy does well lengthens, two things become evident in board rooms. Existing debts are easily validated and units that were heavily in debt prospered; it paid to lever. After the event it becomes apparent that the margins of safety built into debt structures were too great. As a result, over a period in which the economy does well, views about acceptable debt structure change. In the deal-making that goes on between banks, investment bankers, and businessmen, the acceptable amount of debt to use in financing various types of activity and positions increases. This increase in the weight of debt financing raises the market price of capital assets and increases investment. As this continues the economy is transformed into a boom economy.

Stable growth is inconsistent with the manner in which investment is determined in an economy in which debt-financed ownership of capital assets exists, and the extent to which such debt financing can be carried is market determined. It follows that the fundamental instability of a capitalist economy is upward. The tendency to transform doing well into a speculative investment boom is the basic instability in a capitalist economy. (Minsky 1982, p. 66. Emphasis added)

Minsky explained the source of his “preanalytic cognitive act … called Vision” (Schumpeter 1954, p. 41) that led to the Financial Instability Hypothesis as his desire to explain what causes Great Depressions:

Can “It”—a Great Depression—happen again? And if “It” can happen, why didn’t “It” occur in the years since World War II? These are questions that naturally follow from both the historical record and the comparative success of the past thirty-five years. To answer these questions it is necessary to have an economic theory which makes great depressions one of the possible states in which our type of capitalist economy can find itself. (Minsky 1982, p. xix. Emphasis added)

Though this was a compelling and ultimately successful Vision, the dominant Vision in macroeconomics remains the need to derive it from “good” foundations, where Neoclassical economists have defined “good” as the capacity to derive macroeconomics from microeconomics. As Robert Lucas, the father of “rational expectations macroeconomics”, put it in an address subtitled “My Keynesian Education”:

I also held on to Patinkin’s ambition somehow, that the theory ought to be microeconomically founded, unified with price theory. I think this was a very common view… Nobody was satisfied with IS-LM as the end of macroeconomic theorizing. The idea was we were going to tie it together with microeconomics and that was the job of our generation. Or to continue doing that. That wasn’t an anti-Keynesian view. (Lucas 2004, p. 20)

Despite the failure of the models derived from this Vision to anticipate the Great Recession, this remains the core Vision in economics. Even the relatively progressive mainstream economist Olivier Blanchard could see no alternative to deriving macroeconomics from microeconomics:

Starting from explicit microfoundations is clearly essential; where else to start from? (Blanchard 2018, p. 47)

The answer to Blanchard’s question is surprisingly simple: you can start directly from macroeconomics itself. The fundamentals of Minsky’s successful hypothesis can be derived directly from incontestable macroeconomic definitions, allied to the simplest possible definitions for both key economic relationships and essential behavioural functions.

The essential macroeconomic definitions needed are the employment rate λ (the ratio of employment L to population N), the wages share of GDP ω, the private debt to GDP ratio d, the output to employment ratio a, and the capital to output ratio ν:

\[
\lambda = \frac{L}{N}, \qquad \omega = \frac{W}{Y}, \qquad d = \frac{D}{Y}, \qquad a = \frac{Y}{L}, \qquad \nu = \frac{K}{Y}
\]

When the first three of these are differentiated with respect to time, three true-by-definition dynamic statements result:

  • The employment rate will rise if economic growth exceeds the sum of change in the output to labour ratio and population growth;
  • The wages share of output will rise if the total wages grow faster than GDP; and
  • The private debt to GDP ratio will rise if private debt growth exceeds the rate of economic growth

These statements are shown below, where \(\hat{x}\) is used to signify the growth rate of x, \(\frac{1}{x}\frac{dx}{dt}\):

\[
\hat{\lambda} = \hat{Y} - \hat{a} - \hat{N}, \qquad
\hat{\omega} = \hat{W} - \hat{Y}, \qquad
\hat{d} = \hat{D} - \hat{Y}
\]

The following simplifying assumptions are used to turn these definitions into an economic model:

Table 1: Simplifying Assumptions

| Assumption | Equation | Parameters & Initial Conditions |
|---|---|---|
| 1. Exogenous growth in the output to labour ratio a | (1/a)·da/dt = α | |
| 2. Exogenous population growth | (1/N)·dN/dt = β | |
| 3. A constant capital (K) to output (Y) ratio ν | Y = K/ν | |
| 4. The rate of change of capital is net investment, which is gross investment minus depreciation | dK/dt = I_g − δ·K | |
| 5. A uniform real wage w | W = w·L | |
| 6. A linear wage change function driven by the employment rate λ; s_w is the slope of the wage-change function and λ_Z is the employment rate at which wage change equals zero | (1/w)·dw/dt = s_w·(λ − λ_Z) | |
| 7. A linear gross investment function driven by the profit rate π_r; s_i is the slope of the investment function and π_Z is the profit rate at which gross investment equals profit | I_g/Y = ν·π_Z + s_i·(π_r − π_Z) | |
| 8. Credit, which I define as the annual change in debt (see Appendix D), finances gross investment in excess of profits | dD/dt = I_g − Π | |
| 9. Profit is output net of wages and interest payments | Π = Y − W − r·D | |
| 10. Initial conditions for λ, ω and d | λ(0), ω(0), d(0) | |

Applying these assumptions, and signifying the real growth rate as g_r, leads to the model shown below:

\[
\begin{aligned}
\frac{d\lambda}{dt} &= \lambda\left(g_r - \alpha - \beta\right)\\
\frac{d\omega}{dt} &= \omega\left(s_w(\lambda - \lambda_Z) - \alpha\right)\\
\frac{dd}{dt} &= \frac{I_g - \Pi}{Y} - d\cdot g_r
\end{aligned}
\]

where \(\pi = 1 - \omega - r\,d\) is the profit share, \(\pi_r = \pi/\nu\), \(I_g/Y = \nu\,\pi_Z + s_i(\pi_r - \pi_Z)\), and \(g_r = \frac{I_g}{\nu\,Y} - \delta\).

As shown in Appendix B, this inherently nonlinear model has two meaningful equilibria: one with a positive employment rate, positive wages share, and debt to GDP ratio, which Grasselli & Costa Lima dubbed the “good equilibrium”; another with zero employment, zero wages share, and an infinite debt to GDP ratio, which they dubbed the “bad equilibrium” (Grasselli and Costa Lima 2012).
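These dynamics are easy to explore numerically. The sketch below integrates the three-dimensional system with simple Euler steps; all parameter values are illustrative assumptions rather than the paper’s calibration, and the notation (λ employment rate, ω wages share, d debt to GDP ratio) follows the definitions above:

```python
import numpy as np

# Minimal sketch of the 3D Minsky model described above (Keen 1995 style).
# All parameter values below are illustrative assumptions, not the paper's.
alpha, beta = 0.025, 0.015      # productivity and population growth rates
nu, delta, r = 3.0, 0.01, 0.04  # capital/output ratio, depreciation, interest rate
s_w, lam_z = 2.0, 0.95          # wage-function slope, zero-wage-change employment rate
s_i, pi_z = 2.0, 0.05           # investment-function slope, break-even profit rate

def derivs(state):
    lam, omega, d = state
    pi = 1.0 - omega - r * d               # profit share of output
    pi_r = pi / nu                         # profit rate on capital
    inv = nu * pi_z + s_i * (pi_r - pi_z)  # gross investment share of output
    g = inv / nu - delta                   # real growth rate
    return np.array([
        lam * (g - alpha - beta),               # employment rate
        omega * (s_w * (lam - lam_z) - alpha),  # wages share
        inv - pi - d * g,                       # debt to GDP ratio
    ])

# The "good equilibrium": zero debt, lam = lam_z + alpha/s_w, omega = 1 - nu*pi_z
eq = np.array([lam_z + alpha / s_w, 1.0 - nu * pi_z, 0.0])
# All three derivatives vanish (to rounding) at this point.

# Euler integration from a perturbed start
state, dt = np.array([0.90, 0.80, 0.50]), 0.01
path = [state.copy()]
for _ in range(int(50 / dt)):   # 50 simulated years
    state = state + dt * derivs(state)
    path.append(state.copy())
```

Raising `s_i` in this sketch reproduces the qualitative result discussed next: a higher desire to invest destabilises the “good equilibrium”.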

The key parameter that determines the stability of these two equilibria is the slope of the investment function. With a low desire to invest (a low investment-function slope), which on the surface would appear to imply a poorer level of economic performance, the “good equilibrium” is stable, and the system converges to it over a large number of cycles (see Figure 1).

Figure 1: Simulation with a low investment-function slope: convergence to the “good equilibrium”

However, with a high desire to invest (a high investment-function slope), which on the surface would appear to imply a higher level of economic performance, the “good equilibrium” is unstable (see Figure 2).

Figure 2: Simulation with a high investment-function slope: a “Great Moderation” followed by rising cycles and breakdown

The approach to, and then repulsion from, the good equilibrium follows what is known as the “intermittent route to chaos” (Pomeau and Manneville 1980), in which systemic turbulence appears to decline, only to subsequently rise once more. This reproduces several of the stylized facts of recent macroeconomic data:

  • A rising level of private debt compared to GDP;
  • An initial apparent decline in the volatility of employment, growth and wage demands—a “Great Moderation”—followed by increasing volatility and (ultimately) an economic collapse—a “Great Recession”; and
  • Rising inequality, as the increased share going to bankers (in this three-class system) comes at the expense not of capitalists—who are the only ones borrowing in this simple model—but at the expense of the workers’ share of income.

A simple model derived directly from macroeconomic definitions thus reproduces the essence of Minsky’s FIH: the faster cyclical growth of debt relative to income over a series of credit-driven boom and bust cycles, leading to a period of increasing volatility and, in this model without bankruptcy or government, ultimately a terminal economic breakdown.

One essential aspect of this model is the proposition that the change in debt finances part of investment, and thus part of aggregate demand—loans are not “pure redistributions” (Bernanke 2000, p. 24) as portrayed in Neoclassical literature, but increases in bank assets which simultaneously create both money and additional aggregate demand and income. This can be proven using the key macroeconomic identity that expenditure is income.

The role of credit in aggregate demand and aggregate income

Central Banks have recently relieved Post Keynesian economists of the necessity of insisting that their “Endogenous Money” model of banking behaviour is structurally correct (Holmes 1969; Moore 1979; Moore 1988; Moore 1988; Dow 1997; Rochon 1999; Fullwiler 2013), while the Neoclassical models of “Loanable Funds” and the “Money Multiplier” are incorrect (McLeay, Radia et al. 2014; Deutsche Bundesbank 2017).

However, though the endogeneity of money is fully accepted in Post Keynesian and MMT circles, the macroeconomic significance of Endogenous Money is not. For instance, in a recent blog post, Wray argued that “in retrospect the endogenous money literature is trivial for several reasons”, with its implications largely being confined to how central banks set interest rates (Wray 2019). Similarly, in the debate in the Review of Keynesian Economics over an earlier, initially flawed, and more complicated expression of the arguments in this section (Fiebiger 2014; Keen 2014; Lavoie 2014; Palley 2014; Keen 2015), Fiebiger treated passages in which Minsky attempted to establish a role in aggregate demand for a change in the money supply (caused by a change in debt) as simply expressing a tautology:

Given the parameters specified, Minsky’s (Minsky 1975, p. 133) deduction that ΔM_t must be the source of growth allows Y_t ex ante > Y_{t−1} ex post to be viewed as a tautology. (Fiebiger 2014, p. 295)

In the following tables (which I term “Moore Tables” in honour of Basil Moore), I use the key macroeconomic identity that expenditure is income to show that endogenous money is far from macroeconomically trivial, and that Minsky’s comments in (Minsky 1975, p. 133) and (Minsky 1982, pp. 3-6 in a section entitled “A sketch of a model”) were not tautological, but critical insights whose expression was hampered by the use of period analysis (Fontana 2003; Fontana 2004), as were Fiebiger’s attempts to interpret them. These tables show flows in continuous time, including credit, which I define as the time derivative of debt:

\[
Credit(t) \equiv \frac{d}{dt}D(t)
\]

As in any differential equation, these flows are measured instantaneously, and dimensioned in the relevant time unit, which is dollars per year. I know that this is a foreign concept to a discipline accustomed to thinking in terms of periods, normally of a year: doesn’t one have to measure for a year to speak of, for example, credit per year? In fact, one does not. A monetary flow can be measured at an instant in time in terms of dollars per year, just as a car’s speedometer measures velocity at an instant in time in terms of kilometres per hour: since velocity is the time derivative of distance, if an instantaneous velocity of 100km/hr were maintained for an hour, then the vehicle would cover 100 kilometres in that hour. The same principle applies to the near instantaneous measurement of financial flows, even though they are the sum of a large number of discrete but asynchronous transactions sampled across a very small instant of time.
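The point can be made concrete with a few lines of code: integrating an instantaneous flow, defined at every instant in dollars per year, over an interval yields the change in the corresponding stock (the flow profile below is invented for illustration):

```python
# Integrating an instantaneous flow ($/year) over time yields a stock change ($),
# just as integrating velocity (km/h) over time yields distance (km).
# The credit-flow profile below is invented purely for illustration.

def credit_flow(t):
    """Instantaneous credit flow in $/year at time t (in years)."""
    return 100.0 + 50.0 * t   # rises linearly from $100/year

def integrate(f, t0, t1, steps=100_000):
    """Simple midpoint-rule integration of flow f over [t0, t1]."""
    dt = (t1 - t0) / steps
    return sum(f(t0 + (k + 0.5) * dt) for k in range(steps)) * dt

# Over one year, the debt stock rises by the integral of (100 + 50t),
# i.e. 100 + 25 = $125, even though the flow is defined at each instant.
delta_debt = integrate(credit_flow, 0.0, 1.0)
```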

Each row in a Moore Table shows expenditure by one sector on the others in an economy. Expenditure is shown as a negative entry on the diagonal of the table, and a positive entry on the off-diagonal, with the two necessarily summing to zero on each row and for the overall table. The negative of the sum of the diagonal of the table is aggregate demand (AD), while the sum of the off-diagonal elements is aggregate income (AI). The two are necessarily equal: expenditure is income.

Figure 3 shows a monetary economy in which neither lending nor borrowing can occur. The flows A to F represent the turnover of an existing and constant money stock, and in this sense are comparable to Friedman’s mythical “Optimum Quantity of Money” economy (Friedman 1969), though minus the helicopters dispensing money. The sum of these monetary flows can thus be substituted by the velocity of money V times the stock of money M, as in the equation beneath the table.

Figure 3: Moore Table for a monetary economy with no lending

| | Sector 1 | Sector 2 | Sector 3 | Sum |
|---|---|---|---|---|
| Sector 1 | -(A+B) | A | B | 0 |
| Sector 2 | C | -(C+D) | D | 0 |
| Sector 3 | E | F | -(E+F) | 0 |
| Sum | (C+E)-(A+B) | (A+F)-(C+D) | (B+D)-(E+F) | 0 |

\[
AD = AI = A + B + C + D + E + F = V \cdot M
\]
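The bookkeeping discipline of a Moore Table can be verified mechanically: every row sums to zero, and the negative of the trace (aggregate demand) equals the sum of the off-diagonal elements (aggregate income). A minimal sketch, with arbitrary illustrative values for the flows A to F:

```python
import numpy as np

# Moore Table for the no-lending economy, with arbitrary illustrative
# values (in $/year) for the six expenditure flows A..F.
A, B, C, D, E, F = 10.0, 20.0, 30.0, 40.0, 50.0, 60.0

table = np.array([
    [-(A + B), A,        B       ],   # Sector 1's expenditures
    [C,        -(C + D), D       ],   # Sector 2's expenditures
    [E,        F,        -(E + F)],   # Sector 3's expenditures
])

row_sums = table.sum(axis=1)          # each row must sum to zero
aggregate_demand = -np.trace(table)   # negative of the diagonal sum
aggregate_income = table.sum() - np.trace(table)  # off-diagonal sum

# Expenditure is income: both equal A+B+C+D+E+F.
```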

Figure 4 shows the mythical (McLeay, Radia et al. 2014) Neoclassical model of Loanable Funds, in which lending is between one non-bank agent and another. Lending is shown as a flow across the diagonal of the Moore Table. Without loss of generality, I show Sector 2 lending Credit dollars per year to Sector 1, which Sector 1 then spends buying the output of Sector 3; Sector 1 also has to pay Interest $/year to Sector 2, to service the outstanding stock of debt. The flow of lending affects the spending power of the lender as well as the borrower: the flow of Credit $/Year from Sector 2 to Sector 1 reduces the amount that Sector 2 can spend on Sector 3.

Figure 4: Moore Table for Loanable Funds

| | Sector 1 | Sector 2 | Sector 3 | Sum |
|---|---|---|---|---|
| Sector 1 | -(A+B+Credit+Interest) | A+Interest | B+Credit | 0 |
| Sector 2 | C | -(C+D-Credit) | D-Credit | 0 |
| Sector 3 | E | F | -(E+F) | 0 |
| Sum | (C+E)-(A+B+Credit+Interest) | (A+F+Interest)-(C+D-Credit) | (B+D)-(E+F) | 0 |

The sum of the off-diagonal elements of Figure 4 (equivalently, the negative of the sum of the diagonal) confirms the belief of Neoclassical economists that, if banks were just intermediaries, then credit would be a pure redistribution, and it would play no role in aggregate demand and income. However, one interesting result is that (gross) interest payments are part of aggregate demand and aggregate income:

\[
AD_{LF} = AI_{LF} = A + B + C + D + E + F + Interest = V \cdot M + Interest
\]

Figure 5 shows the real-world situation in which Credit money is created by bank lending. The Table is now expanded to show the Assets, Liabilities and Equity of the Banking Sector, and monetary flows now include the matching increase of Assets and Liabilities when a new loan (Credit, denominated in $/Year) is made, as well as transfers between Liabilities (predominantly deposit accounts), and also Bank Equity. The Credit money created by the loan is used by Sector 1 to buy goods from Sector 3, and Sector 1 is obliged to service the stock of outstanding loans by paying the flow of Interest $/Year to the Bank (which is recorded in its Equity account).

Figure 5: Moore Table for Endogenous Money (“Bank Originated Money and Debt”)

| | Debt (Bank Assets) | Sector 1 (Liabilities: Deposits) | Sector 2 (Liabilities: Deposits) | Sector 3 (Liabilities: Deposits) | Bank (Equity) | Sum |
|---|---|---|---|---|---|---|
| Sector 1 | Credit | -(A+B+Credit+Interest) | A | B+Credit | Interest | 0 |
| Sector 2 | | C | -(C+D) | D | | 0 |
| Sector 3 | | E | F | -(E+F) | | 0 |
| Bank | | G | H | I | -(G+H+I) | 0 |
| Sum | Credit | (C+E+G)-(A+B+Credit+Interest) | (A+F+H)-(C+D) | (B+D+I+Credit)-(E+F) | Interest-(G+H+I) | 0 |

The crucial result here is that Credit is part of both aggregate demand and aggregate income, in the real world of Endogenous Money in which bank lending creates money:

\[
AD_{BOMD} = AI_{BOMD} = A + B + C + D + E + F + G + H + I + Credit + Interest
\]

This realisation strengthens the underlying Post Keynesian and MMT methodologies. Not only is “Endogenous Money/BOMD” a more realistic description of banking than “Loanable Funds”, it has an enormous impact on macroeconomics as well. Macroeconomic models that omit banks, debt, money—and therefore the role of credit in aggregate demand and income—omit the “causa causans that factor which is most prone to sudden and wide fluctuation” (Keynes 1936, p. 221), and are utterly misleading models of the macroeconomy. This judgment applies to the entire corpus of Neoclassical economics, bar the work of Michael Kumhof (Kumhof and Jakab 2015; Kumhof, Rancière et al. 2015).
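The contrast between the three tables can be tallied directly: summing the off-diagonal elements with illustrative values (all numbers invented) shows Credit cancelling under Loanable Funds but surviving under BOMD:

```python
# Aggregate income = sum of off-diagonal elements of each Moore Table.
# All flow values ($/year) are invented purely for illustration.
A, B, C, D, E, F = 10.0, 20.0, 30.0, 40.0, 50.0, 60.0
G, H, I = 5.0, 6.0, 7.0            # bank spending out of equity (BOMD table)
credit, interest = 15.0, 4.0
base = A + B + C + D + E + F       # turnover of the existing money stock

# No lending (Figure 3): AI is just the base turnover
ai_no_lending = base
# Loanable Funds (Figure 4): Credit cancels between borrower and lender
ai_loanable = (A + interest) + (B + credit) + C + (D - credit) + E + F
# Endogenous Money / BOMD (Figure 5): Credit is newly created, so it remains
ai_bomd = A + (B + credit) + interest + C + D + E + F + G + H + I
```

Only the BOMD tally contains the Credit term: moving the loan from one side of the ledger to a bank asset is the entire difference.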

Simulating Loanable Funds and BOMD

The macroeconomic significance of BOMD can be easily illustrated by converting a simple model of Loanable Funds in Minsky to a model of BOMD. The Loanable Funds model is fashioned on the model in (Eggertsson and Krugman 2012), where the consumer sector lends to the investment sector via a bank which operates as an intermediary, and which charges an introduction fee to the consumer sector. The model is completed by the employment of workers by both sectors, intermediate goods purchases by each sector from the other, and purchases of goods by workers and bankers.

The five accounts in the banking sector’s Godley Table are Reserves on its Assets side, three deposit accounts on its Liabilities—one each for the Consumer Sector, the Investment Sector, and workers—and the Bank’s Equity account. The transactions of lending, repayment, interest payments and the bank fee all operate through the Liability side of the Banking sector’s ledger: its Assets are unaffected (see Figure 6).

Figure 6: Banking sector view of Loanable Funds

Conversely, the financial operations all occur on the Asset side of the Consumer (lending) sector’s Godley Table: lending reduces the amount of money in the consumer sector’s deposit account, and increases the debt that the investment sector owes to it (see Figure 7).

Figure 7: Consumer (lender) sector view of Loanable Funds

For the borrower, the financial operations alter its Assets and its Liabilities equally: Credit increases its Assets (the deposit account it holds with the Banking Sector), and identically increases its Liabilities (its debt to the consumer sector; see Figure 8).

Figure 8: The Investment Sector’s (borrower’s) view of the economy

The core differential equations of this model, shown below, can be derived directly by summing the columns of Figure 6 and Figure 7 (the flows that will be affected by the later conversion of this model to BOMD are highlighted in red):

         

All flows are defined in terms of first-order time lags related to the relevant account. In particular, lending by the consumer sector is shown as being based on the amount left in its bank account Dep_C, while repayment by the Investment Sector is based upon the level of its outstanding debt Debt_I:

\[
Lending = \frac{Dep_C}{\tau_{Lend}}, \qquad Repayment = \frac{Debt_I}{\tau_{Repay}}
\]

The parameters \(\tau_{Lend}\) and \(\tau_{Repay}\) are time constants, which can be varied during a simulation: reducing \(\tau_{Lend}\) increases the speed of lending, while reducing \(\tau_{Repay}\) increases the speed of repayment. These are varied in the simulation shown in Figure 10. Substantial variations in the speed of lending and repayment dramatically alter the private debt to GDP ratio, but only transiently affect economic activity.

This simulation confirms the Neoclassical conditional logic that, if banks were mere intermediaries as Loanable Funds portrays them to be, then banks, debt and money could be ignored in macroeconomics. Large changes in credit have negligible impact upon GDP growth—and in fact credit and GDP growth move in opposite directions in this simulation, because the borrower has been given a lower overall propensity to spend than the lender, so that an increase in lending actually reduces GDP via a fall in the velocity of money (and vice-versa: see Figure 9).

Figure 9: Loanable Funds in Minsky. Credit has no significant impact on macroeconomics

This was done to illustrate Bernanke’s assertion that, when lending is simply a “pure redistribution” (Bernanke 2000, p. 24), any macroeconomic impact of lending depends on differences in the marginal spending propensities of the lender and borrower. With the macroeconomic impact of credit depending on idiosyncratic characteristics of the borrower and lender, there is no systemic benefit to including banks, debt and, arguably, money in macroeconomic theory for a world in which Loanable Funds is true.

Figure 10: Varying lending & repayment rates in Loanable Funds; no significant macroeconomic effects

However, in the real world, banks originate money and debt, and the impact of banks, debt and money on macroeconomics is highly significant. This can be illustrated by making the technically minor but systemically huge changes needed to convert this model of Loanable Funds into Bank Originated Money and Debt (BOMD): shifting debt from being an asset of the Consumer Sector to an asset of the Banking Sector, and deleting the superfluous “Fee”, since the Banking Sector now gets its income from the flow of interest. Credit thus increases the Assets of the banking sector and its Liabilities (the sum in the Investment Sector’s deposit account) by precisely the same amount.

Figure 11: Banking sector view of BOMD

The financial equations of this system are shown below. They are simpler than the equations for Loanable Funds: the mythical intermediation is deleted, the three financial operations are removed from the equation for the Consumer Sector’s deposit account, and the interest payment now goes to the Banking Sector’s Equity account.

         

These structural changes are the only differences between the two models in this paper. Strictly speaking, the flow of new debt should have been redefined, but this was left as is, to illustrate that the change in the structure of lending alone is sufficient to drastically transform macroeconomics from a discipline in which banks, debt and money can be ignored, into one in which they are critical.

Figure 12: Bank Originated Money and Debt in Minsky. Credit plays a critical role in macroeconomics

These simple structural changes lead to credit having an enormous impact on the economy. Credit and GDP growth now move in the same direction, and GDP grows when credit is positive and falls when it is negative. In keeping with the logical analysis of the previous section, credit adds to aggregate demand and income when it is positive, and subtracts from it when it is negative.
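The structural difference alone can be distilled into a toy two-account sketch (all parameters invented, and the lending flow deliberately kept the same in both variants, as in the paper’s conversion): under Loanable Funds, lending merely moves existing deposits, so the money stock is constant; under BOMD, a new loan creates a matching deposit, so the money stock grows one-for-one with outstanding debt.

```python
# Toy sketch contrasting Loanable Funds with BOMD (all parameters invented).
# In both, lending runs at lender_deposits/tau_lend and repayment at
# debt/tau_repay, mirroring the first-order time lags described above.

def simulate(bomd: bool, years: float = 10.0, dt: float = 0.01):
    lender, borrower, debt = 100.0, 0.0, 0.0  # $ balances
    tau_lend, tau_repay = 2.0, 7.0            # time constants (years)
    for _ in range(int(years / dt)):
        credit = lender / tau_lend - debt / tau_repay  # net new debt, $/year
        debt += credit * dt
        borrower += credit * dt      # borrower's deposit rises with credit
        if not bomd:
            lender -= credit * dt    # LF: existing deposits merely move
        # BOMD: the loan created a brand-new deposit; lender is untouched
    return lender + borrower, debt   # (money stock, outstanding debt)

money_lf, debt_lf = simulate(bomd=False)
money_bomd, debt_bomd = simulate(bomd=True)
# LF leaves the money stock at its initial $100; BOMD grows it
# one-for-one with the outstanding debt.
```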

Figure 13: Varying lending & repayment parameters in BOMD: significant macroeconomic effects

Accounting for the Great Moderation & the Great Recession

The models and logical analysis of the previous three sections provide a causal argument for a relationship between the levels of debt and credit and macroeconomics, and in particular, the experience of severe economic crises like the 2008 “Great Recession”. A rising level of debt relative to GDP, and a rising significance of credit relative to GDP, are Minskian warnings of a crisis, while the crisis is caused by a plunge in credit from strongly positive to strongly negative. The plunge in credit from a peak of 15% of GDP in late 2006 to a depth of -5% in late 2009 was the first experience of negative credit since the end of WWII, and this was the cause of the Great Recession.

Figure 14: The “Great Recession” was the first negative credit event in post-WWII economic history

The empirical relationship between credit and the level of unemployment rises as the level of private debt rises, and by the time of the recovery from the 1990s recession, it is overwhelming. Bernanke dismissed Fisher’s Debt Deflation explanation for the Great Depression on the a priori Neoclassical grounds that “Absent implausibly large differences in marginal spending propensities among the groups, it was suggested, pure redistributions should have no significant macroeconomic effects” (Bernanke 2000, p. 24). In ridiculously strong contrast, the correlation between credit and unemployment since 1990 is a staggering -0.85 (see Figure 15).

Figure 15: Credit and Unemployment. Correlation -0.53 since 1970, -0.85 since 1990


The role of negative credit in the USA’s major economic crises

Since credit has no role in mainstream economic theory, the collection of data on private debt and credit has been sporadic, depending more on the initiative of statisticians than the expressed needs of economists for data. This situation has improved dramatically in recent years thanks to the work of the Bank for International Settlements (Borio 2012; Dembiermont, Drehmann et al. 2013), the Bank of England (Hills, Thomas et al. 2010) and various non-mainstream economists (Jorda, Schularick et al. 2011; Schularick and Taylor 2012; Vague 2019), but much remains to be done to provide the comprehensive time series data that the significance of debt and credit warrants.

However, some data can still be retrieved that helps make sense of past economic crises (Vague 2019). In particular, a long term debt series can be derived for the USA from three very different time series: the post-1952 Federal Reserve Flow of Funds data; Census data for debt between 1916 and 1970; and a series on loans by selected banks between 1834 and 1970 (Census 1949; Census 1975).

Figure 16: Debt to GDP data from the BIS & US Census

Fortunately, the data series overlap, and the trends in the data show that, though the definitions differed, the same fundamental processes were being tracked by these three data series. This allows a composite time series to be assembled by rescaling the two Census data series to match the current BIS/Federal Reserve data set. When credit data is derived from this composite series, two phenomena stand out: firstly, America’s greatest economic crises are caused by sustained periods of negative credit; and secondly, the post-WWII regime has only one negative credit event—the “Great Recession”—while the pre-WWI regime had frequent, though smaller, negative credit experiences (see Figure 17). The two greatest were the Great Depression, and the “Panic of 1837” (Roberts 2012).
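The splicing procedure described above can be sketched with synthetic series (the numbers and overlap window below are invented, not the actual Census or BIS data): rescale the older series so that it matches the newer one over their overlap, then derive credit as the year-on-year change in debt:

```python
# Sketch of splicing two overlapping debt/GDP series by rescaling the older
# one to match the newer over their overlap. All data below are synthetic.

older = {1916 + k: 30.0 + 0.5 * k for k in range(50)}   # 1916-1965, old basis
newer = {1952 + k: 55.0 + 0.8 * k for k in range(60)}   # 1952-2011, new basis

overlap = sorted(set(older) & set(newer))
# Rescale the older series so its average over the overlap matches the newer's
scale = (sum(newer[y] for y in overlap) / len(overlap)) / \
        (sum(older[y] for y in overlap) / len(overlap))
spliced = {y: older[y] * scale for y in older}
spliced.update(newer)                 # newer data takes precedence

# Credit is then the year-on-year change in the (spliced) debt series
debt_years = sorted(spliced)
credit = {y: spliced[y] - spliced[y - 1] for y in debt_years[1:]}
```

Matching on the overlap average is one simple choice; matching at a single splice year would be an equally plausible variant.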

While the Great Depression and the Great Recession are etched into our collective memories, I was personally unaware of the “Panic of 1837” until this credit data alerted me to the scale of negative credit at that time. Though the recorded level of private debt was low compared to post-WWII levels, the rate of decline of debt—the scale of negative credit—was both enormous and sustained. Credit was negative between mid-1837 and 1844, and hit a maximum rate of decline of 9% of GDP. It is little wonder that the “Panic of 1837” was described as “an economic crisis so extreme as to erase all memories of previous financial disorders” (Roberts 2012, p. 24).

Figure 17: Composite time series for private debt and credit derived from the data in Figure 16

Nonlinearity and Realism

The model derived above generates symmetric cycles—booms that are as big as busts, before a final collapse—simply because of the unrealistic assumption, made for reasons of analytic tractability, of linear behavioural relations (assumptions 6 & 7 in Table 1). Realistically, workers’ wage demands given the level of employment are nonlinear, as Phillips insisted (Phillips 1954; Phillips 1958), as are the investment reactions of capitalists to the rate of profit, as Minsky insisted with his perceptive concept of “euphoric expectations” (Minsky 1982, p. 140).

Keen’s 1995 Minsky model (Keen 1995) used the hyperbolic nonlinear function suggested by Blatt (Blatt 1983, p. 213) to avoid unrealistic outcomes, such as the employment rate exceeding 100% in a nonlinear Goodwin model (Goodwin 1967). A generalized exponential function can be used instead (see below), which could allow unrealistic values; however, these are avoided by suitable choices of input variables (the employment to population ratio rather than the unemployment rate in the “Phillips Curve” function). The function takes the value y_0 and slope s at x_0, with minimum m:

\[
f(x) = (y_0 - m)\, e^{\frac{s\,(x - x_0)}{y_0 - m}} + m
\]

The parameters shown in Table 2 (stable wages at 60% employment, a slope of 2 for the wage-change function at 60% employment, and a maximum wage decline of 4% per annum; investment of 3% of GDP at a 3% profit rate, a slope of 2 for the investment function at that rate, and a minimum gross investment level of zero) generate an asymmetric process in which the ultimate downturns are deeper than the booms. Nonlinear behavioural assumptions thus improve the realism of the model, but do not change its fundamental properties, which emanate from the inherent structural nonlinearity of the model itself.
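As a concrete sketch, one functional form consistent with the stated parameters (value y_0 and slope s at x_0, minimum m) is the exponential below; this is an assumed reconstruction for illustration, not necessarily the paper’s exact function:

```python
import math

# A generalized exponential with value y0 and slope s at x0, and minimum m
# (for s > 0 and y0 > m). This form is an assumption consistent with the
# stated parameters (value, slope, minimum), used here for illustration.
def gen_exp(x, x0, y0, s, m):
    return (y0 - m) * math.exp(s * (x - x0) / (y0 - m)) + m

# Wage-change function with the Table 2 parameters: zero wage change at
# 60% employment, slope 2 there, and wage decline bounded below at -4%/year.
def wage_change(employment_rate):
    return gen_exp(employment_rate, x0=0.60, y0=0.0, s=2.0, m=-0.04)
```

By construction, wage falls can never exceed 4% per annum, while wage rises are unbounded as employment tightens, which is exactly the asymmetry described above.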

Table 2: Nonlinear behavioural functions for wage change and investment

| Assumption | Parameters |
|---|---|
| 11. Nonlinear wage change function parameters | x_0 = 60%, y_0 = 0, s = 2, min = -4% per annum |
| 12. Nonlinear investment function parameters | x_0 = 3%, y_0 = 3%, s = 2, min = 0 |

Extending the definitions to include inflation

A simple single-price-level nominal extension can be derived from definitions in the same fashion as the model in Equation , though it takes more assumptions to turn the definitional dynamic statements into a model. The definition of the employment rate is unchanged, while the definitions of the wages share of GDP and the debt to GDP ratio are both in monetary terms:

         

When differentiated with respect to time, this yields three definitionally true statements as before, but this time the rate of change of prices is a component of two of them:

  • The employment rate will rise if real economic growth exceeds the sum of population growth and growth in labor productivity;
  • The wages share of output will rise if money wage demands exceed the sum of inflation and growth in labor productivity; and
  • The private debt to GDP ratio will rise if the rate of growth of private debt exceeds the sum of inflation plus the rate of economic growth.

In equations, these statements are:

         

where the subscript R signifies “real” as opposed to monetary.
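The three statements above can be checked numerically, since the growth rate of a ratio is the growth rate of its numerator minus that of its denominator. A minimal sketch with illustrative rates (the values are assumptions for the check, not data):

```python
import math

# Illustrative annual rates (assumptions for the check, not data)
g_real, g_pop, g_prod = 0.03, 0.01, 0.015   # real growth, population, productivity
g_wage, infl, g_debt = 0.04, 0.02, 0.05     # money wages, inflation, private debt

def grow(level, rate, years=1.0):
    """Grow a level at a constant continuous rate."""
    return level * math.exp(rate * years)

# Underlying levels: real output Y, labor productivity a, population N,
# money wage w, price level P, private debt D (arbitrary starting values)
Y, a, N, w, P, D = 100.0, 1.0, 120.0, 0.7, 1.0, 150.0
Y2, a2, N2 = grow(Y, g_real), grow(a, g_prod), grow(N, g_pop)
w2, P2, D2 = grow(w, g_wage), grow(P, infl), grow(D, g_debt)

# Growth rates of the three ratios over the year
emp_growth   = math.log((Y2 / (a2 * N2)) / (Y / (a * N)))   # employment rate L/N
share_growth = math.log((w2 / (P2 * a2)) / (w / (P * a)))   # wage share wL/(P*Y)
ratio_growth = math.log((D2 / (P2 * Y2)) / (D / (P * Y)))   # debt to nominal GDP
```

Each computed growth rate equals the corresponding bracketed difference in the bullet points above, confirming that the three statements are identities rather than behavioural assumptions.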

Conclusion: the macroeconomic foundations of macroeconomics

Minsky’s Financial Instability Hypothesis is thus not merely a particular Post Keynesian model, but a foundational model of macroeconomics, in the same sense that Lorenz’s model of turbulence in fluid dynamics is a foundational model for meteorology (Lorenz 1963). Though Minsky did not do this himself, a model of his hypothesis can be derived directly from impeccably sound foundations: incontestable macroeconomic definitions. It can be extended in the same manner, by adding definitions for government spending, asset price dynamics that differ from commodity price dynamics, multi-sectoral production, etc. The structure and history of an economy are the primary drivers of its behaviour, rather than the behaviour of individual agents in it. “Agents” are, as Marx famously remarked, constrained by history:

Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past. The tradition of all dead generations weighs like a nightmare on the brains of the living. (Marx 1852, Chapter 1)

System dynamics enables the modelling of the structure, the history, and the dynamics of the economy. Minsky’s genius was that he perceived, without this technology, the essential elements of all three that make capitalism prone to crises. Minsky and system dynamics therefore provide the foundations for a paradigmatic challenge to Neoclassical economics, whose development has been driven by the obsession with finding sound microfoundations for macroeconomics, all the while ignoring results that showed this was impossible (Gorman 1953; Anderson 1972; Sonnenschein 1972; Sonnenschein 1973). Macrofoundations, far-from-equilibrium complex systems dynamics, and monetary analysis—the polar opposites of the Neoclassical obsessions with microfoundations, equilibrium and barter—are the proper bases for economic theory.

Postscript: Minsky the Software

All the models in this paper have been built in Minsky, an Open Source system dynamics program with a unique feature: financial flows can be modelled easily—and their structure modified easily—using interlocking double-entry bookkeeping tables called “Godley Tables” (in honour of Wynne Godley). Minsky can be downloaded from SourceForge, or from its Patreon page. The developers would appreciate it if specialists on Minsky the economist—and Post Keynesian economists in general—would download Minsky the software, and help to extend it further by providing user feedback.

Appendix

  1. Continuous versus discrete time

A referee suggested that discrete-time difference equations were more appropriate “for problems which are specified in terms of accounting relationships (which are discrete)”, and that continuous-time differential equations “gives rise to nonobvious relationships in the structure of delays. What kind of profit does finance investment? Obviously past profit; but the specification in continuous-time does not allow to make it evident.”

While each individual financial transaction is discrete, each is also asynchronous with other financial transactions. In a “top down” model, aggregate asynchronous phenomena are more realistically modelled using continuous time than discrete time: this is why aggregate population growth models use continuous time, even though each birth is a discrete event.

The time delays in discrete time economic models are also normally arbitrary. They are almost always in terms of years, which is reasonable for investment, but not for consumption, where the scale should be in terms of weeks or months rather than years. To do discrete-time modelling properly, consumption in period t should be modelled as depending on income in period t-2 (say), where the time period is measured in weeks, while investment in period t should be modelled as depending on the change in income between period t-26 and t-52 (say).

But firstly, no-one does this, because it is simply too complicated: in practice, lags of one year are commonplace in macroeconomic discrete time models. Secondly, if this were done, and empirical work later found that investment in period t actually depended on the change in income between period t-40 and t-86 (say), then the entire structure of the model would need to be rewritten. This is not necessary for a continuous time model, where the equivalent function to a time delay is a time lag. The dependence of, for example, investment today on profits in the past, could be shown using a linear first order time lag: a new variable is defined (say, ) which is shown as converging to the actual variable with a time lag of , where the value of is measured in years:

         

The scalar can be altered in a continuous time model without having to alter the structure of the model itself.
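The lag described above can be sketched as the ODE dx_lag/dt = (x − x_lag)/τ. A minimal Euler-integration illustration (function and parameter names are mine):

```python
def lagged(signal, tau, dt=0.001, t_end=10.0, x0=0.0):
    """First-order time lag: dx/dt = (signal(t) - x) / tau, Euler-integrated."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (signal(t) - x) / tau
        t += dt
    return x

# A lagged version of a constant signal converges to that constant;
# the speed of convergence is set by the scalar tau alone.
step = lambda t: 1.0
```

Changing tau rescales the convergence speed without touching the model’s structure, which is the point made above.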

Time lags were not used in the model derived from macroeconomic definitions, because the objective was to produce the simplest possible model working from those definitions, and because time lags introduce dynamics of their own, independent of the structural points made by that model. However, time lags were integral to the models of Loanable Funds and Bank Originated Money and Debt (BOMD), in which they linked the outstanding stocks to the flows. The full equation for the rate of change of private debt in the BOMD model is:

         

Here is the length of time that new lending would take to double D if it occurred at a linear rate, while tells how long repayment would take to reduce D to zero if it occurred at a linear rate. For more on discrete vs continuous time modelling, and time lags in economic modelling, see (Andresen 2018).
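On the reading given above, the BOMD debt dynamic reduces to dD/dt = D/τ_L − D/τ_R, where τ_L and τ_R stand in for the two elided lags. A sketch under that assumption:

```python
def debt_growth_rate(tau_lend, tau_repay):
    """Net proportional growth rate of debt when dD/dt = D/tau_lend - D/tau_repay."""
    return 1.0 / tau_lend - 1.0 / tau_repay

# If lending would double D in 7 years at a linear rate, while repayment
# would extinguish it in 10, debt grows exponentially at the net rate:
net_rate = debt_growth_rate(7.0, 10.0)
```

Debt therefore grows whenever the lending lag is shorter than the repayment lag, and shrinks in the opposite case.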

  2. Stability analysis of basic Minsky model

The basic Minsky model is:

         

The following shorthand expressions are used in this model with linear behavioural functions:

         

Spelling out these shorthand expressions yields the fully specified model, which makes it easier to identify the nonlinear feedbacks in this model. Instances where the system states interact nonlinearly with each other are highlighted in red: there are two dampening nonlinear feedbacks in the equation for , one amplifying feedback for , and two amplifying feedbacks for (including one term in ).

         

The “good” equilibrium of this model is more easily derived by solving for the zeros of these equations via the substitution that (with ):

         

This equilibrium is in terms of the profit share, employment rate and debt ratio: the wages share is derived from these, since . This residual role for the wages share of output manifests itself in the model dynamics as well: before the crisis, the wages share falls as the debt level rises, while the profit share fluctuates around its equilibrium. This confirms Marx’s intuition in Capital I that wages are a dependent variable in capitalism: “To put it mathematically: the rate of accumulation is the independent, not the dependent, variable; the rate of wages, the dependent, not the independent, variable” (Marx 1867, Chapter 25, Section 1).

The stability of the system about its equilibria is determined by its Jacobian, which, given the system’s 3 dimensions and 10 parameters, is very complicated. Making the substitutions that , it is

         

 

Substituting numerical values for all but the key parameter yields the characteristic polynomial of the Jacobian in terms of :

         

This has one real eigenvalue which is always negative, and two complex eigenvalues which have zero or negative real parts for values of , and positive real parts for : see Figure 18. The system thus bifurcates at this point, changing from one where the “good equilibrium” is stable and a cyclical attractor, to one where it is unstable and a cyclical repeller: the system converges towards it under the influence of the negative real eigenvalue until, in proximity to this equilibrium, the real parts of the complex eigenvalues repel the system, which then explosively converges to the “bad equilibrium”.
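Because the symbolic Jacobian is unwieldy, the same stability test can be run numerically: form a finite-difference Jacobian at an equilibrium and inspect the real parts of its eigenvalues. Since the Minsky model’s full parameter values are not reproduced here, the sketch below applies the method to the Lorenz (1963) system cited above, whose origin is a repelling equilibrium:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of a vector field f at point x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        dx = np.zeros(len(x))
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz (1963) system, used here as a stand-in 3D model."""
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Eigenvalues of the Jacobian at the origin equilibrium: a positive real
# part means the equilibrium repels trajectories, just as the Minsky model's
# "good equilibrium" does past its bifurcation.
eigs = np.linalg.eigvals(numerical_jacobian(lorenz, [0.0, 0.0, 0.0]))
```

The same two functions, pointed at the Minsky model’s equations and equilibrium values, would reproduce the eigenvalue signs shown in Figure 18 without any symbolic algebra.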

Figure 18: Eigenvalues for π_S=6 & 6.1, as calculated symbolically in Mathcad

  3. Loanable Funds & BOMD

The key differential equations for the models of Loanable Funds and BOMD are shown in Equations and respectively. The definitions they share are shown in Equation :

         

  4. Distinguishing Debt from Credit

Kalecki once famously remarked that economics was “the science of confusing stocks with flows” (Robinson 1982, p. 295). That is apparent in the confusion caused by the use of the word “Credit” to describe both the level of debt (in $) and its rate of change (in $/year). An outstanding example of this is the paper “The Economic Crisis from a Neoclassical Perspective” (Ohanian 2010), by the prominent “New Classical” economist Lee Ohanian, in which he rules out the “financial explanation” of the 2008 crisis on the basis of the following empirical argument:

The financial explanation also argues that the 2007-2009 recession became much worse because of a significant contraction of intermediation services. But some measures of intermediation have not declined substantially. Figure 4, which is updated from Chari, Christiano, and Kehoe (Chari, Christiano et al. 2008), shows that bank credit relative to nominal GDP rose at the end of 2008 to an all-time high. And while this declined by the first quarter of 2010, bank credit was still at a higher level at this point than any time before 2008… These data suggest that aggregate quantities of intermediation volumes have not declined markedly. (Ohanian 2010, p. 59)

Ohanian’s Figure 4 is reproduced below. It is obvious from the scale that the data he used recorded the stock of outstanding debt, rather than the flow of new debt: if new debt had indeed been between 1.2 and 2 times GDP every year since 1978, then private debt would have been many hundreds to thousands of times GDP by 2010. He—and Chari, Christiano, and Kehoe before him (Chari, Christiano et al. 2008; Troshkin 2008)—interpreted that stock as a flow, in part because the word “credit” was used to describe it—and also of course because this error suits the non-monetary analysis of Neoclassical economics. On the basis of this obvious error, Ohanian (and Chari, Christiano, and Kehoe before him) rejects the argument that the financial crisis of 2008 was in fact a financial crisis.

To avoid this stock-flow confusion, I use the word “Debt” to describe the level of debt, dimensioned in currency units, and “Credit” to describe the flow of new debt, dimensioned in currency units per year. I recommend this practice to other Post Keynesians.

References

Anderson, P. W. (1972). “More Is Different.” Science 177(4047): 393-396.

Andresen, T. (2018). On the Dynamics of Money Circulation, Creation and Debt – a Control Systems Approach. Engineering Cybernetics, Norwegian University of Science and Technology.

Baran, P. A. and P. M. Sweezy (1968). Monopoly Capital: An Essay on the American Economic and Social Order. New York, Monthly Review Press.

Bernanke, B. S. (2000). Essays on the Great Depression. Princeton, Princeton University Press.

Blanchard, O. (2018). “On the future of macroeconomic models.” Oxford Review of Economic Policy 34(1-2): 43-54.

Blatt, J. M. (1983). Dynamic Economic Systems: A Post-Keynesian Approach. Armonk, N.Y., M.E. Sharpe.

Borio, C. (2012). “The financial cycle and macroeconomics: What have we learnt?” BIS Working Papers.

Bureau of the Census (1949). Historical Statistics of the United States 1789-1945. Washington, United States Government.

Bureau of the Census (1975). Historical Statistics of the United States: Colonial Times to 1970. Washington, United States Government.

Chari, V., L. Christiano, et al. (2008). “Facts and myths about the financial crisis of 2008.” IDEAS Working Paper Series from RePEc.

Charles, S. (2005). “A Note on Some Minskyan Models of Financial Instability.” Studi Economici 60(86): 43-51.

Charles, S. (2008). “Teaching Minsky’s Financial Instability Hypothesis: A Manageable Suggestion.” Journal of Post Keynesian Economics 31(1): 125-138.

Cruz, M. (2005). “A Three-Regime Business Cycle Model for an Emerging Economy.” Applied Economics Letters 12(7): 399-402.

Dembiermont, C., M. Drehmann, et al. (2013). “How much does the private sector really borrow? A new database for total credit to the private nonfinancial sector.” BIS Quarterly Review (March): 65-81.

Deutsche Bundesbank (2017). “The role of banks, non-banks and the central bank in the money creation process.” Deutsche Bundesbank Monthly Report: 13-33.

Dow, S. C. (1997). Endogenous Money. A “Second Edition” of The General Theory. G. C. Harcourt and P. A. Riach. London, Routledge. 2: 61-78.

Eggertsson, G. B. and P. Krugman (2012). “Debt, Deleveraging, and the Liquidity Trap: A Fisher-Minsky-Koo approach.” Quarterly Journal of Economics 127: 1469-1513.

Fazzari, S., P. Ferri, et al. (2008). “Cash Flow, Investment, and Keynes-Minsky Cycles.” Journal of Economic Behavior and Organization 65(3-4): 555-572.

Fiebiger, B. (2014). “Bank credit, financial intermediation and the distribution of national income all matter to macroeconomics.” Review of Keynesian Economics 2(3): 292-311.

Fontana, G. (2003). “Post Keynesian Approaches to Endogenous Money: a time framework explanation.” Review of Political Economy 15(3).

Fontana, G. (2004). “Hicks on monetary theory and history: money as endogenous money.” Cambridge Journal of Economics 28: 73-88.

Friedman, M. (1969). The Optimum Quantity of Money. The Optimum Quantity of Money and Other Essays. Chicago, MacMillan: 1-50.

Fullwiler, S. T. (2013). “An endogenous money perspective on the post-crisis monetary policy debate.” Review of Keynesian Economics 1(2): 171-194.

Gatti, D. D., D. Delli Gatti, et al. (1994). Financial Institutions, Economic Policy, and the Dynamic Behavior of the Economy. Levy Economics Institute of Bard College Working Papers. Annandale-on-Hudson, NY, Levy Economics Institute.

Goodwin, R. M. (1967). A growth cycle. Socialism, Capitalism and Economic Growth. C. H. Feinstein. Cambridge, Cambridge University Press: 54-58.

Gorman, W. M. (1953). “Community Preference Fields.” Econometrica 21(1): 63-80.

Grasselli, M. and B. Costa Lima (2012). “An analysis of the Keen model for credit expansion, asset price bubbles and financial fragility.” Mathematics and Financial Economics 6: 191-210.

Hicks, J. (1981). “IS-LM: An Explanation.” Journal of Post Keynesian Economics 3(2): 139-154.

Hicks, J. R. (1937). “Mr. Keynes and the ‘Classics’; A Suggested Interpretation.” Econometrica 5(2): 147-159.

Hills, S., R. Thomas, et al. (2010). “The UK recession in context — what do three centuries of data tell us?” Bank of England Quarterly Bulletin 2010 Q4: 277-291.

Holmes, A. R. (1969). Operational Constraints on the Stabilization of Money Supply Growth. Controlling Monetary Aggregates. F. E. Morris. Nantucket Island, The Federal Reserve Bank of Boston: 65-77.

Jarsulic, M. (1989). “Endogenous credit and endogenous business cycles.” Journal of Post Keynesian Economics 12: 35-48.

Jorda, O., M. Schularick, et al. (2011). “Financial Crises, Credit Booms, and External Imbalances: 140 Years of Lessons.” IMF Economic Review 59(2): 340-378.

Jorgenson, D. W. (1960). “A Dual Stability Theorem.” Econometrica 28(4): 892-899.

Jorgenson, D. W. (1961). “Stability of a Dynamic Input-Output System.” The Review of Economic Studies 28(2): 105-116.

Jorgenson, D. W. (1963). “Stability of a Dynamic Input-Output System: A Reply.” The Review of Economic Studies 30(2): 148-149.

Keen, S. (1995). “Finance and Economic Breakdown: Modeling Minsky’s ‘Financial Instability Hypothesis.’” Journal of Post Keynesian Economics 17(4): 607-635.

Keen, S. (2014). “Endogenous money and effective demand.” Review of Keynesian Economics 2(3): 271-291.

Keen, S. (2015). “The Macroeconomics of Endogenous Money: Response to Fiebiger, Palley & Lavoie.” Review of Keynesian Economics 3(2): 602-611.

Keynes, J. M. (1936). The General Theory of Employment, Interest and Money. London, Macmillan.

Kumhof, M. and Z. Jakab (2015). Banks Are Not Intermediaries of Loanable Funds — and Why This Matters. Working Paper. London, Bank of England.

Kumhof, M., R. Rancière, et al. (2015). “Inequality, Leverage, and Crises.” The American Economic Review 105(3): 1217-1245.

Lavoie, M. (2014). “A comment on ‘Endogenous money and effective demand’: a revolution or a step backwards?” Review of Keynesian Economics 2(3): 321-332.

Lorenz, E. N. (1963). “Deterministic Nonperiodic Flow.” Journal of the Atmospheric Sciences 20(2): 130-141.

Lucas, R. E., Jr. (2004). “Keynote Address to the 2003 HOPE Conference: My Keynesian Education.” History of Political Economy 36: 12-24.

Marx, K. (1852). The Eighteenth Brumaire of Louis Bonaparte. Moscow, Progress Publishers.

Marx, K. (1867). Capital. Moscow, Progress Press.

Marx, K. (1894). Capital Volume III. International Publishers.

McLeay, M., A. Radia, et al. (2014). “Money creation in the modern economy.” Bank of England Quarterly Bulletin 2014 Q1: 14-27.

McManus, M. (1963). “Notes on Jorgenson’s Model.” The Review of Economic Studies 30(2): 141-147.

Minsky, H. P. (1957). “Monetary Systems and Accelerator Models.” The American Economic Review 47(6): 860-883.

Minsky, H. P. (1975). John Maynard Keynes. New York, Columbia University Press.

Minsky, H. P. (1982). Can “It” Happen Again?: Essays on Instability and Finance. Armonk, N.Y., M.E. Sharpe.

Moore, B. J. (1979). “The Endogenous Money Stock.” Journal of Post Keynesian Economics 2(1): 49-70.

Moore, B. J. (1988). “The Endogenous Money Supply.” Journal of Post Keynesian Economics 10(3): 372-385.

Moore, B. J. (1988). Horizontalists and Verticalists: The Macroeconomics of Credit Money. Cambridge, Cambridge University Press.

Ohanian, L. E. (2010). “The Economic Crisis from a Neoclassical Perspective.” Journal of Economic Perspectives 24(4): 45-66.

Palley, T. (2014). “Aggregate demand, endogenous money, and debt: a Keynesian critique of Keen and an alternative theoretical framework.” Review of Keynesian Economics 2(3): 312-320.

Phillips, A. W. (1954). “Stabilisation Policy in a Closed Economy.” The Economic Journal 64(254): 290-323.

Phillips, A. W. (1958). “The Relation between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861-1957.” Economica 25(100): 283-299.

Piketty, T. (2014). Capital in the Twenty-First Century. Harvard, Harvard College.

Pomeau, Y. and P. Manneville (1980). “Intermittent transition to turbulence in dissipative dynamical systems.” Communications in Mathematical Physics 74: 189-197.

Roberts, A. (2012). America’s First Great Depression: Economic Crisis and Political Disorder after the Panic of 1837. Ithaca, Cornell University Press.

Robinson, J. (1982). “Shedding darkness.” Cambridge Journal of Economics 6(3): 295-296.

Rochon, L.-P. (1999). “The Creation and Circulation of Endogenous Money: A Circuit Dynamique Approach.” Journal of Economic Issues 33(1): 1-21.

Rosser, J. B. (1999). Chaos Theory. Encyclopedia of Political Economy. P. A. O’Hara. London, Routledge. 2: 81-83.

Santos, C. H. D. and A. C. Macedo e Silva (2009). “Revisiting (and Connecting) Marglin-Bhaduri and Minsky: An SFC Look at Financialization and Profit-led Growth.” Levy Economics Institute Economics Working Paper Archive.

Schularick, M. and A. M. Taylor (2012). “Credit Booms Gone Bust: Monetary Policy, Leverage Cycles, and Financial Crises, 1870-2008.” American Economic Review 102(2): 1029-1061.

Schumpeter, J. A. (1954). History of Economic Analysis. E. B. Schumpeter (ed.). London, Allen & Unwin.

Sonnenschein, H. (1972). “Market Excess Demand Functions.” Econometrica 40(3): 549-563.

Sonnenschein, H. (1973). “Do Walras’ Identity and Continuity Characterize the Class of Community Excess Demand Functions?” Journal of Economic Theory 6(4): 345-354.

Taylor, L. and S. A. O’Connell (1985). “A Minsky Crisis.” Quarterly Journal of Economics 100(5): 871-885.

Troshkin, M. (2008). “Facts and myths about the financial crisis of 2008 (Technical notes).” IDEAS Working Paper Series from RePEc.

Tymoigne, E. (2006). “The Minskyan System, Part III: System Dynamics Modeling of a Stock Flow-Consistent Minskyan Model.” Levy Economics Institute Economics Working Paper Archive.

Vague, R. (2019). A Brief History of Doom: Two Hundred Years of Financial Crises. Philadelphia, University of Pennsylvania Press.

Wray, L. R. (2019). “Response to Doug Henwood’s Trolling in Jacobin.” New Economic Perspectives, http://neweconomicperspectives.org/2019/02/response-to-doug-henwoods-trolling-in-jacobin.html.

Professor Steve Keen on Canadian Real Estate & Household Debt

I had a very nice chat with the Canadian blogger Steve Saretsky about my economic analysis, largely as applied to Canada. Click on the link above to see it; I’ve copied it here in case Patreon stuffs up as it did on my “Eve of Destruction” post.

I shared my screen a few times to show data relevant to Canada. One of the obvious questions was “how did Canada manage to avoid a crisis in 2008 while its southern neighbour had such a big one?”. The answer was “because you kept your private debt bubble going while the USA’s ended”. Here are the charts that I put together to support that case. I’ve also attached the data to this post.

Burying Samuelson’s Multiplier-Accelerator and resurrecting Goodwin’s Growth Cycle in Minsky

Friends, Romans, countrymen, lend me your ears;
I come to bury Caesar, not to praise him.
The evil that men do lives after them;
The good is oft interred with their bones;
So let it be with Caesar. (Marc Antony’s funeral oration in Shakespeare’s Julius Caesar, Act III, Scene II)

Introduction

As aficionados of Shakespeare know, Marc Antony’s speech was ironic: his aim really was to praise Caesar. My intention, however, really is burial: the multiplier-accelerator model of business cycles really does need to be buried, as an “evil that men do” which should not have lived on even during the lives of the Caesars of economics who gave birth to it—specifically, Alvin Hansen, Paul Samuelson (Samuelson 1939; Samuelson 1939) and John Hicks (Hicks 1949; Hicks 1950). We should instead resurrect in its place the “Growth Cycle” model of one of the neglected greats of economics, Richard M. Goodwin (Goodwin 1966; Goodwin 1967). Difference equation methods, which are integral to the multiplier-accelerator model and which still dominate economic modelling, should also give way to differential equations and system dynamics. In making this argument, I showcase Goodwin’s model and extensions of it in the Open Source system dynamics program Minsky.

Zombienomics: the undead Multiplier-Accelerator model should die

The Australian economist John Quiggin coined the term Zombie Economics to characterize ideas that should have died out in economics, but nonetheless persist (Quiggin 2010). Most of Quiggin’s nominations of Zombie economic ideas were the application of faulty economic theories (such as the “Efficient Markets Hypothesis”) to economic policy (such as the deregulation of finance markets). The multiplier-accelerator model has not led to specific policies. However, it has helped to mislead economists about the manner in which economic dynamics should be practised, and in so doing it has possibly caused as much harm as any specific misguided economic policy.

The continuing citations of the source articles by Samuelson (Samuelson 1939; Samuelson 1939)—64 citations for “Synthesis” and 356 for “Interactions”, according to the Web of Science database, with the latter paper clearly undergoing an unfortunate revival in popularity since the start of the millennium—are proof that the Multiplier-Accelerator model still walks amongst us (see Figure 1).

Figure 1: Citations over time of “Interactions between the multiplier analysis and the principle of acceleration”

Equally significant is the model’s presence as Chapter 11 in Quantitative Economics with Python (Sargent and Stachurski 2020), an Open Source online textbook co-authored by the “Nobel” Prize-winning economist Thomas Sargent. The multiplier-accelerator model does not play a significant role in modern economics in general, or DSGE modelling in particular, but it is indicative of the poor attention to realism, and obsession with faux and dated analytic techniques, that Paul Romer (another recipient of what I prefer to call the “Faubel Prize in Economics”) (Mirowski 2020) rightly savages in his brilliant but unfortunately unpublished monograph “The Trouble with Macroeconomics” (Romer 2016).

Much of the equilibrium-fixated methodology and pedagogy that our book hopes to replace with system dynamics methods and training originate in the teaching materials written and seminars run by Sargent and his followers. As well as being dominated by equilibrium methods, the fundamental mathematical techniques in both Quantitative Economics with Python and its “advanced” companion Advanced Quantitative Economics with Python (Sargent and Stachurski 2020) are restricted to difference equations, rather than differential equations. In fact, differential equations are not mentioned at all in the “Advanced” text, and are noted just once, in passing, in Quantitative Economics with Python, when complex numbers are discussed. The mention is in relation to the use of complex numbers in determining the properties of the multiplier-accelerator model:

“Useful and interesting in its own right, these concepts reap substantial rewards when studying dynamics generated by linear difference equations or linear differential equations. For example, these tools are keys to understanding outcomes attained by Paul Samuelson (1939) [93] in his classic paper on interactions between the investment accelerator and the Keynesian consumption function, our topic in the lecture Samuelson Multiplier Accelerator.” (Sargent and Stachurski 2020, p. 47).

Samuelson’s paper was indeed a classic: a classic mistake, which should finally be laid to rest. Sargent and Stachurski set out the model’s derivation as follows:

“The model combines the consumption function

         C_t = a·Y_{t-1} + γ

with the investment accelerator

         I_t = b·(Y_{t-1} - Y_{t-2})

and the national income identity

         Y_t = C_t + I_t + G_t

• The parameter a is peoples’ marginal propensity to consume out of income…

• The parameter b > 0 is the investment accelerator coefficient…

Equations (1), (2), and (3) imply the following second-order linear difference equation for national income:

         Y_t = (a + b)·Y_{t-1} - b·Y_{t-2} + γ + G_t”

(Sargent and Stachurski 2020, p. 189)

To analyze the model, they set G_t and γ to zero, and apply the method of characteristic equations, which converts the difference equation:

         Y_t = (a + b)·Y_{t-1} - b·Y_{t-2}

into the quadratic:

         z² - (a + b)·z + b = 0

The roots of this quadratic are:

         z = ((a + b) ± √((a + b)² - 4b)) / 2

These roots generate several different classes of behaviour—damped growth, damped oscillations, explosive oscillations, and explosive growth—depending on the magnitudes of the parameters. But these are meaningless characteristics of a meaningless model, as can easily be shown using a more sophisticated mathematical technique: converting a high order scalar difference equation to a system of first order vector difference equations.
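The reduction they perform can be reproduced directly. Assuming Samuelson's standard reduced form Y_t = (a + b)·Y_{t-1} − b·Y_{t-2}, the characteristic quadratic is z² − (a + b)z + b = 0, and the regimes follow from the modulus and complexity of its roots (parameter values below are illustrative):

```python
import cmath

def char_roots(a, b):
    """Roots of z**2 - (a + b) z + b = 0, the characteristic equation of
    Y_t = (a + b) Y_{t-1} - b Y_{t-2} (G_t and the constant set to zero)."""
    disc = cmath.sqrt((a + b) ** 2 - 4 * b)
    return ((a + b) + disc) / 2, ((a + b) - disc) / 2

def regime(a, b):
    """Classify the qualitative behaviour implied by the roots."""
    r1, r2 = char_roots(a, b)
    oscillatory = (a + b) ** 2 < 4 * b        # complex-conjugate roots
    explosive = max(abs(r1), abs(r2)) > 1     # modulus outside the unit circle
    return ("explosive" if explosive else "damped") + \
           (" oscillations" if oscillatory else " growth")
```

For example, a marginal propensity to consume of 0.7 with an accelerator of 0.2 gives real roots 0.5 and 0.4, hence damped growth; raising the accelerator above 1 pushes the root modulus past the unit circle.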

In general, to convert an nth order difference equation to a set of first order equations, you define a vector whose components are:

         

The model can now be restated using a matrix of coefficients:

         

The advantage of this technique over the characteristic equation approach is that it imposes a check on the validity of a model, before a characteristic equation is derived: for a model to be valid, the determinant of , where I is the identity matrix, must be zero. If it is not, then the model’s equilibrium is zero. If we consider an equilibrium vector , then it must be true that:

         

If then cannot be inverted, and can have non-trivial values. If then can be inverted, and the only solution is “the trivial solution”, and therefore there is something wrong with the model: it is asking a question whose only general answer is “zero”.

The matrix form of the multiplier-accelerator model is shown in Equation :

         

This fails the non-trivial solution test:

         

Therefore, there must be something wrong with the derivation of the multiplier-accelerator model itself.
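The failed test can be verified by hand. With state vector (Y_t, Y_{t-1}) and G_t and the constant set to zero, the companion matrix is A = [[a + b, -b], [1, 0]], and det(A - I) works out to 1 - a, which is non-zero for any marginal propensity to consume below one. A sketch (this matrix form is my reconstruction of the elided equation):

```python
def det_A_minus_I(a, b):
    """det(A - I) for the companion matrix A = [[a + b, -b], [1, 0]] of
    Y_t = (a + b) Y_{t-1} - b Y_{t-2}, with state vector (Y_t, Y_{t-1})."""
    # A - I = [[a + b - 1, -b], [1, -1]]
    return (a + b - 1.0) * (-1.0) - (-b) * 1.0   # simplifies to 1 - a

# Non-zero for any marginal propensity to consume a < 1, so the only
# equilibrium of the homogeneous system is the trivial one, Y = 0.
```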

There are in fact at least two things wrong, the most serious of which is the mis-specification of actual investment in Equation . The national income identity describes the sum of actual consumption and actual investment (and actual government spending , the last of which is omitted in the analysis). Actual investment is defined as the change in the capital stock, and in a discrete-time model investment in period t is added to the capital stock at time t to create the capital stock for period t+1:

         K_{t+1} = K_t + I_t

If we want to relate this to the term in Equation , which fundamentally defines desired investment rather than actual investment, then we need to relate the capital stock to output . The simplest way to do this is to presume a linear capital to output ratio v, so that:

         K_t = v·Y_t, and hence I_t = K_{t+1} - K_t = v·(Y_{t+1} - Y_t)

To instead use Equation as the expression for investment in year t, as is done in the “multiplier-accelerator model”, the following equation must hold:

         b·(Y_{t-1} - Y_{t-2}) = v·(Y_{t+1} - Y_t)

The only way this can be guaranteed to hold for any non-zero values of , is that ; the only way to guarantee that it holds for non-zero values (when in general ) is that . This is the primary reason why this model has only the trivial solution: it is asking the question “under what conditions can actual investment be guaranteed to equal desired investment, when both are different linear functions of the change in output between different years?”. The answer is “when output is zero, and therefore when investment never occurs”. Clearly, this is not the basis on which to erect a model of cycles!

The other problem with the “model” is its treatment of time, which is an unavoidable consequence of its discrete-time formulation. The definition of Consumption in Equation relates it to the last time period’s income . This would be valid if the time period were of the order of a week to a month: it is reasonable to assume that aggregate consumption is a function of the previous week or month’s income, given that the vast majority of consumers are wage-earners living from paycheck to paycheck. But with a time dimension measured in weeks or months, the relation for investment makes no sense: investment has a time horizon of years, not months—let alone weeks. Since investment is the driving factor in this “model”, its time dimension must dominate, in which case it makes no sense to relate consumption to last year’s income: it should be treated as related to this year’s income instead:

         

If we combine this reasonable postulate for consumption in a discrete-time model with the definition of actual investment via the truncated national income identity Y_t = C_t + I_t (ignoring government spending for simplicity), we get

         Y_t = (1 − s)·Y_t + v·(Y_{t+1} − Y_t), which implies Y_{t+1} = (1 + s/v)·Y_t

This is a first-order growth relationship, not a model of cycles.
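This reasoning is easily checked numerically. The following sketch iterates the combined relation with illustrative values for s and v (both are assumptions, not estimates):

```python
# Iterate Y_{t+1} = (1 + s/v) * Y_t: pure compound growth, no cycles.
s, v = 0.1, 3.0   # illustrative savings rate and capital-to-output ratio
Y = [100.0]
for t in range(20):
    Y.append((1 + s / v) * Y[-1])

# Every period's change has the same sign: monotonic growth, never a cycle.
changes = [b - a for a, b in zip(Y, Y[1:])]
assert all(c > 0 for c in changes)
```

However the parameters are chosen, the sign of the change in output never reverses, which is the point: this specification cannot generate a cycle.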

If economics even remotely resembled a science, it would consign the multiplier-accelerator model to its rubbish heap. Since it is not, I have zero faith that this “model” will be abandoned: economists will continue teaching it, regardless of the fact that it has rotten foundations. True progress in dynamics will only come from those outside the mainstream paradigm, and in disciplines like system dynamics. In the remainder of this chapter I want to alert genuine practitioners of dynamics to the existence of a model that has impeccable foundations, which can be used as the basis for well-grounded structural models of capitalism, but which—predictably—has been ignored by mainstream economists. This is Richard Goodwin’s “Growth cycle” model (Goodwin 1967). Unlike the “multiplier-accelerator model”, this model is mathematically valid (the proof is left as an exercise for the reader!), and as such it can be used as the foundation for more realistic models.

Goodwin’s Growth Cycle model and the Keen-Minsky extension

In contrast to the prominence and fawning treatment given to Samuelson’s model, Goodwin’s model was published only as a single-page précis in a supplementary (conference proceedings) volume of the leading mainstream journal Econometrica (Goodwin 1966), as a brief chapter in an edited book (Goodwin 1967), and as a chapter in a book of Goodwin’s collected essays (Goodwin 1982). The Web of Science database records zero citations of this paper in any form. This is of course wrong—I have personally cited it numerous times, as has Grasselli (Grasselli and Costa Lima 2012; Grasselli and Maheshwari 2017) and several other authors (Landesmann, Goodwin et al. 1994; Harvie, Kelmanson et al. 2007; Flaschel 2015; Giraud and Grasselli 2019)—but it indicates the risible lack of attention paid to this model compared to Samuelson’s.

Goodwin developed his model as a way of expressing Marx’s unexpected cyclical model of growth in Capital I, Chapter 25:

Or, on the other hand, accumulation slackens in consequence of the rise in the price of labour, because the stimulus of gain is blunted. The rate of accumulation lessens; but with its lessening, the primary cause of that lessening vanishes, i.e., the disproportion between capital and exploitable labour power. The mechanism of the process of capitalist production removes the very obstacles that it temporarily creates. The price of labour falls again to a level corresponding with the needs of the self-expansion of capital, whether the level be below, the same as, or above the one which was normal before the rise of wages took place. (Marx 1867, p. 437)

While its origins in Marx may put some readers off, far from being based on any necessarily “Marxist” vision of capitalism, and in contrast to the logically false “multiplier-accelerator model”, Goodwin’s growth cycle can be derived from the impeccable foundations of incontestably true macroeconomic definitions. We start from the definitions of the employment rate λ, which is the ratio of the number of people with a job L to the total population N; and the wages share of GDP ω, which is the total wage bill W divided by GDP Y:

         λ ≡ L/N;  ω ≡ W/Y

Simple calculus turns these static definitions into dynamic statements that are also true by definition: “the employment ratio will rise if employment grows faster than population”; and “the wages share of GDP will grow if wages grow faster than GDP”. Using x̂ for the proportional growth rate (dx/dt)/x of any variable x, these statements are:

         λ̂ = L̂ − N̂;  ω̂ = Ŵ − Ŷ

These dynamic definitions can be turned into a model via one more definition, and a set of genuinely simplifying assumptions. We define the ratio of output to labour a ≡ Y/L; we assume a constant capital to output ratio v ≡ K/Y; we assume all profits are invested, so that gross investment equals profits (this simplifying assumption is relaxed in the subsequent model); depreciation occurs at a constant rate δ; a uniform real wage w applies; there is a linear relationship Φ(λ) between the employment rate and the rate of change of real wages; and both the output to labour ratio and population grow at the exogenously given rates α and β respectively. We also define the profit share of output π and the gross investment to output ratio κ:

         π ≡ 1 − ω;  κ ≡ I/Y = π
         ŵ = Φ(λ);  â = α;  N̂ = β

These assumptions let us expand out the rates of change of output, employment and the wage bill (the rate of change of a is α by assumption):

         Ŷ = K̂ = κ/v − δ = (1 − ω)/v − δ
         L̂ = Ŷ − α
         Ŵ = ŵ + L̂ = Φ(λ) + L̂

Feeding these into the dynamic definitions of λ̂ and ω̂ gives us the basic Goodwin model:

         λ̇ = λ·((1 − ω)/v − δ − α − β)
         ω̇ = ω·(Φ(λ) − α)

This model generates both growth and cycles, as the title of Goodwin’s paper states. Figure 2 illustrates the basic dynamics of this model in the Open Source system dynamics program Minsky (which can be downloaded from https://sourceforge.net/projects/minsky/).

Figure 2: The basic Goodwin model in Minsky
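For readers without Minsky to hand, the same dynamics can be reproduced in a few lines of Python. This is a minimal sketch, assuming a linear Phillips curve Φ(λ) = ρ·λ − γ; all parameter values are illustrative, not calibrated:

```python
import numpy as np

def goodwin(state, v=3.0, alpha=0.02, beta=0.01, delta=0.05, rho=2.0, gamma=1.5):
    """Right-hand side of the basic Goodwin model (illustrative parameters)."""
    lam, omega = state                       # employment rate, wages share
    phillips = rho * lam - gamma             # assumed linear Phillips curve
    dlam = lam * ((1 - omega) / v - delta - alpha - beta)
    domega = omega * (phillips - alpha)
    return np.array([dlam, domega])

def rk4(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + dt / 2 * k1)
    k3 = f(state + dt / 2 * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 20000                      # 200 years of model time
state = np.array([0.90, 0.80])               # initial employment rate, wages share
path = np.empty((steps, 2))
for i in range(steps):
    path[i] = state
    state = rk4(goodwin, state, dt)
# The trajectory orbits the equilibrium instead of converging to it:
# lambda and omega cycle endlessly, as in Goodwin's "growth cycle".
```

With these values the equilibrium sits at λ = ω = 0.76, and the simulated trajectory circles it indefinitely, reproducing the closed cycles of Figure 2.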

This is using a system dynamics program to express a set of differential equations. Precisely the same dynamics can be shown by using Minsky in the traditional causal diagram approach of system dynamics:

Investment → Rate of change of Capital → Output → Employment → Rate of change of the wage → Wages → Profit → Investment

In this method, the employment rate λ and wages share of GDP ω become calculated variables, while the capital stock K, wage rate w, output to labour ratio a and population N are defined by integral blocks—see Figure 3.

Figure 3: Goodwin’s growth cycle model as a flowchart

I used Goodwin’s model as the foundation for my model of Minsky’s Financial Instability Hypothesis (Keen 1995), taking my lead from Blatt’s observation that Goodwin’s model was remarkably successful “Considering the extreme crudity of some of the assumptions underlying this model”, and that its main weakness of “an equilibrium which is not unstable (it is neutral) … can be remedied [by the] introduction of a financial sector, including money and credit as well as some index of business confidence” (Blatt 1983, p. 210).

Minsky thought of the economy in historical rather than merely logical time, with history determining current expectations, and with feedbacks from current conditions changing both the economy and expectations over a cycle:

The natural starting place for analyzing the relation between debt and income is to take an economy with a cyclical past that is now doing well. The inherited debt reflects the history of the economy, which includes a period in the not too distant past in which the economy did not do well. Acceptable liability structures are based upon some margin of safety so that expected cash flows, even in periods when the economy is not doing well, will cover contractual debt payments. As the period over which the economy does well lengthens, two things become evident in board rooms. Existing debts are easily validated and units that were heavily in debt prospered; it paid to lever… (Minsky 1977, p. 10)

A period of tranquil growth thus leads to rising expectations, and a tendency to increase leverage. As Minsky put it in his most famous sentence:

Stability ‒ or tranquility ‒ in a world with a cyclical past and capitalist financial institutions is destabilizing. (Minsky 1977, p. 10).

Minsky’s basic vision was clearly tailor-made for dynamic modelling, but he failed to do this himself, primarily because he chose a poor foundation for it—specifically, Samuelson’s multiplier-accelerator model! (Minsky 1957). I used the far better foundation of Goodwin’s model, and extended it to consider financial dynamics by:

  • Redefining profit to be net of interest payments as well as of wages; and
  • Introducing a nonlinear investment function based on the rate of profit.

As with Goodwin, a Minsky model can be derived directly from macroeconomic definitions, once it is accepted that banks create money when they create loans (McLeay, Radia et al. 2014), and that this newly created money adds to aggregate demand and income (Keen 2015). We then add the definition of the debt ratio d ≡ D/Y to our fundamental macroeconomic identities, where D is the level of private debt.

The simplest assumption for the causal process behind change in debt is that new debt is used to finance investment in excess of profits—something which has been empirically confirmed by, of all people, Eugene Fama (Fama and French 1999), one of the main promulgators of the Efficient Markets Hypothesis (Fama 1970; Fama and French 2004). Profits are now net of interest payments on debt as well as of wages, while investment is a linear function of the rate of profit:

         Π ≡ Y − W − r·D, so that π ≡ Π/Y = 1 − ω − r·d
         κ ≡ I/Y = κ(π/v), with κ a linear function

As with the wages share and the employment rate, the definition of the debt ratio is easily turned into a dynamic statement that “the debt ratio will rise if debt grows faster than GDP”:

         d̂ = D̂ − Ŷ, so that ḋ = d·(D̂ − Ŷ)

We already have Ŷ = κ/v − δ, so only D̂ needs to be derived. Since new debt finances investment in excess of profits, Ḋ = I − Π, and therefore:

         D̂ = Ḋ/D = (I − Π)/D = (κ − π)·Y/D = (κ − π)/d

Substituting this into the expression for ḋ yields:

         ḋ = d·((κ − π)/d − Ŷ) = κ − π − d·(κ/v − δ)

This gives us a 3-dimensional model:

         ω̇ = ω·(Φ(λ) − α)
         λ̇ = λ·(κ/v − δ − α − β)
         ḋ = κ − π − d·(κ/v − δ), where π = 1 − ω − r·d

 

As Li and Yorke proved almost half a century ago, “Period Three Implies Chaos” (Li and Yorke 1975), and that is what this model manifests. There are two meaningful equilibria: a “good equilibrium” with a positive employment rate and wages share of GDP and a finite debt to GDP ratio, and a “bad equilibrium” (Grasselli and Costa Lima 2012, p. 208) with a zero employment rate and wages share of GDP and an infinite debt ratio. The former equilibrium is stable for low values of the slope of the investment function, with one negative eigenvalue and two complex eigenvalues with zero real part. However, as the slope of the investment function steepens, the two complex eigenvalues develop a positive real part, and the “good equilibrium” becomes unstable—but remains an attractor for the early part of the model’s trajectory. The model then demonstrates the chaotic behavior first observed in fluid dynamics and described as the “intermittent route to chaos” by Pomeau and Manneville (Pomeau and Manneville 1980): cycles diminish for a while, only to rise later on—see Figure 4.

Figure 4: The basic Minsky cycle, modelled in Minsky
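The model can also be sketched numerically. The following Python fragment assumes a linear investment share κ = κ0 + κ1·π written in terms of the profit share (using the rate of profit π/v merely rescales the slope), with purely illustrative parameter values:

```python
import numpy as np

# Keen-Minsky model with a linear investment function (illustrative sketch;
# none of these parameter values are calibrated estimates).
v, alpha, beta, delta, r = 3.0, 0.02, 0.01, 0.05, 0.03
rho, gamma = 2.0, 1.5             # linear Phillips curve: Phi(lam) = rho*lam - gamma
kap0, kap1 = 0.145, 0.5           # linear investment share: kappa = kap0 + kap1*pi

def keen(state):
    lam, omega, d = state
    pi = 1 - omega - r * d                 # profit share net of wages and interest
    kappa = kap0 + kap1 * pi               # investment share of output
    growth = kappa / v - delta             # growth rate of output
    dlam = lam * (growth - alpha - beta)
    domega = omega * (rho * lam - gamma - alpha)
    dd = kappa - pi - d * growth           # debt ratio dynamics
    return np.array([dlam, domega, dd])

def rk4(f, s, dt):
    k1 = f(s)
    k2 = f(s + dt / 2 * k1)
    k3 = f(s + dt / 2 * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 10000                    # 100 years of model time
state = np.array([0.77, 0.76, 1.7])        # start near the "good equilibrium"
path = np.empty((steps, 3))
for i in range(steps):
    path[i] = state
    state = rk4(keen, state, dt)
# lam and omega cycle while the debt ratio d drifts around its
# equilibrium value; steepening kap1 destabilises the "good equilibrium".
```

With these values the cycles change amplitude only slowly over the simulated century; raising the slope κ1 strengthens the instability described above.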

When I first developed this model (in August of 1992), I saw its characteristic of diminishing and then rising cycles as its most striking feature. This was not a prediction of Minsky’s verbal model: though he expected a set of cycles before an ultimate one that would lead to a debt-deflationary crisis like the Great Depression—in the absence of “big government” (Minsky 1982, p. xxix)— he made no claim that the cycles themselves would get smaller before the crisis. Nor was it a feature of the economic data at that time. This coincidence inspired what I thought at the time was the rhetorical flourish with which that paper concluded:

From the perspective of economic theory and policy, this vision of a capitalist economy with finance requires us to go beyond that habit of mind that Keynes described so well, the excessive reliance on the (stable) recent past as a guide to the future. The chaotic dynamics explored in this paper should warn us against accepting a period of relative tranquility in a capitalist economy as anything other than a lull before the storm. (Keen 1995, p. 634)

Then reality imitated the model: there was indeed a period of diminishing cycles in the employment rate, growth rate and inflation rate, which Neoclassical economists dubbed “The Great Moderation”, and for which they took the credit (Stock and Watson 2002; Bernanke 2004; Srinivasan 2008). Then there was a crisis, which Neoclassical economists dubbed “The Great Recession”, and which they blamed on exogenous shocks (Ireland 2011).

In contrast, rather than requiring two independent explanations for real world phenomena, this simple model captures the essential features of recent economic data—including the rise in the private debt ratio, and the fall in workers’ share of GDP (see Figure 5)—in one inherently nonlinear system.

Figure 5: Reality imitates the Keen-Minsky model

This extremely simple and well-grounded model clearly provides a better foundation for dynamics than the equilibrium-fixated, difference equation models that dominate Neoclassical macroeconomics today. That will not stop Neoclassicals persisting with DSGE modelling, but the unreasonable effectiveness of these simple models should encourage critics and student rebels to ignore the intellectual backwater of Neoclassical economics, and to develop well-grounded system dynamics models of capitalism instead.

Modelling Money, and the Coronavirus, in Minsky

Though Minsky has some obvious advantages over existing system dynamics programs—the mathematics-oriented GUI, using variable names as well as wires to build equations, the unique capacity to format text using LaTeX, overloading of mathematical operators to reduce clutter, the embedding of plots in the design canvas, plots that update dynamically as parameters are varied during a simulation, etc.—its main advantage is its capacity to model financial dynamics using interlocking double-entry bookkeeping tables called Godley Tables. These take statements of flows between financial accounts and generate systems of differential equations that are guaranteed to be correct. It is also much easier to edit financial flows using a Godley Table than it is to edit them when they are defined using the standard system dynamics flowchart interface.

For example, Figure 6 shows the financial flows in a simple model of the reality that bank loans create deposits—the opposite of the textbook story in which banks are passive “intermediaries” who lend out deposits (McLeay, Radia et al. 2014). The table simply records flows between one bank account and another, with Minsky checking that “Assets minus Liabilities minus Equity equals Zero” (the final column in the table).

Figure 6: Godley Table for a simple model of the macroeconomics of Bank Originated Money and Debt (McLeay, Radia et al. 2014)

Neither differential equations nor the customary stock and flow blocks of standard system dynamics programs appear in a Godley Table, but in the background, Minsky turns these flow entries into ODEs of the financial system that are guaranteed to be consistent. Each equation below is the sum of the relevant column in the Godley Table above:

         

The model is completed by defining the individual flow components using flowchart operators. Figure 7 shows a simulation run of this “Bank Originated Money and Debt” model in which flows are related to the levels of accounts via the engineering concept of time constants. The rates of lending and repayment are varied during the simulation by altering the value of the time constants for lending and repayment, using sliders that are intrinsic to every parameter in Minsky.

Figure 7: A simulation run of the Bank Originated Money and Debt model
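The consistency that a Godley Table enforces can be illustrated with a toy version of this model in Python. The two-account structure (Loans as the bank’s asset, Deposits as its liability), the time-constant values and the horizon are all illustrative assumptions:

```python
# Toy "loans create deposits" model: each flow appears once as a source and
# once as a use, so the gap between assets and liabilities never moves.
loans, deposits = 0.0, 100.0      # initial asset and liability stocks (assumed)
tau_lend, tau_repay = 5.0, 10.0   # assumed time constants for lending, repayment
dt = 0.01
for _ in range(5000):             # 50 years of model time
    lend = deposits / tau_lend    # lending creates a loan AND a deposit
    repay = loans / tau_repay     # repayment destroys a loan AND a deposit
    loans += (lend - repay) * dt
    deposits += (lend - repay) * dt
# Double-entry consistency: Assets minus Liabilities is unchanged throughout.
assert abs((loans - deposits) - (0.0 - 100.0)) < 1e-6
```

Because every flow enters the asset and liability columns symmetrically, the money supply can grow endogenously while the accounting identity holds to within floating-point error, which is precisely the discipline the Godley Table automates.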

In addition to being an excellent foundation for monetary stock-flow consistent models (Lavoie and Godley 2001; Lavoie 2014), Godley Tables can be used whenever a model has the requirement that the entities being modelled cannot be in two states at one time, so that flows between system states are exclusive: a flow must go from one state to another. In the context of the current Coronavirus crisis, they can be repurposed to model a pandemic, since the “Susceptible, Exposed, Infected, Recovered, Dead” or SEIRD epidemiological models also have that requirement.

The basic SIR model of susceptibility, infection and recovery (Kermack, McKendrick et al. 1927 [1997]) can be seen as an extension of the predator-prey model, in which the consequence of predation is not death but infection. In the basic predator-prey model, an assumed exponential growth of the prey population x at a rate a is reduced by a constant b times the predator population y, so that ẋ = a·x − b·x·y. Interaction between the predator and prey is thus shown as a multiplicative relationship.

In modelling a pandemic, the population N is normally treated as a constant, since it changes far less rapidly than the epidemic spreads. The rate of change of the fraction of the population that is infected depends on the interactions of those infected I with those susceptible S, which in turn depends on the frequency of both groups in the overall population, I/N and S/N, and the transmissibility of the disease, which is modelled by the parameter β:

         d/dt(I/N) = β·(S/N)·(I/N)

Since population is treated as constant, this reduces to:

         dI/dt = β·S·I/N

Since the increase in those infected is equal to the fall in those who are susceptible, the rate of growth of those infected is the negative of the rate of decline of those susceptible, minus the flow into the recovered group R, which is modelled as a parameter γ times the number infected:

         dS/dt = −β·S·I/N
         dI/dt = β·S·I/N − γ·I
         dR/dt = γ·I

Figure 8 shows this model implemented in Minsky, using time constants rather than parameters. The Godley Table’s tabular format makes it easy to see the interrelations between the Susceptible, Infected and Recovered compartments. Flowchart tools are only needed to define the flows themselves (and future versions will allow these to be defined off the canvas, using standard LaTeX equation notation).

Figure 8: Simple SIR model of a pandemic
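The same equations are easily simulated directly. This sketch uses illustrative values for β, γ and the population (not estimates for any actual disease), and verifies the property that makes the Godley Table representation natural: flows only move people between compartments, so S + I + R never changes:

```python
# Minimal SIR integration (Euler); beta, gamma and N are illustrative values.
N = 1_000_000.0
beta, gamma = 0.3, 0.1            # assumed transmissibility and recovery rates
S, I, R = N - 100.0, 100.0, 0.0   # start with a small infected seed
dt = 0.1
peak_I = I
for _ in range(3000):             # 300 days
    new_infections = beta * S * I / N
    recoveries = gamma * I
    S += -new_infections * dt
    I += (new_infections - recoveries) * dt
    R += recoveries * dt
    peak_I = max(peak_I, I)
# The compartments always sum to N: flows only move people between states,
# which is exactly the exclusivity a Godley Table enforces.
assert abs((S + I + R) - N) < 1e-3
```

With these assumed rates the epidemic takes off, peaks, and burns out as the susceptible pool is depleted, just as in the Minsky simulation of Figure 8.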

The Godley Table interface also makes it very easy to extend this model to a more realistic situation in which there is a more complicated transmission chain—see Figure 9. Other compartmentalizations—such as dividing the susceptible population into the general public and medical staff, including quarantined versus non-quarantined, hospitalized versus non-hospitalized, etc.—are equally straightforward to add and define.

Figure 9: SEIRD model developed by editing Godley Table of SIR model

A 3rd order revision of the multiplier-accelerator model

I hesitate to write this section, because I wish to bury, not merely the invalid multiplier-accelerator model itself, but also the practice of using difference equations to model the economy, when continuous time system dynamics methods are so much more suitable (Keen 2006). However, it would be intellectually dishonest not to note that the multiplier-accelerator model can be saved to some degree by doing what Hansen, Samuelson and Hicks did not do: paying proper attention to the causal process linking their hypothetical equation for desired investment to actual investment, the capital stock, and output.

In this model, I use Samuelson’s (Samuelson 1939) function for desired investment, which he modelled as a lagged response to changes in consumption in the previous two years:

         I^d_t = c·(C_{t−1} − C_{t−2}) = c·(1 − s)·(Y_{t−1} − Y_{t−2})

Here c is the desired “incremental capital to output ratio” (ICOR) (Walters 1966) and s is the savings rate. I then assume that these investment plans are carried out, so that desired investment I^d becomes the change in capital in year t, which is added to the existing stock in year t to yield the capital stock in year t+1:

         K_{t+1} = K_t + I^d_t

Using the accelerator relation between capital and output (K_t = v·Y_t), this results in a third order difference equation for Y:

         v·Y_{t+1} = v·Y_t + c·(1 − s)·(Y_{t−1} − Y_{t−2})

Though I do not wish to encourage the further development of this model, it certainly has many interesting characteristics when compared to Samuelson’s invalid 2nd order model. Firstly, it is a meaningful model: the determinant of the identity matrix minus the first-order vector form of this equation is zero, as is required for a sustained level of output to be an equilibrium. Secondly, its characteristic equation is

         (z − 1)·(z² − c·(1 − s)/v) = 0

This is easily factored into three components which are also easily interpreted: the first root (z = 1) means that any sustained level of output is an equilibrium; the second (z = +√(c·(1 − s)/v)) determines the growth rate; and the third (z = −√(c·(1 − s)/v)) creates cycles which remain smaller than and proportional to the growth rate.

Thirdly, in a very non-Neoclassical (and pro-Keynesian!) result, an increase in the savings rate causes a fall in the rate of economic growth. Also, for sustained growth to occur, c—which determines desired investment—must substantially exceed the actual ICOR. With a lower level for c or a slightly higher value for s, a non-equilibrium set of initial conditions leads to a convergence to a new, higher equilibrium level. See Figure 10 for a comparison of two simulations with slightly different savings ratios.

Figure 10: Cyclical growth in the 3rd order Multiplier-Accelerator Model, with a higher savings rate meaning lower growth
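Readers who want to experiment can sketch the revised model in a few lines. Here desired investment is assumed to respond to the change in consumption over the two previous years, giving the recursion Y_{t+1} = Y_t + (c·(1 − s)/v)·(Y_{t−1} − Y_{t−2}); the parameter values are illustrative, not estimates:

```python
# 3rd-order multiplier-accelerator sketch: Y_{t+1} = Y_t + a*(Y_{t-1} - Y_{t-2}),
# with a = c*(1-s)/v. The recursion and parameter values are assumptions.
def simulate(c, s, v, periods=30):
    a = c * (1 - s) / v
    Y = [100.0, 101.0, 102.0]           # arbitrary non-equilibrium start
    for _ in range(periods):
        Y.append(Y[-1] + a * (Y[-2] - Y[-3]))
    return Y

low_s = simulate(c=6.0, s=0.10, v=2.0)
high_s = simulate(c=6.0, s=0.30, v=2.0)
# A higher savings rate lowers the growth path (the non-Neoclassical result).
assert high_s[-1] < low_s[-1]
```

Comparing the two runs reproduces the qualitative result in Figure 10: both economies grow cyclically, but the higher-savings economy grows more slowly.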

 

Conclusion

I fervently hope that dynamics has a better future in economics than it has had a past. But the odds are not good. We should be under no illusion: the methodology we champion will be resisted by Neoclassical economists, who have, over time, and largely unconsciously, turned equilibrium from the unfortunate compromise in methodology that Jevons, Marshall and even Walras knew it to be, into a religion about the tendencies of actual capitalism.

Almost fifty years ago, the authors of the Limits to Growth (Meadows, Randers et al. 1972) naively expected that the system dynamics methodology that Forrester had developed (Forrester 1968; Forrester 1971) and that they had applied would be welcomed by economists, as a way of escaping from the dead-end of having to pretend that equilibrium applied in order to model dynamic processes. They were unprepared for the ferocity of the attack on their methodology by Neoclassical economists, and most prominently by William Nordhaus (Nordhaus 1973; Forrester, Gilbert et al. 1974; Nordhaus, Stavins et al. 1992).

Since then, economics has, if anything, gone backwards. As the 2018 recipient of “The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” Paul Romer observed, in his working paper “The Trouble With Macroeconomics”, mainstream macroeconomic modelling is so divorced from reality today that it deserves to be called not merely “post-modern” but “post-real”:

Lee Smolin begins The Trouble with Physics (Smolin 2007) by noting that his career spanned the only quarter-century in the history of physics when the field made no progress on its core problems. The trouble with macroeconomics is worse. I have observed more than three decades of intellectual regress…

Macroeconomists got comfortable with the idea that fluctuations in macroeconomic aggregates are caused by imaginary shocks, instead of actions that people take, after Kydland and Prescott (1982) launched the real business cycle (RBC) model…

In response to the observation that the shocks are imaginary, a standard defense invokes Milton Friedman’s methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions (p.14).” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favorite. The noncommittal relationship with the truth revealed by these methodological evasions … goes so far beyond post-modern irony that it deserves its own label. I suggest “post-real.” (Romer 2016).

Critical perspectives like Romer’s from within the mainstream might give us hope, but the belief that he criticizes—that the economy can and indeed should be described as an equilibrium system occasionally disturbed by exogenous shocks—is still the majority belief within the mainstream. An example of the ferocity with which this belief is held is V.V. Chari’s defence of DSGE (“Dynamic Stochastic General Equilibrium”) models before the US Congress in 2010, after their abject failure to anticipate the “Great Recession” of 2008:

All the interesting policy questions involve understanding how people make decisions over time and how they handle uncertainty. All must deal with the effects on the whole economy. So, any interesting model must be a dynamic stochastic general equilibrium model. From this perspective, there is no other game in town… A useful aphorism in macroeconomics is: “If you have an interesting and coherent story to tell, you can tell it in a DSGE model. If you cannot, your story is incoherent.” (Chari 2010, p. 2)

From the perspective of a genuine mathematician, this is nonsense: the addiction to equilibrium modelling is the main weakness of economics, not its strength. The brilliant applied mathematician John Blatt put it this way in 1983:

In defense of this concentration on equilibrium and the neglect of true dynamics, there are two arguments:

1. Statics or (what comes to much the same thing) balanced proportional growth is much easier to handle theoretically than true dynamic phenomena. A good understanding of statics is a necessary prerequisite for the study of dynamics. We must learn to walk before we can attempt to run.

2. In any case, while no economic system is ever in strict equilibrium, the deviations from such a state are small and can be treated as comparatively minor perturbations. The equilibrium state is, so to speak, the reference state about which everything turns and toward which the system gravitates. Market prices fluctuate up and down, but there exist “natural prices” about which this fluctuation occurs, and these natural prices can be determined directly, by ignoring the fluctuations altogether and working as if strict equilibrium obtained throughout.

Such arguments did carry a great deal of conviction two hundred years ago, when the basic ideas of the science of economics were being formulated for the first time. However, it is impossible to ignore the passage of two hundred years. A baby is expected to first crawl, then walk, before running. But what if a grown-up man is still crawling? At present, the state of our dynamic economics is more akin to a crawl than to a walk, to say nothing of a run. Indeed, some may think that capitalism as a social system may disappear before its dynamics are understood by economists. (Blatt 1983, pp. 4-5. Emphasis added)

When I first read this passage, I regarded it as an excellent piece of hyperbole. Now, almost 40 years later, as capitalism itself is in intensive care thanks to Covid-19, and with signs of climate change abounding in phenomena like the unprecedented wildfires in Australia in 2019-20 (Dowdy, Ye et al. 2019), it is beginning to feel amazingly prescient. This makes the task of establishing genuinely dynamic, non-equilibrium methods in economics even more pressing, despite the near-religious defence of equilibrium modelling by mainstream economists themselves.

References

Bernanke, B. S. (2004). The Great Moderation: Remarks by Governor Ben S. Bernanke At the meetings of the Eastern Economic Association, Washington, DC February 20, 2004. Eastern Economic Association. Washington, DC, Federal Reserve Board.

Blatt, J. M. (1983). Dynamic economic systems: a post-Keynesian approach. Armonk, N.Y, M.E. Sharpe.

Chari, V. V. (2010). Building a science of economics for the real world. Subcommittee on investigations and oversight , Committee on science and technology. U. Congress. Washington, U.S. Government printing office.

Dowdy, A. J., H. Ye, et al. (2019). “Future changes in extreme weather and pyroconvection risk factors for Australian wildfires.” Scientific Reports
9(1).

Fama, E. F. (1970). “Efficient Capital Markets: A Review of Theory and Empirical Work.” The Journal of Finance
25(2): 383-417.

Fama, E. F. and K. R. French (1999). “The Corporate Cost of Capital and the Return on Corporate Investment.” Journal of Finance
54(6): 1939-1967.

Fama, E. F. and K. R. French (2004). “The Capital Asset Pricing Model: Theory and Evidence.” The Journal of Economic Perspectives
18(3): 25-46.

Flaschel, P. (2015). “Goodwin’s MKS system: a baseline macro model.” Cambridge Journal Of Economics
39(6): 1591-1605.

Forrester, J. W. (1968). “Industrial Dynamics-After the First Decade.” Management Science
14(7): 398-415.

Forrester, J. W. (1971). World Dynamics. Cambridge, MA, Wright-Allen Press.

Forrester, J. W., W. L. Gilbert, et al. (1974). “The Debate on “World Dynamics”: A Response to Nordhaus.” Policy Sciences
5(2): 169-190.

Giraud, G. and M. Grasselli (2019). “Household debt: The missing link between inequality and secular stagnation.” Journal of Economic Behavior & Organization.

Goodwin, R. M. (1966). “Cycles and Growth: A growth cycle.” Econometrica
34(5 Supplement): 46.

Goodwin, R. M. (1967). A growth cycle. Socialism, Capitalism and Economic Growth. C. H. Feinstein. Cambridge, Cambridge University Press: 54-58.

Goodwin, R. M. (1982). Essays in economic dynamics by R. M. Goodwin. London, London : Macmillan.

Grasselli, M. and B. Costa Lima (2012). “An analysis of the Keen model for credit expansion, asset price bubbles and financial fragility.” Mathematics and Financial Economics
6: 191-210.

Grasselli, M. R. and A. Maheshwari (2017). “A comment on ‘Testing Goodwin: growth cycles in ten OECD countries’.” Cambridge Journal of Economics
41(6): 1761-1766.

Harvie, D., M. A. Kelmanson, et al. (2007). “A Dynamical Model of Business-Cycle Asymmetries: Extending Goodwin.” Economic Issues
12(1): 53-92.

Hicks, J. R. (1949). “Mr. Harrod’s Dynamic Theory.” Economica
16(62): 106-121.

Hicks, J. R. (1950). A Contribution to the Theory of the Trade Cycle. Oxford, Clarendon.

Ireland, P. N. (2011). “A New Keynesian Perspective on the Great Recession.” Journal of Money, Credit, and Banking
43(1): 31-54.

Keen, S. (1995). “Finance and Economic Breakdown: Modeling Minsky’s ‘Financial Instability Hypothesis.’.” Journal of Post Keynesian Economics
17(4): 607-635.

Keen, S. (2006). The Need and Some Methods for Dynamic Modelling in Post Keynesian Economics. Complexity, Endogenous Money and Macroeconomic Theory: Essays in Honour of Basil J. Moore. M. Setterfield. Edward Elgar, Cheltenham: 36-59.

Keen, S. (2015). “The Macroeconomics of Endogenous Money: Response to Fiebiger, Palley & Lavoie.” Review of Keynesian Economics
3(2): 602 – 611.

Kermack, W. O., A. G. McKendrick, et al. (1927 [1997]). “A contribution to the mathematical theory of epidemics.” Proceedings of the Royal Society of London A
115: 700–721.

Landesmann, M., R. Goodwin, et al. (1994). Productivity Growth, Structural Change and Macroeconomic Stability. Economic growth and the structure of long-term development: Proceedings of the IEA conference held in Varenna, Italy. New York, St. Martin’s Press: 205-240.

Lavoie, M. (2014). SFC modeling isn’t always easy! UMKC Conference on the Stock Flow Consistent Approach. UMKC, UMKC.

Lavoie, M. and W. Godley (2001). “Kaleckian Models of Growth in a Coherent Stock-Flow Monetary Framework: A Kaldorian View.” Journal of Post Keynesian Economics
24(2): 277-311.

Li, T.-Y. and J. A. Yorke (1975). “Period Three Implies Chaos.” The American Mathematical Monthly
82(10): 985-992.

Lucas Jr, R. E. and T. J. Sargent (1978). After Keynesian Macroeconomics. After The Phillips Curve: Persistence of High Inflation and High Unemployment. F. E. Morris. Boston, Federal Reserve Bank of Boston.

Marx, K. (1867). Capital. Moscow, Progress Press.

McLeay, M., A. Radia, et al. (2014). “Money creation in the modern economy.” Bank of England Quarterly Bulletin
2014 Q1: 14-27.

Meadows, D. H., J. Randers, et al. (1972). The limits to growth. New York, Signet.

Minsky, H. P. (1957). “Monetary Systems and Accelerator Models.” The American Economic Review
47(6): 860-883.

Minsky, H. P. (1977). “The Financial Instability Hypothesis: An Interpretation of Keynes and an Alternative to ‘Standard’ Theory.” Nebraska Journal of Economics and Business
16(1): 5-16.

Minsky, H. P. (1982). Can “it” happen again? : essays on instability and finance. Armonk, N.Y., M.E. Sharpe.

Mirowski, P. (2020). The Neoliberal Ersatz Nobel Prize. Nine Lives of Neoliberalism. D. Plehwe, Q. Slobodian and P. Mirowski. London, Verso: 219-254.

Nordhaus, W. D. (1973). “World Dynamics: Measurement Without Data.” The Economic Journal
83(332): 1156-1183.

Nordhaus, W. D., R. N. Stavins, et al. (1992). “Lethal Model 2: The Limits to Growth Revisited.” Brookings Papers on Economic Activity
1992(2): 1-59.

Pomeau, Y. and P. Manneville (1980). “Intermittent transition to turbulence in dissipative dynamical systems.” Communications in Mathematical Physics 74: 189-197.

Prescott, E. C. (1999). “Some Observations on the Great Depression.” Federal Reserve Bank of Minneapolis Quarterly Review 23(1): 25-31.

Quiggin, J. (2010). Zombie Economics: How Dead Ideas Still Walk Among Us. Oxford, Princeton University Press.

Romer, P. (2016). “The Trouble with Macroeconomics.” https://paulromer.net/trouble-with-macroeconomics-update/WP-Trouble.pdf.

Samuelson, P. A. (1939). “Interactions between the multiplier analysis and the principle of acceleration.” Review of Economics and Statistics 20: 75-78.

Samuelson, P. A. (1939). “A Synthesis of the Principle of Acceleration and the Multiplier.” Journal of Political Economy 47(6): 786-797.

Sargent, T. J. and J. Stachurski (2020). Advanced Quantitative Economics with Python.

Sargent, T. J. and J. Stachurski (2020). Quantitative Economics with Python.

Sargent, T. J. and N. Wallace (1976). “Rational Expectations and the Theory of Economic Policy.” Journal of Monetary Economics 2(2): 169-183.

Srinivasan, N. (2008). “From the Great Inflation to the Great Moderation: A Literature Survey.” Journal of Quantitative Economics, New Series 6(1-2): 40-56.

Stock, J. H. and M. W. Watson (2002). Has the business cycle changed and why? NBER Working Papers, National Bureau of Economic Research.

Walters, A. A. (1966). “Incremental Capital-Output Ratios.” The Economic Journal 76(304): 818-822.

Blatt’s Dynamic Economic Systems, long out of print, is now available on Kindle. Buy it now!

I am writing a chapter for a book on economic dynamics right now, and as I did so, I cited John Blatt’s Dynamic Economic Systems, and lamented the fact that this brilliant book was out of print. But you never know, so I decided to check.

I was delighted to find that this is no longer the case. On July 29, 2019, Dynamic Economic Systems: A Post Keynesian Approach was republished in Kindle format by Routledge (a Taylor and Francis imprint): here’s the link to it on their website: https://www.taylorfrancis.com/books/e/9781315496290.

I was a bit dubious when I first saw the link, because the abstract of the book—both on Amazon, and on Taylor & Francis’s website—was “The future of the Common Law judicial system in Hong Kong depends on the perceptions of it by Hong Kong’s Chinese population, judicial developments prior to July 1, 1997, when Hong Kong passes from British to Chinese control, and the Basic Law. These critical issues are addressed in this book.” Really. But Blatt’s book is SO good that I thought I’d risk wasting the £24.66 and becoming more informed than I needed to be on Hong Kong.

Fortunately, there it was in all its glory, and brilliantly typeset—far better than the scanned version I’ve been relying upon for many years.

I urge my Patrons to buy a copy: I knew a lot of the history of economic thought and the mathematical methods that Blatt covers before I read this book, but Blatt taught me many things I didn’t know, and he gave me insights that I probably would never have acquired on my own. In particular, I had read Goodwin’s “A growth cycle” paper {Goodwin, 1967 #279} before reading Blatt, but really couldn’t make head nor tail of it: Goodwin was not a good writer. Fortunately, Blatt’s explanation of it was so clear, and so encouraging, that I used Goodwin’s model as the basis of my model of Minsky’s Financial Instability Hypothesis {Keen, 1995 #3316}. That model has been the basis of everything I’ve done in economics since. I wouldn’t have managed it without Blatt.
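For readers curious what Blatt made so clear, Goodwin’s growth cycle is just a two-variable predator-prey system in the wage share and the employment rate. Here is a minimal Python sketch of the standard textbook form of those two equations; the parameter values are my own illustrative assumptions, not numbers from Goodwin or Blatt:

```python
# Illustrative sketch of Goodwin's (1967) growth-cycle model: the wage
# share (omega) and the employment rate (lam) form a predator-prey pair.
# All parameter values below are assumptions chosen for illustration.

def goodwin_step(omega, lam, dt, sigma=3.0, alpha=0.02, beta=0.01,
                 gamma=0.8, rho=0.9):
    """One Euler step of:
         d(lam)/dt   = lam   * ((1 - omega)/sigma - alpha - beta)
         d(omega)/dt = omega * (rho*lam - gamma - alpha)
    sigma: capital-output ratio; alpha: labour productivity growth;
    beta: population growth; rho*lam - gamma: a linear Phillips curve."""
    dlam = lam * ((1.0 - omega) / sigma - alpha - beta)
    domega = omega * (rho * lam - gamma - alpha)
    return omega + dt * domega, lam + dt * dlam

def simulate(omega0=0.85, lam0=0.90, dt=0.005, years=100):
    omega, lam = omega0, lam0
    path = [(omega, lam)]
    for _ in range(int(years / dt)):
        omega, lam = goodwin_step(omega, lam, dt)
        path.append((omega, lam))
    return path

path = simulate()
omegas = [p[0] for p in path]
# The wage share cycles around (but never converges to) its equilibrium
# value 1 - sigma*(alpha + beta) = 0.91 under these parameters.
print(round(min(omegas), 3), round(max(omegas), 3))
```

Running it shows the wage share and employment rate orbiting their equilibrium values rather than settling onto them: the equilibrium is a centre, not an attractor, which is exactly the “essentially dynamic” behaviour Blatt insists economics must confront.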

The book is superbly written, and Blatt handles the problem of communicating mathematics to economists—who know far less mathematics than they think they do—very well. Even if you can’t cope with the mathematics, it provides the best overview of the history of economic thought and the development (and retardation!) of economics that I have ever read. I aspire to write as good a book as this when I finally pen my “magnum opus”.

I quote the first part of the book’s introduction below, to give you a feeling for its content, and Blatt’s extraordinarily clear writing style. Before it, I’ll provide one of my favorite anecdotes, in which Blatt stars: I wasn’t there to witness this event, unfortunately, but I heard about it from many colleagues when I was studying both economics and mathematics while working at the University of New South Wales in the late 1980s and early 1990s. The antagonist in this story, Murray Kemp, was also a colleague then, and I have to emphasise that Murray was—and is—a thorough gentleman, and someone I was very glad to call a friend when I was there (he also beat me at tennis every time we played, so he was a top class sporting companion too). Murray suffers in the anecdote, and I can think of many other people who I’d rather had been on the sharp end of Blatt’s famously sharp tongue than the very decent Murray. But the story is too good not to share, in this context.

Murray Kemp and John Blatt were both Professors at the University of New South Wales, Murray of Economics and John of Mathematics. Murray had been nominated for the “Nobel Prize in Economics” for his work on international trade theory, while Blatt had been nominated for the Nobel Prize in Physics. Neither got the award, but because of this coincidence, Murray regarded John as his one true peer at UNSW, and invited John to attend a seminar of his at the Department of Economics.

After Murray finished delivering his paper, he asked John what he thought of it. Blatt replied, in his heavily Austrian-accented English:
    “Zat is ze greatest load of rubbish that I have sat through in my professional career. If this is advanced economics, then there is something seriously wrong with the state of economics, and I intend finding out what it is.”

Some years later, this brilliant book was born. Buy a copy: you won’t regret it.

Introduction

A. Purpose and limitations

The bicentenary of The Wealth of Nations has passed, and so has the centenary of the neoclassical revolution in economics. Yet the present state of dynamic economic theory leaves very much to be desired and appears to show little sign of significant improvement in the near future.

From the time of Adam Smith, economic theory has developed in terms of an almost universal concentration of thought and effort on the study of equilibrium states. These may be either states of static equilibrium, or states of proportional and balanced growth. Truly dynamic phenomena, of which the most striking example is the trade cycle, have been pushed to the sidelines of research.

In defense of this concentration on equilibrium and the neglect of true dynamics, there are two arguments:

1. Statics or (what comes to much the same thing) balanced proportional growth is much easier to handle theoretically than true dynamic phenomena. A good understanding of statics is a necessary prerequisite for the study of dynamics. We must learn to walk before we can attempt to run.

2. In any case, while no economic system is ever in strict equilibrium, the deviations from such a state are small and can be treated as comparatively minor perturbations. The equilibrium state is, so to speak, the reference state about which everything turns and toward which the system gravitates. Market prices fluctuate up and down, but there exist “natural prices” about which this fluctuation occurs, and these natural prices can be determined directly, by ignoring the fluctuations altogether and working as if strict equilibrium obtained throughout.

Such arguments did carry a great deal of conviction two hundred years ago, when the basic ideas of the science of economics were being formulated for the first time.

However, it is impossible to ignore the passage of two hundred years. A baby is expected to first crawl, then walk, before running. But what if a grown-up man is still crawling? At present, the state of our dynamic economics is more akin to a crawl than to a walk, to say nothing of a run. Indeed, some may think that capitalism as a social system may disappear before its dynamics are understood by economists.

It is possible, of course, that this deplorable lack of progress is due entirely to the technical difficulty of investigating dynamic systems and that economists, by following up the present lines of research, will eventually, in the long long long run, develop a useful dynamic theory of their subject.

However, another possibility must not be ignored. It is by no means true that all dynamic behavior can be understood best, or even understood at all, by starting from a study of the system in its equilibrium state. Consider the waves and tides of the sea. The equilibrium state is a tideless, waveless, perfectly flat ocean surface. This is no help at all in studying waves and tides. We lose the essence of the phenomenon we are trying to study when we concentrate on the equilibrium state. Exactly the same is true of meteorology, the study of the weather. Everything that matters and is of interest to us happens because the system is not in equilibrium.

In the first example, the equilibrium state is at least stable, in the sense that the system tends to approach equilibrium in the absence of disturbances. But there is no such stability in meteorology. The input of energy from the sun, the rotation of the earth, and various other effects keep the system from getting at all close to equilibrium. Nor, for that matter, would we wish it to approach equilibrium. The true equilibrium state, in the absence of heat input from the sun, is at a temperature where all life comes to a stop! The heat input from the sun is the basic power source for winds, clouds, etc., for everything that makes our weather. The heat input is very steady, but the resulting weather is not steady at all. None of this can be understood by concentrating on a study of equilibrium.

There exist known systems, therefore, in which the important and interesting features of the system are “essentially dynamic,” in the sense that they are not just small perturbations around some equilibrium state, perturbations which can be understood by starting from a study of the equilibrium state and tacking on the dynamics as an afterthought.

If it should be true that a competitive market system is of that kind, then the lack of progress in dynamic economics is no longer surprising. No progress can then be made by continuing along the road that economists have been following for two hundred years. The study of economic equilibrium is then little more than a waste of time and effort.

This is the basic contention of Dynamic Economic Systems. Its main purpose is to present arguments for this contention and to start developing the tools which are needed to make progress in understanding truly dynamic economic systems.

A subsidiary purpose, related to the main one but not identical with it, is to present a critical survey of the more important existing dynamic economic theories: in particular, the theories of balanced proportional growth and the theories of the trade cycle. This survey serves three purposes:

1. It is useful in its own right, as a summary of present views, and it can be used as such by students of economics.

2. It establishes contact between the new approach and the literature.

3. It is necessary to clear the path to further advance, which is currently blocked by beliefs which are very commonly held, but which are not in accordance with the facts.

The third point needs some elaboration. The main enemy of scientific progress is not the things we do not know. Rather, it is the things which we think we know well, but which are actually not so! Progress can be retarded by a lack of facts. But when it comes to bringing progress to an absolute halt, there is nothing as effective as incorrect ideas and misleading concepts. “Everyone knows” that economic models must be stable about equilibrium, or else one gets nonsense. So, models with unstable equilibria are never investigated! Yet, in this as in so much else, what “everyone knows” happens to be simply wrong. Such incorrect ideas must be overturned to clear the path to real progress in dynamic economics.

This is a book on basic economic theory, addressed to students of economics. It is not a book on “mathematical economics,” and even less so a book on mathematical methods in economics. On the contrary, the mathematical level of this book has been kept deliberately to an irreducible, and extremely low, minimum. Chapters are literary, with mathematical appendices. The level of mathematics in the literary sections is the amount of elementary algebra reached rather early in high school; solving two linear equations in two unknowns is the most difficult mathematical operation used. In the mathematical appendices, the level is second year mathematics in universities; the meaning of a matrix, of eigenvalues, and of a matrix inverse are the main requirements. Whenever more advanced mathematics is required (and this is very rarely indeed) the relevant theorems are stated without proof, but with references to suitable textbooks. This very sparing use of mathematics should enable all economics students, and many laymen, to read and understand this book fully. Those who cannot follow the mathematical appendices must take the mathematics for granted, but if they are prepared to do that, they will lose nothing of the main message.

This does not mean that mathematics is unimportant or of little help, when used properly. This unfortunately all too common belief is incorrect. To make real progress in dynamic economics, researchers must know rather more, and somewhat different, mathematics from what is commonly taught to students of economics. But while mathematics is highly desirable, probably essential, to making further progress, the progress that has already been made can be phrased in terms understandable to people without mathematical background. This reliance on nonmathematical diction has not been easy for someone to whom mathematics is not some arcane foreign language, but rather his normal mode of thinking; only time can tell to what extent the effort has been successful.

Another limitation of this work is the restriction to classically competitive conditions in most cases. There is no discussion of monopoly, oligopoly, restrictions on entry, or related matters which are stressed, quite rightly, in the so-called post-Keynesian literature. This omission is not to be interpreted as disagreement with, or lack of sympathy for, the contentions of that school. Rather, the post-Keynesians have been entirely too kind and indulgent toward the neoclassical doctrine. The assumption of equilibrium has indeed been attacked (Robinson 1974, for example), but only in rather general terms. The more prominent part of the post-Keynesian critique has been that conventional economic theory bases itself on assumptions (e.g., perfect competition, perfect market clearing, certainty of the future) which are invalid in our time, though some of them (not perfect certainty of the future!) may have been appropriate a century ago. We agree with this criticism, but that is not the point we wish to make in this book.

Rather, even under its own assumptions, conventional theory is incorrect. A competitive economic system with market clearing and certainty of the future does not behave in the way that theory claims it should behave. It is not true that, under these assumptions, the equilibrium state is stable and a natural center of attraction to which the system tends to return of its own accord. Obviously, in any such discussion, questions of oligopoly, imperfect market clearing, etc. are irrelevant. It is for that reason, and only for that reason, that such matters are ignored here.

In this view, the rise of oligopoly toward the end of the nineteenth century was not just an accident or an aberration of the system. Rather, it was a natural and necessary development, to be expected on basic economic grounds. John D. Rockefeller concealed his views on competition and paid lip service to prevailing ideas when it suited him. But he was a genius, who understood the system very well indeed and proved his understanding through phenomenal practical success. Alfred Marshall’s Principles of Economics was written and refined at the same time that Rockefeller established the Standard Oil trust and piloted it to an absolute dominance of the oil industry. There can be little doubt who had the better understanding of the true dynamics of the system.

It follows that the theory of this book should not be applied directly to conditions of monopoly or oligopoly, which are so prevalent in the twentieth century. However, the theory is directly relevant to something equally prevalent, namely the creation of economic myths and fairy tales, to the effect that all our present-day ills, such as unemployment and inflation, are due primarily to the mistaken intervention by the state in the working of what would otherwise be a perfect, self-adjusting system of competitive capitalism. This system was in power in the nineteenth century. It is well-known that it failed to ensure either common equity (read Charles Dickens on the conditions under which little children were worked to death!) or economic stability: There were “panics” every ten years or so. The theory of this book shows that the failure of stability was not an accident, but rather was, and is, an inherent and inescapable feature of a freely competitive system with perfect market clearing. The usual equilibrium analysis assumes stability from the start, whereas actually the equilibrium is highly unstable in the long run. The economic myths pushed by so many interested parties are not only in contradiction to known history, but also to sound theory.

 

Blatt, John M., Dynamic Economic Systems (pp. 4-8). Taylor and Francis. Kindle Edition.