The Mathematical Model of Modern Monetary Theory 2

The previous post finished with the observation that, since the sum of all Equity is zero, and the banking sector must be in positive equity, part of the remainder of the economy—the government, or the non-bank public—must be in negative equity. So which sector is better equipped to handle that: the non-bank public, or the government?

This raises the question perennially thrown at those who advocate government spending: “How are you going to pay for it?”, or “Where’s the money going to come from?”. To answer both questions, we first need to know how money is created in general.

Since money, exclusively in this model and primarily in the real world, is the sum of the Banking Sector’s Liabilities and Equity, any action which increases Bank Liabilities and Equity creates money. Since every transaction is recorded twice, operations which increase the money supply must therefore occur on both the Assets and the Liabilities/Equity sides of the banking sector’s ledger. Operations which occur exclusively on either the Assets side, or the Liabilities/Equity side, shift money between accounts and do not create money.

So the answer to the “where’s the money going to come from?” question is “from any operation which increases the Assets and Liabilities/Equity of the banking sector”. Equally, any operation that reduces Assets and Liabilities/Equity destroys money.

Only 5 of the 13 flows in this model affect both the Asset and Liabilities/Equity sides of the banking sector’s ledger: Lending (and Repayment); Government spending (and Taxation); and interest payments on Treasury Bonds. These are shown in the top 5 rows of Figure 1. All other operations are either Liability/Equity swaps, or Asset swaps, and they don’t create money.
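This money-creation rule can be sketched in a few lines of code (an illustrative toy, not the Minsky model itself; the account names are made up): an operation’s effect on the money supply is the change it makes to the sum of the Banking Sector’s Liabilities and Equity, and operations confined to one side of the ledger leave that sum untouched.

```python
# Toy sketch of the money-creation rule: an operation creates money only if
# it changes both the Asset side and the Liability/Equity side of the banking
# sector's ledger. Account names and amounts are illustrative.

def money_created(d_assets, d_liab_equity):
    """Change in the money supply = change in Bank Liabilities + Equity.

    d_assets / d_liab_equity: dicts of {account: change} for one operation.
    Double entry requires the two sides to change by equal totals."""
    assert abs(sum(d_assets.values()) - sum(d_liab_equity.values())) < 1e-9, \
        "operation violates double-entry bookkeeping"
    return sum(d_liab_equity.values())

# Bank lending: +Loans (asset), +Firm deposits (liability) -> creates money
lend = money_created({"Loans": 100.0}, {"FirmDeposits": 100.0})

# Bond sale to banks: -Reserves, +Bonds -> pure asset swap, no money created
bond_sale = money_created({"Reserves": -100.0, "TreasuryBonds": 100.0}, {})

print(lend, bond_sale)  # 100.0 created by lending, 0.0 by the bond sale
```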

Figure 1: The Banking Sector’s Ledger

Operations that don’t create money include the usual suspects—payment of interest on loans by firms (which transfers money from the Firm Sector to the Banking Sector), purchases of goods from the Firm Sector by workers, capitalists and bankers. But they also include two operations that figure large in conventional arguments about how to pay for government spending: sales of Treasury Bonds to the finance sector; and purchases of Treasury Bonds from the finance sector by the Central Bank in “Open Market Operations” (which are primarily used to control the rate of interest). Neither of these operations creates money.

They are asset swaps: the sale of Treasury Bonds to the finance sector (the “Banking Sector” in this model) reduces Reserves and increases the Banking Sector’s stock of Treasury Bonds. The purchase of Treasury Bonds from the Banking Sector by the Central Bank reduces the Banking Sector’s stock of Treasury Bonds and increases the Banking Sector’s Reserves. Neither operation affects the amount of money in the economy—the sum of the Liabilities plus Equity of the Banking Sector.

Since it’s an asset swap for the Banking Sector to buy Treasury Bonds using Reserves, the answer to “where’s the money going to come from?” is “from operations that increase the sum of Reserves and Bonds”. You can work this out by adding up the entries in the Reserves and Bonds columns of Figure 1: the Banking Sector’s assets, Reserves plus Bonds, will grow if government spending plus interest payments on Treasury Bonds exceed taxation:

\[
\frac{d}{dt}\left(\text{Reserves} + \text{Bonds}_{B}\right) = \text{Spend} + \text{InterestBonds} - \text{Tax}
\]

The Reserves that are used to buy the Bonds thus come from the deficit itself. The deficit—the extent to which government spending and interest payments exceed taxation—creates money in private sector bank accounts (the Firm Sector and Bank Equity only in this model). This boosts the Liabilities and Equity of the Banking Sector. The corresponding Asset that is increased by the deficit is the Reserve accounts of the banking sector. If no interest is paid on Reserves—the “usual” situation, and sometimes made worse by charging negative interest on Reserves in the false belief that this will encourage Banks to lend more—then the Banks have a positive incentive to use these reserves to buy Treasury Bonds instead.
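The claim that the deficit itself supplies the Reserves used to buy the Bonds can be checked with a hedged numerical sketch (illustrative flow values, not the model in this post): however quickly banks swap Reserves into Bonds, the sum Reserves plus Bonds always equals the accumulated deficit.

```python
# Illustrative Euler simulation (not the Minsky model): the deficit adds to
# Reserves, banks swap Reserves into Bonds, and Reserves + Bonds always
# equals the accumulated deficit. All flow values are made up for the sketch.
spend, interest_bonds, tax = 20.0, 2.0, 18.0  # $/year
swap_rate = 0.5     # fraction of Reserves swapped into Bonds per year
dt, steps = 0.01, 1000                         # 10 simulated years

reserves = bonds = cumulative_deficit = 0.0
for _ in range(steps):
    deficit = spend + interest_bonds - tax     # $4/year here
    swap = swap_rate * reserves * dt           # asset swap: -Reserves, +Bonds
    reserves += deficit * dt - swap
    bonds += swap
    cumulative_deficit += deficit * dt

print(round(reserves + bonds, 6), round(cumulative_deficit, 6))  # equal: 40.0
```

The swap rate only shifts the split between Reserves and Bonds; it cannot change their total, which is the point of the asset-swap argument above.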

So the answer to the “Where’s the money going to come from?” to pay for the deficit question is that it comes from the deficit itself: the deficit creates money that can be spent in the private sector; and it creates Reserves, which are (usually) non-interest earning Assets of the Banking Sector that are liabilities of the Central Bank—see Figure 2.

Figure 2: The Central Bank’s Ledger

Reserves are Assets of the Banking Sector that earn it no income. But if the Treasury issues bonds to cover the deficit—if the total value of Treasury Bonds offered for sale is equal to the deficit itself—then the Banking Sector is offered a deal to swap a non-income-earning asset for an income-earning one.

What do you think the banks would do, when offered that deal? They take it, of course: this is why Treasury Bond Auctions have always been over-subscribed. Selling the bonds themselves is not a problem. It is, as Kelton emphasises in The Deficit Myth, a “no brainer” swap of a zero-income-asset for a positive-income-asset.

The money needed to buy the bonds has already been created, and is sitting in the Reserve account. Banks then transfer their funds from one Asset that earns no income (Reserves) to another Asset that does (Treasury Bonds). This Asset is a liability, not of the Central Bank, but of the Treasury itself—see Figure 3.

Figure 3: The Treasury’s Ledger

This table also shows that the increase in Reserves is caused by the fall in the Government’s equity: Reserves (and hence money) rise if the Government runs a deficit; Reserves and money fall if the Government runs a surplus:

\[
\frac{d}{dt}\text{Reserves} = \text{Spend} + \text{InterestBonds} - \text{Tax} = -\frac{d}{dt}\text{Equity}_{Government}
\]

How does the Treasury pay the interest on the bonds? The same way it pays for spending in excess of taxation: it runs up its own negative equity—see the final column in Figure 3.

This can be done in two ways, as indeed can overall deficit-financing itself: by the Treasury running a negative balance in its account at the Central Bank; or by the Treasury borrowing from the Central Bank to pay Interest on the Bonds (the second last row of Figure 3). If it does so, the negative equity from paying interest on the bonds remains, but the Treasury Account at the Central Bank can be kept non-negative.

A third option is that the Central Bank buys the bonds off the Banking Sector in Open Market Operations (or QE). If it does so, the amount of interest the Treasury needs to pay falls. Also, since the Treasury is the effective owner of the Central Bank, it doesn’t need to pay interest to the Central Bank on its holdings of Treasury Bonds—or if it does, the interest payments come back to the Treasury in Central Bank profits.

So the Government can run a deficit, and pay interest on the Treasury Bonds issued to cover it (not finance it: the deficit is self-financing in that it creates money), so long as it is willing to countenance being in negative equity. As noted in the previous post, because Banks must be in positive equity, the non-Banking sectors (and that includes the Government) must be in negative equity. For the Government to achieve positive equity therefore—which seems desirable, if you take the partial view of the economy epitomised by Representative Hern’s motion that “deficits are unsustainable, irresponsible, and dangerous”—the non-Bank private sector must be pushed into even greater negative equity.

Does that sound like a good idea?

To show the consequences for the economy of the government running a deficit or a surplus, it’s necessary to go beyond the purely structural equations used so far, and enable simulations. In what follows, I’ve built an extremely simple model of monetary dynamics in which all expenditures depend on the amounts of money in relevant bank accounts. There are many other ways these expenditures could be modelled, but this way ensures fairly easily that crazy outcomes—like workers having negative bank balances but still spending their wages—don’t happen in this very simple model.

For example, I relate consumption by workers to the level of workers’ deposit accounts at the private banks, using the engineering concept of a “time constant”. This is a number that tells you how long an action would take to run an account down to zero, if it’s removing money from the account, or to double it, if it’s adding money to an account. A time constant of 0.02 for workers’ consumption says that if workers consumed at a constant rate and there were no other inflows or outflows into their accounts, then they would run their accounts down to zero in 1/50th of a year—roughly a week. Higher values for capitalists and bankers—say 2 for capitalists and 5 for bankers—assert that it would take 2 years and 5 years respectively for them to run their accounts down to zero through consumption alone.

You divide the relevant stock (the level of bank accounts, in the case of consumption) by the time constant to get the annual flow—see Figure 4.

Figure 4: Equations for consumption
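The rule can be sketched in code (an illustrative toy, not the Minsky model; the $100 balance is made up), using the time constants quoted above:

```python
# Time-constant rule from the text: annual flow = stock / time constant.
# Time constants quoted in the post: 0.02 (workers), 2 (capitalists), 5 (bankers).
tau_workers, tau_capitalists, tau_bankers = 0.02, 2.0, 5.0

def consumption(deposits, tau):
    """Annual consumption flow out of a deposit account with time constant tau."""
    return deposits / tau

flow_w = consumption(100.0, tau_workers)  # $100 of deposits -> $5000/yr flow
years_to_empty = 100.0 / flow_w           # at that constant rate: 0.02 years
print(flow_w, years_to_empty)             # 5000.0 0.02
```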

Similarly, a time constant of 9 for bank lending says that if this was the only inflow affecting the level of debt, and it went on at a constant rate, debt would double in 9 years. Repayment with the same time constant means that the level of (private) debt remains constant.

Figure 5: Modelling interest payments, lending and repayment

GDP is treated as the turnover of money in the firm sector (it can also be treated as the sum of consumption, investment—which is entirely debt-financed in this model—and net government spending, but that makes it possible to end up with negative sums in deposit accounts, unless a much more complicated model is constructed).

Figure 6: Modelling GDP and income distribution

Government spending and taxation are treated as simple percentages of GDP. For the moment, I have not considered how they are financed, so the flows BondB, BondCB and InterestBonds are not wired up. The equations that define the system’s flows at this point are shown in Figure 7. At this stage, sales of Treasury Bonds to the Banking Sector, Central Bank purchases of Bonds from the Banks, and Treasury borrowing from the Central Bank, are not defined.

Figure 7: All the flow equations in the model (without bond sales or interest on bonds)

With no explicit financing of a government deficit, the negative equity that this causes for the government turns up as a negative balance in the Treasury’s account at the Central Bank—see Figure 8, which shows the impact of a 2% of GDP deficit for 60 years on the Treasury. It accumulates negative equity of $160, and that is entirely due to its account at the Central Bank being negative to the tune of $160.

Figure 8: The Treasury’s Ledger with no bond sales: negative Treasury Account and Treasury Equity

Figure 9 shows the impact of a 2% of GDP deficit for 60 years from the Central Bank’s perspective, and the key point is that the negative equity of the Treasury is the same magnitude as the positive Reserves of the Banking Sector: the accumulated deficits of the Government are precisely equal to the accumulated Reserves of the Banking Sector.

Figure 9: The Central Bank’s Ledger with no Bond sales: negative Treasury precisely equals positive Reserves

How does the picture change if we require that the Treasury’s account at the Central Bank can’t go into overdraft? Then the Treasury has to sell Bonds equal to its Deficit, and borrow from the Central Bank to pay interest on those bonds.

Figure 10: Bond sales are equal to the deficit, no Central Bank purchases of Bonds from the Banking Sector

Running this model to the same point as the previous one, where Treasury Equity reaches minus $160, shows that this negative equity consists of $115 in Bonds and $45.2 in Loans from the Central Bank (see Figure 11; the Treasury account at the Central Bank remains at zero). The Bond sales represent the accumulated government deficit over time, while the Loans from the Central Bank represent the accumulated interest on those bonds.

Figure 11: Treasury books with Bond sales covering the deficit
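This split—Bonds accumulating the deficit, Central Bank loans accumulating the bond interest—can be sketched numerically. This is a hedged illustration: the deficit flow and bond rate below are made up, not calibrated to the post’s $160 figure.

```python
# Hedged sketch: Bonds accumulate the deficit itself, while Loans from the
# Central Bank accumulate the interest the Treasury pays on those Bonds.
# Parameter values are illustrative, not those of the model in the post.
deficit_flow = 2.0       # $/yr, stands in for Spend - Tax
r_bond = 0.03            # assumed interest rate on Treasury Bonds
dt, steps = 0.01, 6000   # 60 simulated years

bonds = cb_loans = 0.0
for _ in range(steps):
    interest = r_bond * bonds     # paid to the Banking Sector...
    cb_loans += interest * dt     # ...by borrowing from the Central Bank
    bonds += deficit_flow * dt    # bond sales cover the deficit itself
negative_equity = bonds + cb_loans
print(round(bonds, 2), round(cb_loans, 2), round(negative_equity, 2))
```

The Treasury’s negative equity is the sum of the two stocks, exactly as in Figure 11.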

Can the Central Bank afford to lend the money to pay the interest on Treasury Bonds to the Treasury? Yes: as Figure 12 shows, the loans to the Treasury are an Asset of the Central Bank. It has the capacity to expand its balance sheet indefinitely, and none of the operations shown in this model affect its equity at all.

Figure 12: Central Bank books with Bond sales and interest payments on Bonds

I have to note that this was a surprise to me: I had guessed that the government’s negative equity would be carried by the Central Bank, and the fact that a Central Bank—unlike a private one—can operate with negative equity (Bholat 2016) would be what enabled the government as a whole to sustain negative equity. But it turned out that my guess was wrong: the Central Bank simply acts as an enabler of and conduit for the Treasury’s negative equity.

Does the fact that the Treasury is borrowing from the Central Bank (in this model) cause any difficulty? No, because while private individuals can’t pay interest on loans by borrowing from banks—without that interest ballooning with further interest and driving them bankrupt—the Treasury can pay interest on Bonds to the Banking Sector by borrowing from the Central Bank because, technically and legally, the Treasury owns the Central Bank. It therefore doesn’t have to pay interest on any loans from the Central Bank. If it did, it would get that money back in dividend payments from the Central Bank anyway.

What happens if the Central Bank buys all the Bonds from the Banks? Then the negative equity of the Treasury consists entirely of its debt to the Central Bank (see Figure 13).

Figure 13: Treasury’s books with Central Bank Open Market Operations purchasing all Treasury Bonds

Since those Bonds have been purchased off the Banking Sector, the Banking Sector has Reserves of $160 (see Figure 14). This is not as desirable a situation for the Banks as when they own the bonds, because with no Treasury Bonds they receive no interest payments from the Government. Far from the finance sector in general having “Bond Vigilantes” ready to deny the government bond sales if they’re worried about the state of government finances, the “Bond Vigilantes” are vigilantly on the lookout for sales of Treasury Bonds, so that they can swap out of a non-income-earning asset and into an income-earning one.

Figure 14: Banking Sector’s books with OMOs buying 100% of Bonds

In the next post, I’ll consider what the impact is of different fiscal regimes: the government running a deficit (as the USA has for most of the last 120 years—roughly 2.5% of GDP); the government running a surplus; the government running a surplus and the private sector borrowing from the Banking Sector; the government running a surplus and the private sector reducing its debt to the Banking Sector; and the government running a deficit while the private sector reduces its debt to the Banking Sector.

The Mathematical Model of Modern Monetary Theory

One Mathematical Model of Modern Monetary Operations

I confess immediately that I chose the title and subtitle for this post because their acronyms are palindromes.

The subtitle is more accurate than the title, because this model considers only the monetary aspects of MMT: the Job Guarantee and inflation management components are not yet incorporated. But the monetary assertions of MMT remain in dispute in economic and political circles, so it is worth putting these into a mathematical model where their veracity can be tested.

The primary stimulus for developing the model was the publication of Stephanie Kelton’s The Deficit Myth. Stephanie has written the book for non-technical readers, and she’s done a very good job: it’s a very easy read that explains why many conventional wisdoms about government spending are wrong. But MMT is facing heavy resistance in political and economic circles, with my favourite to date being a motion before the US Congress, posted by Representative Kevin Hern, to resolve:

That the House of Representatives (1) realizes that deficits are unsustainable, irresponsible, and dangerous; and (2) recognizes— (A) that the implementation of Modern Monetary Theory would lead to higher deficits and higher inflation; and (B) the duty of the House of Representatives to condemn Modern Monetary Theory.

The objective of this series of posts is to allow the assessment of the first part of this motion—the assertion that “deficits are unsustainable, irresponsible, and dangerous”.

The models in this post are built in the Open Source system dynamics program Minsky, whose unique feature is the capacity to build models of financial flows using what are called Godley Tables (in honour of Wynne Godley, the pioneer of stock-flow-consistent modelling). These tables enforce the “law of accounting”, that Assets minus Liabilities equals Equity (see Figure 1).

Figure 1: A blank Godley Table

Once an account is flagged as an “Asset” for one entity, Minsky knows that it has to also be shown as a “Liability” for another entity. This makes it possible to take an integrated look at the financial system, which allows us to assess Hern’s motion from the perspective of the entire monetary system, and not just the Government’s view of it.

An Integrated View of Deficits, Surpluses, and Equity

This Minsky model is a simple but complete model of a domestic monetary system. It has six sectors, which can be grouped into five aggregates:

  • The Treasury, and the Central Bank, which together constitute the Government Sector;
  • The Banking Sector;
  • The Firm Sector, Capitalists and Workers, which constitute the “NonBank Private Sector”;
  • The NonBank Private Sector and the Banking Sector, which constitute the “NonGovernment Sector”; and
  • The sum of the Government and the NonBank Private Sector, which constitute the “NonBank Sector”.

For simplicity, taxes are levied only on the Firm Sector, government spending flows only to the Firm Sector, and banks make loans only to firms (the aggregate outcome would be the same if the model were generalized to apply taxes, spending and loans to all sectors; it would just make the tables much harder to read).

There are just thirteen financial flows:

  • Treasury spends on firms (Spend);
  • Treasury taxes firms (Tax);
  • Treasury sells bonds to the Banking Sector to cover any deficit (BondB);
  • Treasury pays interest on Treasury Bonds owned by the Banking Sector (InterestBonds);
  • The Central Bank buys and sells Treasury Bonds in “Open Market Operations” (BondCB);
  • Banks lend to firms (Lend);
  • Firms pay interest to Banks (Interest);
  • Firms repay some debt to banks (Repay);
  • Firms hire workers (Wages);
  • Firms pay dividends (Dividends);
  • Workers buy goods from firms (ConsW);
  • Capitalists buy goods from firms (ConsK); and
  • Bankers buy goods from firms (ConsB).

Minsky provides an integrated view of how these flows interact to determine the financial position of each of the six sectors in the model in interlocking double-entry bookkeeping tables. It generates differential equations from these flows that show how the stocks—the financial accounts—change over time.
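Minsky’s translation from flows to differential equations can be illustrated with a toy sketch (this is not Minsky’s implementation, and the four flows shown are just a subset of the model’s thirteen): each flow debits its source account and credits its destination, and the derivative of each stock is the signed sum of the flows touching it.

```python
# Toy illustration (not Minsky's implementation) of how double-entry flows
# generate differential equations: each flow debits one account and credits
# another, and d(stock)/dt is the signed sum of flows touching that stock.
flows = [
    ("Treasury", "Firms",    "Spend"),
    ("Firms",    "Treasury", "Tax"),
    ("Firms",    "Workers",  "Wages"),
    ("Workers",  "Firms",    "ConsW"),
]

def odes(flows):
    rhs = {}
    for src, dst, name in flows:
        rhs.setdefault(src, []).append("-" + name)
        rhs.setdefault(dst, []).append("+" + name)
    return {acct: " ".join(terms) for acct, terms in rhs.items()}

for acct, eq in odes(flows).items():
    print(f"d({acct})/dt = {eq}")
# d(Treasury)/dt = -Spend +Tax
# d(Firms)/dt = +Spend -Tax -Wages +ConsW
# d(Workers)/dt = +Wages -ConsW
```

Because every flow appears once with each sign, the generated derivatives automatically sum to zero, which is the stock-flow-consistency property the Godley Tables enforce.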

The danger to which Hern alludes is immediately apparent when we look at the Treasury’s Equity—the final column in Figure 2: if Treasury spending plus interest payments on outstanding Treasury Bonds exceeds taxation revenue, then its Equity will fall.

Figure 2: The Treasury’s accounts

Mathematically, the rate of change of Treasury Equity is the sum of the flows Tax minus Spending minus Interest payments on Bonds held by the Banking Sector:

\[
\frac{d}{dt}\text{Equity}_{Treasury} = \text{Tax} - \text{Spend} - \text{InterestBonds}
\]

Given the initial conditions used in this post, in which Treasury Equity starts at zero, a deficit will immediately put the Treasury into negative equity.

This is not offset by the Central Bank, whose equity position is not affected at all (note: this is not what I expected when I started building this model).

Figure 3: The Central Bank’s accounts

The rate of change of the Government’s Equity is therefore equal to its surplus—or the negative of its deficit, since the government has normally been in deficit for as long as records have been kept—see Figure 4. The only sustained periods of Government surplus are:

  • The 1920s, between mid-1920 and mid-1931, a period of 11 years;
  • The immediate post-WWII period, between early 1947 and mid-1949, a period of 2 years; and
  • The late 1990s till early 2000s, between 1998 and early 2002, a period of 4 years.

Figure 4: US Government Surplus Divided by GDP. The average value is minus 2.48% of GDP

Returning to the model in this post, the impact of the Government deficit on the Government itself is entirely borne by the Treasury:

\[
\frac{d}{dt}\text{Equity}_{Treasury} = \text{Tax} - \text{Spend} - \text{InterestBonds}, \qquad \frac{d}{dt}\text{Equity}_{CentralBank} = 0
\]

Summing the Treasury and Central Bank Equity equations to show the dynamics of the Government’s equity yields:

\[
\frac{d}{dt}\text{Equity}_{Government} = \frac{d}{dt}\text{Equity}_{Treasury} + \frac{d}{dt}\text{Equity}_{CentralBank} = \text{Tax} - \text{Spend} - \text{InterestBonds}
\]

Looked at just from the point of view of the Government sector then, running deficits is clearly “unsustainable, irresponsible, and dangerous”. If the government wants to have positive equity, then it should run a surplus. It’s an open and shut case—or so it appears, when looking just at the government’s books.

But in this model (and the real economy itself), one entity’s Asset is another’s Liability. So, to know whether a government surplus is a good idea for the system as a whole, we have to ask: what is the impact of a government surplus on the rest of the economy?

The rest of the economy is the NonGovernment Sector, the sum of the Banking Sector, and the “Non-Bank Private Sector“: the three non-Government and non-Bank sectors, Firms, Capitalists, and Workers. This model has been set up so that Capitalists and Workers are not directly affected by government spending and taxation, or interest payments on bonds, so we can answer this question just by looking at the economy from the Banking Sector’s point of view (Figure 5) and the Firm Sector’s point of view (Figure 6). The government actions that affect the equity of the Banking Sector and the Firm sector are shown at the top of each table.

The first line in Figure 5 shows that what is a negative for the equity of the Government Sector—paying interest to banks for their holdings of Treasury Bonds—is a positive for the Banking Sector.

Figure 5: The Banking Sector’s accounts

Similarly, the first two lines of Figure 6 show that what is a negative for the equity of the Government sector is a positive for the Firm Sector, and that what is a positive for the Government is likewise a negative for the Firm Sector: Government spending increases the equity of the Firm Sector, and taxation reduces it.

Figure 6: The Firm Sector’s accounts

The non-Government net financial position is therefore the mirror image of the Government’s:

\[
\frac{d}{dt}\text{Equity}_{NonGovernment} = \text{Spend} + \text{InterestBonds} - \text{Tax} = -\frac{d}{dt}\text{Equity}_{Government}
\]

The deficit defines the flow, in dollars per year. Equity is the accumulation of that flow over time, in dollars. The NonGovernment sector’s equity is, like its deficit, the negative of the Government Sector’s Equity:

\[
\text{Equity}_{NonGovernment} = -\text{Equity}_{Government}
\]

An integrated perspective on government finances thus reveals two undeniably uncomfortable truths:

  • For the Government to run a surplus, the NonGovernment sector must run a deficit; and
  • For the NonGovernment sector to be in positive equity, the Government Sector must be in identical negative equity.

Like two halves of a see-saw, both cannot be up at the same time. If the government runs a surplus—if the sum of interest payments on bonds plus spending is less than taxation—then the non-government sector is forced to run an identical deficit at that point in time. If the NonGovernment Sector is in positive equity, then the Government Sector must be in identical negative equity.

These outcomes are the macroeconomic consequences of the fact that one entity’s Asset is another’s Liability. The flows in the equations above show changes in Equity at one moment in time. Because these flows are identical in magnitude, but opposite in sign, at the aggregate level, the Equity of an entire economy is zero, and the rate of change of aggregate Equity is also zero.

Therefore, if one subset of the economy has positive equity, the remainder of the economy has identical negative equity. Equally, if the rate of change of one sector’s equity is positive, then the rate of change of the equity of the remainder of the economy is identical in magnitude, and negative. This is shown by the equation below: it contains nine flow terms, each appearing twice, once as a positive and once as a negative. The sum is zero.

\[
\begin{aligned}
\frac{d}{dt}\sum \text{Equity} = {} & (\text{Tax} - \text{Spend} - \text{InterestBonds}) && \text{(Treasury)} \\
& + 0 && \text{(Central Bank)} \\
& + (\text{Interest} + \text{InterestBonds} - \text{ConsB}) && \text{(Banks)} \\
& + (\text{Spend} - \text{Tax} + \text{ConsW} + \text{ConsK} + \text{ConsB} - \text{Wages} - \text{Dividends} - \text{Interest}) && \text{(Firms)} \\
& + (\text{Dividends} - \text{ConsK}) && \text{(Capitalists)} \\
& + (\text{Wages} - \text{ConsW}) && \text{(Workers)} \\
= {} & 0
\end{aligned}
\]
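This cancellation can be checked numerically. The sketch below writes out the sectoral equity flows under this model’s structure (ConsB is assumed to be paid out of Bank Equity, as in the post; the flow values themselves are arbitrary illustrative numbers):

```python
# Numerical check that the nine equity-affecting flows cancel in aggregate.
# Flow values are arbitrary illustrative numbers, in $/year.
f = dict(Spend=20, Tax=18, InterestBonds=2, Interest=5,
         Wages=60, Dividends=10, ConsW=58, ConsK=9, ConsB=6)

dEquity = {
    "Treasury":    f["Tax"] - f["Spend"] - f["InterestBonds"],
    "CentralBank": 0,
    "Banks":       f["Interest"] + f["InterestBonds"] - f["ConsB"],
    "Firms":       (f["Spend"] - f["Tax"] + f["ConsW"] + f["ConsK"] + f["ConsB"]
                    - f["Wages"] - f["Dividends"] - f["Interest"]),
    "Capitalists": f["Dividends"] - f["ConsK"],
    "Workers":     f["Wages"] - f["ConsW"],
}
total = sum(dEquity.values())
print(total)   # 0: aggregate equity change is always zero
```

Whatever values the nine flows take, each appears once with each sign, so the total is identically zero.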

The question therefore is not whether deficits—and negative equity—are good or bad, but whose deficits and negative equity are sustainable or non-sustainable.

There is one sector that cannot be in persistent negative equity in a sustainable economic system: the Banking Sector. A bank must have positive Equity: its Assets must exceed the value of its Liabilities, otherwise it is bankrupt. Therefore, for a sustainable economic system, the Banking sector must necessarily be in positive Equity (periods when the Banking Sector as a whole is in negative equity are periods of extreme financial crisis, like 2007 and 1929).

It follows that the NonBank Sector—which is the sum of the Government plus, in this model, Firms, Capitalists and Workers—must be in negative equity. This is unavoidable, given that the sum of all Equity is zero. The only question is which subset of the non-Bank economy—the Government, or the NonBank Private Sector—will be in negative equity? Which sector is better placed to handle being in negative equity?

 

That question and others will be considered in a subsequent post.

Postscript

Figure 7 shows the full six Godley Tables that constitute this model, and the differential equations generated by this model are shown below. If you’d like to explore this model yourself (it is attached to this post), download the latest beta release of Minsky from https://sourceforge.net/projects/minsky/files/beta%20builds/. The beta (currently version 2.19.0-beta.25) has features used here that are not in the current release version (Version 2.18). A subsequent post will define the flows used here to enable simulations, but the basic points of MMT are structural: they can be gleaned from the differential equations themselves, as I have done in this post.

Figure 7: Full Minsky MMT Model

         

 

Personal #Coronavirus Update 05 July 16th 2020: four completed papers, reviewing Kelton, and moving to Bangkok

Another two months have passed since my last update, and as with the last, much has changed in just two months. For those who want just a quick summary, the main news in this bulletin follows.

Moving to Bangkok

It’s just four months since I made the decision to leave Amsterdam and move to Thailand, which was feasible because my partner is Thai. We arrived on March 19th, one day before the Thai authorities started to prevent non-residents entering Thailand, and before quarantine or even compulsory self-isolation was introduced. Now, the country is only allowing citizens (or those on official business) to come here, and everyone must undergo strict quarantine.

Those who, like me, were here as tourists, have had their visa waivers extended till July 31st (what happens after that date is still unknown). No local transmission case has occurred in almost two months now. So long as the authorities remain vigilant about border control, Thailand will be virus-free. I now plan to make this move to Thailand a permanent one.

Patron Neil Goddard’s Covid-19 visualisation tool makes it easy to reflect on this decision. I was in Sydney meeting with the development team for Minsky between February 18 and March 11; I arrived back in Amsterdam on March 13, and I made the decision to leave for Thailand on March 15. Across these dates, the infection data changed dramatically. Though Thailand had cases first (and was, for a time, the country with the 2nd most cases after China), the rate of growth of cases there plunged compared to both Europe and Australia.

When we arrived, a surge in cases occurred in all 4 countries that I had the option to live in (Australia, the UK, the Netherlands, and Thailand). But only Thailand crushed the curve: Australia let a few cases spread through quarantine and it’s now combating the feared “second wave”.

A plot of new cases per day per million people emphasises just how well Thailand handled this pandemic versus wealthier and supposedly more savvy countries. The peak rate of infection in Thailand was 2 cases per million people; in the UK and Netherlands it was over 80. Australia’s rate, which was falling towards zero, is now rising again after lapses in quarantine and testing allowed a new outbreak to commence (I hope it gets on top of this trend in the next two weeks and joins the smattering of countries that have eliminated the virus, but I won’t hold my breath).

Now that we know Thailand is safe, we’ve decided to move to Bangkok, mainly for my partner’s benefit: I can work and keep mentally stimulated anywhere there’s a power point and an internet connection, but she needs the stimulus of city life which Trang lacks. We’ve located a place not far from her family home, and will move there at the end of July.

Productivity despite volatility

Though my travails are nothing compared to those facing people still living in countries where the virus is rampant, moving countries and setting up house here have made this a wild few months that I expected would trash my productivity. It did trash my fitness: I haven’t worked out in five months, and I’ve gained as many kilos. But I’ve managed to get several significant pieces of research done.

All four papers are attached to this post. The first three have already found refereed-publication homes: the first on Nordhaus will appear in the journal Globalisations soon; the second will appear in the Review of Political Economy next year; the third is a chapter in a Springer book on system dynamics in economics. The fourth will evolve into a paper with Matheus Grasselli and Tim Garrett, once I get time to work on it again after the move to Bangkok.

Reviewing Kelton

I’ve been working on the above tasks for so long, and so intensely, that it feels like a bit of a let-down to finish them. I could have tried to dive into the next set of tasks, but I was feeling the pressure of so much work at once that I feel a bit of a lull is in order. Since I won’t have time to start any major new project between now and our move to Bangkok at the end of next week, I’ve decided to use the time to review Stephanie Kelton’s The Deficit Myth instead. I’ll try to use Minsky models to illustrate my review, and I will probably record a quick video, using Minsky, on the fundamental points of MMT as well.

Keep Safe!

That’s it for now. Thanks again to my Patrons for their support, without which very little of the research detailed here would have been possible.

A model of production with energy and matter on the Planet of the Iron Giants (corrected)

This is the third in a series of posts documenting the development of a single commodity model of production with energy and matter. In the first I published what appeared to work as a model, but which had obvious scaling issues. In the second, I explained that this first model had errors that I spotted when working with Matheus and Tim: there was no formal link between the output of the energy sector and the energy consumption of all three sectors (energy mining, iron ore mining, factory turning iron ore into iron plus slag).

In this post, I can report on an error-free model. Following Matheus’s suggestion, I modelled the system as effectively vertically integrated: the output of both energy and mining was set to meet the needs of the whole economy. Energy output (E, not shown in the model below) was equal to the energy needs of all three sectors; mining output (that is, iron ore) was identical, by the conservation of matter, to the matter output of the factory sector (which consists of both iron and slag).

This made it possible to work with a composite of capital, rather than modelling capital and output in three separate sectors. It led to some complicated constants, but it worked. The product was the expected “Goodwin cycle”, but in this first model occurring in the wage rate (in kilograms of iron per year) and aggregate capital (measured in kilograms of iron).
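The balance conditions described above can be sketched numerically. Here is a minimal toy sketch, in which every coefficient (the ore grade and the three energy intensities) is invented purely for illustration; the actual model uses quite different, derived constants:

```python
# Toy numerical check of the vertical-integration balances described in
# the text. All coefficients below are invented for illustration only.

iron_output = 100.0        # kg of iron produced by the factory sector
slag_per_iron = 0.5        # kg of slag per kg of iron (assumed ore grade)

# Conservation of matter: ore mined = factory matter output (iron + slag)
ore_mined = iron_output * (1 + slag_per_iron)

# Assumed energy intensities for each sector
e_factory = 2.0            # MJ per kg of iron produced
e_mining = 0.5             # MJ per kg of ore mined
e_energy = 0.1             # MJ consumed per MJ of energy delivered

needs = e_factory * iron_output + e_mining * ore_mined
# The energy sector must also cover its own consumption: E = needs + e_energy * E
gross_energy = needs / (1 - e_energy)

print(ore_mined, gross_energy)
```

The last line captures the point that energy output must be set endogenously: the energy sector consumes some of its own product, so its gross output exceeds the direct needs of the other two sectors.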

Now that we’ve completed it, one of the research outputs of our group will be a paper linking the original Goodwin model to these two extensions. They were done both for their own sake—to replace the “ad hoc” productivity in the Goodwin model with something derived from the reality that all production is based on energy—and as a foundation for integrated modelling of economics and ecology. Waste output is easily added to this model, and related to the output of iron; CO2 is easily added and linked to the consumption of energy. Feedbacks can be added from waste generation (both slag and CO2) to the productivity of the economy—the sort of realism that is notably absent from the “Integrated Assessment Models” produced by Neoclassical eCONomists like William Nordhaus and Richard Tol.

I’ll close this Patron-only post with the observation that this project showed up a weakness of Minsky as it stands. Though I’m showing the simulation in Minsky, it was actually easier to do the logical work in Mathcad (see the two files attached to this post; the image below is an excerpt from one of those Mathcad files).

The flowchart paradigm is very useful when you are modelling a causal sequence, but there was a lot of algebraic logic needed to get this model right, and there the direct equation entry capabilities of Mathcad were far easier to use than Minsky’s flowchart.

This is one of the reasons that we plan to add the capability to enter parameter and variable definitions on new tabs within Minsky (currently labelled “parameters” and “variables”). It will take some time and programming nous (supplied by Russell Standish and Wynand Dednam rather than me), but I want to have the same capability to write equations just as you would on paper in Minsky—and then simulate them there.

It’s also much easier to read equations—sometimes—than it is to read a flowchart. The first yellow highlighted equation in the figure above is:

    

If you’re into reading math, then that’s pretty easy to get your head around. Not so the flowchart version of the same, which also takes up much more space:

Anyway, I’m posting these files for the interest of those of you who are into mathematical modelling, and as a courtesy to young students, who will often think, when they see a completed work by an academic, that they couldn’t do the same thing. In fact, a lot of research involves making mistakes that are gradually corrected over time; what you see in a journal paper is the polished end product of that process. So don’t ever be discouraged by reading a refereed paper.

More is Different—the Class Economists Failed to Attend

An Australian friend of mine tweeted that he was seeing economists criticizing #MMT for its lack of microfoundations:

This typical display of ignorance—by economists, not Con!—masquerading as wisdom prompted me to reply:

Looking for an accessible PDF of Anderson’s 1972 paper (Anderson 1972) led to finding the one linked in the tweet above: an open access reprint of the article in a 2014 issue of the journal Emergence: Complexity & Organization. I’ve attached that reprint to this post, and encourage you to read both the original article and the fascinating commentary on how it came to be.

The introduction to Anderson’s paper, written by Jeffrey Goldstein, explains that Anderson was involved in an acrimonious debate within physics over the role of reductionism, and it had echoes of the obsession with microfoundations that Con had experienced in his Twitter feed:

It is worthwhile to recognize that Anderson’s paper was written within the context of an ongoing, and at the time vituperative debate, between particle physicists, on the one hand, with their highly effective Standard Model of the so-called fundamental forces (such as weak, strong, electro-magnetic on up to their final unified “theory of everything”) and mostly negative attitude towards emergence in the past, and solid state or condensed matter physics, on the other hand, whose investigations into phenomena such as phase transitions, superconductivity, ferromagnetism and so on required the introduction of constructs and methods pertaining to higher scale dynamics, organizing principles, and emergent collectivities. Two of the chief antagonists in this conceptual battle have been the Nobel Laureate particle physicist Steven Weinberg known for his work on the unification of the electro-magnetic and the weak forces and Anderson who of course is another Nobel Prize winning physicist (on this dispute see Silberstein, 2009). This clash shows itself in this classic paper through Anderson’s attack on strident reductionism, of which Weinberg has long been a vigorous proponent, along with Victor Weiskopf whose reductionist stance involving extensive and intensive explanatory strategies Anderson takes on in his paper. (Goldstein, pp. 118-19)

The key point in Anderson’s paper was not a rejection of reductionism per se, but the obverse of reductionism, which he termed “constructionism”:

In his classic paper, Anderson did not then, nor does he now, completely renounce reductionism as such as if he were calling for an embrace of some kind of “holism”. Instead his criticism is of the totalizing type which he describes through his notion of the “constructionist hypothesis”: “The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe”. (Goldstein, p. 119)

This attitude is the essence of the microfoundations obsession: the belief that everything should be constructed from its lower-level foundations, that macro should be constructed from micro. Leaving aside that Neoclassical micro itself is a logical and empirical travesty, Anderson’s key point was that, though a hierarchy of sciences can be constructed:

according to the idea: The elementary entities of science X obey the laws of science Y. But this hierarchy does not imply that science X is “just applied Y”. At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry. (Anderson, p. 393)

Likewise, macroeconomics is not applied microeconomics—but that is what economists have been trying to do ever since Muth developed the fantasy of “rational expectations” (Muth 1961).

It would be better to start with Anderson’s paper, rather than the introduction—ironically, the introduction is more difficult and more specialized than the original paper (though it is still worth reading if, like me, you’re an obsessive about these things).

Answering some questions on the evolution of my approach to economics

1. What was the driving force behind writing the paper Finance and Economic Breakdown: Modeling Minsky’s Financial Instability Hypothesis?


I had become a critic of mainstream economics way back in 1971, in my first year at university, courtesy of being exposed to the “Theory of the Second Best” (Lancaster 1966) by our brilliant lecturer Frank Stilwell (Butler, Jones et al. 2009), and then discovering the “Cambridge Controversies” (Harcourt 1972) on my own in the economics department’s library. I was particularly struck by Samuelson’s concession of defeat in 1966 (Samuelson 1966), since we were using Samuelson’s textbook at the time, and there was no sign of this debate in it, let alone of the fact that he had conceded that the rebels were right.

I was also doing first year mathematics at the time, and loved differential equations, but they simply did not turn up in my economics courses, where the mathematical techniques used seemed arcane by comparison.

Like all critics, I looked for alternatives to Neoclassical economics. None of the alternative literature really inspired me until I read Minsky. Marxian economics had an obvious flaw, which I spotted when I first read Capital in 1973: Marx’s dialectical philosophy contradicted the Labour Theory of Value (Keen 1993; Keen 1993). Austrian economics was watered-down Neoclassicism. Post Keynesian economics was realistic, but felt arbitrary—there was no unifying theory of value.

Most of the critiques of capitalism argued that it had a tendency to stagnation (Baran 1968), something that I couldn’t see in my own experience. My Arts degree coincided with a huge boom, driven largely by real estate in Sydney (Daly 1982), and it seemed to me there was a tendency to euphoria rather than depression. Then the boom went bust spectacularly in 1975, when I was completing my Law degree. There were financial failures everywhere, and unemployment trebled. I wanted a theory that would explain that process.


I finally found this theory when I went back to university in my mid-30s to do a Masters degree as a prelude to undertaking my PhD. Bill Junor, an excellent Keynesian lecturer there, set us the task of reviewing a major book in the Macroeconomics module in 1987. Minsky’s John Maynard Keynes (Minsky 1975) was one of the books on the list. I’d heard his name before, but I hadn’t read anything by him, so I chose to review it.

It was a revelatory experience. Finally, I had found a critic of capitalism who appeared to understand its tendency to credit-driven booms and busts, and who was free of the proclivity to reason in equilibrium terms:

“we are dealing with a system that is inherently unstable, and the fundamental instability is “upward.”” (Minsky 1975, p. 162)

However, to me there was one obvious weakness in Minsky’s analysis: the mathematical model he had tried to build of his Financial Instability Hypothesis was based on Hicks’s second-order difference equation model of the trade cycle:

    

I already knew that this was a bad model in itself, because I had applied my mathematics training to analyzing it in another course, and showed that it was economically invalid. It was derived from adding together, as Hicks defined them, actual savings and desired investment. There is no economic theory, Keynesian or otherwise, in which this addition makes any sense (Keen 2020). But at the same time, Minsky’s verbal model was eminently suited to treatment in differential-equation terms. It considered an economy in historical time, as opposed to the faux equilibrium “time” of Neoclassical models, and even the logical time of Robinson’s “Golden Age” analysis. It had the “initial condition” that it began after a preceding economic crisis—because we are dealing with the real world, where such a history exists, rather than the stale blackboard abstractions of Neoclassicism.
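For readers who haven’t met it, the model in question is the standard multiplier-accelerator second-order difference equation of the Samuelson/Hicks type. A minimal sketch, with parameter values that are mine rather than Hicks’s:

```python
# Multiplier-accelerator (Samuelson/Hicks-type) difference equation:
#   Y[t] = A + c*Y[t-1] + v*(Y[t-1] - Y[t-2])
# where c is the propensity to consume, v the accelerator coefficient,
# and A autonomous spending. Parameter values here are illustrative.
c, v, A = 0.6, 0.8, 100.0
Y = [1000.0, 1010.0]          # two initial conditions (second-order equation)
for _ in range(50):
    Y.append(A + c * Y[-1] + v * (Y[-1] - Y[-2]))

# With these values the characteristic roots are complex with modulus
# sqrt(v) < 1, so output cycles and damps toward the equilibrium A/(1-c).
print(Y[-1])
```

The mechanical damped (or, for v > 1, explosive) cycle is the whole story this equation can tell, which is part of why it is such a poor vehicle for Minsky’s richer verbal dynamics.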

While I did my Masters, I also sat in on undergraduate mathematics courses at the University of New South Wales, since it was by then more than 15 years since I had last studied mathematics. The staff there alerted me to the fact that the department’s by-then-deceased founder, John Blatt, had turned his formidable mathematical skills to critiquing economics, in a brilliant book entitled Dynamic Economic Systems: A Post Keynesian Approach (Blatt 1983). In it, Blatt praised Richard Goodwin’s “A growth cycle” model (Goodwin 1967), and explained it far more clearly than Goodwin himself had done. He concluded that one flaw of the model—that its equilibrium was “not unstable”!—could be remedied by introducing a financial sector:

Of course, the model is far from perfect. In particular, we feel that the existence of an equilibrium which is not unstable (it is neutral) is a flaw in this model; so is the possibility of arbitrarily large cycles. The first flaw can be remedied in several ways: (1) introduction of more disaggregation in the nature of the output, e.g., separate outputs of consumer goods and investment goods; (2) introduction of a financial sector, including money and credit as well as some index of business confidence. Either or both of these changes are likely to make the equilibrium point locally unstable, as is desirable. (Blatt 1983, p. 210)

I took that as my lead, and added the private debt ratio as a third system state to the two in Goodwin’s model, the employment rate and the wages share of GDP. I was astonished to see that the resulting model not only reproduced the prediction Minsky made—that capitalism could fall into a debt-deflationary trap after a series of credit-driven boom and bust cycles—but also generated an unexpected result, that the cycles in the economic growth rate would diminish at first and then rise.

This discovery forced me to take seriously the nascent area of chaos and complexity studies, since it turned out that this phenomenon was first identified in studies of fluid dynamics (Pomeau and Manneville 1980) inspired by Lorenz’s work on turbulence (Lorenz 1963).

Because I have primarily explored the private-sector-only model since then, it is worth noting that I also developed a version with a counter-cyclical government sector, which also displayed far-from-equilibrium dynamics. This replicated Minsky’s observation that “Big Government” could stabilize an unstable economy.
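To make the structure concrete, here is a minimal sketch of the three-state extension described above: wage share, employment rate, and private debt ratio. The linear Phillips curve, the linear investment function, and all parameter values are my illustrative assumptions, not the published calibration:

```python
# Minimal sketch of a Goodwin model extended with a private debt ratio,
# in the spirit of the text. Functional forms and parameter values are
# illustrative assumptions, not the published calibration.

def simulate(T=50.0, dt=0.01):
    w, lam, d = 0.75, 0.96, 1.0      # wage share, employment rate, debt ratio
    alpha, beta = 0.02, 0.01         # productivity and labour-force growth
    nu, delta, r = 3.0, 0.05, 0.03   # capital-output ratio, depreciation, interest

    def phillips(l):                 # assumed linear Phillips curve
        return 0.5 * (l - 0.94)

    def kappa(pi):                   # assumed linear investment function
        return -0.06 + 1.5 * pi

    path = []
    for _ in range(int(T / dt)):     # simple Euler integration
        pi = 1.0 - w - r * d                 # profit share net of interest
        g = kappa(pi) / nu - delta           # real growth rate
        w += w * (phillips(lam) - alpha) * dt
        lam += lam * (g - alpha - beta) * dt
        d += (kappa(pi) - pi - d * g) * dt   # debt grows when investment > profits
        path.append((w, lam, d))
    return path

print(simulate()[-1])
```

The third state equation is the key addition: whenever desired investment exceeds profits, the gap is debt-financed, so the debt ratio feeds back onto the profit share through interest payments.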

2. What prompted your interest in data from the United States, given that you had previously analyzed Australia?

I first became aware that my Minsky model was becoming an empirical reality when I was asked to write an Expert Witness opinion in a law case over predatory lending, in December 2005 (Keen 2005). Looking at the data through both Minsky’s eyes and the lens of my model, it was apparent to me that this was “It”: the crisis about which Minsky asked “Can “It” Happen Again?”. Minsky had argued that a combination of government deficits and lender of last resort interventions by the Central Bank could prevent a deep crisis (Minsky 1963; Minsky 1982), but Neoliberal ideology had weakened both of these since he wrote. It seemed to me that a repeat of “the Big One”—the bust that led to the Great Depression—was imminent, though its severity would be attenuated by an inevitable rise in net government spending once it occurred. Someone had to warn about it, and at least in Australia, that somebody was me.

I established the blog http://www.debtdeflation.com/blogs/, went on the media as much as I could, and successfully raised the alarm in Australia—only to see its government enact policies that re-started the Australian housing bubble (see FHB Boost is Australia’s “Sub-prime Lite”) and thus prevent a deep crisis in Australia, while one occurred in the USA.

I copped a lot of flak in Australia for being “wrong”—“there wasn’t a crisis, so what were you worried about?”—while the fact that this was a global crisis was ignored. That alone encouraged me to focus less on my own country and more on the rest of the world.

Also, one of the original difficulties in doing this work was that the core data—the level and rate of change of private debt—wasn’t recorded in a systematic way by any international authority. But then the Bank for International Settlements, following the lead of its superb research director Bill White (the only person in an official capacity to have read and understood Minsky, and who was thus aware of the potential for a crisis and warned of it in official BIS publications), started publishing an excellent database on private and government debt, with data from over 40 countries. This made it easy to analyse debt at the global level.

Despite its problems, America is still the biggest economy on the planet, and the most self-contained. Extraneous factors—like, for example, Chinese demand for housing, or export-price and exchange-rate volatility—don’t wash out the inherent dynamics as much in the USA as they can in Australia, Canada, the UK, etc. So it made sense to focus on the USA, where the general principles of monetary macroeconomics are more obvious.

3. Why do you think so few economists predicted the 2008 crisis? What should we learn from this lesson?

It’s the same reason that no Ptolemaic astronomer predicted the return of Halley’s comet: they were using the wrong paradigm. In Ptolemy’s model of astronomy, the Earth was at the centre of the Universe, the Heavens were perfect and unchanging, and the Earth was where change and decay occurred. In that paradigm, comets were atmospheric phenomena and therefore couldn’t be predicted.

Likewise, according to the Neoclassical paradigm, credit (which I define as the annual change in private debt) is simply a transfer of money from one non-bank agent to another: it redistributes spending power without changing its magnitude. From this perspective, rising or falling levels of credit can’t be predicted to have any significant impact on the economy unless you know in advance which agents have a higher propensity to spend: it’s quite possible that a fall in credit could cause an increase in aggregate demand, if savers had a higher propensity to spend than borrowers. This is exactly why Ben Bernanke, who was in a position to see the data on private debt, and to do something about it rising too quickly, completely ignored it instead. He asked the right questions about the Great Depression—what caused aggregate demand to plunge and stay so low:

Because the Depression was characterized by sharp declines in both output and prices, the premise of this essay is that declines in aggregate demand were the dominant factor in the onset of the Depression. This starting point leads naturally to two questions: First, what caused the worldwide collapse in aggregate demand in the late 1920s and early 1930s (the “aggregate demand puzzle”)? Second, why did the Depression last so long? In particular, why didn’t the “normal” stabilizing mechanisms of the economy, such as the adjustment of wages and prices to changes in demand, limit the real economic impact of the fall in aggregate demand (the “aggregate supply puzzle”)? (Bernanke 2000, p. ix)

You can see his Neoclassical paradigm getting in the way already—the belief that capitalism has “”normal” stabilizing mechanisms of the economy, such as the adjustment of wages and prices to changes in demand”. But at least he’s starting from the right question, which is what caused aggregate demand to plunge so much.

But he then rejected the most accurate explanation of what caused that plunge—Irving Fisher’s “Debt Deflation Theory of Great Depressions” (Fisher 1932; Fisher 1933)—because it relied on changes in bank lending causing changes in aggregate demand, and in his Neoclassical, “Loanable Funds” paradigm, that couldn’t happen:

The idea of debt-deflation goes back to Irving Fisher (1933). Fisher envisioned a dynamic process in which falling asset and commodity prices created pressure on nominal debtors, forcing them into distress sales of assets, which in turn led to further price declines and financial difficulties. His diagnosis led him to urge President Roosevelt to subordinate exchange-rate considerations to the need for reflation, advice that (ultimately) FDR followed. Fisher’s idea was less influential in academic circles, though, because of the counterargument that debt-deflation represented no more than a redistribution from one group (debtors) to another (creditors). Absent implausibly large differences in marginal spending propensities among the groups, it was suggested, pure redistributions should have no significant macroeconomic effects. (Bernanke 2000, p. 24)

All it would have taken was a look at the data, which was available when Bernanke wrote his Essays on the Great Depression (Census 1949; Census 1975). But he didn’t even look, because his paradigm told him the data was irrelevant.

So much for being irrelevant. Yet still, over a decade after the crisis, Neoclassicals are ignoring both this data and the Central Banks that are telling them that their textbook models of banking are wrong (Krugman 2014; McLeay, Radia et al. 2014; Deutsche Bundesbank 2017).
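The calculation being ignored is elementary. Taking credit, as defined earlier, to be the annual change in private debt, a toy series of year-end debt levels (invented numbers) gives:

```python
# Credit as the annual change in private debt, computed from a toy
# series of year-end private debt levels (invented numbers, $bn).
private_debt = [100, 110, 125, 135, 130, 128]
credit = [d1 - d0 for d0, d1 in zip(private_debt, private_debt[1:])]
print(credit)  # [10, 15, 10, -5, -2]
```

The swing from strongly positive to negative credit in a series like this is exactly the kind of turning point that, on the argument above, coincides with a collapse in aggregate demand.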

What we should learn is that economics is dominated by a paradigm that is as wrong about capitalism as Ptolemaic astronomy was about the universe. We need a new paradigm, and there is virtually nothing in the old paradigm that will be of use in the new.

4. What do you think the consequences of the Covid-19 crisis will be for the world economy?

In the medium run I think it will end the fetish for globalisation, which has really been about the West exploiting low wages and the East using this to industrialize rapidly. The perverse outcome with Covid-19 is that southeast Asia, and China in particular, were well prepared for the pandemic: they had the manufacturing capability to supply their populations with personal protective equipment (whereas the developed nations, which had outsourced their manufacturing, did not), and the social capacity to enforce social distancing policies. Now China, most of southeast Asia, maybe Japan and South Korea, and a handful of non-Asian countries—including New Zealand and perhaps Australia, but also Norway, Switzerland, Croatia, Lithuania, Angola and Zambia, and other scattered countries—will form a virus-free bloc. America, Europe (especially the UK), Russia and South America will be in the virus zone.

It’s a division of the planet unlike anything in our history. Previous blocs have united countries with similar histories or politics; nothing of the sort applies here. But these blocs will form because it will be easy to travel between virus-free countries, difficult to travel between virus-afflicted ones, and very difficult to cross the viral border. It is a fractured planet.

Figure 1: The www.endcoronavirus.org website’s map of the fractured planet

This will have all sorts of effects on the global economy, some of them contradictory. China’s power and prestige will be enhanced, as will its dominance of its local region—unless Japan manages to overcome the virus as well. Seaborne transportation of manufactured goods will be severely compromised, since ships need crews, who would need to be quarantined at either end of their journeys, drastically increasing the costs and creating points of weakness in disease control measures. Bulk transport of oil and coal and other minerals might still work, but more likely rail transport and pipelines would dominate. This could strengthen Russia over Saudi Arabia, for example.

At the macroeconomic level, it promises a debt-deflation at the end of the measures used to contain the virus. These measures have been successful trials of many progressive ideas—such as a Universal Basic Income, or direct funding of the UK Treasury by the Bank of England—which we might have thought would never see the light of day.

However, governments are likely to revert to Neoliberal type, cut back government supports too early, and institute austerity again to try to reduce government debt levels, even though austerity is a major reason why they were unprepared for the pandemic in the first place. This could lead to a debt-deflationary crisis in the USA, UK and Europe—and even Australia, which is already imposing austerity measures.

One worrying trend here has been a huge jump in corporate indebtedness, probably due to companies borrowing to be able to sustain unavoidable outlays in the midst of a collapse in their cash flows.

There could be a wave of evictions of renters and mortgagors too, thanks to the huge rise in unemployment and collapse in “the gig economy” jobs that kept many households barely solvent before the crisis.

My fear is that the government response in the aftermath of the crisis will be as bad as the response during the crisis, generally speaking, was good. So the bankruptcies and poverty that were minimized during the crisis by the large-scale government financing that was needed will instead occur after the crisis.

That still assumes that there will be an “after”, which depends on the development of a vaccine and its effective distribution to enough of the world’s population to drive the virus extinct. That is still a speculation rather than a certainty. It is also possible that Covid-19 is just the first wave in a sequence of global environmental crises caused by the excessive pressure of human industrial society on the planet.

References

Baran, P. A. and P. M. Sweezy (1968). Monopoly Capital: An Essay on the American Economic and Social Order. New York, Monthly Review Press.

Bernanke, B. S. (2000). Essays on the Great Depression. Princeton, Princeton University Press.

Blatt, J. M. (1983). Dynamic Economic Systems: A Post-Keynesian Approach. Armonk, N.Y., M.E. Sharpe.

Butler, G., E. Jones, et al. (2009). Political Economy Now!: The Struggle for Alternative Economics at the University of Sydney. Sydney, Darlington Press.

Census, B. o. (1949). Historical Statistics of the United States 1789-1945. Washington, United States Government.

Census, B. o. (1975). Historical Statistics of the United States: Colonial Times to 1970. Washington, United States Government.

Daly, M. T. (1982). Sydney Boom, Sydney Bust. Sydney, George Allen and Unwin.

Deutsche Bundesbank (2017). “The role of banks, non-banks and the central bank in the money creation process.” Deutsche Bundesbank Monthly Report: 13-33.

Fisher, I. (1932). Booms and Depressions: Some First Principles. New York, Adelphi.

Fisher, I. (1933). “The Debt-Deflation Theory of Great Depressions.” Econometrica 1(4): 337-357.

Goodwin, R. M. (1967). A growth cycle. Socialism, Capitalism and Economic Growth. C. H. Feinstein. Cambridge, Cambridge University Press: 54-58.

Harcourt, G. C. (1972). Some Cambridge Controversies in the Theory of Capital. Cambridge, Cambridge University Press.

Keen, S. (1993). “The Misinterpretation of Marx’s Theory of Value.” Journal of the History of Economic Thought 15(2): 282-300.

Keen, S. (1993). “Use-Value, Exchange Value, and the Demise of Marx’s Labor Theory of Value.” Journal of the History of Economic Thought 15(1): 107-121.

Keen, S. (2005). Expert Opinion, Permanent Mortgages vs Cooks. Sydney, Legal Aid NSW.

Keen, S. (2020). “Emergent Macroeconomics: Deriving Minsky’s Financial Instability Hypothesis Directly from Macroeconomic Definitions.” Review of Political Economy, forthcoming.

Krugman, P. (2014). “A Monetary Puzzle.” The Conscience of a Liberal, http://krugman.blogs.nytimes.com/2014/04/28/a-monetary-puzzle/.

Lancaster, K. (1966). “A New Approach to Consumer Theory.” Journal of Political Economy 74(2): 132-157.

Lorenz, E. N. (1963). “Deterministic Nonperiodic Flow.” Journal of the Atmospheric Sciences 20(2): 130-141.

McLeay, M., A. Radia, et al. (2014). “Money creation in the modern economy.” Bank of England Quarterly Bulletin 2014 Q1: 14-27.

Minsky, H. P. (1963). Can “It” Happen Again? Banking and Monetary Studies. D. Carson. Homewood, Illinois, Richard D. Irwin: 101-111.

Minsky, H. P. (1975). John Maynard Keynes. New York, Columbia University Press.

Minsky, H. P. (1982). “Can ‘It’ Happen Again? A Reprise.” Challenge 25(3): 5-13.

Pomeau, Y. and P. Manneville (1980). “Intermittent transition to turbulence in dissipative dynamical systems.” Communications in Mathematical Physics 74: 189-197.

Samuelson, P. A. (1966). “A Summing Up.” Quarterly Journal of Economics 80(4): 568-583.


The Appallingly Bad Neoclassical Economics of Climate Change

This is the draft of an invited paper for the journal Globalizations on “Economics and the Climate Crisis”. As I’ve argued for some time, Neoclassical economists—especially William Nordhaus and Richard Tol—bear enormous responsibility for trivializing the dangers of climate change on intellectually spurious grounds. This paper is the most comprehensive overview I’ve done of this issue, and it includes new material on Nordhaus’s misreading of the scientific literature. Word has deleted the footnotes, which you can find in the attached PDF.

Introduction

William Nordhaus was awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (Mirowski 2020) in 2018 for his work on climate change. His first major paper in this area was “World Dynamics: Measurement Without Data” (Nordhaus 1973), which attacked the pessimistic predictions in Jay Forrester’s World Dynamics (Forrester 1971; Forrester 1973) on the grounds, amongst others, that they were not firmly grounded in empirical research:

The treatment of empirical relations in World Dynamics can be summarised as measurement without data… Not a single relationship or variable is drawn from actual data or empirical studies. (Nordhaus 1973, p. 1157. Italics in original. Subsequent emphases added.)

There is no explicit or apparent reference to data or existing empirical studies. (Nordhaus 1973, p. 1182)

Whereas most scientists would require empirical validation of either the assumptions or the predictions of the model before declaring its truth content, Forrester is apparently content with subjective plausibility. (Nordhaus 1973, p. 1183)

Sixth, there is some lack of humility toward predicting the future. Can we treat seriously Forrester’s (or anybody’s) predictions in economics and social science for the next 130 years? Long-run economic forecasts have generally fared quite poorly… And now, without the scantest reference to economic theory or empirical data, Forrester predicts that the world’s material standard of living will peak in 1990 and then decline. (Nordhaus 1973, p. 1183)

After this paper, Nordhaus’s own research focused upon the economics of climate change. One could rightly expect, from his critique of Forrester, that Nordhaus was scrupulous about basing his modelling upon sound empirical data.

One’s expectations would be dashed. Whereas Nordhaus characterised Forrester’s work as “measurement without data”, Nordhaus’s own can be characterised as “making up numbers to support a pre-existing belief”: specifically, the belief that climate change could have only a trivial impact upon the economy. This practice was replicated, rather than challenged, by subsequent Neoclassical economists—with some honourable exceptions, notably Weitzman (Weitzman 2011; Weitzman 2011), DeCanio (DeCanio 2003), Cline (Cline 1996), Darwin (Darwin 1999), Kaufmann (Kaufmann 1997; Kaufmann 1998), and Quiggin and Horowitz (Quiggin and Horowitz 1999).

The end product is a set of purported empirical estimates of the impact of climate change upon the economy that are utterly spurious, and yet which have been used to calibrate the “Integrated Assessment Models” (IAMs) that have largely guided the political responses to climate change. Stephen DeCanio expressed both the significance and the danger of this work very well in his neglected book Economic Models of Climate Change: A Critique:

Perhaps the greatest threat from climate change is the risk it poses for large-scale catastrophic disruptions of Earth systems…

Business as usual amounts to conducting a one-time, irreversible experiment of unknown outcome with the habitability of the entire planet.

Given the magnitude of the stakes, it is perhaps surprising that much of the debate about the climate has been cast in terms of economics…

Nevertheless, it is undeniably the case that economic arguments, claims, and calculations have been the dominant influence on the public political debate on climate policy in the United States and around the world… It is an open question whether the economic arguments were the cause or only an ex post justification of the decisions made by both administrations, but there is no doubt that economists have claimed that their calculations should dictate the proper course of action. (DeCanio 2003, pp. 2-4)

The impact of these economists goes beyond merely advising governments, to actually writing the economic components of the formal reports by the IPCC (“Intergovernmental Panel On Climate Change”), the main authority coordinating humanity’s response, such as it is, to climate change. The sanguine conclusions they state—such as the following from the 2014 IPCC Report (Field, Barros et al. 2014)—carry more weight with politicians, obsessed as they are with their countries’ GDP growth rates, than the far more alarming warnings in the sections of the Report written by actual scientists:

Global economic impacts from climate change are difficult to estimate. Economic impact estimates completed over the past 20 years vary in their coverage of subsets of economic sectors and depend on a large number of assumptions, many of which are disputable, and many estimates do not account for catastrophic changes, tipping points, and many other factors. With these recognized limitations, the incomplete estimates of global annual economic losses for additional temperature increases of ~2°C are between 0.2 and 2.0% of income. (Arent, Tol et al. 2014, p. 663. Emphasis added)

This is a prediction, not of a drop in the annual rate of economic growth—which would be significant, even at the lower bound of 0.2%—but a prediction that the level of GDP will be between 0.2% and 2% lower, when global temperatures are 2°C higher than pre-industrial levels, than it would have been in the complete absence of global warming. This implies a trivial decline in the predicted rate of economic growth between 2014 and whenever the 2°C increase occurs, even at the upper bound of 2%.
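To make the scale of this concrete, the annual growth-rate reduction implied by the IPCC’s upper bound can be computed directly. The following sketch assumes, purely for illustration, that the 2°C rise arrives around 2100, roughly 86 years after the 2014 Report; the 2% level loss is the upper bound quoted above:

```python
# Illustrative assumption: GDP is 2% lower (the IPCC's upper bound) when
# warming reaches 2°C, taken here to occur ~86 years after the 2014 Report.
level_loss = 0.02
years = 86

# Annual growth-rate reduction implied by a 2% lower *level* after 86 years:
# (1 - delta)^years = (1 - level_loss)  =>  delta = 1 - (1 - level_loss)^(1/years)
delta = 1 - (1 - level_loss) ** (1 / years)
print(f"Implied growth-rate reduction: {delta:.5%} per year")  # roughly 0.023% p.a.
```

Even at the upper bound, the implied hit to annual growth is about two-hundredths of one percentage point per year.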

Given the impact that economists have had on public policy towards climate change, and the immediacy of the threat we now face from climate change, this work could soon be exposed as the most significant and dangerous hoax in the history of science.

Fictional Empirics

The numerical relationships that economists assert exist between global temperature change and GDP change were summarized in Figure 1 of the chapter “Key Economic Sectors and Services” (Arent, Tol et al. 2014) in the 2014 IPCC Report Climate Change 2014: Impacts, Adaptation, and Vulnerability (Field, Barros et al. 2014). It is reproduced below as Figure 1.

Figure 1: Figure 10.1 from Chapter 10 “Key Economic Sectors and Services” of the IPCC Report Climate Change 2014 Impacts, Adaptation, and Vulnerability

The sources of these numbers—as I explain below, they cannot be called “data points”—are given in Table SM10-1 from the supplement to this report (Arent 2014). Four classifications of the approaches used were listed by the IPCC: “Enumeration” (10 studies); “Statistical” (5 studies); “CGE” (“Computable General Equilibrium”: 2 studies, one with 2 results); and “Expert Elicitation” (1 study).

Enumeration: It’s what you don’t count that counts

The bland description of what the “Enumeration” approach entails given by Tol makes it seem unobjectionable:

In this approach, estimates of the “physical effects” of climate change are obtained one by one from natural science papers, which in turn may be based on some combination of climate models, impact models, and laboratory experiments. The physical impacts must then each be given a price and added up. For agricultural products, an example of a traded good or service, agronomy papers are used to predict the effect of climate on crop yield, and then market prices or economic models are used to value the change in output. (Tol 2009, pp. 31-32)

However, this analysis commenced from the perspective, in the very first reference in this tradition (Nordhaus 1991), that climate change is a relatively trivial issue:

it must be recognised that human societies thrive in a wide variety of climatic zones. For the bulk of economic activity, non-climate variables like labour skills, access to markets, or technology swamp climatic considerations in determining economic efficiency. (Nordhaus 1991, p. 930. Emphasis added)

If there had been a decent evaluation process in place at this time for research into the economic impact of climate change, this paragraph alone should have raised alarm bells: yes, it is quite likely that climate is a less important determinant of “economic efficiency” today than “labour skills, access to markets, or technology”, when one is comparing one region or country with another today. But what is the relevance of this cross-sectional comparison to assessing the impact of drastically altering the entire planet’s climate over time, via the retention of additional solar energy from additional greenhouse gases?

Nordhaus then excludes 87% of US industry from consideration, on the basis that it takes place “in carefully controlled environments that will not be directly affected by climate change”:

Table 5 shows a sectoral breakdown of United States national income, where the economy is subdivided by the sectoral sensitivity to greenhouse warming. The most sensitive sectors are likely to be those, such as agriculture and forestry, in which output depends in a significant way upon climatic variables. At the other extreme are activities, such as cardiovascular surgery or microprocessor fabrication in ‘clean rooms’, which are undertaken in carefully controlled environments that will not be directly affected by climate change. Our estimate is that approximately 3% of United States national output is produced in highly sensitive sectors, another 10% in moderately sensitive sectors, and about 87% in sectors that are negligibly affected by climate change. (Nordhaus 1991, p. 930. Emphasis added)

The examples of “cardiovascular surgery or microprocessor fabrication in ‘clean rooms'” might seem reasonable activities to describe as taking place in “carefully controlled environments”. However, Nordhaus’s list of industries that he simply assumed would be negligibly impacted by climate change is so broad, and so large, that it is obvious that what he meant by “not be directly affected by climate change” is anything that takes place indoors—or, indeed, underground, since he includes mining as one of the unaffected sectors. Table 1, which is an extract from Nordhaus’s Table 5 (Nordhaus 1991, p. 931), lists the subset of industries that he considered would be “negligibly affected by climate change”.

Table 1: Extract from Nordhaus’s breakdown of economic activity by vulnerability to climatic change in US 1991 $ terms (Nordhaus 1991, p. 931 )

Since this was the first paper in a research tradition, one might hope that subsequent researchers challenged this assumption. However, instead of challenging it, they replicated it. The 2014 IPCC Report repeats the assertion that climate change will be a trivial determinant of future economic performance:

For most economic sectors, the impact of climate change will be small relative to the impacts of other drivers (medium evidence, high agreement). Changes in population, age, income, technology, relative prices, lifestyle, regulation, governance, and many other aspects of socioeconomic development will have an impact on the supply and demand of economic goods and services that is large relative to the impact of climate change. (Arent, Tol et al. 2014, p. 662)

It also repeats the assertion that indoor activities will be unaffected. The one change between Nordhaus in 1991 and the IPCC Report 23 years later is that it no longer lumps mining in the “not really exposed to climate change” bracket. Otherwise it repeats Nordhaus’s assumption that anything done indoors will be unaffected by climate change:

Frequently Asked Questions

FAQ 10.3 | Are other economic sectors vulnerable to climate change too?

Economic activities such as agriculture, forestry, fisheries, and mining are exposed to the weather and thus vulnerable to climate change. Other economic activities, such as manufacturing and services, largely take place in controlled environments and are not really exposed to climate change. (Arent, Tol et al. 2014, p. 688)

All the intervening papers between Nordhaus in 1991 and the IPCC in 2014 maintain this assumption: none of manufacturing, mining, transportation, communication, finance, insurance, non-coastal real estate, retail and wholesale trade, or government services appear in the “enumerated” industries in the “Coverage” column in Table 3. All these studies have simply assumed that these industries, which account for on the order of 90% of GDP, will be unaffected by climate change.

There is a “poker player’s tell” in the FAQ quoted above which implies that these Neoclassical economists are on a par with Donald Trump in their understanding of what climate change really entails. This is the statement that “Economic activities such as agriculture, forestry, fisheries, and mining are exposed to the weather and thus vulnerable to climate change”. Explicitly, they are saying that if an activity is exposed to the weather, it is vulnerable to climate change, but if it is not, it is “not really exposed to climate change”. They are equating the climate to the weather.

This is a harsh judgment to pass on academics, who are supposed to have sufficient intellect to not make such mistakes. But there is no other way to make sense of their collective decision to exclude almost 90% of GDP from their enumeration of damages from climate change. Nor is there any other way to interpret the core assumption of their other dominant method of making up numbers for the models, the so-called “statistical” or “cross-sectional” method.

The “Statistical approach”

While locating the fundamental flaw in the “enumeration” approach took some additional research, the flaw in the statistical approach was obvious in the first reference I read on it, Richard Tol’s much-corrected (Tol 2014) and much-criticised paper (Gelman 2014; Gelman 2015; Nordhaus and Moffat 2017, p. 10; Gelman 2019), “The Economic Effects of Climate Change”:

An alternative approach, exemplified in Mendelsohn’s work (Mendelsohn, Morrison et al. 2000; Mendelsohn, Schlesinger et al. 2000) can be called the statistical approach. It is based on direct estimates of the welfare impacts, using observed variations (across space within a single country) in prices and expenditures to discern the effect of climate. Mendelsohn assumes that the observed variation of economic activity with climate over space holds over time as well; and uses climate models to estimate the future effect of climate change. (Tol 2009, p. 32)

If the methodological fallacy in this reasoning is not immediately apparent—bearing in mind that numerous academic referees have let pass papers making this assumption—think what it would mean if this assumption were correct.

Within the United States, it is generally true that very hot and very cold regions have a lower level of per capita income than median temperature regions. Using the States of the contiguous continental USA for those regions, Florida (average temperature 22.5°C) and North Dakota (average temperature 4.7°C), for example, have lower per capita incomes than New York (average temperature 7.4°C). But the difference in average temperatures is far from the only reason for differences in income, and in the greater scheme of things, the differences are trivial anyway: as American States, at the global level they are all in the high per capita income range (respectively $26,000, $26,700 and $43,300 per annum in 2000 US dollars). A statistical study of the relationship between “Gross State Product” (GSP) per capita and temperature will therefore find a weak, nonlinear relationship, with GSP per capita rising from low temperatures, peaking at medium ones, and falling at higher temperatures.

If you then assume that this same relationship between GDP and temperature will apply as global temperatures rise with Global Warming, you will conclude that Global Warming will have a trivial impact on global GDP. Your conclusion is your assumption.

This is illustrated by Figure 2, which shows a scatter plot of deviations from the national average temperature by State in °C, against the deviations of Gross State Product per capita from the national average GDP per capita, in percent (the source data is in Table 4), together with a quadratic fit to this data, which has a coefficient of -0.00318 on the quadratic term and, as expected, a weak correlation coefficient of 0.31.


Figure 2: Correlation of temperature and USA Gross State Product per capita

This regression thus yields a very poor, but not entirely useless, “in-sample” model of how temperature deviations from the USA average affect deviations from average US GDP per capita today:

\[
\Delta \mathrm{GSP} \approx -0.00318 \times \left(\Delta T\right)^{2}
\]

In words, this asserts that Gross State Product per capita falls by 0.318% (of the national average GDP per capita) for every 1°C difference in temperature (from the national average temperature) squared.
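As a minimal sketch, this in-sample model is a one-line function (the -0.00318 coefficient is the quadratic fit described above; the 5°C input is purely hypothetical):

```python
def gsp_deviation(temp_dev_c: float, coeff: float = -0.00318) -> float:
    """Predicted deviation of Gross State Product per capita from the
    national average (as a fraction), given a State's temperature
    deviation from the national average in degrees Celsius."""
    return coeff * temp_dev_c ** 2

# A hypothetical State 5°C warmer (or colder) than the national average:
print(gsp_deviation(5.0))  # -0.00318 * 25 = -0.0795, i.e. ~8% below average
```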

An absurd “out of sample” policy recommendation from this model would be that the US’s GDP would increase if hotter and colder States could move towards the average temperature for the USA. This absurd recommendation could be “refined” by using this same data to calculate the optimum temperature for the USA’s GDP, and then proposing that all States move to that temperature. Of course, these “policies” are clearly impossible, simply because the States can’t change their location on the planet.

However, the economists doing these studies reasoned that Global Warming would achieve the same result over time (with the drawback that it would be applied equally to all regions). So they did indeed calculate optimum temperatures for each of the sectors they expected to be affected by climate change—and their calculations excluded the same list of sectors that the “enumeration” approach assumed would be unaffected (manufacturing, mining, services, etc.):

Both the reduced-form and cross-sectional response functions imply that the net productivity of sensitive economic sectors is a hill-shaped function of temperature (Mendelsohn, Schlesinger et al. 2000). Warming creates benefits for countries that are currently on the cool side of the hill and damages for countries on the warm side of the hill. The exact optimum temperature varies by sector. For example, according to the Ricardian model, the optimum temperatures for agriculture, forestry, and energy are 14.2, 14.8 and 8.6°C, respectively. With the reduced form model, the optimum temperatures for agriculture and energy are 11.7 and 10.0. (Mendelsohn, Morrison et al. 2000, p. 558)

They then estimated the impact on GDP of increasing global temperatures, assuming that the same coefficients they found for the relationships between temperature and output today (using what Tol called “the statistical” and Mendelsohn called the “cross-sectional” approach) could be used to estimate the impact of global warming. This resulted in more than one study concluding that increasing global temperatures via global warming would be beneficial to the economy. Here, for example, is Mendelsohn, Schlesinger et al. on the impact of a 2.5°C increase in global temperatures:

Compared to the size of the economy in 2100 ($217 trillion), the market effects are small… The Cross-sectional climate-response functions imply a narrower range of impacts across GCMs: from $97 to $185 billion of benefits with an average of $145 billion of benefits a year. (Mendelsohn, Schlesinger et al. 2000, p. 41. Italics added)

Once more, the explicit assumption these economists are making is that it doesn’t matter how you alter temperature. Whether this is hypothetically done by altering a region’s location on the planet—which is impossible—or by altering the temperature of the entire planet—which is what climate change is doing—they assumed that the impact on GDP would be the same.

Expert Opinions—Real and Imagined

Nordhaus conducted the only two surveys of “expert opinions” to estimate the impact of global warming on GDP, in 1994 (Nordhaus 1994), and 2017 (Nordhaus and Moffat 2017). The former asked people from various academic backgrounds to give their estimates of the impact on GDP of three global warming scenarios: (A) a 3°C rise by 2090; (B) a 6°C rise by 2175; and (C) a 6°C rise by 2090. The numbers used by the IPCC from this study in Figure 1 were a 3°C temperature rise for a 3.6% fall in GDP.

Expert surveys are a valid procedure for aggregating knowledge in areas that require a large number of disparate fields to be brought together, as the climate scientist Tim Lenton and co-authors explained in their paper “Tipping elements in the Earth’s climate system” (Lenton, Held et al. 2008):

formal elicitations of expert beliefs have frequently been used to bring current understanding of model studies, empirical evidence, and theoretical considerations to bear on policy-relevant variables. From a natural science perspective, a general criticism is that expert beliefs carry subjective biases and, moreover, do not add to the body of scientific knowledge unless verified by data or theory. Nonetheless, expert elicitations, based on rigorous protocols from statistics and risk analysis, have proved to be a very valuable source of information in public policymaking. It is increasingly recognized that they can also play a valuable role for informing climate policy decisions. (Lenton, Held et al. 2008, p. 1791)

I cite this paper in contrast to Nordhaus’s here for two reasons: (1) it shows how expert opinion surveys should be conducted; (2) Nordhaus later cites this survey in support of his use of a “damage function” for climate change which lacks tipping points, when this survey explicitly rejects such functions.

Lenton et al.’s survey was sent to 193 scientists, of whom 52 responded. Respondents were specifically instructed to stick to their area of knowledge, rather than to speculate more broadly: “Participants were encouraged to remain in their area of expertise” (Lenton, Held et al. 2008, p. 10). These are listed in Table 2.

Table 2: Fields of expertise for experts surveyed in (Lenton, Held et al. 2008); abridged from Table 1 in (Lenton, Held et al. 2008, p. 10)

Nordhaus’s survey began with a letter requesting 22 people to participate, 18 of whom fully complied, and one partially. Nordhaus describes them as including 10 economists, 4 “other social scientists”, and 5 “natural scientists and engineers”, but also describes eight of the economists as coming from “other subdisciplines of economics (those whose principal concerns lie outside environmental economics)” (Nordhaus 1994, p. 48)—which ipso facto should rule them out from taking part in this expert survey in the first place.

One of them was Larry Summers—who is probably the source of the choicest quotes in the paper, such as “For my answer, the existence value [of species] is irrelevant—I don’t care about ants except for drugs” (Nordhaus 1994, p. 50).

Lenton’s survey combined the expertise of its interviewees in specific fields of climate change to compile a list of large elements of the planet’s climate system (>1,000km in extent) whose tipping could be triggered by increases in global temperature of between 0.5°C (disappearance of Arctic summer sea ice) and 6°C (amplified El Niño causing drought in Southeast Asia and elsewhere), on timescales varying from 10 years (Arctic summer sea ice) to 300 years (West Antarctic Ice Sheet disintegration) (Lenton, Held et al. 2008, p. 1788).

Nordhaus’s survey was summarised by a superficially bland pair of numbers—a 3°C temperature rise and a 3.6% fall in GDP—but that summary hides far more than it reveals. There was extensive disagreement, well documented by Nordhaus, between the relatively tiny cohort of actual scientists surveyed and the economists, in particular those “whose principal concerns lie outside environmental economics”. The quotes from the economists surveyed also reveal the source of the predisposition by economists in general to dismiss the significance of climate change.

As Nordhaus noted, “Natural scientists’ estimates [of the damages from climate change] were 20 to 30 times higher than mainstream economists'” (Nordhaus 1994, p. 49). The average estimate by “Non-environmental economists” (Nordhaus 1994, Figure 4, p. 49) of the damages to GDP from a 3°C rise by 2090 was 0.4% of GDP; the average for natural scientists was 12.3%, and this was with one of them refusing to answer Nordhaus’s key questions:

Also, although the willingness of the respondents to hazard estimates of subjective probabilities was encouraging, it should be emphasized that most respondents proffered these estimates with reservations and a recognition of the inherent difficulty of the task. One respondent (19), however, was a holdout from such guesswork, writing:

I must tell you that I marvel that economists are willing to make quantitative estimates of economic consequences of climate change where the only measures available are estimates of global surface average increases in temperature. As [one] who has spent his career worrying about the vagaries of the dynamics of the atmosphere, I marvel that they can translate a single global number, an extremely poor surrogate for a description of the climatic conditions, into quantitative estimates of impacts of global economic conditions. (Nordhaus 1994, pp. 50-51)

Comments from economists lay at the other end of the spectrum from this self-absented scientist. Because they had a strong belief in the ability of “human societies” to adapt—born of their acceptance of the Neoclassical model of capitalism, in which “the economy” always returns to equilibrium after an “exogenous shock”—they could not imagine that climate change itself could do significant damage to the economy, whatever it might do to the biosphere itself:

One respondent suggested whimsically that it was hardly surprising, given that the economists know little about the intricate web of natural ecosystems, whereas natural scientists know equally little about the incredible adaptability of human societies

There is a clear difference in outlook among the respondents, depending on their assumptions about the ability of society to adapt to climatic changes. One was concerned that society’s response to the approaching millennium would be akin to that prevalent during the Dark Ages, whereas another respondent held that the degree of adaptability of human economies is so high that for most of the scenarios the impact of global warming would be “essentially zero”.

An economist explains that in his view energy and brain power are the only limits to growth in the long run, and with sufficient quantities of these it is possible to adapt or develop new technologies so as to prevent any significant economic costs. (Nordhaus 1994, pp. 48-49. All emphases added)

Given this extreme divergence of opinion between economists and scientists, one might imagine that Nordhaus’s next survey would examine the reasons for it. In fact, the opposite applied: his methodology excluded non-economists entirely.

Rather than a survey of experts, this was a literature survey (Nordhaus and Moffat 2017), which is another legitimate method of providing data on a topic that is difficult to measure and subject to high uncertainty. He and his co-author searched for relevant articles using the string “(damage OR impact) AND climate AND cost” (Nordhaus and Moffat 2017, p. 7), which is reasonable, if rather too broad (as they themselves admit in the paper).

The key flaw in this research was where they looked: they executed their search string in Google, which returned 64 million results; in Google Scholar, which returned 2.8 million; and in the economics-specific database Econlit, which returned just 1,700 studies. On the grounds that there were too many results in Google and Google Scholar, they ignored those, and simply surveyed the 1,700 articles in Econlit (Nordhaus and Moffat 2017, p. 7). These are, almost exclusively, articles written by economists.

Nordhaus and Moffat read the abstracts of these 1,700 papers to rule out all but 24 from consideration. Reading those 24 papers left just 11 that they included in their survey results. They supplemented this “systematic research synthesis (SRS)” with:

a second approach, known as a “non-systematic research summary.” In this approach, the universe of studies was selected by a combination of formal and informal methods, such as the SRS above, the results of the Tol survey, and other studies that were known to the researchers. (Nordhaus and Moffat 2017, p. 8)

Their labours resulted in the addition of just five studies which had not been used either by the IPCC or by Tol in his aggregation papers (Tol 2009; Tol 2018; Tol 2018), with an additional 6 results, and 4 additional authors—Cline, Dellink, Kemfert and Hambel—who had not already been cited in the empirical estimates literature (though Cline was one of Nordhaus’s interviewees in his 1994 survey).

Remarkably, given that Nordhaus was the lead author of this study, one of the previously unused studies was by Nordhaus himself in 2010 (Nordhaus 2010). (Nordhaus and Moffat 2017) does not provide details of this paper, or of any other paper they uncovered, but I presume it is (Nordhaus 2010), given the date, and the fact that the temperature and damage estimates given in it—a 3.4°C increase in temperature causing a 2.8% fall in GDP—are identical to those given in this paper’s Table 2.

It may seem strange that Nordhaus did not notice that a paper by himself, estimating the damages from climate change, was not included in previous studies. But in fact, there is a good reason for this omission: (Nordhaus 2010) was not an enumerative study, nor a statistical one, let alone the results of an “expert elicitation”, but the output of a run of Nordhaus’s own “Integrated Assessment Model” (IAM), DICE! Treating this as a “data point” is using an output of a model to calibrate the model itself. Nonetheless, these numbers—and the five additional pairs from the four additional studies uncovered by their survey—were added to the list of numbers from which economists like Nordhaus could calibrate what they call their “damage functions”.

Damage Functions

“Damage functions” are the way in which Neoclassical economists connect scientists’ estimates of the change in global temperature to their own—as shown in previous sections, utterly unsound—estimates of future GDP, given this change in temperature. They reduce GDP from what they claim it would have been in the total absence of climate change, to what they claim it will be, given different levels of temperature rise. The form these damage functions take is normally a simple quadratic:

\[
D(T) = a + b\,T + c\,T^{2}
\]

where \(D(T)\) is the fraction by which GDP is reduced when global temperature is \(T\)°C above pre-industrial levels.

Nordhaus justifies using a quadratic to describe a process as inherently discontinuous as climate change by misrepresenting the scientific literature—specifically, the careful survey of expert opinions carried out by Lenton et al. (Lenton, Held et al. 2008) and contrasted earlier to Nordhaus’s survey of largely non-experts (Nordhaus 1994). Nordhaus makes the following statement in his DICE manual, and repeats it in (Nordhaus and Moffat 2017, p. 35):

The current version assumes that damages are a quadratic function of temperature change and does not include sharp thresholds or tipping points, but this is consistent with the survey by Lenton et al. (2008) (Nordhaus and Sztorc 2013, p. 11. Emphasis added)

In The Climate Casino (Nordhaus 2013), Nordhaus states that:

There have been a few systematic surveys of tipping points in earth systems. A particularly interesting one by Lenton and colleagues examined the important tipping elements and assessed their timing… Their review finds no critical tipping elements with a time horizon less than 300 years until global temperatures have increased by at least 3°C. (Nordhaus 2013, p. 60)

These claims can only be described as blatant misrepresentations of “Tipping elements in the Earth’s climate system” (Lenton, Held et al. 2008). The very first element in the summary table of their findings meets two of the three criteria that Nordhaus claimed were not met: the loss of Arctic summer sea-ice could be triggered by global warming of between 0.5 and 2°C, and in a time span measured in decades—see Figure 3.

Figure 3: An extract from Table 1 of “Tipping elements in the Earth’s climate system”,(Lenton, Held et al. 2008, p. 1788)

Nordhaus justifies his omission of Arctic summer sea ice in his table N1 (Nordhaus 2013, p. 333) via a column headed “Level of concern (most concern = ***)”, where it receives the lowest ranking (*)—thus apparently justifying his statement that there was “no critical tipping point” in less than 300 years, and with less than a 3°C temperature increase.

However, no such column exists in Table 1 of Lenton, Held et al. (2008), while their discussion of the ranking of threats puts Arctic summer sea ice first, not last:

We conclude that the greatest (and clearest) threat is to the Arctic with summer sea-ice loss likely to occur long before (and potentially contribute to) GIS melt (Lenton, Held et al. 2008, pp. 1791-92. Emphasis added).

Their treatment of time also differs substantially from that implied by Nordhaus, which is that decisions about tipping elements with time horizons of several centuries can be left for decision makers several centuries hence. While Lenton et al. do give a timeframe of more than 300 years for the complete melting of the Greenland Ice Sheet (GIS), for example, they note that they focused on tipping elements whose fate would be decided this century:

Thus, we focus on the consequences of decisions enacted within this century that trigger a qualitative change within this millennium, and we exclude tipping elements whose fate is decided after 2100. (Lenton, Held et al. 2008, p. 1787)

Thus, while the GIS might not melt completely for several centuries, the human actions that will decide whether that happens or not will be taken in this century, not in several hundred years from now.

Finally, the paper’s conclusion began with the warning that smooth functions should not be used, noted that discontinuous climate tipping points were likely to be triggered this century, and reiterated that the greatest threats were Arctic summer sea ice and Greenland:

Conclusion
Society may be lulled into a false sense of security by smooth projections of global change. Our synthesis of present knowledge suggests that a variety of tipping elements could reach their critical point within this century under anthropogenic climate change. The greatest threats are tipping the Arctic sea-ice and the Greenland ice sheet, and at least five other elements could surprise us by exhibiting a nearby tipping point. (Lenton, Held et al. 2008, p. 1792. Emphasis added)

There is thus no empirical or scientific justification for choosing a quadratic to represent damages from climate change—the opposite in fact applies. Regardless, this is the function that Nordhaus ultimately adopted. Given this assumed functional form, the only unknowns are the values of the coefficients a, b and c in the damage function.

Ever since Nordhaus started using a quadratic, he has consistently reduced the value of its parameters, from an initial 0.0035 for the quadratic term—which means that global warming is assumed to reduce GDP by 0.35% times the square of the temperature change over pre-industrial levels—to a final value of 0.00227. The source documents are (Nordhaus and Sztorc 2013, pp. 83, 86, 91 & 97 for the 1992, 1999, 2008 and 2013 versions of DICE; Nordhaus 2017, p. 1 for 2017; Nordhaus 2018, p. 345 for 2018):

\[
D(T) = 0.00227 \times T^{2}
\]

This progressively reduced his already trivial predictions of damage to GDP from global warming. For example, his prediction for the impact on GDP of a 4°C increase in temperature—the level he describes as optimal in his “Nobel Prize” lecture, since according to his model it minimises the joint costs of damage and abatement (Nordhaus 2018, Slides 6 & 7)—was reduced from a 7% fall in 1992 to a 3.6% fall in 2018 (see Figure 4).

Figure 4: How low can you go? Nordhaus’s downward revisions to his damage function

I now turn to doing what Nordhaus himself said a scientist should do, when deriding Forrester’s model—”require empirical validation of either the assumptions or the predictions of the model before declaring its truth content” (Nordhaus 1973, p. 1183). This is clearly something neither Nordhaus nor other Neoclassical climate change economists did themselves—apart from the honourable mentions noted earlier.

Deconstructing Neoclassical Delusions: GDP and Energy

Nordhaus justified the assumption that 87% of GDP will be unaffected by climate change on the basis that:

for the bulk of the economy—manufacturing, mining, utilities, finance, trade, and most service industries—it is difficult to find major direct impacts of the projected climate changes over the next 50 to 75 years. (Nordhaus 1991, p. 932)

In fact, a direct effect can easily be identified by surmounting the failure of economists in general—not just Neoclassicals—to appreciate the role of energy in production. Almost all economic models use production functions that assume that “Labour” and “Capital” are all that are needed to produce “Output”. However, neither Labour nor Capital can function without energy inputs: “to coin a phrase, labour without energy is a corpse, while capital without energy is a sculpture” (Keen, Ayres et al. 2019, p. 41). Energy is directly needed to produce GDP, and therefore if energy production has to fall because of global warming, then so will GDP.

The only question is how much, and the answer, given our dependence on fossil fuels, is a lot. Unlike the trivial correlation between local temperature and local GDP used by Nordhaus and colleagues in the “statistical” method, the correlation between global energy production and global GDP is overwhelmingly strong. A simple linear regression between energy production and GDP has a correlation coefficient of 0.997—see Figure 5.

Figure 5: Energy determines GDP

GDP in turn determines excess CO2 in the atmosphere. A linear regression between GDP and CO2 has a correlation coefficient of 0.998—see Figure 6.

Figure 6: Without significant de-carbonization, GDP determines CO2

Lastly, CO2 very tightly determines the temperature excess over pre-industrial levels. A linear regression between CO2 and the Global Temperature Anomaly has a correlation coefficient of 0.992 using smoothed data (which excludes the effect of non-CO2 fluctuations such as El Niño).
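For readers who want to reproduce this chain of fits, the correlation coefficients reported here are ordinary Pearson r values from linear regressions. A minimal sketch, using made-up placeholder series rather than the actual energy, GDP and CO2 data behind Figures 5 to 7:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Placeholder series standing in for global energy production and GDP;
# the real data (and the 0.997 figure) come from the sources in the text.
energy = [400, 420, 445, 470, 500, 530, 560]   # hypothetical EJ/year
gdp = [50, 53, 56, 60, 64, 68, 72]             # hypothetical trillion $
print(round(pearson_r(energy, gdp), 3))
```

Any two near-linear, co-trending series produce an r close to 1 this way; the point of Figures 5 to 7 is that the actual global series are this tightly coupled.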

Figure 7: CO2 determines Global Warming

Working in reverse, if climatic changes caused by the increase in global temperature persuade the public and policymakers that we must stop adding CO2 to the atmosphere “now”, whenever “now” may be, then global GDP will fall roughly proportionately to the ratio of fossil-fuel energy production to total energy production at that time.

As of 2020, fossil fuels provided roughly 85% of energy production. So, if 2020 were the year humanity decided that the growth in CO2 had to stop, GDP would fall by on the order of 85%. Even if the very high rate of growth of renewables achieved in 2015—when the ratio of renewables to total energy production was growing at about 3% per annum—were maintained, renewables would still supply less than 40% of total energy production in 2050—see Figure 8. This implies a drop in GDP of about 50% at that time. The decision by Neoclassical climate change economists to exclude “manufacturing, mining, utilities, finance, trade, and most service industries” from any consequences of climate change is thus utterly unjustified.
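The renewables projection is simple compound growth. A sketch, assuming the text's figures: a ~15% non-fossil share in 2020 (the complement of the 85% fossil share), with that share growing at 3% per annum:

```python
# Sketch of the back-of-the-envelope projection in the text: the renewable
# (non-fossil) share of energy production under sustained compound growth.
# The 15% starting share and 3%/year growth rate are the text's figures.
def projected_share(start_share: float, growth_rate: float, years: int) -> float:
    """Share of total energy after compounding growth_rate for `years` years."""
    return start_share * (1.0 + growth_rate) ** years

share_2050 = projected_share(0.15, 0.03, 2050 - 2020)
print(f"{share_2050:.0%}")  # stays under the 40% noted in the text
```

Even three decades of uninterrupted 3% compound growth leaves the renewable share well short of half of total energy production, which is why halting fossil-fuel use at that point would still take most of GDP with it.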

Figure 8: Renewable energy as a percentage of total energy production

Deconstructing Neoclassical Delusions: Statistics

The “cross-sectional approach” of using the coefficients from the geographic temperature:GDP relationship as a proxy for the global temperature:GDP relationship is similarly unjustified. It assumes that it doesn’t matter how one alters temperature: the effect on GDP will be the same. This belief was defended by Tol in an exchange on Twitter between myself, the climate scientist Daniel Swain, and the professor of computational astrophysics Ken Rice on June 17-18, 2019:

Richard Tol:    10K is less than the temperature distance between Alaska and Maryland (about equally rich), or between Iowa and Florida (about equally rich). Climate is not a primary driver of income. https://twitter.com/RichardTol/status/1140591420144869381?s=20

Daniel Swain:    A global climate 10 degrees warmer than present is not remotely the same thing as taking the current climate and simply adding 10 degrees everywhere. This is an admittedly widespread misconception, but arguably quite a dangerous one. https://twitter.com/Weather_West/status/1140670647313584129?s=20

Richard Tol:    That’s not the point, Daniel. We observe that people thrive in very different climates, and that some thrive and others do not in the same climate. Climate determinism therefore has no empirical support. https://twitter.com/RichardTol/status/1140928458853421057?s=20

Richard Tol:    And if a relationship does not hold for climate variations over space, you cannot confidently assert that it holds over time. https://twitter.com/RichardTol/status/1140928893878263808?s=20

Steve Keen:    The cause of variations over space is utterly different to that over time. That they are comparable is the most ridiculous and dangerous “simplifying assumption” in the history of economics. https://twitter.com/ProfSteveKeen/status/1140941982082244608?s=20

Ken Rice:    Can I just clarify. Are you actually suggesting that a 10K rise in global average surface temperature would be manageable? https://twitter.com/theresphysics/status/1140661721633308673?s=20

Richard Tol:    We’d move indoors, much like the Saudis have. https://twitter.com/RichardTol/status/1140669525081415680?s=20

As with the decision to exclude ~90% of GDP from damages from climate change, Tol’s assumed equivalence of climate variations across space with climate change over time ignores the role of energy in causing climate change. This can be illustrated by annotating his third tweet above with the amount of energy needed to bring about a 10°C temperature increase in the atmosphere:

And if a relationship does not hold for climate variations over space [without changing the energy level of the atmosphere], you cannot confidently assert that it holds over time [as the Solar energy retained in the atmosphere rises by more than 50,000 million Terajoules]. (Trenberth 1981)

To put this level of energy in more comprehensible terms, it is the equivalent of 860 million Hiroshima atomic bombs. That amount of additional energy in the atmosphere would lead to sustained “wet bulb” temperatures that would be fatal for humans in the Tropics and much of the sub-tropics (Raymond, Matthews et al. 2020; Xu, Kohler et al. 2020). A 10°C temperature increase is of the order of that which caused the end-Permian extinction event, the most extreme mass extinction in Earth’s history (Penn, Deutsch et al. 2018). It is five times the level of global temperature increase that climate scientists fear could trigger “tipping cascades” that would transform the planet into a “Hothouse Earth” (Steffen, Rockström et al. 2018; Lenton, Rockström et al. 2019), which could potentially be incompatible with human existence:

Hothouse Earth is likely to be uncontrollable and dangerous to many, particularly if we transition into it in only a century or two, and it poses severe risks for health, economies, political stability (especially for the most climate vulnerable), and ultimately, the habitability of the planet for humans. (Steffen, Rockström et al. 2018, p. 8256)
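The Hiroshima comparison above can be checked directly. A sketch assuming a bomb yield of roughly 15 kilotonnes of TNT (published estimates range from about 13 to 16 kt, which is why this lands at the same order of magnitude as, rather than exactly on, the 860 million cited):

```python
# Sketch: 50,000 million terajoules of retained solar energy, expressed in
# Hiroshima-bomb equivalents. The ~15 kt TNT yield is an assumption.
J_PER_TJ = 1e12            # joules per terajoule
J_PER_TONNE_TNT = 4.184e9  # standard TNT energy equivalent

retained_energy_j = 50_000e6 * J_PER_TJ       # 50,000 million TJ, from the text
hiroshima_yield_j = 15e3 * J_PER_TONNE_TNT    # ~15 kt of TNT
bombs = retained_energy_j / hiroshima_yield_j
print(f"~{bombs / 1e6:.0f} million Hiroshima bombs")
```

With a 15 kt yield this comes out near 800 million bombs; a yield toward the lower end of the published range reproduces the 860 million figure.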

It therefore very much does matter how one alters the temperature. At the planetary level, there are 3 main determinants of the temperature at any point on the globe:

  1. Variations in the solar energy reaching the Earth;
  2. Variations in the amount of this energy retained by greenhouse gases; and
  3. Differences in location on the planet—primarily differences in distance from the Equator.

What the “cross-sectional method” did was derive parameters for the third factor, and then simply assume that the same parameters applied to the second. This is comparable to carefully measuring the terrain of a mountain in the North-South direction, and then using that information to advise on the safety of traversing it East to West.

Econometrics before Ecology

This weakness of the “cross-sectional approach” has been admitted in a more recent paper in this tradition:

Firstly, the literature relies primarily on the cross-sectional approach (see, for instance, Sachs and Warner 1997, Gallup et al. 1999, Nordhaus 2006, and Dell et al. 2009), and as such does not take into account the time dimension of the data (i.e., assumes that the observed relationship across countries holds over time as well). (Kahn, Mohaddes et al. 2019, p. 2. Emphasis added)

This promising start was unfortunately neutered by their eventual simple linear extrapolation of the temperature:GDP relationship between 1960 and 2014 forward to 2100:

We start by documenting that the global average temperature has risen by 0.0181 degrees Celsius per year over the last half century… We show that an increase in average global temperature of 0.04°C per year—corresponding to the Representative Concentration Pathway (RCP) 8.5 scenario (see Figure 1), which assumes higher greenhouse gas emissions in the absence of mitigation policies—reduces world’s real GDP per capita by 7.22 percent by 2100. (Kahn, Mohaddes et al. 2019, p. 4)

Their predicted GDP change as a function of temperature change is the shaded region in Figure 9 (which reproduces their Figure 2). The linearity of their projection is evident: it presumes no structural change in the relationship between global temperature and GDP, even as temperature rises by 3.2°C over their 80-year time horizon (0.04°C per year from 2020 to 2100).
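The linearity is also easy to see in the numbers themselves. A sketch of the extrapolation described in the quoted passage:

```python
# Sketch: Kahn and Mohaddes's RCP 8.5 warming path is a constant rate
# applied linearly; no structural change enters the projection.
WARMING_RATE = 0.04   # deg C per year, from the quoted passage
YEARS = 2100 - 2020   # their 80-year horizon

total_warming = WARMING_RATE * YEARS
print(f"{total_warming:.1f} deg C of additional warming by 2100")
```

A constant per-year rate multiplied over the horizon is the entire climate dynamics of the projection: 3.2°C of warming, with GDP damage assumed to scale smoothly all the way.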

Figure 9: Kahn and Mohaddes’s linear extrapolation of the temperature:GDP relationship from 1960-2014 out till 2100 (Kahn, Mohaddes et al. 2019, p. 6)

The failure of this paper to account for the obvious discontinuities that such a temperature increase will cause in the planet’s climate was acknowledged by one of the authors on Twitter on October 31st 2019:

Kamiar Mohaddes:    I also want to be clear that we cannot, and do not, claim that our empirical analysis allows for rare disaster events, whether technological or climatic, which is likely to be an important consideration. From this perspective, the counterfactual outcomes that we discuss… in Section 4 of the paper (see: https://ideas.repec.org/p/cam/camdae/1965.html) should be regarded as conservative because they only consider scenarios where the climate shocks are Gaussian, without allowing for rare disasters. https://twitter.com/KamiarMohaddes/status/1189846383307694084?s=20 ; https://twitter.com/KamiarMohaddes/status/1189846648366796800?s=20

Steve Keen:    Kamiar, the whole point of #GlobalWarming is that it shifts the entire distribution. What is “rare” in our current climate—like for example the melting of Greenland—becomes a certainty at higher temperatures. https://twitter.com/ProfSteveKeen/status/1189849936290029569?s=20

What Mohaddes called “rare disaster events”—such as, for example, the complete disappearance of Arctic sea ice during summer—would indeed be rare at our current global temperature. But they become certainties as the temperature rises another 3°C (Steffen, Rockström et al. 2018, Figure 3, p. 8255). Such a forecast is as useful as a study of the relationship between temperature and speed skating which concludes that it would be advantageous to increase the temperature of the ice from -2°C to +2°C.

This recent paper alerted me to one potentially promising study I had previously missed: the significant outlier in Figure 9 by Burke et al. (Burke, Hsiang et al. 2015). Its estimate was at least outside the economists’ ballpark, if still far from that of scientists like Steffen, who expect a 4°C increase in temperature to lead to the collapse of civilisation (Moses 2020).

As its title “Global non-linear effect of temperature on economic production” implies, it did at least consider nonlinearities. But once again, these were restricted to nonlinearities in the temperature:GDP relationship between 1960 and 2010, which was then extrapolated to a future planet with a vastly different climate:

We quantify the potential impact of warming on national and global incomes by combining our estimated non-linear response function with ‘business as usual’ scenarios of future warming and different assumptions regarding future baseline economic and population growth. This approach assumes future economies respond to temperature changes similarly to today’s economies—perhaps a reasonable assumption given the observed lack of adaptation during our 50-year sample… climate change reduces projected global output by 23% in 2100 relative to a world without climate change, although statistical uncertainty allows for positive impacts with probability 0.29 (Burke, Hsiang et al. 2015, pp. 237-38. Emphasis added)

As applies to so much of this research, these two recent papers show the authors delighting in the ecstasy of econometrics, while failing to appreciate the irrelevance of their framework to the question at hand.

GIGO: Garbage In, Garbage Out

When I began this research, I expected that the main cause of Nordhaus’s extremely low predictions of damages from climate change would be the application of a very high discount rate to climate damages estimated by scientists, and that a full critique of his work would require explaining why an equilibrium-based Neoclassical model like DICE was the wrong tool to analyse something as dynamic and far from equilibrium as climate change (DeCanio 2003). Instead, I found that the computing adage “Garbage In, Garbage Out” (GIGO) applied: it does not matter how good or how bad the actual model is, when it is fed “data” like that concocted by Nordhaus and his coterie of like-minded Neoclassical economists. The numerical estimates to which they fitted their inappropriate models are, as shown here, utterly unrelated to the phenomenon of global warming. Even an appropriate model of the relationship between climate change and GDP would return garbage predictions if it were trained on “data” like this.

This raises the key question: how did such transparently inadequate work get past academic referees?

Simplifying Assumptions and the Refereeing Process: the Poachers become the Gatekeepers

One undeniable reason why this research agenda was not drowned at birth was the proclivity of Neoclassical economists to make assumptions on which their conclusions depend, and then to dismiss any objections to those assumptions on the grounds that they are merely “simplifying assumptions”.

As Paul Romer observed, the standard justification for this is “Milton Friedman’s (1953) methodological assertion from unnamed authority that ‘the more significant the theory, the more unrealistic the assumptions’” (Romer 2016, p. 5). Those who make this defence do not seem to have noticed Friedman’s footnote that “The converse of the proposition does not of course hold: assumptions that are unrealistic (in this sense) do not guarantee a significant theory” (Friedman 1953, p. 14).

A simplifying assumption is something which, if it is violated, makes only a small difference to your analysis. Musgrave points out that “Galileo’s assumption that air-resistance was negligible for the phenomena he investigated was a true statement about reality, and an important part of the explanation Galileo gave of those phenomena” (Musgrave 1990, p. 380). However, the kinds of assumptions that Neoclassical economists frequently make are ones where, if the assumption is false, the theory itself is invalidated (Keen 2011, pp. 158-174).

This is clearly the case with the core assumptions of Nordhaus and his Neoclassical colleagues. If activities that occur indoors are, in fact, subject to climate change, and if temperature:GDP relationships across space cannot be used as proxies for the impact of global warming on GDP over time, then their conclusions are completely false. Climate change will be at least one order of magnitude more damaging to the economy than their numbers imply—working solely from the spurious assumption that 90% of the economy will be unaffected by it. It could be far, far worse.

Unfortunately, referees who accept Friedman’s dictum that “a theory cannot be tested by the ‘realism’ of its ‘assumptions’” (Friedman 1953, p. 23) are unlikely to reject a paper because of its assumptions, especially if that paper otherwise makes assumptions that Neoclassical economists accept. Thus, Nordhaus’s initial sorties in this area received a free pass.

After this, a weakness of the refereeing process took over. As any published academic knows, once you are published in an area, you will be selected by journal editors as a referee for that area. Thus, rather than peer review providing an independent check on the veracity of research, it can enforce a hegemony. As one of the first of the very few Neoclassical economists to work on climate change, and the first to proffer empirical estimates of the damages to the economy from it, Nordhaus was in a position both to frame the debate and to play the role of gatekeeper. One can surmise that he relishes this role, given not only his attacks on Forrester and The Limits to Growth (Meadows, Randers et al. 1972; Nordhaus 1973; Nordhaus 1992), but also his attack on his fellow Neoclassical economist Nicholas Stern for using a low discount rate in The Stern Review (Nordhaus 2007; Stern 2007).

The product has been a degree of conformity in this community that even Tol acknowledged:

it is quite possible that the estimates are not independent, as there are only a relatively small number of studies, based on similar data, by authors who know each other well… although the number of researchers who published marginal damage cost estimates is larger than the number of researchers who published total impact estimates, it is still a reasonably small and close-knit community who may be subject to group-think, peer pressure, and self-censoring. (Tol 2009, pp. 37, 42-43)

Indeed.

Conclusion: Drastically underestimating damages from Global Warming

Were climate change an effectively trivial area of public policy, then the appallingly bad work done by Neoclassical economists on it would not matter greatly. It could be treated, like the intentional Sokal hoax (Sokal 2008), as merely a salutary tale about the foibles of the Academy.

But the impacts of climate change upon the economy, human society, and the viability of the Earth’s biosphere in general are matters of the greatest importance. That work this bad has been done, and been taken seriously, is therefore not merely an intellectual travesty like the Sokal hoax. If climate change does lead to the catastrophic outcomes that some scientists now openly contemplate (Kulp and Strauss 2019; Lenton, Rockström et al. 2019; Wang, Jiang et al. 2019; Yumashev, Hope et al. 2019; Lynas 2020; Moses 2020; Raymond, Matthews et al. 2020; Xu, Kohler et al. 2020), then these Neoclassical economists will be complicit in causing the greatest crisis, not merely in the history of capitalism, but potentially in the history of life on Earth.

Appendix

Table 3: Table SM10-1, p. SM10-4 of “Key economic sectors and services”, plus other studies by economists

Table 4: USA average temperature, GDP/GSP and Population data

References

Arent, D. J., R. S. J. Tol, E. Faust, J. P. Hella, S. Kumar, K. M. Strzepek, F. L. Tóth, and D. Yan (2014). Key economic sectors and services – supplementary material. Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. C. B. Field, V. R. Barros, D. J. Dokken et al.

Arent, D. J., R. S. J. Tol, et al. (2014). Key economic sectors and services. Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. C. B. Field, V. R. Barros, D. J. Dokken et al. Cambridge, United Kingdom, Cambridge University Press: 659-708.

Bosello, F., F. Eboli, et al. (2012). “Assessing the economic impacts of climate change.” Review of Energy Environment and Economics 1(9).

Burke, M., S. M. Hsiang, et al. (2015). “Global non-linear effect of temperature on economic production.” Nature 527(7577): 235.

Cline, W. (1996). “The impact of global warming on agriculture: Comment.” The American Economic Review 86(5): 1309-1311.

Darwin, R. (1999). “The Impact of Global Warming on Agriculture: A Ricardian Analysis: Comment.” American Economic Review 89(4): 1049-1052.

DeCanio, S. J. (2003). Economic Models of Climate Change: A Critique. New York, Palgrave Macmillan.

Fankhauser, S. (1995). Valuing Climate Change: The Economics of the Greenhouse. London, Earthscan.

Field, C. B., V. R. Barros, et al. (2014). Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom, Cambridge University Press.

Forrester, J. W. (1971). World Dynamics. Cambridge, MA, Wright-Allen Press.

Forrester, J. W. (1973). World Dynamics. Cambridge, MA, Wright-Allen Press.

Friedman, M. (1953). The Methodology of Positive Economics. Essays in Positive Economics. Chicago, University of Chicago Press: 3-43.

Gelman, A. (2014). “A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, ‘The Economic Effects of Climate Change’.” Statistical Modeling, Causal Inference, and Social Science. https://statmodeling.stat.columbia.edu/2014/05/27/whole-fleet-gremlins-looking-carefully-richard-tols-twice-corrected-paper-economic-effects-climate-change/.

Gelman, A. (2015). “More gremlins: ‘Instead, he simply pretended the other two estimates did not exist. That is inexcusable.’.” Statistical Modeling, Causal Inference, and Social Science. https://statmodeling.stat.columbia.edu/2015/07/23/instead-he-simply-pretended-the-other-two-estimates-did-not-exist-that-is-inexcusable/.

Gelman, A. (2019). “The climate economics echo chamber: Gremlins and the people (including a Nobel prize winner) who support them.” https://statmodeling.stat.columbia.edu/2019/11/01/the-environmental-economics-echo-chamber-gremlins-and-the-people-including-a-nobel-prize-winner-who-support-them/.

Hope, C. (2006). “The marginal impact of CO2 from PAGE2002: an integrated assessment model incorporating the IPCC’s five Reasons for Concern.” Integrated Assessment 6(1): 19-56.

Kahn, M. E., K. Mohaddes, et al. (2019). “Long-Term Macroeconomic Effects of Climate Change: A Cross-Country Analysis.” DOI: https://doi.org/10.24149/gwp365.

Kaufmann, R. K. (1997). “Assessing The Dice Model: Uncertainty Associated With The Emission And Retention Of Greenhouse Gases.” Climatic Change 35(4): 435-448.

Kaufmann, R. K. (1998). “The impact of climate change on US agriculture: a response to Mendelssohn et al. (1994).” Ecological Economics 26(2): 113-119.

Keen, S. (2011). Debunking Economics: The Naked Emperor Dethroned? London, Zed Books.

Keen, S., R. U. Ayres, et al. (2019). “A Note on the Role of Energy in Production.” Ecological Economics 157: 40-46.

Kulp, S. A. and B. H. Strauss (2019). “New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding.” Nature Communications 10(1): 4844.

Lenton, T., J. Rockström, et al. (2019). “Climate tipping points – too risky to bet against.” Nature 575(7784): 592-595.

Lenton, T. M., H. Held, et al. (2008). “Supplement to Tipping elements in the Earth’s climate system.” Proceedings of the National Academy of Sciences 105(6).

Lenton, T. M., H. Held, et al. (2008). “Tipping elements in the Earth’s climate system.” Proceedings of the National Academy of Sciences 105(6): 1786-1793.

Lynas, M. (2020). Our Final Warning: Six Degrees of Climate Emergency. London, HarperCollins Publishers.

Maddison, D. (2003). “The amenity value of the climate: the household production function approach.” Resource and Energy Economics 25(2): 155-175.

Maddison, D. and K. Rehdanz (2011). “The impact of climate on life satisfaction.” Ecological Economics 70(12): 2437-2445.

Meadows, D. H., J. Randers, et al. (1972). The Limits to Growth. New York, Signet.

Mendelsohn, R., W. Morrison, et al. (2000). “Country-Specific Market Impacts of Climate Change.” Climatic Change 45(3): 553-569.

Mendelsohn, R., M. Schlesinger, et al. (2000). “Comparing impacts across climate models.” Integrated Assessment 1(1): 37-48.

Mirowski, P. (2020). The Neoliberal Ersatz Nobel Prize. Nine Lives of Neoliberalism. D. Plehwe, Q. Slobodian and P. Mirowski. London, Verso: 219-254.

Moses, A. (2020). “‘Collapse of civilisation is the most likely outcome’: top climate scientists.” Voice of Action. Melbourne, Australia.

Musgrave, A. (1990). ‘Unreal Assumptions’ in Economic Theory: The F-Twist Untwisted. Milton Friedman: Critical Assessments. Volume 3. J. C. Wood and R. N. Woods. London and New York, Routledge: 333-342.

Nordhaus, W. (2007). “Critical Assumptions in the Stern Review on Climate Change.” Science 317(5835): 201-202.

Nordhaus, W. (2008). A Question of Balance. New Haven, CT, Yale University Press.

Nordhaus, W. (2013). The Climate Casino: Risk, Uncertainty, and Economics for a Warming World. New Haven, CT, Yale University Press.

Nordhaus, W. (2018). “Nobel Lecture in Economic Sciences. Climate Change: The Ultimate Challenge for Economics.” https://www.nobelprize.org/uploads/2018/10/nordhaus-slides.pdf.

Nordhaus, W. (2018). “Projections and Uncertainties about Climate Change in an Era of Minimal Climate Policies.” American Economic Journal: Economic Policy 10(3): 333-360.

Nordhaus, W. and J. G. Boyer (2000). Warming the World: Economic Models of Global Warming. Cambridge, Massachusetts, MIT Press.

Nordhaus, W. and P. Sztorc (2013). DICE 2013R: Introduction and User’s Manual.

Nordhaus, W. D. (1973). “World Dynamics: Measurement Without Data.” The Economic Journal 83(332): 1156-1183.

Nordhaus, W. D. (1991). “To Slow or Not to Slow: The Economics of The Greenhouse Effect.” The Economic Journal 101(407): 920-937.

Nordhaus, W. D. (1992). “Lethal Model 2: The Limits to Growth Revisited.” Brookings Papers on Economic Activity (2): 1-43.

Nordhaus, W. D. (1994). “Expert Opinion on Climatic Change.” American Scientist 82(1): 45-51.

Nordhaus, W. D. (1994). Managing the Global Commons: The Economics of Climate Change. Cambridge, MA, MIT Press.

Nordhaus, W. D. (2006). “Geography and macroeconomics: New data and new findings.” Proceedings of the National Academy of Sciences of the United States of America 103(10): 3510-3517.

Nordhaus, W. D. (2010). “Economic aspects of global warming in a post-Copenhagen environment.” Proceedings of the National Academy of Sciences of the United States of America 107(26): 11721-11726.

Nordhaus, W. D. (2017). “Revisiting the social cost of carbon: Supporting Information.” Proceedings of the National Academy of Sciences 114(7): 1-2.

Nordhaus, W. D. and A. Moffat (2017). A Survey Of Global Impacts Of Climate Change: Replication, Survey Methods, And A Statistical Analysis. New Haven, Connecticut, Cowles Foundation. Discussion Paper No. 2096.

Nordhaus, W. D. and Z. Yang (1996). “A Regional Dynamic General-Equilibrium Model of Alternative Climate-Change Strategies.” The American Economic Review 86(4): 741-765.

Penn, J. L., C. Deutsch, et al. (2018). “Temperature-dependent hypoxia explains biogeography and severity of end-Permian marine mass extinction.” Science 362(6419): eaat1327.

Plambeck, E. L. and C. Hope (1996). “PAGE95: An updated valuation of the impacts of global warming.” Energy Policy 24(9): 783-793.

Quiggin, J. and J. Horowitz (1999). “The impact of global warming on agriculture: A Ricardian analysis: Comment.” The American Economic Review 89(4): 1044-1045.

Raymond, C., T. Matthews, et al. (2020). “The emergence of heat and humidity too severe for human tolerance.” Science Advances 6(19): eaaw1838.

Rehdanz, K. and D. Maddison (2005). “Climate and happiness.” Ecological Economics 52(1): 111-125.

Romer, P. (2016). “The Trouble with Macroeconomics.” https://paulromer.net/trouble-with-macroeconomics-update/WP-Trouble.pdf.

Roson, R. and D. v. d. Mensbrugghe (2012). “Climate change and economic growth: impacts and interactions.” International Journal of Sustainable Economy 4(3): 270-285.

Sokal, A. D. (2008). Beyond the Hoax: Science, Philosophy and Culture. Oxford, Oxford University Press.

Steffen, W., J. Rockström, et al. (2018). “Trajectories of the Earth System in the Anthropocene.” Proceedings of the National Academy of Sciences 115(33): 8252-8259.

Stern, N. (2007). The Economics of Climate Change: The Stern Review. Cambridge, Cambridge University Press.

Tol, R. S. J. (1995). “The damage costs of climate change toward more comprehensive calculations.” Environmental and Resource Economics 5(4): 353-374.

Tol, R. S. J. (2002). “Estimates of the Damage Costs of Climate Change. Part 1: Benchmark Estimates.” Environmental and Resource Economics 21(1): 47-73.

Tol, R. S. J. (2009). “The Economic Effects of Climate Change.” The Journal of Economic Perspectives 23(2): 29-51.

Tol, R. S. J. (2014). “Correction and Update: The Economic Effects of Climate Change.” The Journal of Economic Perspectives 28(2): 221-226.

Tol, R. S. J. (2018). “The Economic Impacts of Climate Change.” Review of Environmental Economics and Policy 12(1): 4-25.

Tol, R. S. J. (2018). “The Economic Impacts of Climate Change: Appendix.” Review of Environmental Economics and Policy 12(1): 4-25.

Trenberth, K. E. (1981). “Seasonal variations in global sea level pressure and the total mass of the atmosphere.” Journal of Geophysical Research: Oceans 86(C6): 5238-5246.

Wang, X. X., D. Jiang, et al. (2019). “Extreme temperature and precipitation changes associated with four degree of global warming above pre-industrial levels.” International Journal Of Climatology 39(4): 1822-1838.

Weitzman, M. L. (2011). “Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change.” Review of Environmental Economics and Policy 5(2): 275-292.

Weitzman, M. L. (2011). “Revisiting Fat-Tailed Uncertainty in the Economics of Climate Change.” REEP Symposium on Fat Tails 5(2).

Xu, C., T. A. Kohler, et al. (2020). “Future of the human climate niche.” Proceedings of the National Academy of Sciences: 201910114.

Yumashev, D., C. Hope, et al. (2019). “Climate policy implications of nonlinear decline of Arctic land permafrost and other cryosphere elements.” Nature Communications 10(1): 1900.


The macroeconomics of degrowth: can planned economic contraction be stable?

The Limits to Growth (Meadows, Randers et al. 1972) is infamous for many things, but above all for its “Standard Run” scenario, which predicted that, if there were no changes to the direction of economic development after 1972, then some time in the early-to-mid 21st century human civilisation would undergo a serious decline.

Figure 1: The Limits to Growth standard run (Meadows, Randers et al. 1972, p. 124). The legend for all plots is provided in Figure 2

Figure 2: The legend for all the Limits to Growth plots (Meadows, Randers et al. 1972, p. 123)

Less well known is its stabilised run, in which a range of policies were hypothetically introduced in 1975 to achieve a state of “global equilibrium … so that the basic material needs of each person on earth are satisfied and each person has an equal opportunity to realize his individual human potential” (Meadows, Randers et al. 1972, p. 24). The simulation concluded that no single policy was sufficient: population control on its own was not enough, nor pollution abatement without population control, and so on. But if all the policy changes they modelled were undertaken (they are described in Figure 3), then the world could achieve a sustainable future where average living standards for the globe as a whole were three times as high as they were in 1970—and much more equitably distributed.

Figure 3: The state of global equilibrium run (Meadows, Randers et al. 1972, p. 165)

As crucial as the need for a swathe of coordinated policies was the timing: if the changes were delayed by 25 years, until 2000, they would fail. Humanity’s industrialized society would overshoot the biosphere’s capacity to endure the pressure put upon it, and both output and population would peak in the mid-21st century and then decline.

Figure 4: Policies for global equilibrium introduced in 2000 instead of 1975 (Meadows, Randers et al. 1972, p. 169)

It is stating the obvious that policies to restrain humanity’s pressure on the biosphere were not put in place in 1975, nor 2000, nor even 2020. With research by Graham Turner (Turner 2008; Turner, Hoffman et al. 2011; Turner 2014) confirming that the world is still largely tracking the Standard Run of Limits to Growth, and studies like the Ecological Footprint (https://www.footprintnetwork.org/resources/data/) estimating that humanity is consuming the biosphere’s regenerative capacity 1.6 times over, we are well into ecological overshoot. Meadows et al noted that there were three possibilities for the future, of which only two were actually possible:

All the evidence available to us, however, suggests that of the three alternatives—unrestricted growth, a self-imposed limitation to growth, or a nature-imposed limitation to growth—only the last two are actually possible. (Meadows, Randers et al. 1972, p. 168)

Since we have clearly failed to impose limits ourselves, we now face Nature doing that for us. Meadows et al deliberately avoided providing precise timing for their predictions, stating that:

We have deliberately omitted the vertical scales and we have made the horizontal time scale somewhat vague because we want to emphasize the general behavior modes of these computer outputs, not the numerical values, which are only approximately known. (Meadows, Randers et al. 1972, pp. 123-24)

However, it is hardly being hyperbolic, at this point in 2020—with Australia’s wildfires behind us (Dowdy, Ye et al. 2019), Covid-19 all around us (Korolev 2020), India and Bangladesh suddenly reeling under the impact of Cyclone Amphan, and the prospect of catastrophic wildfires in the approaching American Summer—to feel that the deliberately vague timing of the Limits to Growth has proven to be precisely correct. Nature may be imposing its limits now.

But just as Covid-19 has severely jolted our consciousness, and led to policy changes that were unthinkable as recently as January 2020, what if these and subsequent ecological calamities shook humanity so much that we decided, belatedly but instantly, to impose the limits that Limits to Growth recommended we should implement 45 years ago? What would happen to global GDP?

Answering this question thoroughly would require updating the Limits to Growth study with current data. This should have been happening on a regular basis since 1972, but it was prevented in large measure by the ferocious attacks on the study’s credibility by economists in general—and by William Nordhaus in particular (Nordhaus 1973; Nordhaus 1992). These attacks were based on misinformation and ignorance rather than knowledge (Forrester, Gilbert et al. 1974), but—or should I say “and”?—their impact was devastating. Though the book itself sold millions of copies, the group’s research funding evaporated. Whereas the original study was run on top-of-the-line (for the time) mainframe computers at MIT, with a budget of the order of a million dollars in 1972, today Jorgen Randers, the one surviving author of the original study, is working without pay on developing an extended version of the World3 model called MODCAP (using the PC-based system dynamics program Vensim). In 2019 he was unable to raise funds to continue employing his one assistant.

In lieu of a complete answer therefore, I will provide a partial one by focusing on one of the many weaknesses of economics: the failure of both mainstream and non-mainstream economists to properly acknowledge the essential role of energy in production. What will happen to GDP if humanity realises that it has to immediately cease adding CO2 to the atmosphere, by ceasing to use carbon-based forms of energy, for everything from the production of electricity to transportation?

This question can’t be answered by turning to an unconventional economics textbook, let alone a conventional one. To date, Post Keynesian and Neoclassical economists alike have modelled GDP as being generated by a combination of labour and capital: Labour and Machinery in → Goods and services out. Some economists have attempted to incorporate energy by, to use Neoclassical terminology, treating Energy as a third “factor of production” on an equal footing with Labour and Capital (Solow 1974; Stiglitz 1974; Stiglitz 1974). But this is ontologically false. Energy cannot be added to a production process independently of labour and capital, nor can a worker or a machine function without energy inputs. As I put it in “A Note on the Role of Energy in Production”:

Labour without energy is a corpse; capital without energy is a sculpture. (Keen, Ayres et al. 2019, p. 41)

This perspective casts the relationship between energy and GDP in a very different light to conventional economics, which even Larry Summers ridiculed for treating energy as no more important than its very small share of total GDP implied (less than 10% for most countries):

There’d be a set of economists who’d sit around explaining that electricity was only four percent of the economy, and so if you lost eighty percent of electricity you couldn’t possibly have lost more than three percent of the economy… It would be stupid. And we’d understand that somehow, even if we didn’t exactly understand in the model, that when there wasn’t any electricity there wasn’t really going to be much economy. (Summers 2013)

Treating energy as inputs to both labour and machinery (in different forms of course) implies a critical relationship between energy inputs and GDP: if there is no energy input, then there is no GDP, and if energy inputs fall, then ipso facto, so will GDP.

There are only two ways in which these implications could be countered: if there were, over time, a trend for GDP to “decouple” from energy, so that more GDP was produced per unit of energy; or if there were a strong trend for renewable forms of energy to replace carbon-based forms, so that the overall fall in energy could be attenuated.

Both these counters are false hopes. Though there has been a trend to a falling level of energy per unit of GDP in some countries, at the global level, the relationship between energy consumption and GDP between 1971 and 2017 is stunningly linear—see Figure 5. A rising global GDP requires a rising level of energy, and since energy is the motive force, if we are forced to abandon carbon-based energy forms, then GDP has to fall by much the same fraction as carbon-based energy is of total energy.

Figure 5: The relationship between global energy and global GDP between 1971 and 2016
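The strength of this linearity can be checked with an ordinary least-squares fit. The series below are illustrative stand-ins constructed to mimic the near-linear 1971–2016 relationship described in the text, not the actual data behind Figure 5:

```python
import numpy as np

# Illustrative stand-in series (NOT the actual data behind Figure 5):
# global primary energy (million ktoe) and real GDP (trillion 2010 US$),
# constructed to mimic the near-linear relationship described in the text.
energy = np.array([5.5, 6.6, 7.3, 8.1, 8.8, 9.7, 10.7, 11.7, 12.8, 13.6])
gdp = np.array([19.0, 24.1, 27.9, 32.4, 36.2, 41.1, 46.5, 51.9, 58.1, 62.0])

# Ordinary least squares: GDP = a * Energy + b
a, b = np.polyfit(energy, gdp, 1)
residuals = gdp - (a * energy + b)
r2 = 1 - residuals.var() / gdp.var()

print(f"GDP ≈ {a:.2f} × Energy + {b:.2f}, R² = {r2:.4f}")
```

On the real series the fit is similarly tight, which is the basis for the claim that global GDP cannot rise without energy use rising with it.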

The remaining hope is that progress in renewable energy has been such that it makes up a much larger proportion of total energy production than in the past. But again, the data dashes that hope: though there has been a rapid increase in renewable energy as a percentage of total energy production since 2007, by 2016 it had only just pushed the renewable percentage past the peak it reached in 1983—see Figure 6. Even if the trend during 2015 had continued, renewables would still constitute less than 16% of total energy output in 2020.

Figure 6: The disappointing news on renewables as a percent of total energy supply

That implies that, if we decided to cease using carbon-based fuels at all at the end of 2020—so that we stopped adding to the level of CO2 in the atmosphere—then GDP would fall by about 85%.

There is no way that such a decision will be made in 2020—or at least, no way it will be made voluntarily. As Covid-19 has shown, a decision of a comparable magnitude can be compelled by Nature, but the mentality amongst decision-makers and the public is still to aim for a “return to normal” when—or rather if—the Covid-19 threat is contained.

But what if the wildfires, and the virus, and the floods, and the locusts(!) of 2020 are just the beginning of a series of ecological nightmares that will finally alert humanity to how far we have overstepped the carrying capacity of this planet? What if policymakers, with not merely the support of the public but pressure from it, adopt the UK Labour target of 2030 as the year in which CO2 emissions fall to zero? Given the linear relationship between energy usage and GDP shown in Figure 5, what would it take in terms of the replacement of carbon-based energy sources by renewable ones—primarily solar and wind—to preserve 2020’s global real output level in 2030?

Extrapolating the trends in both GDP and Energy from 2016 to 2020 yields an estimate of global GDP in 2020 of $86 trillion (in 2010 US$ terms), and energy production of 14 million KTOE (KiloTonnes of Oil Equivalent, one of the standard measures of energy). If we assume that, as of 2020, renewable energy is providing 15% of total energy, then to reach 100% renewable energy by 2030—and thus achieve zero emissions of CO2—while maintaining GDP at the 2020 level, would require the installation of renewable energy sources capable of yielding the energy equivalent of roughly 12 billion tonnes of oil every year.

To put this in perspective, to generate the same amount of power from large (1,000 Megawatt) hydro-electric or nuclear power stations in 2030 would require building 16,000 of them between now and 2030, or more than 4 such stations per day, every day. It goes without saying that we do not have the capacity to achieve either goal.
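The arithmetic in the two preceding paragraphs can be reproduced directly. All figures come from the text; the only added ingredient is the standard conversion factor of 11.63 MWh per tonne of oil equivalent:

```python
# Figures from the text: extrapolated 2020 values.
energy_2020_ktoe = 14e6    # 14 million KTOE of total energy production
renewable_share = 0.15     # assumed renewable share of total energy in 2020

# Carbon-based energy to be replaced to reach 100% renewables by 2030
# while holding energy (and hence GDP) at the 2020 level.
to_replace_toe = energy_2020_ktoe * 1e3 * (1 - renewable_share)
to_replace_btoe = to_replace_toe / 1e9
print(f"To replace: {to_replace_btoe:.1f} billion tonnes of oil equivalent per year")

# Expressed as continuously running 1,000 MW (1 GW) power stations:
MWH_PER_TOE = 11.63        # standard energy conversion factor
avg_power_mw = to_replace_toe * MWH_PER_TOE / 8760   # 8760 hours per year
stations = avg_power_mw / 1000
print(f"Equivalent to about {stations:,.0f} one-gigawatt stations")
```

This reproduces the text’s “roughly 12 billion tonnes” and “16,000” figures, and hence the rate of more than 4 new gigawatt-scale stations per day for a decade.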

Solar cells and wind farms are technologically far less complicated than nuclear power stations, and capable of being installed in far more locations than hydroelectric dams. However, solar only generates power during daylight hours, wind farms are more restricted in location, and both require an enormous area to generate the same power as a nuclear or coal-fired power station. Since the energy the Earth’s surface receives from the Sun peaks at about 10 Megawatts per hectare, a 1000MW solar power station would require roughly 1,000 hectares of land with optimistic assumptions about efficiency and availability (wind power is even more diffuse). To completely replace our planetary energy production with solar power would require an area of solar panels roughly equivalent to one third the area of Spain.
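A rough version of the land-area estimate, taking the text’s 10 MW-per-hectare peak insolation, with the overall yield of 10% of peak being my own illustrative assumption (combining panel efficiency with day/night and weather availability):

```python
# Rough all-solar land-area check (illustrative assumptions flagged below).
PEAK_MW_PER_HA = 10.0    # peak solar power per hectare, from the text
OVERALL_YIELD = 0.10     # ASSUMED: efficiency × availability, for illustration
MWH_PER_TOE = 11.63      # standard conversion factor
HOURS_PER_YEAR = 8760

energy_toe = 14e9        # ~14 billion toe per year, from the text
avg_demand_mw = energy_toe * MWH_PER_TOE / HOURS_PER_YEAR
area_ha = avg_demand_mw / (PEAK_MW_PER_HA * OVERALL_YIELD)
area_km2 = area_ha / 100

SPAIN_KM2 = 506_000
print(f"{area_km2:,.0f} km², about {area_km2 / SPAIN_KM2:.2f} × the area of Spain")
```

Under these assumptions the panel area comes out at roughly one third of Spain, as in the text; a lower yield assumption scales the area up proportionately.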

That amount of land could be provided by the world’s road networks and rooftops, with farms also hosting wind generation, so the goal is not unreachable. But are we getting to this goal fast enough, so that by 2030, renewable energy could provide 100% of our energy needs? Unfortunately, no—and not by a long shot. The red and blue plots in Figure 7 are the same as in Figure 6, just with the time horizon pushed further out: at the current growth rate of renewable energy, it would take until 2082 to achieve 100% renewable energy at the 2020 level of GDP. To get to 100% renewable energy by 2030, and thus be able to sustain 2020’s real GDP level, we need the ratio of renewable (predominantly wind and solar) energy to total energy to grow at 20% per year—almost seven times as high as the actual growth rate in 2015. Call me a pessimist, but I can’t see that happening. Individual countries (such as the UK) might be able to get there, but at the planetary level, we will not.

This means that we won’t be able to maintain 2020 GDP levels in 2030 with an economy which is completely powered by renewable energy, and hence has net zero CO2 emissions. Though we should endeavour to expand renewable and non-carbon-based power in general as much as possible, if we are to have a zero carbon economy by 2030, then we have to accept that GDP will fall substantially. Even a four-fold increase in the rate of growth of renewable energy will result in energy input levels—and hence GDP output levels—that are 50% below 2020 levels in 2030.
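The required growth rate is straightforward compound-growth arithmetic on the figures in the text; the 3% comparison rate below is my own illustrative assumption (roughly one seventh of the required rate, consistent with the text’s comparison), not a figure from the text:

```python
import math

# Renewables share must grow from ~15% of energy in 2020 to 100% in 2030.
start_share, end_share, years = 0.15, 1.00, 10
required = (end_share / start_share) ** (1 / years) - 1
print(f"Required growth of the renewables share: {required:.1%} per year")

# Time needed at an ASSUMED ~3% per year, for comparison:
years_at_3pct = math.log(end_share / start_share) / math.log(1.03)
print(f"At 3% per year it takes about {years_at_3pct:.0f} years from 2020")
```

The required rate rounds to the text’s 20% per year; at the assumed 3% rate, 100% renewables arrives in the early 2080s, in line with the text’s 2082 figure.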

Figure 7: Extending the optimistic extrapolation of the trend in Figure 6 till it reaches 100% of energy production

How could such a reduction in output be undertaken deliberately and, as much as possible, peacefully? We need a mechanism for GDP reduction, and for the encouragement of the shift to renewable energy, that falls primarily on the wealthy. A price for carbon, as championed by Neoclassical economists like William Nordhaus, will afflict the poor disproportionately compared to the rich. The riots with which the Gilets Jaunes movement began in France in response to carbon pricing less than 2 years ago should make it obvious that the burden of adjustment must fall on the rich rather than the poor—both within nations and between them.

One system that could work is a dual-price mechanism based on carbon rationing, as proposed by Total Carbon Rationing. Currently, for other reasons, many Central Banks are exploring the concept of “Central Bank Digital Currencies” (CBDCs), which would give every resident in a country an account at the Central Bank. As well as being a means to create and store the national currency, these accounts could be used to provide a “Universal Carbon Credit” (UCC) to every resident of a country on an equal per capita basis—so that billionaires would receive the same annual UCC as paupers. To buy any commodity, a consumer would need to pay both its money price, as now, and its CO2 content as well, using UCCs.

The ration could initially be set well above the average per capita CO2 consumption of the country, so that the vast majority of the population would never exhaust their allowance, and would therefore be able to sell their excess UCCs to the rich—who, at their current consumption levels, would definitely exhaust their allowance, and thus need to buy UCCs from the poor. It would work as a redistributive mechanism, as well as a means to reduce consumption—and hence GDP and CO2 output—as the ration was reduced over time.
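A minimal sketch of how the dual-price check and the redistributive trade could work. All names here (`Account`, `buy`, `trade_ucc`) are hypothetical illustrations, not part of any actual CBDC or Total Carbon Rationing design:

```python
from dataclasses import dataclass

@dataclass
class Account:
    money: float   # national currency balance at the Central Bank
    ucc: float     # Universal Carbon Credits (a CO2 allowance)

def buy(buyer: Account, seller: Account, price: float, co2: float) -> bool:
    """A purchase must be covered in BOTH money and carbon credits."""
    if buyer.money < price or buyer.ucc < co2:
        return False
    buyer.money -= price
    buyer.ucc -= co2
    seller.money += price
    return True

def trade_ucc(rich: Account, poor: Account, amount: float, price: float) -> None:
    """The redistributive leg: the rich buy surplus credits from the poor."""
    rich.money -= amount * price
    poor.money += amount * price
    poor.ucc -= amount
    rich.ucc += amount
```

A heavy consumer who exhausts the ration cannot buy at all until they purchase UCCs from low consumers—transferring money from rich to poor while total CO2 stays capped by the ration.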

An economically and politically stable route to reduced GDP is thus conceivable. Is it realistic? My expectation is that it is not. The far more likely outcome is that humanity in general and the powerful in particular will delay the decision to act, hoping instead that GDP can return to pre-Covid-19 growth rates, while ignoring the dependence of this growth rate on an increasing use of carbon-based energy that will accelerate Global Warming. We will, in effect, let Nature make the decision for us.

Dowdy, A. J., H. Ye, et al. (2019). “Future changes in extreme weather and pyroconvection risk factors for Australian wildfires.” Scientific Reports 9(1).

Forrester, J. W., W. L. Gilbert, et al. (1974). “The Debate on “World Dynamics”: A Response to Nordhaus.” Policy Sciences 5(2): 169-190.

Keen, S., R. U. Ayres, et al. (2019). “A Note on the Role of Energy in Production.” Ecological Economics 157: 40-46.

Korolev, I. (2020). Identification and Estimation of the SEIRD Epidemic Model for COVID-19, SUNY at Binghamton, Department of Economics.

Meadows, D. H., J. Randers, et al. (1972). The limits to growth. New York, Signet.

Nordhaus, W. D. (1973). “World Dynamics: Measurement Without Data.” The Economic Journal 83(332): 1156-1183.

Nordhaus, W. D. (1992). “Lethal Model 2: The Limits to Growth Revisited.” Brookings Papers on Economic Activity (2): 1-43.

Solow, R. M. (1974). “The Economics of Resources or the Resources of Economics.” The American Economic Review 64(2): 1-14.

Stiglitz, J. (1974). “Growth with Exhaustible Natural Resources: Efficient and Optimal Growth Paths.” The Review of Economic Studies 41: 123-137.

Stiglitz, J. E. (1974). “Growth with Exhaustible Natural Resources: The Competitive Economy.” Review of Economic Studies 41(5): 139.

Summers, L. (2013). Larry Summers at IMF Economic Forum, November 8th 2013.

Turner, G. M. (2008). “A comparison of The Limits to Growth with 30 years of reality.” Global Environmental Change 18(3): 397-411.

Turner, G. M. (2014). Is Global Collapse Imminent? An Updated Comparison of The Limits to Growth with Historical Data. Melbourne, Melbourne Sustainable Society Institute.

Turner, G. M., R. Hoffman, et al. (2011). “A tool for strategic biophysical assessment of a national economy – The Australian stocks and flows framework.” Environmental Modelling & Software 26: 1134-1149.

Personal #Coronavirus Update 03 May 23rd 2020

It’s been a long time—in Coronavirus days—since my last update on March 21st. At that stage, we had been in Thailand for just 2 days, staying in the only 5-star hotel in Trang, a town of 60,000 about 500 miles south of Bangkok and 300 km south of Phuket, far off the beaten track for foreign tourists to Thailand.

The main motivation for the move from The Netherlands to Thailand was to buy myself time to finish the expose I want to write on the appallingly bad garbage that has passed for mainstream climate change economics. I still fully expected that I’d get the virus; I just didn’t want it to get me before I “got”, as best I can, Nordhaus and Tol and all their fellow-travellers for their arguably criminally negligent trivialization of the dangers of climate change. I was buying time, but I thought, not health or freedom.

Now, we’re renting a 4-bedroom house in a gated community of about 500 homes on the outskirts of Trang, and the odds are that I won’t get the virus at all.

Thailand is one of about 40 countries that, to use the pandemic diseases expert and complex systems theorist Yaneer Bar-Yam’s phrase, are not merely “flattening”, but “crushing the curve”: eliminating the virus from within their borders. When we arrived, Thailand’s daily case count was 56 on a 3-day moving average basis, and was still rising—see the screenshot below from the excellent Coronavirus visualisation tool built by one of my Patrons, Nigel Goddard (the blue dot marks March 20, the day we arrived here).

Figure 1: Thailand is recording just 1 case a day now https://homepages.inf.ed.ac.uk/ngoddard/covid19/

Now it has fallen to 1 new case per day, and only one case in the last two weeks was of community transmission: the rest have come from Thai nationals in quarantine after returning from overseas. It is highly likely that Thailand will eliminate the virus entirely this month.

This has been due to strong personal hygiene, stringent controls from a central government that took the disease seriously from the outset and had recent experience of an epidemic with SARS in 2003, and plentiful supplies of personal protection equipment: when we arrived, every morning we’d line up at a local supermarket (with about 500 others) to buy a pack of 4 N95 surgical masks for 10 Baht—or about 10 US cents each.

Figure 2: The standard pack of 4 masks, at about 10 US cents each

The price was government controlled, but included a 20% profit for the local Thai manufacturer. This experience alone demonstrated to me the folly of the West offshoring production under “globalization”. That meant cheap goods for consumers and large profits for capitalists who no longer had to pay their local workers decent wages. But it also meant that the USA, and the UK, and most of Europe, weren’t able to produce enough masks for their own people during this pandemic, while Asian countries were able to churn them out cheaply, and make them available en masse.

Figure 3: The orderly queue to purchase the masks. People were a bit closer than the 1.5 metre rule, but literally everyone was wearing a mask to begin with

We now have a personal stock of about 25 packs of these, plus alcohol gels and sprays. This would be replicated across Thailand (though not necessarily to the same scale per household). The bottom line is that pretty much everyone in the country has the personal protection equipment (and social practices) needed to drastically reduce person to person transmission of the virus.

Figure 4: Part of our household stock of masks, gels and sprays.

The virus has certainly been eliminated in the province we’re living in, Trang (we live in its capital city, of the same name). There were 3 cases here when we arrived in Thailand, then 4, 6 and finally 7—all from one family, so I’m told, of a 24-year-old who had been working in Phuket.

Phuket is a major tourist destination, and has had a total of 224 cases out of a population of 420,000—or about 1 case per 2000 residents (that’s about half as bad as The Netherlands). The province of Trang has had 7 cases amongst its 700,000 residents—or 1 case per 100,000. The last new case was over a month ago. All the most recent cases have been in Bangkok, a sprawling city of 8 million that I was sure would be a viral hotspot. Instead, it has recorded just 1548 cases: about 19 cases per 100,000, versus 260 per hundred thousand in the Netherlands and close to 400 in the UK.

Figure 5: Bangkok’s total case count at https://covidtracker.5lab.co/en?fbclid=IwAR2FoTEMKOjtADGTM7Bv1EGiqP2-mppEKtpdx9zv9ZO45FM0qi04yxxEAKk

The personal impact of this is palpable. Even though people are still practicing personal caution here, the mood is relaxed: you’re no longer afraid of your fellow human being. I noticed this at a restaurant earlier this week, when the owner came up and clinked glasses with us over a meal. Even a month ago, that was unthinkable. Now, it feels like old times—as in, like six months ago. I wouldn’t even have noted such an event back then. Now, it’s significant. I feel like someone who almost drowned, noticing the air in a way that everyone else takes for granted.

Thailand won’t let this relaxed mood lead to a resurgence of cases, however. It is still locking down provinces—you can’t travel from one to another without a health clearance, a good reason to travel (tourism doesn’t qualify!), and a clearance to travel from the provincial government; you have to scan a QR code when you enter and leave a shop, to enable case tracking; everyone everywhere wears a mask when they are in contact with people they don’t know; and a curfew still applies, but now from 11pm till 4am rather than the original 10pm till 4am.

So, I’m confident that Thailand will get to where Taiwan already is: to zero new cases for more than 2 weeks, which confirms that the virus is not within its borders. There are, according to Yaneer’s excellent EndCoronavirus.org website, 46 countries that are on their way to that state.

Figure 6: Yaneer’s personal profile from the https://www.endcoronavirus.org/ website

They include China, the original source of the virus, which is down to a mere 7 new cases a day in a population of 1.4 billion. It is very much the whale amongst minnows here, in both numerical and economic terms. The next biggest country in population terms is Vietnam, with a population of 95 million—and the incredible success story of having just 302 cases and no deaths. Next is Thailand (70 million), Australia (25 million), Taiwan (24 million), then Sri Lanka (22 million), and after that, way down to countries like Norway (5 million). Economically, China is also first at over US$13 trillion, with Australia in second place at one tenth that level.

Figure 7: The 46 countries that are on their way to eliminating the virus completely

Another 25 are “nearly there”. These include most of the countries of the EU, Japan, and South Korea. Of these, both Japan and South Korea look likely to join Yaneer’s winners circle—where they will form a welcome counterpoint to China’s economic and population dominance. Many of the others, I fear, will spark a second wave if they succumb to the pressure to re-open before the virus has been eradicated. That includes Italy, for example, which is still recording almost a thousand new cases every day, and the Netherlands—which I left in order to come here—which still has about 200 new cases every day, out of over 44,000 in total.

Figure 8: Countries that are “nearly there”

Then there are the 36 countries where the virus is still rampant—including crucially the United States of America and the UK, which both completely bungled their fight with the virus, and which seem highly likely to experience a second wave of infections after their pathetically managed lockdowns are ended as well.

Figure 9: The “virus forever?” countries

The worst country may well be Brazil, which is dodging the prospect of a second wave only by completely failing to contain the first. Cases are doubling every ten days.
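A ten-day doubling time corresponds to a compound daily growth rate of about 7%:

```python
# A 10-day doubling time implies a daily growth rate of 2^(1/10) - 1.
daily_growth = 2 ** (1 / 10) - 1
print(f"{daily_growth:.1%} per day")  # about 7.2% per day
# At that rate, a month multiplies the case count eightfold (2^3).
```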

Figure 10: Brazil’s cases are doubling every 10 days. See https://homepages.inf.ed.ac.uk/ngoddard/covid19/

So I find myself in part of the world that is virus-free, and watching a New World Order evolve that no-one anticipated—not even Huxley or Orwell. It’s a “fractured planet”, with two enormously disparate fractions: China, Southeast Asia and Oceania in the “virus free” segment, and the rest of the world in the “virus afflicted”. I’m glad to be in the virus-free part, but I do have some trepidation about the future politics of this bloc, in which China is by far the major power economically and militarily.

Figure 11: The “winners and losers” from EndCoronavirus.org at https://www.endcoronavirus.org/map-visualization

That worry aside, I’m relaxed and working well, though enormously behind on numerous projects thanks to the time I lost in the move. Initially, getting settled here took total precedence: finding a place to rent (we rapidly located an unfurnished 4 bedroom house in a gated community on the outskirts of Trang, for US$300 a month), furnishing it, buying the essentials for mobility in a region where the temperature never drops below 24°C and frequently hits 37°C (a car, motorbike, and bicycles for exercise before the sun rises too high). That took about six weeks all up. It came after spending two weeks visiting my family in Sydney for what I was sure would be the last time for at least a year, after working with Russell Standish on Minsky for two weeks in late February.

All of March, all of April, and part of May was thus lost to the personal impact of the virus. I finally got down to solid work about two weeks ago. So far, I’ve just finished two major tasks with tight deadlines: a chapter for a book on system dynamics modelling in economics, and a paper collating my work on macroeconomics for the Review of Political Economy. With those two out of the way, there’s still a ton of work to do—summarised in part by this “to-do” list:

Figure 12: My current to-do list

My daily routine is a 10-30km bike ride starting somewhere between 5.45-6.30am on the very safe internal roads of this community, then a brief 5BX exercise routine at home, before breakfast and getting into work from about 8-8.30am. I work till around 6pm, when my partner and I head out for a meal (why cook, when dinner for two costs about US$4 at a local market?).

Speaking of which, it’s her birthday, so we’re off for a slightly more expensive meal with her cousin and her partner somewhere in town.

Keep safe everyone. And many, many thanks to my Patrons: your support enabled me to make this shift, from somewhere where I was worried every day about getting the virus, to somewhere where my emotional and intellectual energies can be focused on skewering the bad economics—and bad economists—that got us into this trap in the first place.

Figure 13: My office. It’s sparse, with no books, since there was no room in the luggage on our flight from Amsterdam (bar one on managing back pain, and another by Richard Tol which I intend using in my case against him and his fellow climate change trivializers)—but a nice backdrop of the local beach, which hopefully we’ll be allowed to visit again next month

Figure 14: The bikes in the house’s vestibule. My partner uses the motorbike far more often than I do, and I use one of the bicycles far more often than she does.

Figure 15: The car and house. The car is essential with no public transport to speak of, a 2km walk to the town, and a standard daytime temperature of 34°C. Paying a rent of US$300 a month for a 4-bedroom 2-storey house (which would cost maybe $120,000 to buy) makes Michael Hudson’s point that high house prices benefit rentiers and make the West uncompetitive with the East, thanks to the wages that are needed to pay exorbitant rents and mortgages.

Figure 16: The 1km long internal main road of the community on which I do 5-15 laps by bicycle every morning

Figure 17: The entrance to the community, next to a 7-11, on a busy 4-lane highway into Trang

Figure 18: Temperature scanning takes place at every shop, and you have to use LINE to scan a QR code on entry and exit, to enable tracking in case a community transmission occurs

Figure 19: Personal protection equipment is plentiful and cheap. N95 masks for $1.25, 100cc alcohol sprays for $2

Figure 20: Everyone wears masks everywhere that they’re in contact with the public

Figure 21: My cartoon collaborator and good friend Miguel Guerra’s excellent cartoon on the folly of putting the economy before health

Emergent Macroeconomics: Deriving Minsky’s Financial Instability Hypothesis Directly from Macroeconomic Definitions

Abstract

Though Minsky developed a compelling verbal model of the “Financial Instability Hypothesis” (FIH), he abandoned his early attempts to build a mathematical model of financial instability (Minsky 1957). While many mathematical models of the FIH have been developed since, the criticism that these models are “ad hoc” lingers.

In this paper I show that the essential characteristics of Minsky’s hypothesis are emergent properties of a complex systems macroeconomic model which is derived directly from macroeconomic definitions, augmented by the simplest possible assumptions for relations between system states, and the simplest possible behavioural postulates.

I also show that credit, which I define as the time derivative of private debt, is an essential component of aggregate demand and aggregate income, given that bank lending creates money (Holmes 1969; Moore 1979; McLeay, Radia et al. 2014).

Minsky’s Financial Instability Hypothesis is thus derived from sound macrofoundations. This stylized complex-systems model reproduces not only the core predictions of Minsky’s verbal hypothesis, but also empirical properties of the real world which have defied Neoclassical understanding, and which were also not predictions of Minsky’s verbal model: the occurrence of a “Great Moderation”—a period of diminishing cycles in employment, inflation, and economic growth—prior to a “Minsky Moment” crisis; and a tendency for inequality to rise over time.

The simulations in this paper use the Open Source system dynamics program Minsky, which was named in Minsky’s honour.

Keywords

Minsky, Financial Instability Hypothesis, Complexity, System Dynamics, Credit, Debt, Macroeconomics

JEL Codes

C60, C61, C62, E11, E12, E31, E40, E44, F47, G01, G12, G51, N11, N12, Y1

Introduction

Though Minsky developed a compelling verbal model of the “Financial Instability Hypothesis” (FIH), he abandoned his early attempts to build a mathematical model of financial instability (Minsky 1957). Many mathematical models of the FIH have been developed since (Taylor and O’Connell 1985; Jarsulic 1989; Keen 1995; Charles 2005; Cruz 2005; Tymoigne 2006; Charles 2008; Fazzari, Ferri et al. 2008; Santos and Macedo e Silva 2009), and Minsky collaborated in some of them (Gatti, Delli Gatti et al. 1994), but the criticism that these models are “ad hoc” lingers (Rosser 1999, p. 83).

In this paper I show that the essential characteristics of Minsky’s hypothesis are emergent properties of a complex systems macroeconomic model which is derived directly from macroeconomic definitions, augmented by the simplest possible assumptions for relations between system states, and the simplest possible behavioural postulates.

I also show that credit—which I define as the time derivative of private debt (see Appendix A)—is an essential component of aggregate demand and aggregate income, given that bank lending creates money (Holmes 1969; Moore 1979; McLeay, Radia et al. 2014).

Minsky’s Financial Instability Hypothesis is thus derived from sound macrofoundations. This stylized complex-systems model reproduces not only the core predictions of Minsky’s verbal hypothesis, but also empirical properties of the real world which have defied Neoclassical understanding, and which were also not predictions of Minsky’s verbal model: the occurrence of a “Great Moderation”—a period of diminishing cycles in employment, inflation, and economic growth—prior to a “Minsky Moment” crisis; and a tendency for inequality to rise over time (Piketty 2014).

Deriving Minsky directly from macroeconomic definitions

Minsky’s Financial Instability Hypothesis is one of the rarest things in the history of economics: a powerful and accurate intuition. Neoclassical economists from Jevons onward have portrayed capitalism as a system that tends to equilibrium—while ignoring both history, and mathematical results like the Perron-Frobenius theorem (Jorgenson 1960; Jorgenson 1961; Jorgenson 1963; McManus 1963; Blatt 1983, pp. 111-146), that establish otherwise. Marxists predict a perpetual tendency towards stagnation, via simplistic applications of Marx’s “tendency for the rate of profit to fall” (Marx 1894, Chapter 13), or its obverse, that “surplus must have a strong and persistent tendency to rise” (Baran 1968, p. 67). Starting from the still-disputed proposition—not just in Neoclassical economics (Bernanke 2000, p. 24), but in Post Keynesian economics as well (Fiebiger 2014; Keen 2014; Lavoie 2014; Palley 2014; Keen 2015)—that there was a “relation between debt and income” (Minsky 1982, p. 66), Minsky instead deduced that “the fundamental instability of a capitalist economy is upward”. Given the proclivity of economists to model another economist’s theory without having understood it (Hicks 1937; Hicks 1981), this pivotal passage is worth quoting at length:

The natural starting place for analyzing the relation between debt and income is to take an economy with a cyclical past that is now doing well. The inherited debt reflects the history of the economy, which includes a period in the not too distant past in which the economy did not do well… As the period over which the economy does well lengthens, two things become evident in board rooms. Existing debts are easily validated and units that were heavily in debt prospered; it paid to lever. After the event it becomes apparent that the margins of safety built into debt structures were too great. As a result, over a period in which the economy does well, views about acceptable debt structure change. In the deal-making that goes on between banks, investment bankers, and businessmen, the acceptable amount of debt to use in financing various types of activity and positions increases. This increase in the weight of debt financing raises the market price of capital assets and increases investment. As this continues the economy is transformed into a boom economy.

Stable growth is inconsistent with the manner in which investment is determined in an economy in which debt-financed ownership of capital assets exists, and the extent to which such debt financing can be carried is market determined. It follows that the fundamental instability of a capitalist economy is upward. The tendency to transform doing well into a speculative investment boom is the basic instability in a capitalist economy. (Minsky 1982, p. 66. Emphasis added)

Minsky explained the source of his “preanalytic cognitive act … called Vision” (Schumpeter 1954, p. 41) that led to the Financial Instability Hypothesis as his desire to explain what causes Great Depressions:

Can “It”—a Great Depression—happen again? And if “It” can happen, why didn’t “It” occur in the years since World War II? These are questions that naturally follow from both the historical record and the comparative success of the past thirty-five years. To answer these questions it is necessary to have an economic theory which makes great depressions one of the possible states in which our type of capitalist economy can find itself. (Minsky 1982, p. xix. Emphasis added)

Though this was a compelling and ultimately successful Vision, the dominant Vision in macroeconomics remains the need to derive macroeconomic theory from “good” foundations, where Neoclassical economists have defined “good” as the capacity to derive macroeconomics from microeconomics. As Robert Lucas, the father of “rational expectations macroeconomics”, put it in an address subtitled “My Keynesian Education”:

I also held on to Patinkin’s ambition somehow, that the theory ought to be microeconomically founded, unified with price theory. I think this was a very common view… Nobody was satisfied with IS-LM as the end of macroeconomic theorizing. The idea was we were going to tie it together with microeconomics and that was the job of our generation. Or to continue doing that. That wasn’t an anti-Keynesian view. (Lucas 2004, p. 20)

Despite the failure of the models derived from this Vision to anticipate the Great Recession, this remains the core Vision in economics. Even the relatively progressive mainstream economist Olivier Blanchard could see no alternative to deriving macroeconomics from microeconomics:

Starting from explicit microfoundations is clearly essential; where else to start from? (Blanchard 2018, p. 47)

The answer to Blanchard’s question is surprisingly simple: you can start directly from macroeconomics itself. The fundamentals of Minsky’s successful hypothesis can be derived directly from incontestable macroeconomic definitions, allied to the simplest possible definitions for both key economic relationships and essential behavioural functions.

The essential macroeconomic definitions needed are the employment rate \(\lambda\), the wages share of GDP \(\omega\), the private debt to GDP ratio \(d\), the output to employment ratio \(a\), and the capital to output ratio \(\nu\):

\[ \lambda = \frac{L}{N};\quad \omega = \frac{W}{Y};\quad d = \frac{D}{Y};\quad a = \frac{Y}{L};\quad \nu = \frac{K}{Y} \]

where \(L\) is employment, \(N\) is population, \(W\) is total wages, \(Y\) is GDP, \(D\) is private debt, and \(K\) is the capital stock.

When the first three of these are differentiated with respect to time, three true-by-definition dynamic statements result:

  • The employment rate will rise if economic growth exceeds the sum of change in the output to labour ratio and population growth;
  • The wages share of output will rise if the total wages grow faster than GDP; and
  • The private debt to GDP ratio will rise if private debt growth exceeds the rate of economic growth

These statements are shown in Equation , where \(\hat{x}\) is used to signify the growth rate \(\dot{x}/x\):

\[ \hat{\lambda} = \hat{Y} - \hat{a} - \hat{N};\qquad \hat{\omega} = \hat{W} - \hat{Y};\qquad \hat{d} = \hat{D} - \hat{Y} \]
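These true-by-definition growth-rate statements can be verified numerically. A minimal sketch, using illustrative constant growth rates of my own choosing (not the paper's), checks each ratio's growth rate against the difference of its components' growth rates:

```python
import math

# Illustrative constant growth rates (per year) for employment, population,
# GDP, total wages and private debt: hypothetical values for checking only.
gL, gN, gY, gW, gD = 0.02, 0.01, 0.03, 0.025, 0.05

L = lambda t: math.exp(gL * t)
N = lambda t: math.exp(gN * t)
Y = lambda t: math.exp(gY * t)
W = lambda t: math.exp(gW * t)
D = lambda t: math.exp(gD * t)

def hat(f, t, h=1e-6):
    """Growth rate f-hat = (df/dt)/f, via a central difference."""
    return (f(t + h) - f(t - h)) / (2 * h) / f(t)

lam   = lambda t: L(t) / N(t)   # employment rate
omega = lambda t: W(t) / Y(t)   # wages share of GDP
d     = lambda t: D(t) / Y(t)   # private debt to GDP ratio

t = 5.0
# each ratio's growth rate is the difference of its components' growth rates
assert abs(hat(lam, t)   - (gL - gN)) < 1e-6
assert abs(hat(omega, t) - (gW - gY)) < 1e-6
assert abs(hat(d, t)     - (gD - gY)) < 1e-6
print("definitional growth-rate identities verified")
```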

The following simplifying assumptions are used to turn these definitions into an economic model:

Table 1: Simplifying Assumptions

Assumption

Equation

Parameters & Initial Conditions

1. Exogenous growth in the output to labour ratio

2. Exogenous population growth

3. A constant capital (K) to output (Y) ratio

4. The rate of change of capital is net investment, which is gross investment minus depreciation

5. A uniform real wage

6. A linear wage change function driven by the employment rate . is the slope of the wage-change function and is the employment rate at which wage change equals zero.

7. A linear gross investment function driven by the profit rate . is the slope of the investment function and is the profit rate at which gross investment equals profit .

8. Credit, which I define as the annual change in debt (see Appendix D), finances gross investment in excess of profits

9. Profit is output net of wages and interest payments

10. Initial conditions for

 

Applying these assumptions, and signifying the real growth rate as \(g_r\), leads to the model shown in Equation :

\[ \hat{\lambda} = g_r - \alpha - \beta;\qquad \hat{\omega} = \Phi(\lambda) - \alpha;\qquad \dot{d} = \frac{I_G}{Y} - \pi_s - d\,g_r \]

where \(\alpha\) is the growth rate of the output to labour ratio, \(\beta\) is the population growth rate, \(\Phi(\lambda)\) is the wage change function, \(I_G\) is gross investment, \(\pi_s = 1 - \omega - r\,d\) is the profit share of GDP, and \(g_r = \frac{I_G}{\nu\,Y} - \delta\) follows from the constant capital to output ratio.

As shown in Appendix B, this inherently nonlinear model has two meaningful equilibria: one with a positive employment rate, positive wages share, and debt to GDP ratio, which Grasselli & Costa Lima dubbed the “good equilibrium”; another with zero employment, zero wages share, and an infinite debt to GDP ratio, which they dubbed the “bad equilibrium” (Grasselli and Costa Lima 2012).
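The "good equilibrium" can be located analytically from the model's steady-state conditions. A minimal Python sketch of the three-state system built from the paper's assumptions, using illustrative parameter values of my own (the growth rates, depreciation, interest rate, and linear wage and investment functions are not the paper's calibration), verifies that the derived point is a fixed point of the dynamics:

```python
# Illustrative parameters: my own choices, not the paper's calibration
alpha, beta = 0.025, 0.015      # productivity growth, population growth
delta, nu, r = 0.05, 3.0, 0.03  # depreciation, capital/output ratio, interest rate
s_w, lam0 = 2.0, 0.96           # linear wage function: w-hat = s_w*(lam - lam0)
k0, k1 = 0.02, 5.0              # linear investment share: I_G/Y = k0 + k1*profit_rate

def rhs(lam, omega, d):
    """Time derivatives of employment rate, wages share and debt ratio."""
    profit_share = 1.0 - omega - r * d      # output net of wages and interest
    profit_rate = profit_share / nu
    invest = k0 + k1 * profit_rate          # gross investment as a share of GDP
    g = invest / nu - delta                 # real growth rate
    dlam = lam * (g - alpha - beta)
    domega = omega * (s_w * (lam - lam0) - alpha)
    dd = invest - profit_share - d * g      # credit finances I_G in excess of profit
    return dlam, domega, dd

# Solve the steady-state conditions in sequence
g_e = alpha + beta                          # employment rate constant
invest_e = nu * (g_e + delta)               # growth-rate condition
profit_rate_e = (invest_e - k0) / k1
profit_share_e = profit_rate_e * nu
d_e = (invest_e - profit_share_e) / g_e     # debt ratio constant
omega_e = 1.0 - profit_share_e - r * d_e    # profit definition
lam_e = lam0 + alpha / s_w                  # wages share constant

assert all(abs(v) < 1e-12 for v in rhs(lam_e, omega_e, d_e))
print(f"good equilibrium: lam={lam_e:.4f}, omega={omega_e:.4f}, d={d_e:.4f}")
```

With these numbers the equilibrium has plausible values (employment rate near 97%, wages share near 76%, a finite debt ratio); the stability of the point, as the paper notes, then hinges on the slope of the investment function.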

The key parameter that determines the stability of these two equilibria is the slope of the investment function. With a low desire to invest ()—which, on the surface, would appear to imply a poorer level of economic performance—the “good equilibrium” is stable with equilibrium values of (see Figure 1), with the system converging over a large number of cycles.

Figure 1: Simulation with : convergence to the “good equilibrium”

However, with a high desire to invest ()—which, on the surface, would appear to imply a higher level of economic performance—the “good equilibrium” is unstable, with equilibrium values of (see Figure 2).

Figure 2: Simulation with : a “Great Moderation” followed by rising cycles and breakdown

The approach to, and then repulsion from, the good equilibrium follows what is known as the “intermittent route to chaos” (Pomeau and Manneville 1980), in which systemic turbulence appears to decline, only to subsequently rise once more. This reproduces several of the stylized facts of recent macroeconomic data:

  • A rising level of private debt compared to GDP;
  • An initial apparent decline in the volatility of employment, growth and wage demands—a “Great Moderation”—followed by increasing volatility and (ultimately) an economic collapse—a “Great Recession”; and
  • Rising inequality, as the increased share going to bankers (in this three-class system) comes at the expense not of capitalists—who are the only ones borrowing in this simple model—but at the expense of the workers’ share of income.

A simple model derived directly from macroeconomic definitions thus reproduces the essence of Minsky’s FIH: the faster cyclical growth of debt relative to income over a series of credit-driven boom and bust cycles, leading to a period of increasing volatility and, in this model without bankruptcy or a government, ultimately a terminal economic breakdown.

One essential aspect of this model is the proposition that the change in debt finances part of investment, and thus part of aggregate demand—loans are not “pure redistributions” (Bernanke 2000, p. 24) as portrayed in Neoclassical literature, but increases in bank assets which simultaneously create both money and additional aggregate demand and income. This can be proven using the key macroeconomic identity that expenditure is income.

The role of credit in aggregate demand and aggregate income

Central Banks have recently relieved Post Keynesian economists of the necessity of insisting that their “Endogenous Money” model of banking behaviour is structurally correct (Holmes 1969; Moore 1979; Moore 1988; Moore 1988; Dow 1997; Rochon 1999; Fullwiler 2013), while the Neoclassical models of “Loanable Funds” and the “Money Multiplier” are incorrect (McLeay, Radia et al. 2014; Deutsche Bundesbank 2017).

However, though the endogeneity of money is fully accepted in Post Keynesian and MMT circles, the macroeconomic significance of Endogenous Money is not. For instance, in a recent blog post, Wray argued that “in retrospect the endogenous money literature is trivial for several reasons”, with its implications largely being confined to how central banks set interest rates (Wray 2019). Similarly, in the debate in the Review of Keynesian Economics over an earlier, initially flawed, and more complicated expression of the arguments in this section (Fiebiger 2014; Keen 2014; Lavoie 2014; Palley 2014; Keen 2015), Fiebiger treated passages in which Minsky attempted to establish a role in aggregate demand for a change in the money supply (caused by a change in debt) as simply expressing a tautology:

Given the parameters specified, Minsky’s (Minsky 1975, p. 133) deduction that ΔM_t must be the source of growth allows Y_t ex ante > Y_{t-1} ex post to be viewed as a tautology. (Fiebiger 2014, p. 295)

In the following tables (which I term “Moore Tables” in honour of Basil Moore), I use the key macroeconomic identity that expenditure is income to show that endogenous money is far from macroeconomically trivial, and that Minsky’s comments in (Minsky 1975, p. 133) and (Minsky 1982, pp. 3-6 in a section entitled “A sketch of a model”) were not tautological, but critical insights whose expression was hampered by the use of period analysis (Fontana 2003; Fontana 2004), as were Fiebiger’s attempts to interpret them. These tables show flows in continuous time, including credit, which I define as the time derivative of debt:

\[ Credit(t) = \frac{d}{dt}D(t) \]

As in any differential equation, these flows are measured instantaneously, and dimensioned in the relevant time unit, which is dollars per year. I know that this is a foreign concept to a discipline accustomed to thinking in terms of periods, normally of a year: doesn’t one have to measure for a year to speak of, for example, credit per year? In fact, one does not. A monetary flow can be measured at an instant in time in terms of dollars per year, just as a car’s speedometer measures velocity at an instant in time in terms of kilometres per hour: since velocity is the time derivative of distance, if an instantaneous velocity of 100km/hr were maintained for an hour, then the vehicle would cover 100 kilometres in that hour. The same principle applies to the near instantaneous measurement of financial flows, even though they are the sum of a large number of discrete but asynchronous transactions sampled across a very small instant of time.
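The speedometer point can be illustrated numerically. A minimal sketch (the transaction size and timing are invented for illustration) shows a flow of $100/year, made up of many small discrete transactions, being recovered by sampling a tiny window of time:

```python
# True underlying flow: $100 per year, paid as discrete $0.01 transactions
flow = 100.0          # dollars per year
size = 0.01           # dollars per transaction
gap = size / flow     # years between transactions (0.0001 yr, about 53 minutes)

# Sample a very small window of time and annualise what passed through it
window = 0.001        # years (under 9 hours)
n_transactions = round(window / gap)        # transactions inside the window
measured = n_transactions * size / window   # dollars per year

assert abs(measured - flow) < 1.0   # near-instantaneous flow recovered, in $/year
print(f"measured flow: {measured:.1f} $/year")
```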

Each row in a Moore Table shows expenditure by one sector on the others in an economy. Expenditure is shown as a negative entry on the diagonal of the table, and a positive entry on the off-diagonal, with the two necessarily summing to zero on each row and the overall table. The negative sum of the diagonal of the table is aggregate demand , while the sum of the off-diagonal elements is aggregate income . The two are necessarily equal: expenditure is income.
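This bookkeeping property is easy to verify mechanically: any table whose rows sum to zero has a negative-trace equal to its off-diagonal sum. A minimal sketch of the Figure 3 table, with arbitrary placeholder values for the flows A to F:

```python
# Arbitrary illustrative flows, in $/year
A, B, C, D, E, F = 10.0, 20.0, 30.0, 40.0, 50.0, 60.0

# Moore Table for a three-sector economy with no lending (Figure 3)
table = [
    [-(A + B), A,        B       ],  # Sector 1's expenditure row
    [C,        -(C + D), D       ],  # Sector 2's expenditure row
    [E,        F,        -(E + F)],  # Sector 3's expenditure row
]

# every row sums to zero: each sector's spending is another sector's income
assert all(abs(sum(row)) < 1e-12 for row in table)

aggregate_demand = -sum(table[i][i] for i in range(3))  # negative of the diagonal
aggregate_income = sum(table[i][j] for i in range(3) for j in range(3) if i != j)
assert aggregate_demand == aggregate_income == A + B + C + D + E + F
print("expenditure is income:", aggregate_demand)
```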

Figure 3 shows a monetary economy in which neither lending nor borrowing can occur. The flows A to F represent the turnover of an existing and constant money stock, and in this sense are comparable to Friedman’s mythical “Optimum Quantity of Money” economy (Friedman 1969), though minus the helicopters dispensing money. The sum of these monetary flows can thus be substituted by the velocity of money V times the stock of money M, as in Equation .

Figure 3: Moore Table for a monetary economy with no lending

            Sector 1       Sector 2       Sector 3       Sum
Sector 1    -(A+B)         A              B              0
Sector 2    C              -(C+D)         D              0
Sector 3    E              F              -(E+F)         0
Sum         (C+E)-(A+B)    (A+F)-(C+D)    (B+D)-(E+F)    0

\[ AD = AY = A + B + C + D + E + F = V \cdot M \]

Figure 4 shows the mythical (McLeay, Radia et al. 2014) Neoclassical model of Loanable Funds, in which lending is between one non-bank agent and another. Lending is shown as a flow across the diagonal of the Moore Table. Without loss of generality, I show Sector 2 lending Credit dollars per year to Sector 1, which Sector 1 then spends buying the output of Sector 3; Sector 1 also has to pay Interest $/year to Sector 2, to service the outstanding stock of debt. The flow of lending affects the spending power of the lender as well as the borrower: the flow of Credit $/Year from Sector 2 to Sector 1 reduces the amount that Sector 2 can spend on Sector 3.

Figure 4: Moore Table for Loanable Funds

            Sector 1                       Sector 2                       Sector 3      Sum
Sector 1    -(A+B+Credit+Interest)         A+Interest                     B+Credit      0
Sector 2    C                              -(C+D-Credit)                  D-Credit      0
Sector 3    E                              F                              -(E+F)        0
Sum         (C+E)-(A+B+Credit+Interest)    (A+F+Interest)-(C+D-Credit)    (B+D)-(E+F)   0

The sum of the off-diagonal elements of Figure 4 (equivalently, the negative of the sum of its diagonal) confirms the belief of Neoclassical economists that, if banks were just intermediaries, then credit would be a pure redistribution, and it would play no role in aggregate demand and income. However, one interesting result is that (gross) interest payments are part of aggregate demand and aggregate income—see Equation .

\[ AD = AY = A + B + C + D + E + F + Interest = V \cdot M + Interest \]
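Both results can be confirmed with placeholder values for the Figure 4 flows: in this sketch, Credit cancels between borrower and lender, while Interest survives into aggregate demand and income:

```python
# Placeholder flows in $/year; Credit and Interest are the lending-related flows
A, B, C, D, E, F = 10.0, 20.0, 30.0, 40.0, 50.0, 60.0
Credit, Interest = 15.0, 5.0

# Moore Table for Loanable Funds (Figure 4): Sector 2 lends Credit to Sector 1
table = [
    [-(A + B + Credit + Interest), A + Interest,      B + Credit],
    [C,                            -(C + D - Credit), D - Credit],
    [E,                            F,                 -(E + F)  ],
]
assert all(abs(sum(row)) < 1e-12 for row in table)

aggregate_demand = -sum(table[i][i] for i in range(3))
# Credit cancels between borrower and lender; only Interest adds to demand
assert aggregate_demand == (A + B + C + D + E + F) + Interest
print("Loanable Funds aggregate demand:", aggregate_demand)
```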

Figure 5 shows the real-world situation that Credit money is created by bank lending. The Table is now expanded to show the Assets, Liabilities and Equity of the Banking Sector, and monetary flows now include the matching increase of Assets and Liabilities when a new loan (Credit, denominated in $/Year) is made, as well as transfers between Liabilities (predominantly deposit accounts), and also Bank Equity. The Credit money created by the loan is used by Sector 1 to buy goods from Sector 3, and Sector 1 is obliged to service the stock of outstanding loans by paying the flow of Interest $/Year to the Bank (which is recorded in its Equity account).

Figure 5: Moore Table for Endogenous Money (“Bank Originated Money and Debt”)

            Assets    Liabilities (Deposit Accounts)                                       Equity
            Debt      Sector 1                        Sector 2        Sector 3              Bank               Sum
Sector 1    Credit    -(A+B+Credit+Interest)          A               B+Credit              Interest           0
Sector 2              C                               -(C+D)          D                                        0
Sector 3              E                               F               -(E+F)                                   0
Bank                  G                               H               I                     -(G+H+I)           0
Sum                   (C+E+G)-(A+B+Credit+Interest)   (A+F+H)-(C+D)   (B+D+I+Credit)-(E+F)  Interest-(G+H+I)   0

The crucial result here is that Credit is part of both aggregate demand and aggregate income, in the real world of Endogenous Money in which bank lending creates money:

\[ AD = AY = A + B + C + D + E + F + G + H + I + Credit + Interest \]
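This result can be checked in the same fashion as the earlier tables: summing the Liabilities and Equity flows of Figure 5 with placeholder values shows Credit surviving into aggregate demand and income (the letter flows below are arbitrary illustrations):

```python
# Placeholder flows in $/year, including the Bank's own spending G, H, I
A, B, C, D, E, F = 10.0, 20.0, 30.0, 40.0, 50.0, 60.0
G, H, I = 3.0, 2.0, 1.0
Credit, Interest = 15.0, 5.0

# Liabilities/Equity columns of the BOMD Moore Table (Figure 5):
# Sector 1 spends the newly created Credit and pays Interest to the Bank
table = [
    [-(A + B + Credit + Interest), A,        B + Credit, Interest    ],
    [C,                            -(C + D), D,          0.0         ],
    [E,                            F,        -(E + F),   0.0         ],
    [G,                            H,        I,          -(G + H + I)],
]
assert all(abs(sum(row)) < 1e-12 for row in table)

aggregate_demand = -sum(table[i][i] for i in range(4))
# bank lending creates money: Credit is now part of demand and income
assert aggregate_demand == (A + B + C + D + E + F + G + H + I) + Credit + Interest
print("BOMD aggregate demand:", aggregate_demand)
```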

This realisation strengthens the underlying Post Keynesian and MMT methodologies. Not only is “Endogenous Money/BOMD” a more realistic description of banking than “Loanable Funds”, it has an enormous impact on macroeconomics as well. Macroeconomic models that omit banks, debt, money—and therefore the role of credit in aggregate demand and income—omit the “causa causans that factor which is most prone to sudden and wide fluctuation” (Keynes 1936, p. 221), and are utterly misleading models of the macroeconomy. This judgment applies to the entire corpus of Neoclassical economics, bar the work of Michael Kumhof (Kumhof and Jakab 2015; Kumhof, Rancière et al. 2015).

Simulating Loanable Funds and BOMD

The macroeconomic significance of BOMD can be easily illustrated by converting a simple model of Loanable Funds in Minsky to a model of BOMD. The Loanable Funds model is fashioned on the model in (Eggertsson and Krugman 2012), where the consumer sector lends to the investment sector via a bank which operates as an intermediary, and which charges an introduction fee to the consumer sector. The model is completed by the employment of workers by both sectors, intermediate goods purchases by each sector from the other, and purchases of goods by workers and bankers.

The five accounts in the banking sector’s Godley Table are Reserves on its Assets side, three deposit accounts on its Liabilities side—one each for the Consumer Sector, the Investment Sector, and Workers—and the Bank’s Equity account. The transactions of lending, repayment, interest payment and the bank fee all operate through the Liability side of the Banking Sector’s ledger: its Assets are unaffected (see Figure 6).

Figure 6: Banking sector view of Loanable Funds

Conversely, the financial operations all occur on the Asset side of the Consumer (lending) sector’s Godley Table: lending reduces the amount of money in the consumer sector’s deposit account, and increases the debt that the Investment Sector owes to it (see Figure 7).

Figure 7: Consumer (lender) sector view of Loanable Funds

For the borrower, the financial operations alter its Assets and its Liabilities equally: Credit increases its Asset (the deposit account it holds with the Banking Sector), and identically increases its Liability (its debt to the Consumer Sector) (see Figure 8).

Figure 8: The Investment Sector’s (borrower’s) view of the economy

The core differential equations of this model, shown in Equation , can be derived directly by summing the columns of Figure 6 and Figure 7 (the flows that will be affected by the later conversion of this model to BOMD are highlighted in red):

         

All flows are defined in terms of first-order time lags related to the relevant account. In particular, lending by the consumer sector is shown as being based on the amount left in its bank account \(Dep_C\), while repayment by the Investment Sector is based upon the level of outstanding debt \(Debt_I\):

\[ Lend = \frac{Dep_C}{\tau_L};\qquad Repay = \frac{Debt_I}{\tau_R} \]

The parameters \(\tau_L\) and \(\tau_R\) are time constants, which can be varied during a simulation: reducing \(\tau_L\) increases the speed of lending, while reducing \(\tau_R\) increases the speed of repayment. These are varied in the simulation shown in Figure 10. Substantial variations in the speed of lending and repayment dramatically alter the private debt to GDP ratio, but only transiently affect economic activity.
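These first-order lags can be sketched in a few lines. Under Loanable Funds, lending moves money from the Consumer deposit to the Investment Sector’s deposit at a rate set by the lending time constant, repayment returns it at a rate set by the repayment time constant, the money stock is conserved, and debt settles where the two flows balance (all numbers below are illustrative, not the paper’s):

```python
# Illustrative time constants and initial money stock: my own choices
tau_L, tau_R = 5.0, 10.0    # years: lending and repayment lags
dep_C, dep_I, debt = 100.0, 0.0, 0.0

dt = 0.01                   # years per Euler step
for _ in range(6000):       # simulate 60 years
    lend = dep_C / tau_L    # $/year flowing from Consumer to Investment deposit
    repay = debt / tau_R    # $/year flowing back as repayment
    dep_C += (repay - lend) * dt
    dep_I += (lend - repay) * dt
    debt  += (lend - repay) * dt

# money is conserved: intermediated lending only shuffles existing deposits
assert abs((dep_C + dep_I) - 100.0) < 1e-6
# debt settles where lending and repayment balance: dep_C/tau_L = debt/tau_R
assert abs(dep_C / tau_L - debt / tau_R) < 1e-3
print(f"steady-state debt: {debt:.2f}")
```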

This simulation confirms the Neoclassical conditional logic: if banks were mere intermediaries, as Loanable Funds portrays them to be, then banks, debt and money could be ignored in macroeconomics. Large changes in credit have negligible impact upon GDP growth. In fact, credit and GDP growth move in opposite directions in this simulation, because the borrower has been given a lower overall propensity to spend than the lender, so that an increase in lending actually reduces GDP via a fall in the velocity of money (and vice versa: see Figure 9).

Figure 9: Loanable Funds in Minsky. Credit has no significant impact on macroeconomics

This was done to illustrate Bernanke’s assertion that, when lending is simply a “pure redistribution” (Bernanke 2000, p. 24), any macroeconomic impact of lending depends on differences in the marginal spending propensities of the lender and borrower. With the macroeconomic impact of credit depending on idiosyncratic characteristics of the borrower and lender, there is no systemic case for including banks, debt and, arguably, money in macroeconomic theory for a world in which Loanable Funds is true.
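Bernanke’s point reduces to two-agent arithmetic: if each agent turns over its money holdings at its own velocity, a pure redistribution changes aggregate demand only through the velocity difference. A sketch with invented numbers:

```python
# Each agent spends at its own velocity (turnovers of its money per year)
v_lender, v_borrower = 3.0, 2.0      # the borrower spends more slowly here
m_lender, m_borrower = 100.0, 50.0   # $ held by each agent

def demand(ml, mb):
    """Aggregate demand in $/year under pure-intermediation lending."""
    return v_lender * ml + v_borrower * mb

before = demand(m_lender, m_borrower)
credit = 20.0  # $ redistributed from lender to borrower
after = demand(m_lender - credit, m_borrower + credit)

# demand changes only by the propensity gap times the amount lent
assert after - before == (v_borrower - v_lender) * credit
assert after < before  # lending to a slower spender lowers demand here
print(f"demand before: {before}, after: {after}")
```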

Figure 10: Varying lending & repayment rates in Loanable Funds; no significant macroeconomic effects

However, in the real world, banks originate money and debt, and the impact of banks, debt and money on macroeconomics is highly significant. This can be illustrated by making the technically minor but systemically huge changes needed to convert this model of Loanable Funds into Bank Originated Money and Debt (BOMD)—by shifting debt from being an asset of the Consumer Sector to an asset of the Banking Sector (and deleting the superfluous “Fee”, since the Banking Sector now gets its income from the flow of interest). Credit thus increases the Assets of the banking sector and its Liabilities (the sum in the Investment Sector’s account ) by precisely the same amount.

Figure 11: Banking sector view of BOMD

The financial equations of this system are shown in Equation . These are simpler than the equations for Loanable Funds: the mythical intermediation is deleted, the three financial operations are removed from the equation for , and the interest payment now goes to the Banking Sector’s Equity account.

         

These structural changes are the only differences between the two models in this paper. Strictly speaking, the flow of new debt should have been redefined, but this was left as is, to illustrate that the change in the structure of lending alone is sufficient to drastically transform macroeconomics from a discipline in which banks, debt and money can be ignored, into one in which they are critical.

Figure 12: Bank Originated Money and Debt in Minsky. Credit plays a critical role in macroeconomics

These simple structural changes lead to credit having an enormous impact on the economy. Credit and GDP growth now move in the same direction, and GDP grows when credit is positive and falls when it is negative. In keeping with the logical analysis of the previous section, credit adds to aggregate demand and income when it is positive, and subtracts from it when it is negative.

Figure 13: Varying lending & repayment parameters in BOMD: significant macroeconomic effects

Accounting for the Great Moderation & the Great Recession

The models and logical analysis of the previous three sections provide a causal argument for a relationship between debt and credit, on the one hand, and macroeconomic performance on the other, and in particular the experience of severe economic crises like the 2008 “Great Recession”. A rising level of debt relative to GDP, and a rising significance of credit relative to GDP, are Minskian warnings of a crisis, while the crisis itself is caused by a plunge in credit from strongly positive to strongly negative. The plunge in credit from a peak of 15% of GDP in late 2006 to a depth of -5% in late 2009 was the first experience of negative credit since the end of WWII, and this was the cause of the Great Recession.

Figure 14: The “Great Recession” was the first negative credit event in post-WWII economic history

The empirical relationship between credit and the level of unemployment strengthens as the level of private debt rises, and by the time of the recovery from the 1990s recession it is overwhelming. In ridiculously strong contrast to Bernanke’s Neoclassical a priori dismissal of Fisher’s Debt Deflation explanation of the Great Depression, on the grounds that “Absent implausibly large differences in marginal spending propensities among the groups, it was suggested, pure redistributions should have no significant macroeconomic effects” (Bernanke 2000, p. 24), the correlation between credit and unemployment since 1990 is a staggering -0.85 (see Figure 15).

Figure 15: Credit and Unemployment. Correlation -0.53 since 1970, -0.85 since 1990


The role of negative credit in the USA’s major economic crises

Since credit has no role in mainstream economic theory, the collection of data on private debt and credit has been sporadic, depending more on the initiative of statisticians than the expressed needs of economists for data. This situation has improved dramatically in recent years thanks to the work of the Bank for International Settlements (Borio 2012; Dembiermont, Drehmann et al. 2013), the Bank of England (Hills, Thomas et al. 2010) and various non-mainstream economists (Jorda, Schularick et al. 2011; Schularick and Taylor 2012; Vague 2019), but much remains to be done to provide the comprehensive time series data that the significance of debt and credit warrants.

However, some data can still be retrieved that helps make sense of past economic crises (Vague 2019). In particular, a long term debt series can be derived for the USA from three very different time series: the post-1952 Federal Reserve Flow of Funds data; Census data for debt between 1916 and 1970; and a series on loans by selected banks between 1834 and 1970 (Census 1949; Census 1975).

Figure 16: Debt to GDP data from the BIS & US Census

Fortunately, the data series overlap, and the trends in the data show that, though the definitions differed, the same fundamental processes were being tracked by these three data series. This allows a composite time series to be assembled by rescaling the two Census data series to match the current BIS/Federal Reserve data set. When credit data is derived from this composite series, two phenomena stand out: firstly, America’s greatest economic crises were caused by sustained periods of negative credit; and secondly, the post-WWII regime has had only one negative credit event—the “Great Recession”—while the pre-WWII regime had frequent, though smaller, negative credit experiences (see Figure 17). The two greatest were the Great Depression and the “Panic of 1837” (Roberts 2012).
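The rescaling step can be sketched simply: compute the ratio of the two series over their overlap, scale the older series by it, and concatenate. The data below is synthetic, for illustration only:

```python
# Synthetic example: an old debt series measured on a narrower definition
# overlaps a newer, broader series for the years 1950-1954
old_series = {1940: 20.0, 1945: 25.0, 1950: 30.0, 1952: 32.0, 1954: 34.0}
new_series = {1950: 45.0, 1952: 48.0, 1954: 51.0, 1960: 60.0, 1970: 80.0}

overlap = sorted(set(old_series) & set(new_series))
# average rescaling factor over the overlap years
factor = sum(new_series[y] / old_series[y] for y in overlap) / len(overlap)

# composite: rescaled old series before the overlap, new series from it onward
composite = {y: v * factor for y, v in old_series.items() if y < min(overlap)}
composite.update(new_series)

assert abs(factor - 1.5) < 1e-9
assert abs(composite[1940] - 30.0) < 1e-9  # 20.0 rescaled by 1.5
print(f"splice factor: {factor:.2f}")
```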

While the Great Depression and the Great Recession are etched into our collective memories, I was personally unaware of the “Panic of 1837” until this credit data alerted me to the scale of negative credit at that time. Though the recorded level of private debt was low compared to post-WWII levels, the rate of decline of debt—the scale of negative credit—was both enormous and sustained. Credit was negative between mid-1837 and 1844, and hit a maximum rate of decline of 9% of GDP. It is little wonder that the “Panic of 1837” was described as “an economic crisis so extreme as to erase all memories of previous financial disorders” (Roberts 2012, p. 24).

Figure 17: Composite time series for private debt and credit derived from the data in Figure 16

Nonlinearity and Realism

The model in Equation generates symmetric cycles—booms that are as big as busts, before a final collapse—simply because of the unrealistic assumption, made for reasons of analytic tractability, of linear behavioural relations (assumptions 6 & 7 in Table 1). Realistically, workers’ wage demands given the level of employment are nonlinear, as Phillips insisted (Phillips 1954; Phillips 1958), as are the investment reactions of capitalists to the rate of profit, as Minsky insisted with his perceptive concept of “euphoric expectations” (Minsky 1982, p. 140).

Keen’s 1995 Minsky model (Keen 1995) used the hyperbolic nonlinear function suggested by Blatt (Blatt 1983, p. 213) to avoid unrealistic outcomes, such as the employment rate exceeding 100% in a nonlinear Goodwin model (Goodwin 1967). A generalized exponential function can be used instead (see Equation ). This could allow unrealistic values, but these are avoided by suitable choices of input variables (the employment to population ratio rather than the unemployment rate in the “Phillips Curve” function).

\[ GenExp(x; x_v, y_v, s, m) = (y_v - m)\,e^{\frac{s\,(x - x_v)}{y_v - m}} + m \]

which takes the value \(y_v\) at \(x = x_v\), has slope \(s\) there, and approaches the minimum \(m\) asymptotically.

The parameters shown in Table 2 (stable wages at 60% employment, a slope of 2 for the wage change function at 60% employment, and a maximum wage decline of 4% per annum; investment 3% of GDP at 3% profit rate, investment function slope at 3% of 2, and a minimum gross investment level of zero) generate an asymmetric process in which the ultimate downturns are deeper than the booms. Nonlinear behavioural assumptions thus improve the realism of the model, but do not change its fundamental properties, which emanate from the inherent structural nonlinearity of the model itself.
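Assuming the generalized exponential takes the form GenExp(x) = (y_v - m)·exp(s·(x - x_v)/(y_v - m)) + m (a reconstruction consistent with the value, slope and minimum parameters listed above, not a formula quoted from the paper), its defining properties can be checked numerically:

```python
import math

def gen_exp(x, x_v, y_v, s, m):
    """Generalized exponential (my reconstruction): passes through (x_v, y_v)
    with slope s there, and approaches the minimum m asymptotically."""
    return (y_v - m) * math.exp(s * (x - x_v) / (y_v - m)) + m

# investment-function parameters from Table 2: investment 3% of GDP at a
# 3% profit rate, slope 2, minimum gross investment of zero
x_v, y_v, s, m = 0.03, 0.03, 2.0, 0.0

# value at x_v is y_v
assert abs(gen_exp(x_v, x_v, y_v, s, m) - y_v) < 1e-12
# slope at x_v is s (central difference)
h = 1e-7
slope = (gen_exp(x_v + h, x_v, y_v, s, m) - gen_exp(x_v - h, x_v, y_v, s, m)) / (2 * h)
assert abs(slope - s) < 1e-4
# far below x_v the function flattens onto its minimum m
assert abs(gen_exp(-10.0, x_v, y_v, s, m) - m) < 1e-12
print("generalized exponential properties verified")
```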

Table 2: Nonlinear behavioural functions for wage change and investment

Assumption                                      Parameters

11. Nonlinear wage change function parameters   x_v = 60%; y_v = 0; s = 2; min = -4%

12. Nonlinear investment function parameters    x_v = 3%; y_v = 3%; s = 2; min = 0

Extending the definitions to include inflation

A simple single-price-level nominal extension can be derived from definitions in the same fashion as the model in Equation , though it takes more assumptions to turn the definitional dynamic statements into a model. The definition of the employment rate is unchanged, while the definitions of the wages share of GDP and the debt to GDP ratio are both in monetary terms:

\[ \lambda = \frac{L}{N};\quad \omega = \frac{w\,L}{P\,Y_R};\quad d = \frac{D}{P\,Y_R} \]

When differentiated with respect to time, this yields three definitionally true statements as before, but this time the rate of change of prices is a component of two of them:

  • The employment rate will rise if real economic growth exceeds the sum of population growth and growth in labor productivity;
  • The wages share of output will rise if money wage demands exceed the sum of inflation and growth in labor productivity; and
  • The private debt to GDP ratio will rise if the rate of growth of private debt exceeds the sum of inflation plus the rate of economic growth.

In equations, these statements are (writing i for the rate of inflation, α for growth in labor productivity, β for population growth, w for the money wage, and D for the level of private debt):

         λ̇/λ = g_R - (α + β)

         ω̇/ω = ẇ/w - (i + α)

         ḋ/d = Ḋ/D - (i + g_R)

where the subscript R signifies “real” as opposed to monetary.
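Because these three statements are definitionally true, they can be checked numerically: give every underlying level an exponential path and confirm that the growth rates of the three ratios decompose as stated. All growth rates below are invented purely for the check.

```python
import math

# Invented constant growth rates (per year), purely to check the identities:
# money wage w, price level p, labor productivity a, population N,
# real output Yr, private debt D.
g_w, g_p, g_a, g_N, g_Yr, g_D = 0.04, 0.02, 0.015, 0.01, 0.03, 0.05
rates = dict(w=g_w, p=g_p, a=g_a, N=g_N, Yr=g_Yr, D=g_D)

def ratios(t):
    """Employment rate, wages share and debt ratio implied by the levels."""
    v = {k: math.exp(g * t) for k, g in rates.items()}  # unit initial levels
    L = v['Yr'] / v['a']                      # employment = real output / productivity
    lam = L / v['N']                          # employment rate
    omega = v['w'] * L / (v['p'] * v['Yr'])   # wages share of nominal GDP
    d = v['D'] / (v['p'] * v['Yr'])           # debt to nominal GDP ratio
    return lam, omega, d

# Growth rates of the three ratios at t = 0, by central differences:
eps = 1e-6
growth = [(hi - lo) / (2 * eps * mid) for lo, hi, mid in
          zip(ratios(-eps), ratios(eps), ratios(0.0))]

# The three definitional statements in growth-rate form:
expected = [g_Yr - (g_N + g_a),   # employment rate
            g_w - (g_p + g_a),    # wages share
            g_D - (g_p + g_Yr)]   # debt to GDP ratio
```

With the rates chosen above, debt grows at exactly the rate of inflation plus real growth, so the third decomposition returns zero: the debt ratio is constant.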

Conclusion: the macroeconomic foundations of macroeconomics

Minsky’s Financial Instability Hypothesis is thus not merely a particular Post Keynesian model, but a foundational model of macroeconomics, in the same sense that Lorenz’s model of turbulence in fluid dynamics is a foundational model for meteorology (Lorenz 1963). Though Minsky did not do this himself, a model of his hypothesis can be derived directly from the impeccably sound macroeconomic foundations of incontestable macroeconomic definitions. It can be extended in the same manner, by adding definitions for government spending, asset price dynamics that differ from commodity price dynamics, multi-sectoral production, etc. The structure and history of an economy are the primary drivers of its behaviour, rather than the behaviour of individual agents in it. “Agents” are, as Marx famously remarked, constrained by history:

Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past. The tradition of all dead generations weighs like a nightmare on the brains of the living. (Marx 1852, Chapter 1)

System dynamics enables the modelling of the structure, the history, and the dynamics of the economy. Minsky’s genius was that he perceived, without this technology, the essential elements of all three that make capitalism prone to crises. Minsky and system dynamics therefore provide the foundations for a paradigmatic challenge to Neoclassical economics, whose development has been driven by the obsession with finding sound microfoundations for macroeconomics, all the while ignoring results that showed this was impossible (Gorman 1953; Anderson 1972; Sonnenschein 1972; Sonnenschein 1973). Macrofoundations, far-from-equilibrium complex systems dynamics, and monetary analysis—the polar opposites of the Neoclassical obsessions with microfoundations, equilibrium and barter—are the proper bases for economic theory.

Postscript: Minsky the Software

All the models in this paper have been built in Minsky, an Open Source system dynamics program with a unique feature: financial flows can be modelled easily—and their structure modified easily—using interlocking double-entry bookkeeping tables called “Godley Tables” (in honour of Wynne Godley). Minsky can be downloaded from SourceForge, or from its Patreon page. The developers would appreciate it if specialists on Minsky the economist—and Post Keynesian economists in general—would download Minsky the software and help to extend it further by providing user feedback.

Appendix

  1. Continuous versus discrete time

A referee suggested that discrete-time difference equations were more appropriate “for problems which are specified in terms of accounting relationships (which are discrete)”, and that continuous-time differential equations “gives rise to nonobvious relationships in the structure of delays. What kind of profit does finance investment? Obviously past profit; but the specification in continuous-time does not allow to make it evident.”

While each individual financial transaction is discrete, each is also asynchronous with other financial transactions. In a “top down” model, aggregate asynchronous phenomena are more realistically modelled using continuous time than discrete time: this is why aggregate population growth models use continuous time, even though each birth is a discrete event.

The time delays in discrete time economic models are also normally arbitrary. They are almost always in terms of years, which is reasonable for investment, but not for consumption, where the scale should be in terms of weeks or months rather than years. To do discrete-time modelling properly, consumption in period t should be modelled as depending on income in period t-2 (say), where the time period is measured in weeks, while investment in period t should be modelled as depending on the change in income between period t-26 and t-52 (say).

But firstly, no-one does this, because it is simply too complicated: in practice, lags of one year are commonplace in macroeconomic discrete time models. Secondly, if this were done, and empirical work later found that investment in period t actually depended on the change in income between periods t-40 and t-86 (say), then the entire structure of the model would need to be re-written. This is not necessary for a continuous time model, where the equivalent function to a time delay is a time lag. The dependence of, for example, investment today on profits in the past could be shown using a linear first order time lag: a new variable is defined (say, x_τ) which is shown as converging to the actual variable x with a time lag of τ, where the value of τ is measured in years:

         (d/dt) x_τ = (x - x_τ)/τ

The scalar τ can be altered in a continuous time model without having to alter the structure of the model itself.
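A first order time lag of this kind is straightforward to simulate. The Euler step below is a minimal sketch: the function name, the step size, and the step input are all my choices for illustration.

```python
import math

def lag_path(x_of_t, tau, t_end, dt=0.001, x0=0.0):
    """Euler integration of a first order time lag:
    d(x_lag)/dt = (x(t) - x_lag) / tau, with tau measured in years."""
    x_lag, t, path = x0, 0.0, []
    while t < t_end:
        x_lag += dt * (x_of_t(t) - x_lag) / tau
        t += dt
        path.append((t, x_lag))
    return path

# A step in profits from 0 to 1 at t = 0, smoothed with a one-year lag:
path = lag_path(lambda t: 1.0, tau=1.0, t_end=3.0)
# After one lag period the lagged variable has covered about 1 - 1/e
# of the step; changing tau rescales the response with no change to
# the structure of the model.
```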

Time lags were not used in the model derived from macroeconomic definitions, because the objective was to produce the simplest possible model working from those definitions, and because time lags introduce dynamics of their own, independent of the structural points made by that model. However, time lags were integral to the models of Loanable Funds and Bank Originated Money and Debt (BOMD), in which they linked the outstanding stocks to the flows. The full equation for the rate of change of private debt in the BOMD model is:

         dD/dt = D/τ_L - D/τ_R

Here τ_L is the length of time that new lending would take to double D if it occurred at a linear rate, while τ_R tells how long repayment would take to reduce D to zero if it occurred at a linear rate. For more on discrete vs continuous time modelling, and time lags in economic modelling, see (Andresen 2018).
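Since both flows are proportional to D, this dynamic has a closed-form exponential solution. The sketch below uses invented values; tau_L and tau_R are my renderings of the lending and repayment lags.

```python
import math

def debt(D0, tau_L, tau_R, t):
    """Closed-form solution of dD/dt = D/tau_L - D/tau_R: lending at the
    rate D/tau_L (tau_L = years to double D at a linear rate) minus
    repayment at the rate D/tau_R (tau_R = years to extinguish D at a
    linear rate) gives exponential growth or decay of the stock of debt."""
    return D0 * math.exp((1.0 / tau_L - 1.0 / tau_R) * t)

# Lending that would double debt in 10 years dominates repayment that
# would extinguish it in 20, so debt grows; reverse the lags and it decays.
growing = debt(100.0, 10.0, 20.0, 10.0)     # 100 * e^0.5
shrinking = debt(100.0, 20.0, 10.0, 10.0)   # 100 * e^-0.5
```

Whether debt outruns or falls behind income thus turns on the relative size of the two lags, not on any one period's lending decision.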

  2. Stability analysis of basic Minsky model

The basic Minsky model is:

         

The following shorthand expressions are used in this model with linear behavioural functions:

         

Spelling out these shorthand expressions yields the fully specified model, which makes it easier to identify the nonlinear feedbacks in this model. Instances where the system states interact nonlinearly with each other are highlighted in red: there are two dampening nonlinear feedbacks in the equation for , one amplifying feedback for , and two amplifying feedbacks for (including one term in ).

         

The “good” equilibrium of this model is more easily derived by solving for the zeros of these equations via a substitution:

         

This equilibrium is in terms of the profit share, employment rate and debt ratio: the wages share can be derived from these. This residual role for the wages share of output manifests itself in the model dynamics as well: before the crisis, the wages share falls as the debt level rises, while the profit share fluctuates around its equilibrium. This confirms Marx’s intuition in Capital I that wages are a dependent variable in capitalism: “To put it mathematically: the rate of accumulation is the independent, not the dependent, variable; the rate of wages, the dependent, not the independent, variable” (Marx 1867, Chapter 25, Section 1).

The stability of the system about its equilibria is given by its Jacobian which, given the system’s 3 dimensions and 10 parameters, is very complicated. Making the same substitutions, it is

         

 

Substituting numerical values for all but the key parameter π_S yields the characteristic polynomial of the Jacobian in terms of π_S:

         

This has one real eigenvalue which is always negative, and two complex eigenvalues which have zero or negative real parts for values of π_S at or below 6, and positive real parts for values of 6.1 and above: see Figure 18. The system thus bifurcates at this point, changing from one where the “good equilibrium” is stable and a cyclical attractor, to one where it is unstable and a cyclical repeller: the system converges towards it under the influence of the negative real eigenvalue until, in proximity to this equilibrium, the real parts of the complex eigenvalues repel the system, which then converges explosively to the “bad equilibrium”.
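The same eigenvalue test can be sketched numerically. The system below is the standard three-dimensional Keen model with linear behavioural functions, in the form analysed by Grasselli and Costa Lima (2012); the parameter values and function coefficients are illustrative, not those of this paper, and the Jacobian is obtained by finite differences rather than symbolically.

```python
import numpy as np

# Illustrative parameters (not this paper's values): capital-output ratio nu,
# productivity growth alpha, population growth beta, and interest rate r.
nu, alpha, beta, r = 3.0, 0.025, 0.02, 0.04
Phi   = lambda lam: -0.73 + 0.8 * lam     # linear wage change ("Phillips") function
kappa = lambda pi: -0.0065 + 5.0 * pi     # linear investment share of GDP

def field(y):
    """Keen-model vector field in (omega, lam, d): wages share, employment
    rate and debt ratio, with profit share pi = 1 - omega - r*d and real
    growth rate g = kappa(pi)/nu."""
    omega, lam, d = y
    pi = 1.0 - omega - r * d
    g = kappa(pi) / nu
    return np.array([omega * (Phi(lam) - alpha),
                     lam * (g - alpha - beta),
                     kappa(pi) - pi - d * g])

# The "good" equilibrium in closed form (the literals repeat Phi and kappa):
pi_e    = (nu * (alpha + beta) + 0.0065) / 5.0   # kappa(pi_e) = nu*(alpha+beta)
lam_e   = (alpha + 0.73) / 0.8                   # Phi(lam_e) = alpha
d_e     = (nu * (alpha + beta) - pi_e) / (alpha + beta)
omega_e = 1.0 - pi_e - r * d_e
y_e = np.array([omega_e, lam_e, d_e])

# Jacobian at the equilibrium by central finite differences, then eigenvalues:
h = 1e-7
J = np.column_stack([(field(y_e + h * dv) - field(y_e - h * dv)) / (2 * h)
                     for dv in np.eye(3)])
eigs = np.linalg.eigvals(J)
# The signs of eigs.real determine whether the equilibrium attracts or repels.
```

Sweeping a parameter and watching the real parts of `eigs` change sign reproduces, for one numerical parameter set, the bifurcation that the symbolic analysis above tracks in π_S.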

Figure 18: Eigenvalues for π_S=6 & 6.1, as calculated symbolically in Mathcad

  3. Loanable Funds & BOMD

The key differential equations for the models of Loanable Funds and BOMD are shown in Equations and respectively. The definitions they share are shown in Equation :

         

  4. Distinguishing Debt from Credit

Kalecki once famously remarked that economics was “the science of confusing stocks with flows” (Robinson 1982, p. 295). That is apparent in the confusion caused by the use of the word “Credit” to describe both the level of debt (in $) and its rate of change (in $/year). An outstanding example of this is the paper “The Economic Crisis from a Neoclassical Perspective” (Ohanian 2010), by the prominent “New Classical” economist Lee Ohanian, in which he rules out the “financial explanation” of the 2008 crisis on the basis of the following empirical argument:

The financial explanation also argues that the 2007-2009 recession became much worse because of a significant contraction of intermediation services. But some measures of intermediation have not declined substantially. Figure 4, which is updated from Chari, Christiano, and Kehoe (Chari, Christiano et al. 2008), shows that bank credit relative to nominal GDP rose at the end of 2008 to an all-time high. And while this declined by the first quarter of 2010, bank credit was still at a higher level at this point than any time before 2008… These data suggest that aggregate quantities of intermediation volumes have not declined markedly. (Ohanian 2010, p. 59)

Ohanian’s Figure 4 is reproduced below. It is obvious from the scale that the data he used recorded the stock of outstanding debt, rather than the flow of new debt: if new debt had indeed been between 1.2 and 2 times GDP every year since 1978, then private debt would have been many hundreds to thousands of times GDP by 2010. He—and Chari, Christiano, and Kehoe before him (Chari, Christiano et al. 2008; Troshkin 2008)—interpreted that stock as a flow, in part because the word “credit” was used to describe it, and of course because this error suits the non-monetary analysis of Neoclassical economics. On the basis of this obvious error, they rejected the argument that the financial crisis of 2008 was in fact a financial crisis.

To avoid this stock-flow confusion, I use the word “Debt” to describe the level of debt, dimensioned in currency units, and “Credit” to describe the flow of new debt, dimensioned in currency units per year. I recommend this practice to other Post Keynesians.
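The distinction can be made concrete in a few lines; all figures below are invented purely to illustrate the units involved.

```python
# All figures invented, purely to illustrate the stock/flow distinction.
debt = [10.0, 11.0, 12.5, 13.0, 12.0]   # year-end Debt: a stock, $ trillion
gdp  = [10.0, 10.4, 10.8, 11.2, 11.6]   # nominal GDP: a flow, $ trillion/year

# Credit is the flow of new debt: the annual change in Debt, $ trillion/year.
credit = [b - a for a, b in zip(debt, debt[1:])]

debt_to_gdp   = [d / y for d, y in zip(debt, gdp)]        # stock/flow: dimension of years
credit_to_gdp = [c / y for c, y in zip(credit, gdp[1:])]  # flow/flow: dimensionless

# The stock ratio can sit near its all-time high (as in Ohanian's Figure 4)
# in the very year that the flow, credit, turns negative.
```

In the toy series, Debt-to-GDP remains above 100% in the final year even though Credit has turned negative: exactly the configuration that a reader who conflates the two measures will misread as undiminished “intermediation”.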

References

Anderson, P. W. (1972). “More Is Different.” Science 177(4047): 393-396.

Andresen, T. (2018). On the Dynamics of Money Circulation, Creation and Debt – a Control Systems Approach. Engineering Cybernetics, Norwegian University of Science and Technology.

Baran, P. A. and P. M. Sweezy (1968). Monopoly Capital: An Essay on the American Economic and Social Order. New York, Monthly Review Press.

Bernanke, B. S. (2000). Essays on the Great Depression. Princeton, Princeton University Press.

Blanchard, O. (2018). “On the future of macroeconomic models.” Oxford Review of Economic Policy 34(1-2): 43-54.

Blatt, J. M. (1983). Dynamic economic systems: a post-Keynesian approach. Armonk, N.Y., M.E. Sharpe.

Borio, C. (2012). “The financial cycle and macroeconomics: What have we learnt?” BIS Working Papers.

Census, B. o. (1949). Historical Statistics of the United States 1789-1945. Washington, United States Government.

Census, B. o. (1975). Historical Statistics of the United States Colonial Times to 1970. Washington, United States Government.

Chari, V., L. Christiano, et al. (2008). “Facts and myths about the financial crisis of 2008.” IDEAS Working Paper Series from RePEc.

Charles, S. (2005). “A Note on Some Minskyan Models of Financial Instability.” Studi Economici 60(86): 43-51.

Charles, S. (2008). “Teaching Minsky’s Financial Instability Hypothesis: A Manageable Suggestion.” Journal of Post Keynesian Economics 31(1): 125-138.

Cruz, M. (2005). “A Three-Regime Business Cycle Model for an Emerging Economy.” Applied Economics Letters 12(7): 399-402.

Dembiermont, C., M. Drehmann, et al. (2013). “How much does the private sector really borrow? A new database for total credit to the private nonfinancial sector.” BIS Quarterly Review (March): 65-81.

Deutsche Bundesbank (2017). “The role of banks, non-banks and the central bank in the money creation process.” Deutsche Bundesbank Monthly Report: 13-33.

Dow, S. C. (1997). Endogenous Money. A “second edition” of The general theory. G. C. Harcourt and P. A. Riach. London, Routledge. 2: 61-78.

Eggertsson, G. B. and P. Krugman (2012). “Debt, Deleveraging, and the Liquidity Trap: A Fisher-Minsky-Koo approach.” Quarterly Journal of Economics 127: 1469-1513.

Fazzari, S., P. Ferri, et al. (2008). “Cash Flow, Investment, and Keynes-Minsky Cycles.” Journal of Economic Behavior and Organization 65(3-4): 555-572.

Fiebiger, B. (2014). “Bank credit, financial intermediation and the distribution of national income all matter to macroeconomics.” Review of Keynesian Economics 2(3): 292-311.

Fontana, G. (2003). “Post Keynesian Approaches to Endogenous Money: a time framework explanation.” Review of Political Economy 15(3).

Fontana, G. (2004). “Hicks on monetary theory and history: money as endogenous money.” Cambridge Journal of Economics 28: 73-88.

Friedman, M. (1969). The Optimum Quantity of Money. The Optimum Quantity of Money and Other Essays. Chicago, MacMillan: 1-50.

Fullwiler, S. T. (2013). “An endogenous money perspective on the post-crisis monetary policy debate.” Review of Keynesian Economics 1(2): 171-194.

Gatti, D. D., D. Delli Gatti, et al. (1994). Financial Institutions, Economic Policy, and the Dynamic Behavior of the Economy. Levy Economics Institute of Bard College Working Papers. Annandale-on-Hudson, NY, Levy Economics Institute.

Goodwin, R. M. (1967). A growth cycle. Socialism, Capitalism and Economic Growth. C. H. Feinstein. Cambridge, Cambridge University Press: 54-58.

Gorman, W. M. (1953). “Community Preference Fields.” Econometrica 21(1): 63-80.

Grasselli, M. and B. Costa Lima (2012). “An analysis of the Keen model for credit expansion, asset price bubbles and financial fragility.” Mathematics and Financial Economics 6: 191-210.

Hicks, J. (1981). “IS-LM: An Explanation.” Journal of Post Keynesian Economics 3(2): 139-154.

Hicks, J. R. (1937). “Mr. Keynes and the ‘Classics’; A Suggested Interpretation.” Econometrica 5(2): 147-159.

Hills, S., R. Thomas, et al. (2010). “The UK recession in context — what do three centuries of data tell us?” Bank of England Quarterly Bulletin 2010 Q4: 277-291.

Holmes, A. R. (1969). Operational Constraints on the Stabilization of Money Supply Growth. Controlling Monetary Aggregates. F. E. Morris. Nantucket Island, The Federal Reserve Bank of Boston: 65-77.

Jarsulic, M. (1989). “Endogenous credit and endogenous business cycles.” Journal of Post Keynesian Economics 12: 35-48.

Jorda, O., M. Schularick, et al. (2011). “Financial Crises, Credit Booms, and External Imbalances: 140 Years of Lessons.” IMF Economic Review 59(2): 340-378.

Jorgenson, D. W. (1960). “A Dual Stability Theorem.” Econometrica 28(4): 892-899.

Jorgenson, D. W. (1961). “Stability of a Dynamic Input-Output System.” The Review of Economic Studies 28(2): 105-116.

Jorgenson, D. W. (1963). “Stability of a Dynamic Input-Output System: A Reply.” The Review of Economic Studies 30(2): 148-149.

Keen, S. (1995). “Finance and Economic Breakdown: Modeling Minsky’s ‘Financial Instability Hypothesis’.” Journal of Post Keynesian Economics 17(4): 607-635.

Keen, S. (2014). “Endogenous money and effective demand.” Review of Keynesian Economics 2(3): 271-291.

Keen, S. (2015). “The Macroeconomics of Endogenous Money: Response to Fiebiger, Palley & Lavoie.” Review of Keynesian Economics 3(2): 602-611.

Keynes, J. M. (1936). The general theory of employment, interest and money. London, Macmillan.

Kumhof, M. and Z. Jakab (2015). Banks are not intermediaries of loanable funds — and why this matters. Working Paper. London, Bank of England.

Kumhof, M., R. Rancière, et al. (2015). “Inequality, Leverage, and Crises.” The American Economic Review 105(3): 1217-1245.

Lavoie, M. (2014). “A comment on ‘Endogenous money and effective demand’: a revolution or a step backwards?” Review of Keynesian Economics 2(3): 321-332.

Lorenz, E. N. (1963). “Deterministic Nonperiodic Flow.” Journal of the Atmospheric Sciences 20(2): 130-141.

Lucas, R. E., Jr. (2004). “Keynote Address to the 2003 HOPE Conference: My Keynesian Education.” History of Political Economy 36: 12-24.

Marx, K. (1852). The Eighteenth Brumaire of Louis Bonaparte. Moscow, Progress Publishers.

Marx, K. (1867). Capital. Moscow, Progress Press.

Marx, K. (1894). Capital Volume III. International Publishers.

McLeay, M., A. Radia, et al. (2014). “Money creation in the modern economy.” Bank of England Quarterly Bulletin 2014 Q1: 14-27.

McManus, M. (1963). “Notes on Jorgenson’s Model.” The Review of Economic Studies 30(2): 141-147.

Minsky, H. P. (1957). “Monetary Systems and Accelerator Models.” The American Economic Review 47(6): 860-883.

Minsky, H. P. (1975). John Maynard Keynes. New York, Columbia University Press.

Minsky, H. P. (1982). Can “it” happen again?: essays on instability and finance. Armonk, N.Y., M.E. Sharpe.

Moore, B. J. (1979). “The Endogenous Money Stock.” Journal of Post Keynesian Economics 2(1): 49-70.

Moore, B. J. (1988). “The Endogenous Money Supply.” Journal of Post Keynesian Economics 10(3): 372-385.

Moore, B. J. (1988). Horizontalists and Verticalists: The Macroeconomics of Credit Money. Cambridge, Cambridge University Press.

Ohanian, L. E. (2010). “The Economic Crisis from a Neoclassical Perspective.” Journal of Economic Perspectives 24(4): 45-66.

Palley, T. (2014). “Aggregate demand, endogenous money, and debt: a Keynesian critique of Keen and an alternative theoretical framework.” Review of Keynesian Economics 2(3): 312-320.

Phillips, A. W. (1954). “Stabilisation Policy in a Closed Economy.” The Economic Journal 64(254): 290-323.

Phillips, A. W. (1958). “The Relation between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861-1957.” Economica 25(100): 283-299.

Piketty, T. (2014). Capital in the Twenty-First Century. Harvard, Harvard College.

Pomeau, Y. and P. Manneville (1980). “Intermittent transition to turbulence in dissipative dynamical systems.” Communications in Mathematical Physics 74: 189-197.

Roberts, A. (2012). America’s first Great Depression: economic crisis and political disorder after the Panic of 1837. Ithaca, Cornell University Press.

Robinson, J. (1982). “Shedding darkness.” Cambridge Journal of Economics 6(3): 295-296.

Rochon, L.-P. (1999). “The Creation and Circulation of Endogenous Money: A Circuit Dynamique Approach.” Journal of Economic Issues 33(1): 1-21.

Rosser, J. B. (1999). Chaos Theory. Encyclopedia of Political Economy. P. A. O’Hara. London, Routledge. 2: 81-83.

Santos, C. H. D. and A. C. Macedo e Silva (2009). “Revisiting (and Connecting) Marglin-Bhaduri and Minsky: An SFC Look at Financialization and Profit-led Growth.” The Levy Economics Institute, Economics Working Paper Archive.

Schularick, M. and A. M. Taylor (2012). “Credit Booms Gone Bust: Monetary Policy, Leverage Cycles, and Financial Crises, 1870-2008.” American Economic Review 102(2): 1029-1061.

Schumpeter, J. A. (1954). History of economic analysis. E. B. Schumpeter (ed.). London, Allen & Unwin.

Sonnenschein, H. (1972). “Market Excess Demand Functions.” Econometrica 40(3): 549-563.

Sonnenschein, H. (1973). “Do Walras’ Identity and Continuity Characterize the Class of Community Excess Demand Functions?” Journal of Economic Theory 6(4): 345-354.

Taylor, L. and S. A. O’Connell (1985). “A Minsky Crisis.” Quarterly Journal of Economics 100(5): 871-885.

Troshkin, M. (2008). “Facts and myths about the financial crisis of 2008 (Technical notes).” IDEAS Working Paper Series from RePEc.

Tymoigne, E. (2006). “The Minskyan System, Part III: System Dynamics Modeling of a Stock Flow-Consistent Minskyan Model.” The Levy Economics Institute, Economics Working Paper Archive.

Vague, R. (2019). A Brief History of Doom: Two Hundred Years of Financial Crises. Philadelphia, University of Pennsylvania Press.

Wray, L. R. (2019). “Response to Doug Henwood’s Trolling in Jacobin.” New Economic Perspectives, http://neweconomicperspectives.org/2019/02/response-to-doug-henwoods-trolling-in-jacobin.html.