The “Anything Goes” Market Demand Curve

Solow noted that DSGE models have “a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages” (Solow 2003. Emphasis added). This bizarre construct is the consequence of the theoretical failure that lies beneath the apparent success of deriving macroeconomics from microeconomics.

This is chapter four of my draft book Rebuilding Economics from the Top Down. The previous chapter was The Impossibility of Microfoundations for Macroeconomics, which is published here on Patreon and here on Substack.

If you like my work, please consider becoming a paid subscriber from as little as $10 a year on Patreon, or $5 a month on Substack.
PS Word doesn’t export footnotes or endnotes to blogs, so I’ve inserted footnote text here inside brackets [] and formatted it in italics.

Neoclassical theory has an elaborate and internally consistent model of consumption by an individual consumer, from which it can prove, given its “Axioms of Revealed Preference” (Samuelson 1948), that the individual’s demand curve slopes down [Strictly speaking this is the “Hicksian Compensated Demand Curve”, which I explain in Chapter 3 of Debunking Economics (Keen 2011), “The Calculus of Hedonism”.]. An individual demand curve therefore obeys the so-called “Law of Demand” that, to cite Alfred Marshall:

There is then one general law of demand: The greater the amount to be sold, the smaller must be the price at which it is offered in order that it may find purchasers; or, in other words, the amount demanded increases with a fall in price, and diminishes with a rise in price. (Marshall 1890 [1920], p. 99)

Marshall thought that what applied to the individual would also apply to the market, because the “peculiarities in the wants of individuals” would cancel each other out, so that market demand would also fall as the price rose:

In large markets, then—where rich and poor, old and young, men and women, persons of all varieties of tastes, temperaments and occupations are mingled together—the peculiarities in the wants of individuals will compensate one another in a comparatively regular gradation of total demand. (Marshall 1890 [1920], p. 98)

The bad news, which was discovered decades later by a number of mathematical economists (Gorman 1953; Samuelson 1956; Sonnenschein 1972, 1973a, 1973b; Shafer and Sonnenschein 1982; Mantel 1974, 1976; Debreu 1974), was that this was not true: summing the demand curves of many individuals, each of which obeys the “Law of Demand”, does not necessarily generate a market demand curve that slopes downwards. (The market demand curve can instead take any shape at all that you can draw, so long as it doesn’t cross itself and doesn’t generate two quantities demanded for one relative price. This does not mean that empirically derived demand curves behave this way, by the way, since empirical data will inherently include the effect of the distribution of income. What it does mean is that Neoclassical economists cannot derive a core aspect of their model from their own core assumptions.)
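
To make this concrete, here is a minimal numerical sketch in Python of a standard two-consumer, two-good exchange economy of the kind used in the literature on multiple equilibria. The specific utility functions and endowments are illustrative assumptions, not taken from any of the papers cited above. Each consumer is a perfectly rational utility-maximiser, yet the market excess demand curve for one of the goods crosses zero three times, so it cannot be globally downward-sloping:

```python
# A minimal sketch of the "anything goes" result in a two-consumer exchange
# economy. The utility functions and endowments below are illustrative
# assumptions (a standard construction from the multiple-equilibria
# literature), not the derivations of the authors cited above.
#   Consumer 1: u1 = x1 - (1/8)*y1**-8, endowment (2, r)
#   Consumer 2: u2 = -(1/8)*x2**-8 + y2, endowment (r, 2)
# with r = 2**(8/9) - 2**(1/9). Good y is the numeraire (py = 1) and p is the
# relative price of good x.
import numpy as np

r = 2 ** (8 / 9) - 2 ** (1 / 9)

def excess_demand_good_x(p):
    """Market excess demand for good x at relative price p."""
    # Consumer 1: the first-order conditions give y1 = p**(1/9); the budget
    # constraint p*2 + r = p*x1 + y1 then pins down x1.
    y1 = p ** (1 / 9)
    x1 = (p * 2 + r - y1) / p
    # Consumer 2 is the mirror image: x2 = (1/p)**(1/9).
    x2 = (1 / p) ** (1 / 9)
    # Total endowment of good x is 2 + r.
    return x1 + x2 - (2 + r)

prices = np.linspace(0.3, 3.0, 1001)
z = np.array([excess_demand_good_x(p) for p in prices])

# Market clearing occurs wherever excess demand changes sign: here it does so
# three times, near p = 0.5, p = 1 and p = 2, so the market excess demand
# curve cannot be globally downward-sloping.
crossings = prices[:-1][np.sign(z[:-1]) != np.sign(z[1:])]
print("Approximate market-clearing relative prices:", np.round(crossings, 2))
print("Monotonically decreasing?", bool(np.all(np.diff(z) < 0)))
```

Nothing pathological is needed to produce this result: each consumer’s income is the market value of their endowment, so a change in relative prices also redistributes income between them, and it is these income effects that break the “Law of Demand” at the level of the market.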

The only way to avoid this result was to make the absurd assumptions that changes in relative prices did not affect the distribution of income, and that changes in the distribution of income had no impact upon market demand. This was put in terms of Engel curves by Gorman [Engel curves describe how an individual’s expenditure on a given good changes with income, while relative prices are held constant. If a good is a necessity, then its Engel curve for a given individual will show this good becoming a smaller and smaller component of the individual’s total expenditure as his/her income rises. If it is a luxury, then its Engel curve will show this good becoming a larger and larger component of the individual’s total expenditure as his/her income rises], who was the first economist to discover this result:

we will show that there is just one community indifference locus through each point if, and only if, the Engel curves for different individuals at the same prices are parallel straight lines. (Gorman 1953, p. 63)
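
To see concretely what Gorman’s condition allows and rules out, here is a minimal sketch; the Cobb-Douglas preferences and income figures are assumptions chosen purely for illustration. When every consumer has the same straight-line Engel curve, a special case of Gorman’s condition, redistributing income leaves aggregate demand untouched; the moment preferences differ, the same redistribution moves it:

```python
# A minimal sketch of Gorman's aggregation condition (illustrative preferences
# and incomes assumed). Cobb-Douglas consumers spend a fixed share alpha of
# their income on good 1, so their Engel curves are straight lines through the
# origin; identical alphas make those lines parallel (Gorman's condition).
import numpy as np

P1 = 1.0  # price of good 1, held constant throughout

def demand_good1(alpha, income, p1=P1):
    """Cobb-Douglas demand for good 1: a constant expenditure share alpha."""
    return alpha * income / p1

def aggregate_demand(alphas, incomes):
    return sum(demand_good1(a, m) for a, m in zip(alphas, incomes))

equal  = np.array([50.0, 50.0])  # two consumers, equal incomes
skewed = np.array([95.0,  5.0])  # same total income, very unequal split

# Case 1: identical preferences (parallel Engel curves).
# Redistribution leaves aggregate demand unchanged: 30.0 in both cases.
print(aggregate_demand([0.3, 0.3], equal), aggregate_demand([0.3, 0.3], skewed))

# Case 2: different preferences (Engel curves with different slopes).
# The same redistribution now changes aggregate demand: 55.0 versus 32.5.
print(aggregate_demand([0.3, 0.8], equal), aggregate_demand([0.3, 0.8], skewed))
```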

Samuelson, three years later, quite correctly described this result as an “impossibility theorem”:

The common sense of this impossibility theorem is easy to grasp. Allocating the same totals differently among people must in general change the resulting equilibrium price ratio. The only exception is where tastes are identical, not only for all men, but also for all men when they are rich or poor. (Samuelson 1956, p. 5. Emphasis added)

Therefore, the “Law of Demand”, which played such an essential role in Marshall’s reasoning, did not apply to the market demand curve. This invalidated the very foundation of Neoclassical theory: starting its analysis from the subjective utility of an individual consumer and the profit-maximizing behaviour of an individual firm. Au contraire, it validated the practice of the preceding Classical school—including the much-loathed and derided Karl Marx—of treating society as consisting of social classes (capitalists, landlords, workers, bankers), each with different income sources (profits, rents, wages, interest) and differing consumption patterns (a Protestant abstemious focus on investment, profligate consumption, subsistence, and Veblenian conspicuous consumption). As Alan Kirman later put it:

If we are to progress further we may well be forced to theorise in terms of groups who have collectively coherent behaviour. Thus demand and expenditure functions if they are to be set against reality must be defined at some reasonably high level of aggregation. The idea that we should start at the level of the isolated individual is one which we may well have to abandon. There is no more misleading description in modern economics than the so-called microfoundations of macroeconomics which in fact describe the behaviour of the consumption or production sector by the behaviour of one individual or firm. (Kirman 1989, p. 138. Emphasis added)

This result—now known as the Sonnenschein-Mantel-Debreu Theorem—showed not only that macroeconomics cannot be derived from microeconomics, but that even microeconomics itself—the analysis of demand in a single market—cannot be derived from its own starting point: the theory of the consumption behaviour of an individual.

This realisation should have been a moment of revolutionary change for economics. The “marginal revolution”, which rejected the objectively-based theory of value of the Classical School and replaced it with a subjective theory of the individual, had fallen at its first hurdle: the jump from the analysis of the isolated individual to the aggregate. Though the Classical School had its own problems (Keen 1993a, 1993b), the Neoclassical School was not a viable alternative, but instead a dead end.

Needless to say, that is not how Neoclassical economists—even those who discovered this result—reacted. Instead, faced with a result that meant they had to either abandon their paradigm or make ridiculous assumptions to hang onto it, they did the latter.

Gorman’s necessary and sufficient condition noted above means that the very concept of a market system disappears: it is equivalent to assuming that there is only one consumer, and only one commodity. But if there is only one consumer and only one commodity, how can there be relative prices, let alone a market? Rather than being a model of an actual macroeconomy, this is a model of Robinson Crusoe, alone on his island, where all he can harvest and eat is coconuts.

And yet Gorman described his condition as “intuitively reasonable”, while simultaneously demonstrating that it was absurd:

The necessary and sufficient condition quoted above is intuitively reasonable. It says, in effect, that an extra unit of purchasing power should be spent in the same way no matter to whom it is given. (Gorman 1953, pp. 63-64. Emphasis added)

That is not “intuitively reasonable”: it is intuitively bonkers. Giving “an extra unit of purchasing power” to a billionaire will obviously result in totally different consumption than if it were given to a single mother.

Equally bonkers was Samuelson’s ultimate assumption that the entire economy could be treated as one big, happy family, in which income was redistributed prior to consumption, so that everyone was happy:

If within the family there can be assumed to take place an optimal reallocation of income so as to keep each member’s dollar expenditure of equal ethical worth, then there can be derived for the whole family a set of well-behaved indifference contours relating the totals of what it consumes: the family can be said to act as if it maximizes such a group preference function.

The same argument will apply to all of society if optimal reallocations of income can be assumed to keep the ethical worth of each person’s marginal dollar equal. (Samuelson 1956, p. 21. Bold emphasis added)

If? This is possibly the biggest “if” in the history of academic scholarship. Can this be assumed in the case of the United States of America—possibly the most fractious superpower in the history of human civilisation? The obvious answer is “of course not!”—and this applies to American families, let alone the entire country. But the obvious absurdity of this assumption doesn’t cause Samuelson one iota of worry or doubt. He immediately continued on with:

By means of Hicks’s composite commodity theorem and by other considerations, a rigorous proof is given that the newly defined social or community indifference contours have the regularity properties of ordinary individual preference contours (nonintersection, convexity to the origin, etc.)…

Our analysis gives a first justification to the Wald hypothesis that market totals satisfy the “weak axiom” of individual preference…

The foundation is laid for the “economics of a good society.” (Samuelson 1956, pp. 21-22. Emphasis added)

“A rigorous proof”? After possibly the most absurd assumption ever, even in a discipline renowned for absurd assumptions? It is more rigor mortis of the mind than intellectual rigour: Samuelson, having proven that his endeavour to derive market demand from the logically coherent theory of individual demand that he himself had developed (Samuelson 1938, 1948) had failed, could not bring himself to accept this, and instead made a manifestly false assumption to hide the contrary result. [This is not an unusual failing, unfortunately: Thomas Kuhn’s The Structure of Scientific Revolutions (Kuhn 1970) shows that this is the norm in science. Once they are committed to a paradigm, it is extremely rare that scientists ever change their minds when confronted with evidence that contradicts it: they instead search for ways to modify the paradigm to cope with the irreconcilable anomaly. Max Planck, the discoverer of quantum mechanics, remarked on this poignantly in his autobiography: “It is one of the most painful experiences of my entire scientific life that I have but seldom—in fact, I might say, never—succeeded in gaining universal recognition for a new result, the truth of which I could demonstrate by a conclusive, albeit only theoretical proof” (Planck 1949, p. 22).]

Bizarrely, if his assumption of “optimal reallocations of income … to keep the ethical worth of each person’s marginal dollar equal” had indeed laid the foundation “for the ‘economics of a good society'”, then Samuelson had proven that this “good society” must have a benevolent dictatorship at its helm, so that those “optimal reallocations” can occur. Capitalism, to behave as Neoclassical economists think it does, must be run by a socialist dictator.

Unfortunately, Gorman’s and Samuelson’s bonkers reactions became the norm, out of which emerged the “representative agent”.

Students are generally given a mendacious explanation of this manifestly absurd construct. In the most childish of such texts—such as Gregory Mankiw’s Principles of Microeconomics (Mankiw 2001)—students are simply told that the market demand curve is the sum of individual demand curves:

MARKET DEMAND AS THE SUM OF INDIVIDUAL DEMANDS. The market demand curve is found by adding horizontally the individual demand curves. At a price of $2, Catherine demands 4 ice-cream cones, and Nicholas demands 3 ice-cream cones. The quantity demanded in the market at this price is 7 cones. (Mankiw 2001, p. 71)

Samuelson’s own textbook—now maintained by William Nordhaus, whose work on climate change is the worst “research” I have ever encountered (Keen 2020)—delivers the most blatant misrepresentation of these results:

Market Demand: Our discussion of demand has so far referred to “the” demand curve. But whose demand is it? Mine? Yours? Everybody’s? The fundamental building block for demand is individual preferences. However, in this chapter we will always focus on the market demand, which represents the sum total of all individual demands. The market demand is what is observable in the real world.

The market demand curve is found by adding together the quantities demanded by all individuals at each price.

Does the market demand curve obey the law of downward-sloping demand? It certainly does. (Samuelson and Nordhaus 2010, p. 48. Boldface emphasis added)

Only slightly less deceptively, Hal Varian’s Intermediate Microeconomics: A Modern Approach implies that the derivation of a market demand curve from individual demand curves is possible, but the process is too difficult for middle-level undergraduates to understand:

Since each individual’s demand for each good depends on prices and his or her money income, the aggregate demand will generally depend on prices and the distribution of incomes. However, it is sometimes convenient to think of the aggregate demand as the demand of some “representative consumer” who has an income that is just the sum of all individual incomes. The conditions under which this can be done are rather restrictive, and a complete discussion of this issue is beyond the scope of this book. (Varian 2010, p. 271. Emphasis added.)

Mas-Colell’s gargantuan postgraduate text Microeconomic Theory (Mas-Colell, Whinston, and Green 1995) provides the most honest statement of the theorem. In a section accurately titled “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem”, it states that a market demand curve can have any shape at all:

Can … [an arbitrary function] … coincide with the excess demand function of an economy for every p [price]… Of course … [the arbitrary function] must be continuous, it must be homogeneous of degree zero, and it must satisfy Walras’ law. But for any [arbitrary function] satisfying these three conditions, it turns out that the answer is, again, “yes”. (Mas-Colell, Whinston et al. 1995, p. 602)
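
Stated formally (in generic notation rather than Mas-Colell’s exact symbols), the only restrictions on the candidate excess demand function $z(p)$ are continuity on the set of strictly positive price vectors, together with

$$
z(\lambda p) = z(p) \quad \text{for all } \lambda > 0 \qquad \text{(homogeneity of degree zero)}
$$

and

$$
p \cdot z(p) = 0 \qquad \text{(Walras' law)}.
$$

The theorem states that any function satisfying these conditions is the excess demand function of some exchange economy with sufficiently many consumers, provided prices are bounded away from zero. That is the precise sense in which “anything goes”.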

But Mas-Colell also uses Samuelson’s escape clause, of a “benevolent central authority” that redistributes income prior to trade, with nary a word on how absurd an assumption this is to apply to a market economy: [This is especially ridiculous coming from a school of thought in economics that champions a libertarian vision of capitalism—that it would be better off with no government intervention whatsoever.]

Let us now hypothesize that there is a process, a benevolent central authority perhaps, that, for any given prices p and aggregate wealth function w, redistributes wealth in order to maximize social welfare

If there is a normative representative consumer, the preferences of this consumer have welfare significance and the aggregate demand function can be used to make welfare judgments by means of the techniques [used for individual consumers]. In doing so however, it should never be forgotten that a given wealth distribution rule [imposed by the “benevolent central authority”] is being adhered to and that the “level of wealth” should always be understood as the “optimally distributed level of wealth”. (Mas-Colell, Whinston, and Green 1995, pp. 117-118. Emphasis added)

It is therefore little wonder that student economists, taught in this fashion ever since 1953, saw no problem with the concept of a “representative agent”, and ultimately built models of the macroeconomy that they believed were consistent with microeconomics.

Older and more realistic hands like Solow could see through this subterfuge, but they were ignored until reality, in the form of the Global Financial Crisis, exposed the inadequacy of the foundations that no amount of adding of “frictions” could overcome.

The DSGE model populates its simplified economy with exactly one single combined worker, owner, consumer, everything else who plans ahead carefully, lives forever; and … there are no conflicts of interest, no incompatible expectations, no deceptions… Under pressure from skeptics and from the need to deal with actual data, DSGE modelers have worked hard to allow for various market frictions and imperfections like rigid prices and wages, asymmetries of information, time lags and so on… But the basic story always treats the whole economy as if it were like a person trying consciously and rationally to do the best it can on behalf of the representative agent, given its circumstances. This cannot be an adequate description of a national economy, which is pretty conspicuously not pursuing a consistent goal. A thoughtful person faced with that economic policy based on that kind of idea might reasonably wonder what planet he or she is on. (Solow 2010, p. 13. Emphasis added)

Despite these existential problems, Neoclassical economists have developed an answer of sorts to Solow and Kirman, with what are now called HANKs: “Heterogeneous Agent New Keynesian” models. These consider at least two different types of agents, with potentially differing income sources, consumption, and expectations (Chari and Kehoe 2008; Kaplan, Moll, and Violante 2018; Acharya and Dogra 2020; Alves, Kaplan, Moll, and Violante 2020; Acharya, Challe, and Dogra 2023). It can at least be asserted that they are treating “demand and expenditure functions … at some reasonably high level of aggregation” (Kirman 1989, p. 138).

But there is no answer to the next issue: the incompatibility of the real-world cost structure of firms with the Neoclassical theory of profit maximization.

Acharya, Sushant, Edouard Challe, and Keshav Dogra. 2023. ‘Optimal Monetary Policy According to HANK’, The American Economic Review, 113: 1741-82.

Acharya, Sushant, and Keshav Dogra. 2020. ‘Understanding HANK: Insights from a PRANK’, Econometrica, 88: 1113-58.

Alves, Felipe, Greg Kaplan, Benjamin Moll, and Giovanni L. Violante. 2020. ‘A Further Look at the Propagation of Monetary Policy Shocks in HANK’, Journal of Money, Credit and Banking, 52: 521-59.

Chari, V. V., and Patrick J. Kehoe. 2008. ‘Response from V. V. Chari and Patrick J. Kehoe’, The Journal of Economic Perspectives, 22: 247-50.

Debreu, Gerard. 1974. ‘Excess Demand Functions’, Journal of Mathematical Economics, 1: 15-21.

Gorman, W. M. 1953. ‘Community Preference Fields’, Econometrica, 21: 63-80.

Kaplan, Greg, Benjamin Moll, and Giovanni L. Violante. 2018. ‘Monetary Policy According to HANK’, The American Economic Review, 108: 697-743.

Keen, Steve. 1993a. ‘The Misinterpretation of Marx’s Theory of Value’, Journal of the History of Economic Thought, 15: 282-300.

———. 1993b. ‘Use-Value, Exchange Value, and the Demise of Marx’s Labor Theory of Value’, Journal of the History of Economic Thought, 15: 107-21.

———. 2011. Debunking Economics: The Naked Emperor Dethroned? (Zed Books: London).

———. 2020. ‘The appallingly bad neoclassical economics of climate change’, Globalizations: 1-29.

Kirman, Alan. 1989. ‘The Intrinsic Limits of Modern Economic Theory: The Emperor Has No Clothes’, Economic Journal, 99: 126-39.

Kuhn, Thomas. 1970. The Structure of Scientific Revolutions (University of Chicago Press: Chicago).

Mankiw, N. Gregory. 2001. Principles of Microeconomics (South-Western College Publishers: Stamford).

Mantel, Rolf R. 1974. ‘On the Characterization of Aggregate Excess Demand’, Journal of Economic Theory, 7: 348-53.

———. 1976. ‘Homothetic Preferences and Community Excess Demand Functions’, Journal of Economic Theory, 12: 197-201.

Marshall, Alfred. 1890 [1920]. Principles of Economics (Library of Economics and Liberty).

Mas-Colell, A., M. D. Whinston, J. R. Green, and M. El-Hodiri. 1996. “Microeconomic Theory.” In, 108-13. Wien: Springer-Verlag.

Mas-Colell, Andreu, Michael Dennis Whinston, and Jerry R. Green. 1995. Microeconomic Theory (Oxford University Press: New York).

Planck, Max. 1949. Scientific Autobiography and Other Papers (Philosophical Library; Williams & Norgate: London).

Samuelson, P. A. 1938. ‘A Note on the Pure Theory of Consumer’s Behaviour’, Economica, 5: 61-71.

———. 1948. ‘Consumption theory in terms of revealed preference’, Economica, 15: 243-53.

Samuelson, Paul A. 1956. ‘Social Indifference Curves’, The Quarterly Journal of Economics, 70: 1-22.

Samuelson, Paul A., and William D. Nordhaus. 2010. Economics (McGraw-Hill: New York).

Shafer, Wayne, and Hugo Sonnenschein. 1982. ‘Chapter 14 Market demand and excess demand functions’, Handbook of Mathematical Economics, 2: 671-93.

Solow, R. M. 2010. “Building a Science of Economics for the Real World.” In House Committee on Science and Technology Subcommittee on Investigations and Oversight. Washington.

Solow, Robert M. 2003. “Dumb and Dumber in Macroeconomics.” In Festschrift for Joe Stiglitz. Columbia University.

Sonnenschein, Hugo. 1972. ‘Market Excess Demand Functions’, Econometrica, 40: 549-63.

———. 1973a. ‘Do Walras’ Identity and Continuity Characterize the Class of Community Excess Demand Functions?’, Journal of Economic Theory, 6: 345-54.

———. 1973b. ‘The Utility Hypothesis and Market Demand Theory’, Western Economic Journal, 11: 404-10.

Varian, Hal R. 2010. Intermediate Microeconomics: A Modern Approach (W. W. Norton & Company: New York).

The Impossibility of Microfoundations for Macroeconomics


One thing which never ceases to bemuse me is the intellectual insularity of mainstream economics.

Every intellectual specialization is, by necessity, insular. Specialization necessarily requires that, to have expert knowledge in one field—say, physics—you must focus on that field to the exclusion of others—for example, chemistry. Given the extent of human knowledge today, this goes far further than it did in the 19th century: the days of the true polymath are well and truly over. There are now specializations within each field, so that a physicist specializing in statistical mechanics will know relatively little about condensed matter physics, for example, and so on in other fields.

But the insularity of mainstream economics goes far beyond this necessary minimum. Though there are a few convenient exceptions who are trotted out to counter generalizations like the one I am making here, in general Neoclassicals are blithely unaware of how their own school of thought developed, of empirical and theoretical results that contradict core tenets of their own beliefs, of the competing schools of thought within economics, of the development of economics itself over time, and, crucially, of intellectual developments that have extended human knowledge in fundamental ways that affect all fields of knowledge—including economics. Foremost here is their ignorance of complexity analysis, which has transformed many fields of study since its re-discovery in meteorology by Edward Lorenz in 1963 (Lorenz 1963).

  1. Complexity

Complexity is often defined by what it is not, so I will attempt a positive definition—which even so, still contains a negative:

A complex system is an often very simple dynamic system which, under certain very common conditions—nonlinear interactions between some of its three or more variables—generates extremely complicated far-from-equilibrium behaviour, which cannot be understood by reducing the system to its component parts (i.e., by reductionism).

Taking each element of this definition in turn:

  • A complex system is not necessarily complicated: Lorenz’s model, which I will discuss shortly, has just three equations and three parameters. Compare that to Ireland’s model, with 10 equations and 14 parameters. Lorenz’s model generates complex dynamics, while Ireland’s far more complicated model does not;
  • The conditions that generate them are very common because in essence, everything is nonlinear. Even a straight line is nonlinear because, unless it starts on one side of the Universe and ends on the other, the very fact that it stops is a nonlinearity. More to the point, an economic system has numerous instances where one variable is multiplied by another—for example, the wage bill is equal to the wage rate (a variable) multiplied by the number of employees (another variable);
  • Nonlinearity is essential because effects like multiplying one variable by another amplify disturbances. In linear models, as Blanchard himself pointed out, all effects are additive: a shock twice as big causes twice as big a disturbance, and nothing amplifies anything else;
  • Three or more variables are needed because, in the type of mathematics which is used to describe complex systems, the dimensionality of the model is equal to the number of variables, and the path your system draws cannot intersect itself. A one-variable system maps to a line, and along a line you can go left, right, or towards the middle without generating the same number twice, but that’s it. A two-variable system maps to a rectangle, and you can either spiral in towards its centre, or spiral in (or out) towards a fixed orbit, but that’s it. With a three-variable system, the dynamics maps to a box, and in a box you can weave incredibly complicated patterns without ever intersecting your path;
  • Far-from-equilibrium behaviour occurs because in such a system, given realistic parameters and initial conditions, one or more of the system’s equilibria can be what are called “strange attractors”: they attract the system from a distance, only to repel it when it gets near the equilibrium. Lorenz’s model has two “strange attractors”, while the economic model that I explain in Chapter 7—and which I first built in 1992—has one (Keen 1995); and
  • Reductionism can’t be used to understand such a model because, as soon as you reduce the system to one (or even two) of its components, the third dimension, which is essential for its complex dynamics, is eliminated.

The great French mathematician Henri Poincaré discovered the first complex system in 1889, in his celebrated, prize-winning work on the “three-body problem”, but this was long before humanity developed the capacity to visualise such systems using computers. Though he became rightly famous for this work, his discovery of complexity languished until the phenomenon was rediscovered by the mathematical meteorologist Edward Lorenz in 1963 (Lorenz 1963).

Lorenz was dissatisfied with the weather modelling practices of the late 1950s, which boiled down to both pattern matching (looking for a set of weather events in the past that resembled the pattern of the last few days, and predicting that tomorrow’s weather would be the same as the next day in the historical record) and linear modelling—the practice which, as Blanchard explained in “Where Danger Lurks” (Blanchard 2014), still dominates mainstream macroeconomics today.

Lorenz decided to construct an extremely simplified model of fluid dynamics which preserved the essential nonlinearity of the weather, by reducing the extremely complicated, high-dimensional Navier-Stokes partial-differential equations to just three very simple ordinary differential equations. What he saw ultimately transformed not only meteorology, but almost every field of science. But it has had virtually no impact on economics.

A picture, as they say, is worth 1000 words, so Figure 4 shows a picture of Lorenz’s model (Lorenz 1963)—rendered in the Open-Source system dynamics program Minsky, which I have developed to enable complex systems modelling in economics.

Figure 4: Lorenz’s “strange attractor” model of turbulent flow

The system has just three variables (x, y and z) and three parameters (a, b, and c), and just two nonlinear interactions: in the equation for y, x is multiplied by minus z, and in the equation for z, x is multiplied by y:

$$
\begin{aligned}
\frac{dx}{dt} &= a\,(y - x) \\
\frac{dy}{dt} &= x\,(b - z) - y \\
\frac{dz}{dt} &= x\,y - c\,z
\end{aligned}
$$

And yet, despite this simplicity, the pattern generated by the system is incredibly complicated—and indeed, beautiful. It was called “The Butterfly Effect” for more reasons than one.

It could also have been called “The Mask of Zorro”, given the existence of two “eyes” in the phase plots for x against y, y against z, and z against x. Tellingly for equilibrium-obsessed economists, these eyes are in fact two of the three equilibria of the system. The third is where x = y = z = 0, which was the initial condition of the simulation shown in Figure 4. I then nudged the x-value 0.001 away from its equilibrium at the ten-second mark. After this disturbance, the system was propelled away from this unstable equilibrium towards the other two—the “eyes” in the three phase plots. These equilibria are “strange attractors”, which means that they describe regions that the system will never reach—even though they are also equilibria of the system.
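
For the record, and assuming the standard parameterisation written out above, the three equilibria can be found by setting all three rates of change to zero: $\dot{x}=0$ forces $y=x$, $\dot{z}=0$ then forces $z=x^2/c$, and $\dot{y}=0$ forces either $x=0$ or $z=b-1$. The equilibria are therefore

$$
(x,y,z) = (0,0,0) \quad \text{and} \quad \left(\pm\sqrt{c\,(b-1)},\ \pm\sqrt{c\,(b-1)},\ b-1\right).
$$

With Lorenz’s classic parameter values of $a=10$, $b=28$ and $c=8/3$, which I will also assume in the simulation sketch below, the two non-zero equilibria (the “eyes”) sit at roughly $(\pm 8.49, \pm 8.49, 27)$.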

This simulation shows that all three equilibria of Lorenz’s system are unstable: if the system starts at an equilibrium, it will remain there, but if it starts anywhere else, or is disturbed from the equilibrium even by an infinitesimal distance, it will be propelled away from it, and forever display far-from-equilibrium dynamics.
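
The same experiment takes only a few lines in any numerical language. Here is a minimal sketch in Python (a stand-in for the Minsky simulation shown in Figure 4, using Lorenz’s classic parameter values, which are assumed here since the text above does not fix them):

```python
# A minimal sketch of the experiment described above: start the Lorenz system
# exactly at its equilibrium at the origin, nudge x by 0.001 at t = 10, and
# watch it being repelled into far-from-equilibrium motion that never settles.
# The parameter values are Lorenz's classic ones (assumed here).
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, state):
    x, y, z = state
    return [a * (y - x), x * (b - z) - y, x * y - c * z]

# Phase 1: sit at the equilibrium (0, 0, 0) for ten seconds. Nothing happens.
phase1 = solve_ivp(lorenz, (0.0, 10.0), [0.0, 0.0, 0.0], rtol=1e-9, atol=1e-12)

# Phase 2: nudge x by 0.001 and keep integrating. The trajectory is repelled
# from the origin and wanders between the two non-zero equilibria (the
# "eyes"), without ever settling on either of them.
nudged = phase1.y[:, -1] + np.array([0.001, 0.0, 0.0])
phase2 = solve_ivp(lorenz, (10.0, 60.0), nudged,
                   t_eval=np.linspace(10.0, 60.0, 5000), rtol=1e-9, atol=1e-12)

print("State at t = 10, before the nudge:", phase1.y[:, -1])
print("Largest |x| after the nudge:", np.abs(phase2.y[0]).max())
print("x stays bounded (no breakdown):", bool(np.abs(phase2.y[0]).max() < 50))
```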

And yet it does not “break down”: the simulation returns realistic values that stay within the bounds of the system. This contrasts strongly with the presumption once expressed by Hicks in relation to Harrod’s “knife-edge” model of economic instability (Harrod 1939, 1948), that models must assume stable equilibria, because a model with an unstable equilibrium “does not fluctuate; it just breaks down”:

Mr. Harrod … welcomes the instability of his system, because he believes it to be an explanation of the tendency to fluctuation which exists in the real world… But mathematical instability does not in itself elucidate fluctuation. A mathematically unstable system does not fluctuate; it just breaks down. (Hicks 1949, p. 108)

This still prevalent belief amongst economists is only true of linear systems—and even then, not all of them (Keen 2020). But it is categorically false about the behaviour of nonlinear systems.

Finally, not only is the pattern beautiful, it is also aperiodic: no one cycle is like any other. Before Lorenz’s work, scientists thought that aperiodic cycles would require exogenous shocks; after Lorenz’s work, only economists continue to assume that random shocks to a system are needed to cause aperiodicity.

It is to the great credit of meteorology that, very rapidly, Lorenz’s demonstration of the necessity of nonlinear, far-from-equilibrium modelling was accepted by meteorologists. There is much more to weather modelling than just Lorenz’s complex systems foundation, but his work contributed fundamentally to the dramatic increase in the accuracy of weather forecasts over the last half century—even in the face of global warming that is disturbing the underlying climate which determines the weather.

Likewise, Lorenz’s discovery was considered and applied by all manner of sciences, leading to complexity analysis operating as an important adjunct to the reductionist approach that remains the bedrock of scientific analysis. In 1999, the journal Science recognised this with a special issue devoted to complexity in numerous fields: Physics (Goldenfeld and Kadanoff 1999), Chemistry (Whitesides and Ismagilov 1999), Biology (Parrish and Edelstein-Keshet 1999; Pennisi 1999; Weng, Bhalla, and Iyengar 1999), Evolution (Service 1999), Geography (Werner 1999)—and even Economics (Arthur 1999).

Today, complex systems are an uncontroversial aspect of every science—but not of economics, because the dominant methods in economics are antithetical to the foundations of complexity. These methods include linearity, as Blanchard acknowledged, but more crucially, they involve a perverted form of reductionism that Physics Nobel Laureate Philip Anderson christened “Constructionism”.

  2. The impossibility of constructionism with complex systems

Anderson’s “More is Different” (Anderson 1972) attacked the idea that, in line with Ernest Rutherford’s quip that “all science is either physics or stamp collecting”, higher-level sciences—like chemistry, biology and even psychology—can and should be reduced to applied physics. Speaking as someone who had made fundamental contributions to particle physics, Anderson asserted that, though “The reductionist hypothesis … among the great majority of active scientists … is accepted without question”, this did not mean that higher-level sciences like chemistry could be generated from what we know about physics. “The main fallacy in this kind of thinking”, he declared:

is that the reductionist hypothesis does not by any means imply a “constructionist” one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. (Anderson 1972, p. 393. Emphasis added)

The phenomena that made this approach untenable were “the twin difficulties of scale and complexity”, since:

The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other…

At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry. (Anderson 1972, p. 393)

And nor is macroeconomics applied microeconomics—but mainstream economists, because of their extreme insularity, haven’t gotten Anderson’s memo. They continue to attempt to do the impossible: to construct the higher-level analysis of macroeconomics via a direct application of the lower-level analysis of microeconomics. That is only possible if all relationships in macroeconomics are linear, as Blanchard described them in “Where Danger Lurks” (Blanchard 2014)—yet the lesson of the Global Financial Crisis was that they obviously are not. In a subsequent chapter, I will show that there are fundamental nonlinearities in macroeconomics that should be embraced, rather than ignored.

Fittingly, Anderson concluded with two anecdotes from economics:

In closing, I offer two examples from economics of what I hope to have said. Marx said that quantitative differences become qualitative ones, but a dialogue in Paris in the 1920’s sums it up even more clearly:

FITZGERALD: The rich are different from us.

HEMINGWAY: Yes, they have more money. (Anderson 1972, p. 396)

Marx, and money, are two other things that Neoclassical economists ignore. But even more critically, they ignore the logical and empirical fallacies that beset microeconomics. Even if it were possible to derive macroeconomics from microeconomics, Neoclassical microeconomics is not the foundation one should use, because it is manifestly wrong about both consumption and production. Some Neoclassicals are aware of the logical problems with their model of consumption—though their reactions to it are bizarre. But none of them are aware of the empirical fallacies in their model of production.

References

 

Anderson, P. W. 1972. ‘More Is Different: Broken symmetry and the nature of the hierarchical structure of science’, Science, 177: 393-96.

Arthur, W. Brian. 1999. ‘Complexity and the Economy’, Science, 284: 107-09.

Barro, Robert J. 1989. ‘The Ricardian Approach to Budget Deficits’, The Journal of Economic Perspectives, 3: 37-54.

Blanchard, Olivier. 2014. ‘Where Danger Lurks’, Finance & Development, 51.

Goldenfeld, Nigel, and Leo P. Kadanoff. 1999. ‘Simple Lessons from Complexity’, Science, 284: 87-89.

Harrod, R. F. 1939. ‘An Essay in Dynamic Theory’, The Economic Journal, 49: 14-33.

———. 1948. Towards a Dynamic Economics (Macmillan: London).

Hicks, J. R. 1949. ‘Mr. Harrod’s Dynamic Theory’, Economica, 16: 106-21.

Keen, Steve. 1995. ‘Finance and Economic Breakdown: Modeling Minsky’s “Financial Instability Hypothesis”’, Journal of Post Keynesian Economics, 17: 607-35.

———. 2020. ‘Burying Samuelson’s Multiplier-Accelerator and resurrecting Goodwin’s Growth Cycle in Minsky.’ in Robert Y. Cavana, Brian C. Dangerfield, Oleg V. Pavlov, Michael J. Radzicki and I. David Wheat (eds.), Feedback Economics : Applications of System Dynamics to Issues in Economics (Springer: New York).

Li, Tien-Yien, and James A. Yorke. 1975. ‘Period Three Implies Chaos’, The American Mathematical Monthly, 82: 985-92.

Lorenz, Edward N. 1963. ‘Deterministic Nonperiodic Flow’, Journal of the Atmospheric Sciences, 20: 130-41.

Mendelsohn, Robert, William D. Nordhaus, and Daigee Shaw. 1994. ‘The Impact of Global Warming on Agriculture: A Ricardian Analysis’, The American Economic Review, 84: 753-71.

Parrish, Julia K., and Leah Edelstein-Keshet. 1999. ‘Complexity, Pattern, and Evolutionary Trade-Offs in Animal Aggregation’, Science, 284: 99-101.

Pennisi, Elizabeth. 1999. ‘Unraveling Bacteria’s Dependable Homing System’, Science, 284: 82-82.

Ricardo, David, and Piero Sraffa. 1952. The works and correspondence of David Ricardo / edited by Piero Sraffa, with the collaboration of M.H. Dobb Vol.6, Letters, 1810-1815 (Cambridge University Press for the Royal Economic Society: Cambridge).

Service, Robert F. 1999. ‘Exploring the Systems of Life’, Science, 284: 80-83.

Weng, Gezhi, Upinder S. Bhalla, and Ravi Iyengar. 1999. ‘Complexity in Biological Signaling Systems’, Science, 284: 92-96.

Werner, B. T. 1999. ‘Complexity in Natural Landform Patterns’, Science, 284: 102-04.

Whitesides, George M., and Rustem F. Ismagilov. 1999. ‘Complexity in Chemistry’, Science, 284: 89-92.

 


Soul-searching by a soulless discipline

The dominance of micro-founded macroeconomic models—models derived directly from the microeconomic concepts of utility-maximizing individuals and profit-maximizing firms, and based on the Ramsey Neoclassical growth model (Ramsey 1928)—did not go unchallenged prior to the Global Financial Crisis. But the critics were treated in the time-honoured Neoclassical way, of being both ignored and disparaged—if they were, like me, not Neoclassicals themselves—or politely listened to but still effectively ignored, if they were.

Pre-eminent amongst the tolerated critics was Robert Solow, a recipient of the “Nobel” Prize in Economics in 1987 for his work on a Neoclassical theory of economic growth (Solow 1956). In a series of papers (Solow 1994, 2001, 2003, 2006, 2007, 2008), Solow railed against the very idea of building macroeconomic analysis on the foundation of Ramsey’s growth model.

At a Festschrift for another economics “Nobel” recipient, Joseph Stiglitz, Solow delivered a dismissive judgment on micro-founded macroeconomics in a paper provocatively entitled “Dumb and Dumber in Macroeconomics”. Solow began with the question of “So how did macroeconomics arrive at its current state? The answer might provide a lead as to where it ought to go”. He continued:

The original impulse to look for better or more explicit micro foundations was probably reasonable… What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages.

How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up? (Solow 2003. Emphasis added)

He also disparaged the assumption of equilibrium through time—which is imposed on a model that in fact has an unstable equilibrium—stating that “This choice between equilibrium and disequilibrium thinking may be a false choice”. He continued with the colourful metaphor that:

If I drop a ripe watermelon from this 15th-floor window, I suppose the whole process from t0 to the mess on the sidewalk could be described as some sort of dynamic equilibrium. But that may not be the most fruitful—sorry—way to describe the falling-watermelon phenomenon. (Solow 2003)

When the crisis hit, Solow was one of several economists invited by the US Congress’s House Committee on Science and Technology Subcommittee on Investigations and Oversight to explain what went wrong, in a hearing entitled “Building a Science of Economics for the Real World”. His testimony, as colourful as ever, highlighted a key problem for economics: that people schooled in this tradition had largely lost the capacity for critical thought about it:

every proposition must pass the smell test: does this really make sense? I do not think that the currently popular DSGE models pass the smell test. They take it for granted that the whole economy can be thought about as if it were a single, consistent person or dynasty carrying out a rationally designed, long-term plan, occasionally disturbed by unexpected shocks, but adapting to them in a rational, consistent way. I do not think that this picture passes the smell test… The advocates no doubt believe what they say, but they seem to have stopped sniffing or to have lost their sense of smell altogether. (Solow 2010. Emphasis added)

Solow’s quip that the advocates of modern Neoclassical macroeconomic modelling had “lost their sense of smell altogether” neatly characterized the debate that ensued amongst these economists in the aftermath of the Global Financial Crisis. They could not deny that the crisis had happened, but likewise they could not contemplate that their models—which had not only not seen it coming, but had predicted a bountiful economic harvest when a famine ensued—could possibly be wrong. Their dialogue resembled men—and they are almost exclusively men—without a sense of smell, trying to distinguish the aroma of a rose garden from the stink of a sewer.

My favorite “representative agent” in this journey of non-discovery is Olivier Blanchard. Blanchard was the “Class of 1941” Professor of Economics at MIT from 1994 till 2010, Chair of Department from 1998 till 2003, Chief Economist of the IMF from September 2008 till 2015, Robert M. Solow Professor of Economics at MIT from 2010 till 2014 (which is somewhat ironic, given how different his opinion of DSGE models is from Solow’s), and the President of the American Economic Association in 2018. The only major mainstream economic guernsey he lacks is a “Nobel” Prize.

He began his journey in blissful ignorance of the economic crisis unfolding around him. In August 2008, Blanchard self-published an NBER working paper with the title “The State of Macro”, in which he declared that “The state of macro is good”. Starting with a portrayal of the initial conflicts between “New Classicals” and “New Keynesians”, he opined that:

there has been enormous progress and substantial convergence. For a while—too long a while—the field looked like a battlefield. Researchers split in different directions, mostly ignoring each other, or else engaging in bitter fights and controversies. Over time however, largely because facts have a way of not going away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism, herding, and fashion. But none of this is deadly. The state of macro is good. (Blanchard 2008, p. 2)

To call this blind ignorance is to insult the unsighted. The crisis is regarded as having started on August 9th, 2007—precisely a year before he uploaded this paper—when BNP Paribas Investment Partners shut down redemptions from three of its investment funds that were based on the US housing market. Figure 1 also shows that the rate of economic growth peaked in 2006 Q4 (at 4.7% in the Rest of the World and 3.7% in the USA). By the time of the BNP Paribas announcement, growth in the USA had faltered to 2.3%; in the subsequent quarter (2007 Q4) it was 0.2%. By the third quarter of 2008—which includes August, when Blanchard released his paper—it was minus 2%.

Perhaps in atonement for this monumentally badly-timed and false homage to mainstream economics, Blanchard subsequently published a string of papers that tried to assess why the state of macro was, in fact, extremely bad, and to propose what might be done to fix it (Blanchard 2014, 2016a, 2016b, 2018).

His first sortie, published in the IMF’s semi-populist journal Finance and Development, had the somewhat cartoonish title “Where Danger Lurks” (Blanchard 2014), and it was accompanied by a cartoon demon, as shown in Figure 3. Nonetheless, this paper contained the most perceptive observations on the failure of macroeconomic theory that Blanchard managed to make. He focused on the assumption that economic fluctuations were linear—”so that small shocks had small effects and a shock twice as big as another had twice the effect”:

Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment…

The techniques we use affect our thinking in deep and not always conscious ways… These techniques however made sense only under a vision in which economic fluctuations were regular enough so that, by looking at the past, people and firms … could understand their nature and form expectations of the future, and simple enough so that small shocks had small effects and a shock twice as big as another had twice the effect on economic activity…

We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time… Whatever caused the Great Moderation, for a quarter century the benign, linear view of fluctuations looked fine… That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again… (Blanchard 2014, p. 28)

 

Figure 3: Blanchard’s first, and deepest, consideration of why macroeconomic theory failed

Apart from these valid insights, the paper was more notable for its illustrations than for any intellectual revolution in its content. Blanchard’s main policy advice was that we should “Stay away from dark corners” (Blanchard 2014, p. 31), but he gave no means by which “dark corners” could be identified. He did call for research to “let a hundred flowers bloom”:

Now that we are more aware of nonlinearities and the dangers they pose, we should explore them further theoretically and empirically—and in all sorts of models. (Blanchard 2014, p. 31)

He also made the bizarre argument that if—somehow, and without any guidance from economic theory—policymakers could “maintain a healthy distance from dark corners”, then it would be OK for economic theory to march on unaltered:

But this answer skirts a harder question: How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models that we use, for example, at the IMF to think about alternative scenarios and to quantify the effects of policy decisions? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?

Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate…Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage (Blanchard 2014, p. 31. Emphasis added)

How on Earth could policymakers “maintain a healthy distance from dark corners” if they had no theoretical guidance as to where they were? And if they could work it out for themselves by empirical observation, then what need was there for economists in the first place?

The real dark corner from which Blanchard was retreating was the prospect that the Neoclassical paradigm was in fact fundamentally wrong about the nature of the macroeconomy.

His next paper began with sound criticisms of DSGE models for being “based on unappealing assumptions. Not just simplifying assumptions, as any model must, but assumptions profoundly at odds with what we know about consumers and firms” (Blanchard 2016a, p. 1). But by the end, he could see no alternative to the core of DSGE modelling, of deriving macroeconomics from microeconomic foundations:

The pursuit of a widely accepted analytical macroeconomic core, in which to locate discussions and extensions, may be a pipe dream, but it is a dream surely worth pursuing. If so… Starting from explicit microfoundations is clearly essential; where else to start from? Ad hoc equations will not do for that purpose. Thinking in terms of a set of distortions to a competitive economy implies a long slog from the competitive model to a reasonably plausible description of the economy. But, again, it is hard to see where else to start from. (Blanchard 2016a, p. 3. Emphasis added)

Blanchard’s final word on the need to reform economic theory was written after interactions with a number of economists, including me:

A number of economists joined the debate about the pros and cons of dynamic DSGEs, partly in response to my blog post. Among them were Narayana Kocherlakota (2016), Simon Wren-Lewis (2016), Paul Romer (2016), Steve Keen (2016), Anton Korinek (2015), Paul Krugman (2016), Noah Smith (2016), Roger Farmer (2014), and Brad Delong (2016)…

In a sign of how incapable mainstream economists are of comprehending fundamental challenges to their methodology, he followed up this acknowledgment with this putative summary of agreed positions:

I believe that there is wide agreement on the following three propositions; let us not discuss them further, and move on.

i) Macroeconomics is about general equilibrium… (Blanchard 2018, p. 49. Emphasis added)

I was literally gobsmacked by this alleged point of agreement, and said so at the time, but to no avail. Far from agreeing that “Macroeconomics is about general equilibrium”, in the post of mine that Blanchard cited, I had argued that nonlinear, far-from-equilibrium dynamics had to be the basis of macroeconomic modelling:

Imposing linearity on a nonlinear system is a valid procedure if, and only if, the equilibrium around which the model is linearized is stable… The mathematically more valid approach is to accept that, if your model’s equilibria are unstable, then your model will display far-from-equilibrium dynamics, rather than oscillating about and converging on an equilibrium. This requires you to understand and apply techniques from complex systems analysis, which is much more sophisticated than the mathematics Neoclassical modelers use.
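To see what that means in practice, here is a minimal sketch, deliberately using the textbook van der Pol oscillator rather than any economic model, of why linearizing around an unstable equilibrium misleads: the linearized system simply explodes, while the true nonlinear system settles onto a bounded, far-from-equilibrium cycle.

```python
# Illustration only: the van der Pol oscillator, not an economic model.
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0  # strength of the nonlinearity

def nonlinear(t, state):
    # van der Pol oscillator: x'' - MU*(1 - x^2)*x' + x = 0
    x, v = state
    return [v, MU * (1 - x**2) * v - x]

def linearized(t, state):
    # The same system linearized at its equilibrium (0, 0);
    # that equilibrium is unstable (eigenvalues have positive real part)
    x, v = state
    return [v, MU * v - x]

start = [0.01, 0.0]              # a small perturbation away from equilibrium
t_eval = np.linspace(0, 40, 4000)

full = solve_ivp(nonlinear, (0, 40), start, t_eval=t_eval)
lin = solve_ivp(linearized, (0, 40), start, t_eval=t_eval)

print(f"Nonlinear model:  max |x| = {np.max(np.abs(full.y[0])):.1f}  (bounded limit cycle)")
print(f"Linearized model: max |x| = {np.max(np.abs(lin.y[0])):.2e} (diverges without limit)")
```

The linearization is only a faithful guide when the equilibrium it is built around is stable, which is precisely the point of the passage above.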

Just as Blanchard ultimately meandered back to DSGE modelling, so did Neoclassical economics: fifteen years after the crisis, DSGE models remain the dominant methodology in macroeconomic modelling. It is as if the crisis itself never occurred. All that has happened is that some modellers have calibrated their models to ex-post fit the crisis, as if that is a sufficient response.

This process began very soon after the crisis, with Peter Ireland’s paper “A New Keynesian Perspective on the Great Recession” (Ireland 2011). Though he began by admitting that “the Great Recession’s extreme severity makes it tempting to argue that new theories are required to fully explain it” (Ireland 2011, p. 31), he quickly disparaged what I will shortly show is in fact the correct approach—”Attempts to explain movements in one set of endogenous variables, like GDP and employment, by direct appeal to movements in another, like asset market valuations or interest rates, sometimes make for decent journalism but rarely produce satisfactory economic insights” (Ireland 2011, p. 32)—and moved back to the bread and butter of DSGE modelling: explaining all macroeconomic phenomena as being due to “exogenous shocks” disturbing a fundamentally stable economic system.

His conclusion, after developing and numerically solving a “small-scale model” (Ireland 2011, p. 52)—which had 10 equations and 14 exogenous parameters, and was subjected to four types of exogenous shocks, to consumer preferences, production costs, technology and monetary policy—was that the difference between the worst economic crisis since the Great Depression and the two relatively mild recessions that preceded it came down to the shocks that caused the “Great Recession” lasting longer and growing bigger over time:

the Great Recession began in late 2007 and early 2008 with a series of adverse preference and technology shocks in roughly the same mix and of roughly the same magnitude as those that hit the United States at the onset of the previous two recessions…

The string of adverse preference and technology shocks continued, however, throughout 2008 and into 2009. Moreover, these shocks grew larger in magnitude, adding substantially not just to the length but also to the severity of the great recession. (Ireland 2011, p. 48)

Ireland concluded that “All of these results indicate that the basic New Keynesian model continues to serve as a reliable guide for business cycle analysis and monetary policy evaluation” (Ireland 2011, p. 52).

A more sensible conclusion is that which Enrico Fermi gave to Freeman Dyson when the latter proudly showed the former his numerical solution to an experimental result of Fermi’s:

“There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.” (Dyson 2004)

When Dyson protested, Fermi asked “How many arbitrary parameters did you use for your calculations?”:

I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. (Dyson 2004)

With the 14 arbitrary parameters Ireland used, von Neumann could doubtless make his elephant fly while copulating. Though economics is not applied physics, we should heed Fermi’s advice: we need either “a clear physical picture of the process that you are calculating” or “a precise and self-consistent mathematical formalism.” Both can be constructed once we embrace the inherent complexity of the economic system, and abandon the Neoclassical fetishes of microfoundations, linearity, and equilibrium.
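Von Neumann’s quip is easy to demonstrate. The toy sketch below (pure noise standing in for data; it has nothing to do with Ireland’s actual model) gives a curve fourteen free parameters, one per observation, so it “fits” perfectly while predicting nothing.

```python
# Toy illustration of von Neumann's point: enough parameters will fit anything.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
quarters = np.arange(14.0)
noise = rng.normal(0.0, 2.0, size=14)   # pure noise: there is nothing to explain

# A degree-13 polynomial has 14 free parameters -- one per observation
fit = Polynomial.fit(quarters, noise, deg=13)

in_sample_error = np.max(np.abs(fit(quarters) - noise))
print(f"Largest in-sample error: {in_sample_error:.2e}")   # essentially zero
print(f"'Prediction' for quarter 15: {fit(14.0):.1f}")     # meaningless
```

A perfect in-sample fit with one parameter per data point is evidence of flexibility, not of explanation.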

Rebuilding Economics from the Top Down—a work in progress

I have just commenced a six-month research project at the Budapest Centre for Long-Term Sustainability (https://bc4ls.com/), and one of my allotted tasks is to write a 30,000 word book. With apologies to my good friend Blair Fix, my working title echoes that of his blog (https://economicsfromthetopdown.com/): “Rebuilding Economics from the Top Down”.

I start with the hubris of mainstream economics prior to the Global Financial Crisis. The next instalment will discuss their failed attempt at soul-searching after the crisis, which has resulted in models that failed to anticipate the Global Financial Crisis still dominating the profession today.

I will post draft chapters as I complete them on my Patreon (https://www.patreon.com/ProfSteveKeen) and Substack (https://profstevekeen.substack.com/) pages. Please consider signing up to one or the other, so that I can continue to make my research freely available. My intention is to post at least one chapter each week for the next three months.

The Magnificent Failure of Mainstream Economics

For the last fifty years, the development of economics has been driven by the desire to derive macroeconomic analysis directly from microeconomic theory. This research program was a theoretical success and a practical failure.

Modern macroeconomic modelling, initially in the form of Real Business Cycle (RBC) and later Dynamic Stochastic General Equilibrium (DSGE) models, is firmly based on the utility and profit maximizing principles of microeconomics, and in particular, the growth model developed by Frank Ramsey in 1928 (Ramsey 1928). The success of this process of derivation, combined with the economic conditions of the last decade of the 20th century and the first half-decade of the 21st, led mainstream economists to believe that they had found the economic “Holy Grail”: a well-grounded theory of economics which also enabled economists to manage the economy successfully.

Robert Lucas, one of the key architects of the “microfoundations revolution”, gave a triumphalist perspective on the state of economics in his Presidential Address to the American Economic Association in January 2003:

Macroeconomics was born as a distinct field in the 1940’s, as a part of the intellectual response to the Great Depression. The term then referred to the body of knowledge and expertise that we hoped would prevent the recurrence of that economic disaster. My thesis in this lecture is that macroeconomics in this original sense has succeeded: Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades… Taking U.S. performance over the past 50 years as a benchmark, the potential for welfare gains from better long-run, supply-side policies exceeds by far the potential from further improvements in short-run demand management. (Lucas 2003, p. 1)

A similar triumphalism pervaded policy circles. Ben Bernanke, speaking just 16 months before he became Chairman of the Federal Reserve, declared in October 2004 that:

the low-inflation era of the past two decades has seen not only significant improvements in economic growth and productivity but also a marked reduction in economic volatility, both in the United States and abroad, a phenomenon that has been dubbed “the Great Moderation.” Recessions have become less frequent and milder, and quarter-to-quarter volatility in output and employment has declined significantly as well. The sources of the Great Moderation remain somewhat controversial, but as I have argued elsewhere, there is evidence for the view that improved control of inflation has contributed in important measure to this welcome change in the economy. (Bernanke 2004)

Likewise, economic modellers were confident that their mathematical models of the economy could accurately predict its future behaviour. Though, as I note below, there were some elements of conflict, criticism and scepticism within mainstream academic economics, model builders, who were the dominant faction, were boastfully loud and proud. In June 2007, the authors of the most celebrated DSGE model proclaimed its capacity to out-perform mere econometric techniques, and to forecast the economy’s future path:

Using a Bayesian likelihood approach, we estimate a dynamic stochastic general equilibrium model for the US economy using seven macroeconomic time series. The model incorporates many types of real and nominal frictions and seven types of structural shocks. We show that this model is able to compete with Bayesian Vector Autoregression models in out-of-sample prediction. We investigate the relative empirical importance of the various frictions. Finally, using the estimated model, we address a number of key issues in business cycle analysis: What are the sources of business cycle fluctuations? Can the model explain the cross correlation between output and inflation? What are the effects of productivity on hours worked? What are the sources of the “Great Moderation”? (Smets and Wouters 2007, p. 586).

This confidence rubbed off on international economic bodies, with the OECD entitling the editorial to its June 2007 Economic Outlook report “Achieving further rebalancing”, and declaring that:

the current economic situation is in many ways better than what we have experienced in years. Against that background, we have stuck to the rebalancing scenario. Our central forecast remains indeed quite benign: a soft landing in the United States, a strong and sustained recovery in Europe, a solid trajectory in Japan and buoyant activity in China and India. In line with recent trends, sustained growth in OECD economies would be underpinned by strong job creation and falling unemployment. (Cotis 2007, p. 7)

“Pride”, as the proverb goes, “goeth before destruction, and an haughty spirit before a fall”. Just 2 months after the OECD foresaw “strong job creation and falling unemployment”, and Smets and Wouters crowed about the out-of-sample predictive powers of their DSGE model, less than three years after Bernanke heralded “this welcome change in the economy”, and less than five years after Lucas proclaimed that the “problem of depression prevention has been solved”, the global economy collapsed into the greatest economic crisis since the Great Depression.

As the top plot in Figure 1 shows, the rate of economic growth crashed from 4.5% per year to minus 3%, the opposite of the “buoyant activity in China and … sustained growth in OECD economies” that the OECD advised was in store for 2008 and 2009. Just as worryingly, the rise in unemployment that always occurs during a recession was accompanied this time by a phenomenon that had not been seen since The Great Depression itself: deflation. Though in the end, in response to significant government policy interventions, the period of deflation was short-lived, it nonetheless occurred: the rate of growth of consumer prices in the USA collapsed from over 5% per year in late 2007 to under minus 1% per year by mid-2009. In stark contrast to Lucas’s confidence that “welfare gains from better long-run, supply-side policies exceeds by far [his emphasis] the potential from further improvements in short-run demand management”, economists who were ill-equipped for the job found themselves desperately trying to boost short-run economic demand.

Figure 1: Growth rates from 1990-2015 for the USA & 25 other major economies

Here they also failed, in comparison to the “Keynesian” economic orthodoxy which preceded them. The hapless President Obama, whose degrees were in political science and law, took the advice of dominant mainstream economics figures—such as Larry Summers, whom he appointed Director of the USA’s National Economic Council, and Timothy Geithner, whom he appointed Secretary of the Treasury—that the way to end the crisis quickly was to pump the banking system full of excess Reserves. This, Obama was assured, would stimulate the economy more rapidly than, for example, putting money directly into the hands of households. Obama lent his oratorical skills to this mainstream economic advice, stating in a speech at Georgetown University in April 2009 that:

although there are a lot of Americans who understandably think that government money would be better spent going directly to families and businesses instead of banks—”where’s our bailout?,” they ask—the truth is that a dollar of capital in a bank can actually result in eight or ten dollars of loans to families and businesses, a multiplier effect that can ultimately lead to a faster pace of economic growth. (Obama 2009. Emphasis added)
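For reference, the textbook arithmetic that produces figures like “eight or ten dollars” is the money multiplier: assume, purely for illustration, that banks must hold a fraction ρ of each deposit in reserve and lend out the rest, which is redeposited and re-lent round after round. Then

$$
\text{maximum new lending} \;=\; \$1 \times \sum_{k=1}^{\infty} (1-\rho)^{k} \;=\; \$1 \times \frac{1-\rho}{\rho} \;=\; \$9 \quad \text{for } \rho = 0.1,
$$

with total new deposits of $1/ρ = $10. Whether banks actually work this way is, of course, precisely what is in dispute.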

The incredibly brief Covid recession threw into high relief just how wrong this conventional economic advice was. When Covid lockdowns forced people out of work, the government response was to give households immediate financial stimulus—and the recession was over in 2 months. The “Great Recession”, as Americans christened what the rest of the world called the “Global Financial Crisis”, lasted one and a half years, making it the second-longest recession in the USA in the past century. Only the first phase of the Great Depression, from August 1929 till March 1933, lasted longer.

Figure 2: The drawn-out recession of the Global Financial Crisis, versus the almost instantly-over Covid Recession

A theoretical and practical failure as big as this—to have models that did not show that the biggest macroeconomic event in a century was imminent, and to be so unprepared for a crisis that the policies advocated by economists made the recovery worse—did provoke some critical reflection amongst economists. But their reflections lacked any capacity to imagine any other way to build macroeconomics apart from the method that had already failed them.

References

Bernanke, Ben S. 2004. “Panel discussion: What Have We Learned Since October 1979?” In Conference on Reflections on Monetary Policy 25 Years after October 1979. St. Louis, Missouri: Federal Reserve Bank of St. Louis.

Cotis, Jean-Philippe. 2007. ‘Editorial: Achieving Further Rebalancing.’ in OECD (ed.), OECD Economic Outlook (OECD: Paris).

Lucas, Robert E., Jr. 2003. ‘Macroeconomic Priorities’, American Economic Review, 93: 1-14.

Obama, Barack. 2009. “Obama’s Remarks on the Economy.” In. New York: New York Times.

Ramsey, F. P. 1928. ‘A Mathematical Theory of Saving’, The Economic Journal, 38: 543-59.

Smets, Frank, and Rafael Wouters. 2007. ‘Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach’, American Economic Review, 97: 586-606.

 

The Failure of Neoliberalism: Backing Up Macro Alf, & Showcasing Ravel, in 11 plots and two averages

The macro commentator Alfonso Peccatiello, who writes as @MacroAlf on Twitter/X and publishes the Macro Compass newsletter, recently posted an excellent thread on private debt that cited my work:

Let me show you one of the most underrated and yet crucial long-term macro variables in the world. Debt. But not government debt: people should stop obsessing it! The government can print money in its own currency. Of course, this has limitations: capacity constraints, inflation, credibility…but there is much more vulnerable source of debt out there. Private sector debt levels and trends are by far a more important macro variable to follow.

Let me explain why. The private sector doesn’t have the luxury to print money: if you get indebted to your eyeballs and you lose your ability to generate income, the pain is real. This amazing chart from my friend @darioperkins proves the point quite eloquently…

Figure 1: Alf’s chart of private debt to GDP bubble for 4 key economies

This post follows up on Alf’s lead by producing a private debt-focused profile of all the major economies in the OECD whose debt levels are also recorded by the Bank for International Settlements. It combines data on inflation and unemployment rates from the OECD with private and government debt and house price data from the BIS.

The plots in this post run in reverse alphabetical order from the United States (see Figure 2) to Australia. Their message is the same one Alf made in his tweet stream (x-stream?): private debt matters, and the fact that conventional Neoclassical economics ignores it is a major reason why it has failed as a guide to economic theory and policy.

This post also showcases Ravel©™, a multidimensional analytic database which I have designed, and which Minsky’s programmer Dr Russell Standish has coded. We hope to release Ravel commercially in 2024. If you like what you see here, then let me know in the comments and I’ll add you to the early pre-release of Ravel, which we hope will occur in early 2024 (comments are only open to paid subscribers on Patreon and Substack). The end of the post will briefly explain how the plots were created.

Figure 2: The USA’s data

Several common themes turn up in almost all countries in these plots. First and foremost, the shift from so-called “Keynesian” to “Neoclassical/Neoliberal” economic policies that began in the mid-1970s, which was supposed to unleash the private sector from the shackles of the State, has failed on its own terms. The rate of economic growth under Neoliberalism—which I date from the beginning of 1975, when a surge in inflation empowered the political and academic rise of Milton Friedman’s “Monetarism”—has been lower for every country in this database. Rather than the Neoclassicals showing the Keynesians how to promote growth, the Neoclassicals have shown how to turn real economic performance into permanent financial speculation:

  • The rate of economic growth (top left graph) has trended down over time, and rather than reversing this trend, Neoliberalism has accentuated it.
  • The unemployment rate is higher and more volatile than in the “bad old Keynesian days”.
  • Neoliberalism might claim the low-inflation period after the GFC as a success story, but even so, post-GFC inflation is higher than the record of the 1960s.
  • The average rate of economic growth has been substantially lower since the deregulation fetish began than before it—at 1.8% p.a. in the USA’s case, the average rate of economic growth under Neoliberalism is barely more than half the 3.25% p.a. recorded from 1945 till 1975.
  • The main growth success of the Neoliberal period has been an unprecedented increase in private debt. In America’s case, it has tripled from under 60% of GDP at the end of WWII to a peak of almost 180% during the GFC (Global Financial Crisis).
  • Government debt is much lower than private debt, and yet it’s government debt that is the unwarranted focus of economic discussion and policy, as Alf pointed out. Nonetheless, even government debt has risen under Neoliberalism, and in the USA’s case the post-GFC level exceeds the pre-Neoliberal peak. That reducing the ratio of government debt to GDP was even made a (failed) policy target is a sign of how little Neoclassical economists know about the economy, since government debt is in fact a record of fiat money creation.
  • By stifling the growth of Fiat money, the Neoliberal period encouraged the growth of Credit—the annual change in private debt. While Neoclassical economists like Ben Bernanke bleat that “pure redistributions [as they falsely characterise new private debt] should have no significant macroeconomic effects” (Bernanke 2000, p. 24), Credit strongly negatively correlates with unemployment: when Credit is high unemployment is low and vice versa. This shows, as Alf argued, that credit is a far more important determinant of economic performance than government spending, and yet mainstream economic theory and policy continue to ignore it.
  • Government money creation tends to be driven by the rise and fall of unemployment. Since unemployment in general has risen under Neoliberalism, and economic crises that Neoclassicals thought they had abolished (“macroeconomics in this original sense has succeeded: Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades”: Lucas 2003, p. 1) have become more extreme and frequent, panic rather than policy has driven government spending higher as a proportion of GDP under their misguided guidance.
  • House prices were tame before Neoliberalism and have been wild ever since, because the housing market has been the main destination of rising private debt. Though Neoliberal politicians like Reagan and Thatcher, Clinton and Blair, believed they were liberating the private sector in general, what they really did was let the financial sector rip.
  • The fuel beneath rising house prices has been rising household debt. This has risen even more than total private debt—in the USA’s case, household debt is now five times as large as it was at the end of WWII, when compared to GDP.
  • Turning housing from a long-term consumption item into a financial commodity has led to house price volatility. Richard Vague’s magisterial survey of the last two centuries of financial crises (Vague 2019) shows that house price bubbles are overwhelmingly the main factor behind financial crises. This is what Neoliberalism really gave us.
  • Lastly, there is a causal link between change in the level of new mortgage debt and change in house prices. Since houses are bought primarily with borrowed money, the flow of new mortgages is the main monetary foundation of the existing price level. It follows that change in new mortgages drives change in house prices (a sketch of how this and the relationship between Credit and unemployment can be computed from the data follows this list). Neoliberalism promised economic prosperity, but by its ignorance of money has turned our economies into unstable, private-debt-financed casinos.
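Here is that sketch: an illustrative calculation in pandas. The series names household_debt_pct_gdp, house_price_index and unemployment_rate are hypothetical stand-ins for the BIS and OECD series, not their actual labels, and this is not the Ravel workflow used for the plots.

```python
import pandas as pd

def debt_diagnostics(household_debt_pct_gdp: pd.Series,
                     house_price_index: pd.Series,
                     unemployment_rate: pd.Series) -> pd.DataFrame:
    """All three series are assumed quarterly and to share the same index."""
    credit = household_debt_pct_gdp.diff(periods=4)        # annual change in debt (% of GDP)
    credit_change = credit.diff(periods=4)                 # change in that flow of new debt
    house_price_change = house_price_index.pct_change(periods=4) * 100

    data = pd.DataFrame({"credit": credit,
                         "credit_change": credit_change,
                         "unemployment": unemployment_rate,
                         "house_price_change": house_price_change}).dropna()

    # The claims in the bullets, expressed as correlations:
    print("credit vs unemployment:        ",
          round(data["credit"].corr(data["unemployment"]), 2))               # expected negative
    print("credit change vs price change: ",
          round(data["credit_change"].corr(data["house_price_change"]), 2))  # expected positive
    return data
```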

The following plots are displayed in Ravel simply by moving the selector dot on the master Ravel in the right-hand corner of the model. Figure 3 to Figure 25 show the same data for other countries, in reverse alphabetical order from the UK to Australia.

Figure 3: UK

Figure 4: Switzerland

Figure 5: Sweden

Figure 6: Spain

Figure 7: Portugal

Figure 8: Poland

Figure 9: Norway

Figure 10: New Zealand

Figure 11: Netherlands

Figure 12: Mexico

Figure 13: Japan

Figure 14: Italy

Figure 15: Israel

Figure 16: Hungary

Figure 17: Greece

Figure 18: Germany

Figure 19: France

Figure 20: Finland

Figure 21: Denmark

Figure 22: Canada

Figure 23: Belgium

Figure 24: Austria

Figure 25: Australia

Using Ravel

The first stage in using Ravel is to import the data, in this case from the BIS and the OECD—which is what Figure 26 illustrates. An importing form specifies which columns in a database are “dimensions” and which are data. At present we only import CSV files, but the range of import formats will grow after the first release.

Figure 26: Data imported into Ravel


This imported data (stored in the parameters BISDebt, BISHPI, OECDCPI and OECDUnemp) is then, if necessary, attached to a Ravel—the square boxes in Figure 27 with arrows inside them—to enable manipulation of the data and extraction of subsets for further analysis. The top Ravel in Figure 27 stores data on government and private debt in numerous ways—raw domestic currency, percent of GDP, etc. The operations on that Ravel—collapsing one axis and extracting the maximum value stored on it, selecting a single instance (Lending by All Sectors rather than Banks alone, Adjusted for Breaks rather than unadjusted)—reduce a 7-dimensional object to a 4-dimensional one, where those dimensions are Country, Date, Sector, and Unit.

Figure 27: using Ravel’s slice, dice and aggregate functions

Figure 28 then takes the 4-dimensional object Debt and separates it into Private Debt in domestic currency, private debt as a percentage of GDP, and Government debt as a percentage of GDP.

Figure 28: Minimizing Ravels and selecting slices of debt data

Figure 29 showcases Ravel‘s analytic power, in three ways:

  • There is no easily accessible database on quarterly GDP by country, but the BIS database has raw data from which it can be derived. If you divide private debt in domestic currency by private debt as a percentage of GDP, and then multiply that by 100, you have GDP in domestic currency for all the countries in the BIS database. There are over 40 countries and close to 400 quarters of data: that would take 16,000 replicated cell formulas in Excel, but it takes just one flowchart equation in Ravel.
  • Nominal growth rate data can be derived by dividing the annual change in GDP by GDP itself and multiplying by 100. Once again, one flowchart formula replaces close to 16,000 Excel formulas; and finally,
  • The real growth rate can be derived by subtracting the inflation rate from the nominal growth rate. (A rough pandas equivalent of these three steps is sketched below.)
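For readers who want to replicate the logic outside Ravel, here is a rough pandas equivalent of the same three steps. The series and index names are illustrative, not the BIS’s own, and Ravel does each step with a single flowchart operation.

```python
import pandas as pd

def derive_growth(debt_lcu: pd.Series, debt_pct_gdp: pd.Series,
                  cpi_inflation: pd.Series) -> pd.DataFrame:
    """All series assumed quarterly, indexed by a (country, quarter) MultiIndex."""
    # Step 1: back out nominal GDP from the two debt measures
    gdp = debt_lcu / debt_pct_gdp * 100

    # Step 2: nominal growth = four-quarter change in GDP as a percentage of GDP
    nominal_growth = gdp.groupby(level="country").pct_change(periods=4) * 100

    # Step 3: real growth = nominal growth minus CPI inflation
    real_growth = nominal_growth - cpi_inflation

    return pd.DataFrame({"gdp": gdp,
                         "nominal_growth": nominal_growth,
                         "real_growth": real_growth})
```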

Figure 29: Deriving Quarterly nominal GDP and growth rates for the 43 countries in the BIS database

The final database is shown in Figure 30, with one country—Australia, simply because it’s the first alphabetically—highlighted.

Figure 30: The completed economic database

The importing and calculation routines are then put into groups and shrunk, to reduce clutter on the final dashboard used for this document.

References

Bernanke, Ben S. 2000. Essays on the Great Depression (Princeton University Press: Princeton).

Lucas, Robert E., Jr. 2003. ‘Macroeconomic Priorities’, American Economic Review, 93: 1-14.

Vague, Richard. 2019. A Brief History of Doom: Two Hundred Years of Financial Crises (University of Pennsylvania Press: Philadelphia).

The Paradox of Debt, by the Tycho Brahe of Credit

Very rarely do I review a book and find that the best way to convey its significance is to quote, verbatim, its first four paragraphs:

“In 2020, during the darkest hours of the global coronavirus pandemic, the US government spent $3 trillion to help rescue the country’s – and, to some extent, the world’s – economy. This infusion of cash increased US government debt and thus reduced US government wealth by almost the entirety of that frighteningly large amount – the largest drop in US government wealth since the nation’s founding. Surely something this unfavorable to the government’s ‘balance sheet’ would have broad, adverse financial consequences.

So what happened to household wealth during that same year? It rose. And it improved by not just the $3 trillion injected into the economy by the government but by a whopping $14.5 trillion, the largest recorded increase in household wealth in history. As a whole, the wealth of the country – its households, businesses, and the government added together – increased by $11 trillion, so this improvement in wealth was contained largely to households.

How and why did such an extraordinary increase occur?

To understand this paradox, we need to seek answers to some of the most fundamental questions in economics: What is money? What is debt? What brings about increases in wealth? Often the most basic questions can be the most challenging to answer. They appear deceptively simple but they are complex and vitally important.” (Vague 2023, p. 1)

What follows is a magisterial analysis of the role of debt in economics, working from detailed data on each of the world’s major economies. The key focus is, as the title declares, the paradoxical role of debt in a capitalist economy: debt is both a prerequisite for economic growth and a cause of economic crises.

“the 2008 financial crisis was no black swan or storm of the century phenomenon; instead, it would have been easy to spot, and might have been foretold years, not months, in advance, if analysts had been looking in the right direction and at the right things. Like most financial calamities, the 2008 crisis was born from the unbridled growth in private sector debt – a key trend that is straightforward to track.” (Vague 2023, p. 222)

And yet it is ignored by mainstream economics, leaving its analysis to a band of contrarians including me (Keen 1995), Hyman Minsky (Minsky 1982), whose “Financial Instability Hypothesis” inspired me, Irving Fisher (Fisher 1933), whose “Debt Deflation Theory of Great Depressions” inspired him, Michael Hudson (Hudson 2018) and the late David Graeber (Graeber 2011), who track the history of debt, Richard Werner (Werner 2016), who explains the mechanisms by which debt creates money, and now Richard Vague (Vague 2019, 2023), who covers the empirics of private and public debt in great and fascinating detail.

“A lending boom is optimism on steroids… Euphoria is the hardest habit to quit… Lending and debt are the agents and catalysts of that euphoric delusion. To seek to explain booms solely through impersonal, technical factors is to miss the fact that economics is a behavioral and not a physical science. It is to miss the essence of financial crises.” (Vague 2023, p. 188)

I often feel that the struggle against Neoclassical economics is like the struggle by believers in the Heliocentric model of the solar system against the then dominant Geocentric model of Aristotle and Ptolemy. Copernicus, Brahe, Kepler, Galileo, and finally Newton, were the pivotal fighters for truth in that struggle. I’m not putting any of us on the same pedestal as those critical contributors to the triumph of science over religion, but my mind often wanders to the personal parallels: whose contributions to realism on the monetary nature of capitalism are most akin to the astronomical contributions of those giants?

“In the lead-up to the Global Financial Crisis of 2008, the US accumulated a gargantuan mountain of new mortgage debt, totaling $5 trillion. This debt was so large it was practically impossible to miss – except that most economists did miss it entirely, and therefore failed to predict the financial crisis.” (Vague 2023, p. 58)

Fisher was our Copernicus, first putting forward the theory that “over-indebtedness to start with and deflation following soon after” are the key factors in causing Great Depressions. Minsky was our Kepler, working out the elliptical ways in which debt drove economics. Perhaps, by inventing Minsky—which I regard as the monetary equivalent of Galileo’s telescope—I have some parallels with Galileo. But there is no doubt about the astronomical doppelganger for Richard Vague: it is Tycho Brahe.

“Still another objection is that this is just one more example of government trying to pick winners and losers, and only the wisdom of markets can do that. Well, in 2007, the market was picking no-down-payment mortgage loans as winners and how did that work out?” (Vague 2023, p. 226)

Tycho’s meticulous observations of the motions of the planets and stars provided the rock-solid foundations on which the Heliocentric model was constructed. This is the realm in which Vague excels. This book, and the website that supports it, does for the credit-based model of capitalism what Tycho’s measurements did for the Heliocentric model—a model without which we would still be an Earthbound species, rather than one with the potential to reach for the stars.

“Based on these ratios, debt is a peripheral issue to the top 10 percent of US households but a monumental issue to many in the bottom 60 percent… champion the trickle-down theory of economics are correct – except for one detail: it is debt that has been trickling down, not wealth.” (Vague 2023, pp. 70-72)

The Paradox of Debt: A new path to prosperity without crisis provides that data in an extremely accessible and entertaining form. It’s not easy to write about empirical facts embodied in tables and graphs, and make the prose engaging. Vague does it with ease.

“In the early 1980s, many economists made dire predictions about the likely consequences of high levels of government debt. They warned that it would constrain spending, crowd out lending and investment, lead to higher interest rates and inflation, and seriously encumber the country. At the time, inflation had reached 14 percent and interest rates were close to 20 percent.

Since then, government debt has exploded and so we have had ample opportunity to put these predictions to the test. As it turns out, over this time span interest rates have generally plummeted, not risen; investment has remained high, not been constrained; and household net worth has risen, not sunk.” (Vague 2023, p. 55)

I am in some ways too close to this topic myself—and too close to the author, who is not merely a friend, but also one of my favorite people—to write a detailed commentary and critique here. What I can say is that reading Vague is much easier than reading Keen, and provides insights that I don’t, through the wealth of data Richard has assembled with his research group—which he has, funnily enough, named after Tycho Brahe.

Figure 1: Global debt levels from the BIS, with the analysis done in my new program Ravel

“In particular, the growth that comes from government spending could come instead from a non-debt-based source. All it takes is for the Treasury to sell a non-interest-bearing instrument with no maturity to the Federal Reserve… Let’s call this instrument perpetual money. It may well be that a balance between debt-based money and perpetual money is a healthier and more technically sound way of managing monetary policy.” (Vague 2023, p. 256)

Though there are many aspects of Vague’s analysis that are consistent with other contrarian theories, Richard is not beholden to any doctrines, and frequently makes observations that challenge other contrarians. For instance, he regards a trade deficit as a negative, which contradicts an aspect of Modern Monetary Theory that I also criticise. He worries about aggregate debt—public as well as private—whereas other contrarians, me included, tend to see government debt as benign (the opposite of the Neoclassical argument). Like me, he argues for a debt jubilee, though with quite a different structure to my “Modern Debt Jubilee”.

The ratio of debt to income in economies almost always rises, with profound consequences, both good and bad.

• Money is itself created by debt.

• New money, and therefore new debt, is required for economic growth.

• Rising total debt brings an increase in household and national wealth or capital. Most wealth is only possible if other people or entities have debt. As wealth grows, so too must debt.

• At the same time, debt growth brings greater inequality, in part because middle- to lower-income households carry a disproportionate relative share of household debt burden. In fact, in economic systems based on debt – which is the world as it operates now – rising inequality is inevitable, absent some significant countervailing change such as a major change in a nation’s tax policy.

• A current account and trade deficit contributes to private sector debt burdens.

• The overall increase in debt, especially private debt, eventually slows economic growth and can bring economic calamity. (Vague 2023, p. 7)

This makes his case worthy of attention from other contrarians, as well as politicians, the general public, and the investment community. He even spices up the book with predictions, based on his careful attention to the data, which financial analysts may find both surprising and potentially rewarding.

“Any among these turns of events would likely force Germany to make some stark economic choices. Surely it would – and, given the outlook for China’s GDP growth, I’m tempted to say it will – suffer a contraction, unless the government quickly encourages large increases in private sector spending, funded by increased business and household debt, or, alternatively, enacts large increases in government spending.” (Vague 2023, p. 115)

I strongly recommend this book (the sale proceeds from which go to charity) to my own readers, and if you haven’t heard of Richard Vague before, read this excellent and entertaining profile, from the days prior to 2020 when he was considering running for President: Richard Vague May Be the Most Revolutionary Thinker in Philly (phillymag.com).

And yes, in case you’re wondering, we are considering writing a book together. The extent to which our approaches to economics complement each other is matched only by the novelty of our contradictory names.

Click here to purchase The Paradox Of Debt

Fisher, Irving. 1933. ‘The Debt-Deflation Theory of Great Depressions’, Econometrica, 1: 337-57.

Graeber, David. 2011. Debt: The First 5,000 Years (Melville House: New York).

Hudson, Michael. 2018. …and forgive them their debts: Lending, Foreclosure and Redemption From Bronze Age Finance to the Jubilee Year (Islet: New York).

Keen, Steve. 1995. ‘Finance and Economic Breakdown: Modeling Minsky’s “Financial Instability Hypothesis”’, Journal of Post Keynesian Economics, 17: 607-35.

Minsky, Hyman P. 1982. Can “it” happen again? : essays on instability and finance (M.E. Sharpe: Armonk, N.Y.).

Vague, Richard. 2019. A Brief History of Doom: Two Hundred Years of Financial Crises (University of Pennsylvania Press: Philadelphia).

———. 2023. The Paradox of Debt: A new path to prosperity without crisis (Forum: London).

Werner, Richard A. 2016. ‘A lost century in economics: Three theories of banking and the conclusive evidence’, International Review of Financial Analysis, 46: 361-79.

 

Railroaded: Bring Back Thatcher and Reagan

One of my recurring fantasies is to reanimate Maggie Thatcher and Ronald Reagan on UK and Dutch trains, to tell them the year, and then ask them where they think they are—with only the ownership information and the performance and quality of the carriages to go on. The UK has privatised its rail system, the Netherlands’ system is still in public ownership; one has high quality modern fast trains, while the other is still operating museum pieces.

In researching this post, I’ve realised that I’m being a bit harsh on Thatcher—it was her successor John Major who privatised the UK’s railways, and Thatcher herself was apprehensive about how privatisation would actually perform (McCartney and Stittle 2017, p. 2). But apart from that, I’d bet my bottom dollar that they would get their locations 100% wrong. When sitting in the modern, high-speed trains, they’d think they were in the UK; and when sitting in the museum pieces slowly vibrating their way forward, they’d think they were in The Netherlands.

As someone who spends a lot of time on both rail systems, I know that the opposite is starkly true. British rail services are antiquated, unreliable, expensive, and slow. Dutch—and most European publicly owned and operated rail services—are modern, reliable, cheap and fast.

So, why did it all go so wrong, Maggie and Ronnie?

The numbers—charts here come from an EU survey (Steer-Davies-Gleave 2016)—are stark. Passenger miles, remarkably, have grown more in the UK than anywhere else in Europe (see Figure 1), so there’s reason to expect the UK might have benefited from economies of scale in this high-fixed-cost industry. If so, fares should have grown more slowly in the UK than elsewhere—and this is what McCartney and Stittle (2017) predict would have happened, had the UK’s system not been privatised.

Figure 1: (Steer-Davies-Gleave 2016, p. 6 )

But the privatised system has resulted in much higher costs for UK commuters than their European counterparts, at every distance from suburban (see Figure 2) to interurban (see Figure 3, Figure 4, and Figure 5).

Figure 2: (Steer-Davies-Gleave 2016, p. 37) The list is alphabetical, and the UK tops the costs for suburban fares

Figure 3:(Steer-Davies-Gleave 2016, p. 56)

The outcome is worst at the interurban level, where UK trains are two to six times as expensive as their European counterparts—see Figure 4.

Figure 4: (Steer-Davies-Gleave 2016, p. 60)

The practical import of this is to make the UK into a set of isolated city economies, with only one of those—London—being of any significant scale. A rail journey of any distance in the UK is a prohibitively slow and expensive undertaking, so private sector economic activity remains small scale (or roads take the burden, but at higher monetary, environmental and time costs). The poor quality and high cost of the UK’s rail system have led to a fragmented and underperforming private sector as well.

Figure 5: (Steer-Davies-Gleave 2016, p. 63) The Netherlands isn’t listed because it’s too small for +300km interurban routes

McCartney and Stittle conclude that “in cost terms alone, the dismantling of British Rail was ill-judged and has proved to be a major public policy error… proponents of privatisation argued that the private sector would improve efficiency and provide ‘better value for money’ over the ‘dead hand’ of the state. But this was largely an illusion…” (McCartney and Stittle 2017, pp. 16-17).

Why did privatisation fail, on its own grounds? McCartney and Stittle give a good analysis of this in terms of the conditions of the rail industry alone, but I’m thinking of the wider issue: is there a general issue behind the failure of privatisation to deliver what its proponents expected in so many areas, from rail to education to sewerage management?

The Payback Period

Avner Offer provides such a general principle in his recent book Understanding the Private-Public Divide: Markets, Governments and Time Horizons (Offer 2022): the time horizon for investment in ventures like railways, schools and sewerage systems lies outside the payback period that private investors expect:

in market societies, undertakings that pay off inside the credit time horizon are typically undertaken by business. This suggests a division of labour: market competition for short-term provision; government, not-for-profits, and the family for long or uncertain durations. This boundary predicts where the limit is likely to run and sets down where it ought to be. When violated in either direction, poor outcomes are likely, inefficiency, corruption, or failure. (Offer 2022, p. 13)

The current UK catastrophe with sewage-laden rivers provides a particularly stark example of this: it is, as one TV commentator remarked, “immensely profitable” not to invest in the infrastructure that would stop raw sewage being dumped into UK rivers. The returns from the investment are slow and occur over a long time horizon; it’s actually more profitable not to make the investment now, and either dump the cost on the community (degraded waterways), or let some future management wear the disaster management costs in the distant future.

Public ownership, on the other hand, leads to public-service-oriented management, and the investment will be made because it’s the right thing to do in public service terms, and not because it creates a higher return today. Paradoxically, this can lead to lower costs and prices to the public from public ownership, because keeping the infrastructure up to date reduces maintenance and other costs indefinitely.

This perspective takes ideology out of the public-private question: businesses with short-term or medium-term returns are the private sector’s forte; services with long-term returns are best handled by the public sector.

I’d go one step further than Offer in defining the payback period—which he relates mainly to the length of time it takes for interest on a loan to equal the original loan itself:

The higher the market interest rate (or the private discount rate), the less time is available to break even… The time boundary between private and public enterprise is easy to draw. It is the ‘payback period’, the time required for interest on a loan to add up to the original advance, under the prevailing interest rate… A project which takes longer than the payback period to break even cannot pay its capital cost and cannot be undertaken for profit. (Offer 2022, pp. 13-14. Emphasis added)

Even at currently elevated interest rates, that implies that the time-division between favouring private and public ownership is 20 years. But the payback period that businesses contemplate is often much shorter than that: Offer notes that “Three different studies suggest rates of return around 15 per cent (payback 6.6 years)” (Offer 2022, p. 15). Also, investment advisors disparage the payback period “because it ignores the time value of money”. However, the mathematician-turned-economist John Blatt provided an ingenious explanation of how the payback period in fact includes not only “the time value of money” but also uncertainty about the future (Blatt 1983, 1980, 1979).
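In symbols (the notation is mine, not Offer’s): at interest rate i, the payback period T is the horizon at which cumulative interest matches the principal, so

$$
i \times T = 1 \quad\Longrightarrow\quad T = \frac{1}{i},
$$

which gives T = 20 years at i = 5%, and roughly 6.6 years at the 15 per cent returns that Offer reports businesses actually demand.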

Standard time value of money calculations like Net Present Value discount expected future returns by the prevailing interest rate, plus sometimes an additional margin for uncertainty. Blatt points out that this implies that uncertainty is constant over time, but at the very least, uncertainty about future returns rises with time. So, he proposed discounting by the interest rate plus an additional amount multiplied by time: discount a return $t$ years ahead by $i + k\,t$, rather than just by the interest rate $i$. The additional factor $k\,t$ accounts for the uncertainty of future returns, which grows with the horizon.

This gives a sharply nonlinear profile to the discount, which can be factored into the desired rate of return and the probability of a failure of expected returns to occur. The payback period combines the two, and it is therefore more sophisticated, in terms of accounting for time, than Net Present Value.
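A minimal numerical sketch of the difference (the uncertainty loading k = 0.02 per year is an assumed, illustrative value, not one taken from Blatt):

```python
import numpy as np

i = 0.05  # interest rate: the conventional, constant discount rate
k = 0.02  # assumed uncertainty loading that grows with the horizon (illustrative)

for T in (5, 10, 20, 30):
    npv_weight = np.exp(-i * T)              # standard constant-rate discounting
    blatt_weight = np.exp(-(i + k * T) * T)  # Blatt: discount by i + k*T, not just i
    print(f"payoff {T:2d} years out: NPV weight {npv_weight:.2f}, "
          f"Blatt weight {blatt_weight:.4f}")
```

Distant returns are crushed toward zero far faster than under constant-rate discounting.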

It also explains why the payback period that, as Offer found, rules in practice is so short: not 20 years but 6-7. Uncertainty about the future is more important than the interest rate.

Privatising the rail system was thus one of many own-goals in UK public policy. The expectation was that this policy would boost the UK's performance:

When a commitment to privatise the railways was finally made by the Major government (in the Conservatives’ Election Manifesto in 1992) these problems [considered by Thatcher] were simply ignored. The subsequent White Paper, a slim document of 21 pages ‘rather lightweight’ on the economic rationale behind the privatisation plans blandly asserted that a privatised industry would ‘mean more competition, greater efficiency and a wider choice of services more closely tailored to what customers want’ and ‘provide greater opportunities to . . . reduce costs, without sacrificing quality’ (McCartney and Stittle 2017, p. 2).

Not only did the opposite occur, but fallacy led to farce, in that the UK’s poorly functioning private rail sector is largely owned by the EU’s well-functioning public one:

and indeed now, in a farcical twist that nobody could have foreseen, many of the franchises are actually run, not by private enterprise but by state-owned European rail operators—the very ones that British Rail was out-performing in the 1980s. (McCartney and Stittle 2017, p. 17)

Blatt, John M. 1979. ‘Investment Evaluation Under Uncertainty’, Financial Management Association, Summer 1979: 66-81.

———. 1980. ‘The Utility of Being Hanged on the Gallows’, Journal of Post Keynesian Economics, 2: 231-39.

———. 1983. Dynamic economic systems: a post-Keynesian approach (Routledge: New York).

McCartney, S., and J. Stittle. 2017. '"A Very Costly Industry": The cost of Britain's privatised railway', Critical Perspectives on Accounting, 49: 1-17.

Offer, Avner. 2022. Understanding the Private-Public Divide: Markets, Governments and Time Horizons (Cambridge University Press: Cambridge).

Steer-Davies-Gleave. 2016. "Study on the prices and quality of rail passenger services." European Commission Directorate General for Mobility and Transport.

 

Political Economy Forever?

Pardon two nostalgia posts in a row, but after writing The Day I Pranked Paul Samuelson, I realised that I’d let pass possibly the most important anniversary in my peripatetic life: the 50th anniversary of “The Day of Protest” at Sydney University on July 25th 1973.

It was, to my knowledge, the first ever protest over economic theory and how it was taught at a university. Nothing like it happened again for almost 30 years, when French students created the Protest Against Autistic Economics (PAECON) in September 2000.

Another dozen years passed before students at Manchester University formed the Post-Crash Economics Society to revolt against their Neoclassical curriculum in the aftermath of the Global Financial Crisis.

All three protests had the same motivation. To quote from the first issue of the Post-Autistic Economics Newsletter, the French protesters called for “an end to the hegemony of neoclassical theory and approaches derived from it, in favour of a pluralism that will include other approaches, especially those which permit the consideration of “concrete realities”.”

Each of these protests was successful in the short-term. Our protest at Sydney University led to the formation of a Department of Political Economy; the French students’ protest gave birth to the Real-World Economics Review, which is still going today; and the Manchester protests led to the formation of Rethinking Economics, which supports students around the world who are dissatisfied with the state of economics today.

But more than 50 years after our protest, that same hegemony is alive and well—or at least undead. The Neoclassical pedagogy that I railed against in the 1970s has evolved into something even more absurd and anti-realistic than the absurd and anti-realistic dogmas I protested against in 1973 (Krueger 1991). So, since the objective of our protests was to replace the fantasies of Neoclassical economics with a realistic approach to economics, they have failed.

Of course, we didn’t know that we would fail, back in July 1973, when 300+ students voted to hold a “Day of Protest” against the boring and delusional theories we were being fed by the conservative curriculum at Sydney University. Instead, we experienced the thrill of openly challenging our Professors, giving alternative lectures on the actual day, marching, chanting, waving banners, partying, and, in subsequent years, occasionally occupying the Vice-Chancellor’s office.

It was an exciting and life-changing experience for us all—for me more than most, since I devoted the rest of my life to continuing the rebellion we started in 1973. The buzz of optimistic rebellion is hard to explain to today’s harassed students, flitting between one part-time job and another as they try to pay their student debts. In 1973, it felt like real change was possible.

What do we want?
Political Economy!
When do we want it?
Now!

But what have we got, 50 years later? Political economy has continued on, but the mainstream is as ascendant as ever—despite its numerous failings at the time and since.

I had hoped, in those subsequent years, that a clear failure by Neoclassical economics would help expose it for the fraud it is: the failure to realise, in the Noughties, that a serious economic crisis was imminent (Keen 2006, 1995, 2020). Not only did they not realise it, they actually thought that they had eliminated the very possibility of crises. Speaking as the incoming President of the American Economic Association, Robert Lucas—one of the fountainheads of modern Neoclassical macroeconomics—made the following bold and utterly false statement in his Presidential lecture at the end of 2002:

Macroeconomics was born as a distinct field in the 1940's, as a part of the intellectual response to the Great Depression. The term then referred to the body of knowledge and expertise that we hoped would prevent the recurrence of that economic disaster. My thesis in this lecture is that macroeconomics in this original sense has succeeded: Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades. (Lucas 2003, p. 1. Emphasis added)

Two decades later, despite the failure of Neoclassical “Dynamic Stochastic General Equilibrium” models to anticipate the biggest economic crisis since the Great Depression, and despite Nobel Laureates like Paul Romer and Joseph Stiglitz rubbishing them, they are still the workhorses of Neoclassical economics.

Today’s students are still required to learn these arcane and misleading models, as if the crisis they failed to anticipate did not occur.

Why has Neoclassical economics persisted, despite its many failures? Mainly because, as Max Planck put it, a real science overthrows a false paradigm, not by persuading existing believers to change their minds, but by generational change:

It is one of the most painful experiences of my entire scientific life that I have but seldom—in fact, I might say, never—succeeded in gaining universal recognition for a new result, the truth of which I could demonstrate by a conclusive, albeit only theoretical proof… A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it. (Planck 1949, pp. 22-4. Emphasis added)

Generational change occurs in sciences because, once an anomaly is discovered that contradicts a prediction of the dominant paradigm, the anomaly exists forever, and any student can replicate it for themselves. Once the Michelson-Morley experiment disproved the theory of the "aether"—a substance that, according to the Maxwellian theory of electromagnetics, was supposed to fill space to enable light waves to be transmitted through it—then any student could find the anomaly for themselves.

But in economics, anomalies are historical and transient. The Great Depression, WWII, the post-War “Golden Age of Capitalism”, the 70s Stagflation, the 80s stock market bubble, the 90s recession, the Great Recession, now the post(?)-Covid boom and inflation… Each new crisis knocked the previous one off its pedestal. The fact that Neoclassical economics can’t explain the Great Depression—or that it has an explanation which is an insult to anyone who lived through it (Prescott 1999)—doesn’t matter to any modern economist who is working on the Covid inflation issue. Old failures can be forgotten.

Just as importantly, the underlying Neoclassical vision of capitalism is a highly seductive one: it is a meritocracy in which what you earn is what you deserve, where harmony rules because of equilibrium, and which is free of coercion: there’s no need for government control when the market works perfectly. Therefore, even if some students break away from Neoclassical economics because of one of its failures, enough “true believers” can be found in the student body to replace existing “true believers” when they retire. I spent my late teens fighting against believers in Neoclassical economics who were then two to three times my age. Now in my seventies, I am fighting believers in Neoclassical economics who range from being my contemporaries (Paul Krugman is one month older than me) to being one third of my age.

In a nutshell, funerals aren’t enough for a true scientific revolution in economics.

The final factor that enables economics to escape the revolutionary change it desperately needs is rather ironic: myths in economics survive because, despite the dominance of our politics by economic ideas, you don’t need economic theory or economists to have an economy. The economy exists independently of economists, and would probably function a lot better if economists simply didn’t exist. In contrast, engineering doesn’t exist independently of engineers: you need engineers to create the technological marvels the rest of us take for granted, and when something goes wrong with the things that engineers create, engineers face real consequences: a collapsing bridge fingers the engineers who didn’t take account of its harmonics, a crashing plane implicates the engineers (or their managers) who approved a faulty design.

To use Nassim Nicholas Taleb's phrase, economists don't have any "skin in the game": they don't suffer any serious consequences when their advice is badly wrong, and there is a myriad of complicating factors that they can point at to explain away any failure. Ironically, the fact that economists aren't strictly necessary is something that gives them great power—they are the witchdoctors of capitalism, holding the leaders of our society in their thrall, while at the same time having no idea how that society actually works.

So, do I regret organizing The Day of Protest, half a century ago? Not a bit of it.

Firstly, though our protest failed to dislodge the mainstream, we were as right about Neoclassical economics being a false paradigm as Galileo was about the Ptolemaic model of the universe—and these days I’ve taken to referring to Neoclassical economics as Ptolemaic Economics to highlight that point. One should never regret fighting for truth and against fallacy.

Secondly, it was rewarding hard work. The 50 or so of us who put The Day of Protest together had never worked so hard in our lives before (and possibly haven't since), and we worked for passion and a commitment to truth, rather than for money. Richard Osborne first proposed the idea, and became a pivotal part of our negotiating team; Greg Crough, Graham Kerridge, Richard Fields and I initiated the revolt and gave several of the alternative lectures; Bill Nichols responded to the need for banners by producing two 25-metre-long calico banners and numerous smaller ones; Kevin McAndrew churned out posters like an automaton.

Thirdly, I saw real courage at work. Several of the staff openly sided with the students: Paul Roberts, Jock Collins, and above all Frank Stilwell, put their careers on the line by speaking out in our favour, literally in full view of the Professors. Margaret Powers and Geelum Simson-Lee assisted us within the confines of the University's administrative system. Other departments, including Accounting and Politics (then called Government), surreptitiously assisted us with printing facilities, and voted with us in the eventually successful Faculty motion to investigate the Department of Economics.

Fourthly, it was great, great fun. Hundreds of students attended the alternative lectures we put on, and most stayed for the mother of all parties that finished the evening.

Finally, it pointed the right way when the world was about to embark on a journey in the opposite direction. We didn’t realise it then, but our rebellion in 1973 was the counterpoint to the ascendancy of Neoclassical economics in the world of policy: at the same time that we were trying to pull it down, politicians, administrators and journalists around the world were succumbing to its fantasies. The stagnant and crisis-ridden decades that followed showed the gap between its promises and the impact of imposing its false beliefs on the real world.

If I had done nothing else of significance in the following 50 years, I could still look back on The Day of Protest with great pride and pleasure. So, from my 70-year-old self in 2023, to the 20-year old rebel in 1973, here’s lookin’ at you, kid.

Keen, Steve. 1995. ‘Finance and Economic Breakdown: Modeling Minsky’s ‘Financial Instability Hypothesis.”, Journal of Post Keynesian Economics, 17: 607-35.

———. 2006. “Steve Keen’s Monthly Debt Report November 2006 “The Recession We Can’t Avoid?”.” In Steve Keen’s Debtwatch, 21. Sydney.

———. 2020. ‘Emergent Macroeconomics: Deriving Minsky’s Financial Instability Hypothesis Directly from Macroeconomic Definitions’, Review of Political Economy, 32: 342-70.

Krueger, Anne O. 1991. ‘Report of the Commission on Graduate Education in Economics’, Journal of Economic Literature, 29: 1035-53.

Lucas, Robert E., Jr. 2003. ‘Macroeconomic Priorities’, American Economic Review, 93: 1-14.

Planck, Max. 1949. Scientific Autobiography and Other Papers (Philosophical Library; Williams & Norgate: London).

Prescott, Edward C. 1999. ‘Some Observations on the Great Depression’, Federal Reserve Bank of Minneapolis Quarterly Review, 23: 25-31.

 

The Day I Pranked Paul Samuelson

“Poets are the unacknowledged legislators of the World.” It was a poet who said that, exercising occupational license. Some sage, it may have been I, declared in similar vein: “I don’t care who writes a nation’s laws—or crafts its advanced treaties—if I can write its economic textbooks.” The first lick is the privileged one, impinging on the beginner’s tabula rasa at its most impressionable state.

The opening paragraph in Paul Samuelson’s Foreword to The Principles of economics course: a handbook for instructors (Saunders and Walstad 1990, p. ix)

We hear a lot (as in far too much) about and from Larry Summers these days, but in the good old days, Summers was more famous for being Paul Samuelson‘s nephew than he was for being himself.

Samuelson, and not Keynes, was the true father of what became known as “Keynesian economics”. Samuelson developed what came to be known as the “Keynesian-Neoclassical synthesis” in his PhD thesis (Samuelson 1947). His textbook—initially called Economics: An Introductory Analysis (Samuelson 1948), later simplified to just Economics (Samuelson and Nordhaus 2010)—presented a palatable version of this thesis as an introductory textbook for students of economics in the post-WWII world. It quickly became the dominant economics textbook on the planet, and everything that has happened in economics since can be traced back to it.

Samuelson’s impact on the development of economic theory was so profound that he was the first sole recipient of the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel“, in its second year of 1970. That was also the year that an Australian edition of his textbook was published, and adopted as a textbook by the Economics Department at the University of Sydney, where I began my undergraduate degree in 1971.

Two years later, in March 1973, Samuelson came to Sydney. This was big news at the time: “it was the first time an economics Nobel Prize winner had graced Australian shores”, Alex Millmow noted, and “Samuelson’s arrival made front page news in Australia’s business daily The Australian Financial Review (AFR)” (Millmow 2019).

Samuelson's formal reason for the visit was to promote the Fulbright Scholarship scheme, but he also agreed to give a seminar at the University of Sydney—though not to the students: it was to be a staff-only seminar.

By this stage, I was an implacable critic of the entire Neoclassical edifice that Samuelson had created, and many of the academic staff felt the same way. They let me and a fellow student rebel, Richard Fields, know of the planned staff-only seminar, and we decided to prank it. I think Richard came up with the idea that we should put up posters announcing that Paul Samuelson would give a lecture in Merewether 1—the main lecture theatre in the Economics faculty building—at precisely the same time as his scheduled staff-only seminar. We'd publicity-shame them into moving the seminar into the public arena.

Our ruse worked. At the appointed hour on March 22nd 1973, the conservative staff in the department duly rolled into a lecture theatre that had about 150 students and some progressive staff already in attendance. They sat in the front two rows while Samuelson, ever the showman as well as the intellectual, explained that he was going to give us the talk he’d give “if my grandmother asked, what’s this ‘General Equilibrium’ thing?”.

The blackboards filled up with masses of equations, and then Samuelson got even with a prank of his own—though it was at the expense of the conservative staff, rather than us.

I was sitting about halfway back in the lecture theatre, next to Tony Phipps, the quantitative methods lecturer in the department. Tony was a great bloke: everyone liked him regardless of politics, and he didn’t take sides in the then very antagonistic arguments between the Neoclassical and anti-Neoclassical staff. We were sitting together to help each other follow the mathematics.

We were doing well until at one point, Samuelson made what appeared to both of us to be a simple error: putting a plus sign where he should have put a minus. One of us leaned over to the other—I can’t remember whether it was Tony to me or vice versa—and said, “Shouldn’t that have been a minus?”. “Yeah, I think”, the other replied, “but this is Samuelson”. I didn’t have the courage to put my hand up and query the maestro (even if I didn’t like his music), and neither did Tony.

A few minutes later, Samuelson stopped mid-sentence and said “Ooh, wait a minute, I made a mistake a few lines back. I put a plus where I should have put a minus!”. Pointing at the conservative staff in the front rows, he continued “You guys should have corrected me!”.

Touché Samuelson.

When he finished, my fellow student critic Richard C. “Gull” Fields asked the first question, which was why Samuelson had not spoken on “the aims and orientation of economics,” and “meeting the radical challenge to orthodoxy” rather than presenting “an academic model of strictly limited interest.” I can’t remember his answer, but my guess is that the real reason for such an esoteric choice of topic is that it’s what Professor Simkin, the effective head of the department, wanted Samuelson to lecture on.

Some commentators attributed the subsequent rebellion in the department—which began in earnest 4 months later with the “Day of Protest” (Butler, Jones, and Stilwell 2009, pp. 9-10 et seq.)—to Samuelson’s lecture. Millmow expressed this belief in his abstract to “The Samuelson Revolution in Australia”:

For the most part, Samuelson was given a rapturous reception, but his guest lecture at the University of Sydney on an ill-chosen subject merely fanned the flames of rebellion by students wanting an alternative to mainstream economics. (Millmow 2019)

In reality, if mathematical lectures played any role in fomenting the rebellion that began 4 months after Samuelson’s lecture (Butler, Jones, and Stilwell 2009), then the most significant catalyst was Simkin’s macroeconomics course, which he taught from his own textbook, Economics at Large.

The title was the best thing about this book, and it was replete with mathematical errors of its own—possibly due mainly to typesetting problems. Simkin would open every lecture with a list of the errors that students needed to amend in their copies, and in response I once asked him to produce an errata book to save us the trouble.

He replied that maybe I should produce that errata book myself instead.

Ultimately, I did. It's called Debunking Economics (Keen 2011), and it covers most of the nonsense masquerading as logic that makes up Samuelson's child, modern Neoclassical economics.

I’d like to think that I got the last laugh on both Simkin and Samuelson, but that’s not true. Having attended the funerals of two leading members of non-mainstream economics in recent years—Geoff Harcourt and Vicki Chick—I’ve realised that I too am likely to die before Neoclassical economics does.

Ultimately, my attempt to bring down this edifice of absurd assumptions and flimsy logic may end up being no more than a prank. The Neoclassical edifice will instead probably outlast not just me and its many other critics, but also the market economy that is supposed to be its subject. The truly dreadful work on climate change that has been done by William Nordhaus, who is also today’s editor of Samuelson’s seminal textbook (Samuelson and Nordhaus 2010), will see to that (Keen 2023).

Butler, Gavan, Evan Jones, and Frank Stilwell. 2009. Political Economy Now!: The struggle for alternative economics at the University of Sydney (Darlington Press: Sydney).

Keen, Steve. 2011. Debunking economics: The naked emperor dethroned? (Zed Books: London).

———. 2023. Loading the DICE against pension funds: Flawed economic thinking on climate has put your pension at risk (Carbon Tracker: London).

Millmow, Alex. 2019. ‘The Samuelson Revolution in Australia.’ in Robert A. Cord, Richard G. Anderson and William A. Barnett (eds.), Paul Samuelson: Master of Modern Economics (Palgrave Macmillan UK: London).

Offer, Avner, and Gabriel Söderberg. 2016. The Nobel factor: the prize in economics, social democracy, and the market turn (Princeton University Press: Princeton).

Samuelson, P. A. 1947. Foundations of Economic Analysis (Harvard University Press: Cambridge, MA).

———. 1948. Economics: An Introductory Analysis (McGraw-Hill: New York).

Samuelson, Paul A., and William D. Nordhaus. 2010. Economics (McGraw-Hill: New York).

Saunders, Phillip, and William B. Walstad. 1990. The Principles of economics course: a handbook for instructors (McGraw-Hill: New York).