The complexity economics view of substitution

A key problem in economics is how much firms can substitute specific inputs in order to carry out production. This question has become more relevant now than at any point in the past few decades. First, during the Covid-19 pandemic certain industries completely shut down, depriving downstream industries of critical inputs. Now, in light of the Ukraine-Russia war, a crucial policy question is how well European economies can get by without Russian gas. An influential policy report recently provided an answer to this question using a state-of-the-art general equilibrium model. The report contains an interesting discussion of substitutability. It distinguishes between the "engineering view" of substitutability at the very microeconomic level, according to which the lack of technologically necessary inputs can completely stop production in specific plants, and the "economic view", according to which there is more substitutability at the macroeconomic level, thanks to the ability of firms to find other suppliers, quickly change their production technologies to rely less on missing inputs, etc.

In this blog post I argue that these mechanisms of substitution are only implicitly included in general equilibrium models, and that the complexity economics view of substitution provides an alternative that merges the engineering and economic views by explicitly representing the engineering constraints in production and how they can be relaxed. I discuss this at a general level and then give a rudimentary example, drawing on some work that my coauthors and I did on what we call the "partially binding Leontief" production function.

Everyone agrees that the economy has a high capability to adapt. The policy report mentioned above gives some examples. When China implemented an export embargo on rare earths against Japan in 2010, Japanese firms found ways to reduce their use of rare earths in production or to substitute them altogether. When an oil pipeline reaching Germany was shut down due to contamination in 2019, German firms found ways to import oil through other channels. During World War II, when the US was cut off from its rubber supply, American firms developed synthetic rubber.

How are these substitution mechanisms captured in state-of-the-art general equilibrium models? The starting point is the nested CES production function. To make stuff (or to provide a service) you need several inputs. First, you need "primary factors". These are mainly labor, capital (e.g. machines) and land. Second, you need "intermediate goods", which are used up in production. For instance, if you want to produce steel, you use iron and electricity as intermediate goods, but you probably also use restaurant services such as the canteen where employees have lunch. The nested CES production function aggregates intermediate goods into a composite intermediate good, primary factors into a composite primary factor, and then combines the composite intermediate good and the composite primary factor to determine the level of production. These composites are also called "nests".
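In formulas, a two-level nested version might look as follows (a stylized sketch in my own notation, not the exact specification used in the report):

$$
Y=\left[\omega\, M^{\frac{\sigma-1}{\sigma}}+(1-\omega)\, V^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}},\qquad
M=\Big(\sum_j \alpha_j x_j^{\frac{\sigma_M-1}{\sigma_M}}\Big)^{\frac{\sigma_M}{\sigma_M-1}},\qquad
V=\Big(\sum_f \beta_f v_f^{\frac{\sigma_V-1}{\sigma_V}}\Big)^{\frac{\sigma_V}{\sigma_V-1}},
$$

where the $x_j$ are intermediate inputs, the $v_f$ are primary factors, $M$ and $V$ are the two nests, and $\sigma_M$, $\sigma_V$ and $\sigma$ are the elasticities of substitution within and between nests.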

CES stands for "Constant Elasticity of Substitution". A nice property of this production function is that you can specify a given value for the elasticity of substitution, which measures the ease with which firms can substitute their inputs. In typical calibrations, such as the one used in the policy report, you specify some values for the elasticity of substitution across intermediates, across primary factors, and between the intermediate composite and the primary composite. This means, among other things, that all intermediate inputs can be substituted equally well. Keeping with the example above, you can substitute iron and electricity with restaurant services. If the elasticity of substitution between intermediates and factors is sufficiently high, you may also easily substitute iron and electricity with labor or land. These elasticities of substitution are usually calibrated based on a combination of econometric studies and plausibility arguments.

This is what I mean when I say that the standard approach to substitution implicitly incorporates engineering constraints and the way they can be relaxed. By assuming a level of substitutability that is between zero and infinity, CES production functions implicitly capture the idea that substitution is possible but not that easy. One can play with the elasticity of substitution parameter depending on the question at hand. For instance, because technological change takes time, substitution is easier in the long run than in the short run, so it is reasonable to assume higher elasticities when one is concerned with long-run responses of the economy. (It is also reasonable to assume higher elasticities at higher aggregation levels.)

To be fair, CES production functions are the best you can do in a world with limited real-world data and little information about production processes, and if you want to work with mathematically elegant models that allow for easily interpretable results and closed-form solutions.

But the stakes are high, and policy makers need to be sure about the quantitative reliability of macroeconomic models! According to old-style "Leontief" models that do not allow substitution, a 30% reduction in gas imports could cause up to a 30% reduction in German GDP. By contrast, the policy report that uses a state-of-the-art general equilibrium model predicts up to a 3% reduction, because it assumes that gas can be substituted with other inputs. Whichever result turns out to be true (we may never know, if the gas import ban is not enacted), in my view policy decisions should be based on models that incorporate real-world data in a much more granular way, and whose predictive performance is tested on past episodes.

For instance, imagine a model of the economy with 629 firms, each representing a 4-digit NACE industry. As an example, consider industry 2420, "Manufacture of tubes, pipes, hollow profiles and related fittings, of steel". One could consult with engineers working in plants classified in this industry (and in all other 628 industries) and get detailed information about the physical processes that take place: which inputs are absolutely necessary and in which ratios, which alternatives can be considered for which inputs, and how long it would take to come up with replacements. This information would be incorporated into a dynamic model in which firms buy inputs, replenish or use up their inventories, and produce and sell outputs over time. In this way, users of the model can introduce a shock and explicitly see which input bottlenecks are created in the short run and in the long run and which cascading effects can occur (e.g. some industry stops production, and this leads other industries to stop production as well), and obtain a reliable estimate of the overall economic impact that is explicitly based on the industrial structure of the economy. The empirical performance of this model would be tested against several historical episodes in which some inputs became unavailable.

My coauthors and I built a model like that to assess the economic effects of the Covid-19 pandemic on the UK economy. We conducted a survey of industry analysts to determine which inputs were critical for production in a short time frame. We asked this question for each of 55 2-digit NACE industries, for each of 55 inputs. The answers are summarized in the figure below: each column denotes an industry and the rows denote its inputs. Blue indicates critical inputs, white non-critical inputs, and red the intermediate case of important inputs. The results indicate that the majority of elements are non-critical inputs (2,338 ratings), whereas only 477 industry-input pairs are rated as critical and 365 as important. Electricity and Gas (D35) is the input most frequently rated as critical in the production of other industries (by almost 60% of industries). Also frequently rated as critical are Land Transport (H49) and Telecommunications (J61). At the same time, many manufacturing industries (NACE codes starting with C) stand out as relying on a large number of critical inputs. For example, around 27% of the inputs to Manufacture of Coke and Refined Petroleum Products (C19), as well as to Manufacture of Chemicals (C20), are rated as critical.

[Figure: heatmap of survey responses. Columns are the 55 industries, rows their 55 inputs; blue cells mark critical inputs, red important inputs, white non-critical inputs.]

Using these data as a starting point, our model assumes that a partial lack of any critical input reduces production proportionally, because of fixed technological recipes (as in Leontief models), while a lack of non-critical inputs does not constrain production at all (which is why we call our production function "partially binding Leontief"). Our model is dynamic, i.e. it produces time series of production in all industries that take into account the depletion of inventory stocks and cascading effects. In the figure below one can see that when the lockdown starts, production in certain industries decreases immediately, while other industries stop production when they run out of critical inputs, and this makes yet other industries stop production as well.
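As a minimal sketch of the core mechanism (not our exact implementation; the names and the inventory bookkeeping are simplified for illustration):

```python
import numpy as np

def partially_binding_leontief(demand, inventories, recipe, critical):
    """Feasible production of one industry in one period.

    demand      : output the industry would like to produce
    inventories : current stock of each input (array)
    recipe      : input needed per unit of output, Leontief coefficients (array)
    critical    : boolean array, True where an input is critical
    """
    if not critical.any():
        return demand
    # Each critical input caps production at (stock held) / (need per unit),
    # exactly as in a Leontief technology; non-critical inputs never bind.
    caps = inventories[critical] / recipe[critical]
    return min(demand, caps.min())

# Toy example: three inputs, the first two critical, the second one scarce.
recipe      = np.array([0.5, 0.2, 0.1])
inventories = np.array([10.0, 1.0, 0.0])
critical    = np.array([True, True, False])
print(partially_binding_leontief(20.0, inventories, recipe, critical))  # -> 5.0
```

Iterating this over industries linked by a supply network, while inventories are drawn down and replenished, is what generates the cascades in the figure below.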

[Figure: simulated time series of production by industry after the start of the lockdown, showing immediate drops in directly hit industries, delayed stops as critical inputs run out, and cascading shutdowns further downstream.]

We also checked the predictive performance of our model, making an out-of-sample forecast of the reduction in UK GDP that turned out to be more accurate than competing estimates.

Our approach should be viewed as a first step in a line of research that tries to explicitly incorporate engineering and technological details to build a realistic macroeconomic model of the production side of the economy. First of all, our survey is still too aggregate: remaining at the level of 55 2-digit industries makes it impossible to pin down technological details that differ across firms within any 2-digit industry. Next, our model does not consider prices, which were not a major factor during the Covid-19 pandemic but look more important in the current situation. Moreover, we do not allow substitution with imported goods. On a higher level, imprecise assumptions at the micro level may well lead to large errors at the macro level (in machine learning parlance, models that follow our approach may have very little bias but a lot of variance, while general equilibrium models that use uniform elasticities of substitution may have more bias but less variance).

Despite these shortcomings, we consider our approach an example of the complexity economics view of substitution: we use "a lot" of data [1] to initialize industry-input-level substitutability in a non-equilibrium dynamic model that produces macroeconomic results by explicitly aggregating from technologically micro-founded production units, generating cascading effects and reliable forecasts (at least for the pandemic episode). There is still a lot to do, but I view this as an exciting area of research that the complexity economics community is already focusing on.

[Thanks to François Lafond and Doyne Farmer for comments.]

[1] Our survey of industry analysts produced 3025 data points, which can be compared to the 4 aggregate elasticities that are qualitatively calibrated from data in the policy report.

Behavior change in economic and epidemic models

This post is for epidemiologists, to help them understand what economists mean when they say that epidemic models should be "forward-looking". And it is for economists, to try and persuade them that incorporating behavior change in an "ad-hoc" fashion is just fine. I argue that all differences boil down to the type of mathematics that the two disciplines typically use – economists are used to "fixed-point mathematics", epidemiologists to "recursive mathematics". All in all, behavior change is incorporated by default in economic models, although in a highly unrealistic way; on the contrary, epidemiologists need to remind themselves to explicitly introduce behavior change, but when they do so they have the flexibility to make it much more realistic.

(When I talk about economists, I mean the vast majority of economic theorists, that is, economists that mostly reach their conclusions by writing mathematical models, as opposed to analyzing the data without a strong theoretical prior. When I talk about epidemiologists, I mean the ones that I know – mostly coming from the complex systems and network science communities. In this post I express strong opinions, but I try to be factual and fair; if you think I mischaracterized something, please let me know and I’ll be happy to revise.)

I would argue that modeling how humans respond to changes in their environment is as important in epidemiology as it is in economics. Recessions induce people to be cautious with spending out of fear that they could lose their job, in the same way that pandemics induce people to limit their social contacts out of fear of infection. Humans care about health as much as they care about their economic well-being, devoting at least as much attention to nonpharmaceutical interventions by the government as to monetary and fiscal policies. Yet, economists have been obsessed with modeling how people react to government policy, while epidemiologists have paid comparatively little attention to it. Finding out why scientific conventions in the two fields became so different is a super interesting epistemological question.

Let’s start with how economists deal with behavior change. Suppose that households’ income is 100 and taxes are normally 30% of that, leaving disposable income at 70. Suddenly, the government cuts taxes to 15%, leaving households with a disposable income of 85. To repay the deficit it created, two years later the government raises taxes to 45%, reducing households’ disposable income to 55. It then repeats this policy every four years: in years 4, 8, 12, … it reduces taxes to 15% of income, and in years 6, 10, 14, … it raises them to 45% of income. Households want to smooth their consumption over time. If their behavior does not change, they fail, as their consumption is 85 in years 0-2, 4-6, 8-10, … and 55 in years 2-4, 6-8, 10-12, …

A simple way to model behavior change in this setting is to assume that households adaptively form expectations about government policy. After a few years, they learn to anticipate the pattern of taxes, and keep their consumption at 70. Indeed, they save when taxes are low, and draw down their savings in periods when taxes are high. If government policy changes, say taxes increase or decrease every four years instead of every two years, households take time to adapt to the new policy. If government policy changes frequently, households get their tax estimates wrong most of the time and systematically overconsume or underconsume.
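A minimal simulation of this kind of learning might look as follows (my own stylized rule, cruder than actual pattern recognition: households simply track the average tax rate, which here settles around the correct long-run level of 30%):

```python
income, lam = 100.0, 0.2
tax_belief = 0.30        # households start from the historical tax rate
savings = 0.0

for year in range(12):
    tax = 0.15 if year % 4 < 2 else 0.45             # the alternating tax policy
    disposable = income * (1 - tax)
    planned = income * (1 - tax_belief)              # smooth consumption at believed average
    consumption = min(planned, disposable + savings) # cannot outspend available resources
    savings += disposable - consumption              # save in good years, dissave in bad ones
    tax_belief += lam * (tax - tax_belief)           # adaptive expectations update
    print(year, round(tax, 2), round(consumption, 1))
```

After the first few years consumption stays much closer to 70 than the no-learning swings between 85 and 55.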

Economists have never been happy about agents being systematically wrong. Since the 70s, models with adaptive expectations have been replaced by models with so-called “rational expectations”. Rational agents [1], the argument goes, would discover even hard-to-predict patterns in government policy, and replace naïve agents that are unable to do so. Rational agents are “forward-looking” in the sense that they know the equations that drive policy. Therefore, they are able to make consumption decisions in year t based on government policy in year t+1. What if these consumption decisions impact government policy, too?

A rational expectations equilibrium is an infinite sequence of consumption decisions and government policies that are consistent with one another. Finding the equilibrium amounts to finding a fixed point in the (infinite-dimensional) space of consumption and policy sequences. Discovering such a fixed point turns out to be easier if the modeler assumes that households maximize a utility function. Using the mathematics of intertemporal optimization and Bellman equations, the modeler can find these sequences. I call this approach “fixed-point mathematics”.
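Schematically (a textbook-style sketch, not any particular paper's model), the household solves a Bellman equation such as

$$
V(a_t) = \max_{c_t}\; u(c_t) + \beta\, \mathbb{E}_t\!\left[ V(a_{t+1}) \right],
\qquad a_{t+1} = (1+r)\big(a_t + (1-\tau_t)\,y - c_t\big),
$$

where $a_t$ are assets, $\tau_t$ is the tax rate and $\beta$ is a discount factor; the rational expectations requirement is that the tax process the household optimizes against is the one the government actually follows.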

In contrast, the learning process based on adaptive expectations is simply a difference equation in which households update their beliefs based on past tax values. Variables in year t are only determined based on variables in years t-1, t-2, … I call this approach “recursive mathematics”, and argue that it makes it much easier to include realistic assumptions.
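The classic adaptive-expectations rule is the one-line recursion

$$
\hat{\tau}_{t+1} = \hat{\tau}_t + \lambda\,(\tau_t - \hat{\tau}_t), \qquad 0 < \lambda \leq 1,
$$

where $\hat{\tau}_t$ is the believed tax rate and $\lambda$ governs how quickly beliefs chase realized taxes: no fixed point needs to be found, you just iterate forward.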

Let’s come to behavior change in epidemiological models. These review articles show that there are quite a few papers trying to incorporate behavior change into basic SIR models. This article from 1976 and some subsequent articles consider non-linear variations of the basic SIR model, capturing for example the idea that a high number of infected makes susceptible individuals more cautious, lowering the transmission rate. The same idea is applied in this paper modeling the COVID-19 pandemic. The authors assume that individuals reduce social contacts when the number of deaths rises; because deaths occur with a delay with respect to infections, this leads to oscillatory dynamics in the number of infections as individuals ease or tighten social distancing. This nice paper assumes that awareness of a disease is transmitted in a social network, but fades with time. Again, this has clear implications for disease dynamics.
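For concreteness, here is a minimal sketch of an SIR model with prevalence-dependent transmission, in the spirit of the 1976 paper (the functional form β/(1+αI) is just one illustrative choice, and the parameter values are made up):

```python
import numpy as np

def sir_with_caution(beta=0.3, gamma=0.1, alpha=20.0, days=300, dt=0.1):
    """SIR where the effective transmission rate falls as prevalence rises."""
    S, I, R = 0.99, 0.01, 0.0
    path = []
    for _ in range(int(days / dt)):
        beta_eff = beta / (1 + alpha * I)   # susceptibles grow cautious when I is high
        new_inf = beta_eff * S * I
        S, I, R = S - new_inf * dt, I + (new_inf - gamma * I) * dt, R + gamma * I * dt
        path.append(I)
    return np.array(path)

# Behavior change flattens the epidemic curve relative to the baseline (alpha = 0)
print(sir_with_caution().max(), sir_with_caution(alpha=0.0).max())
```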

All these ways to deal with behavior change are reminiscent of the adaptive expectations framework for learning about government fiscal policy. Indeed, these approaches are rooted in recursive mathematics, which epidemiologists coming from biology or physics are well versed in.

Of course, economists aren’t happy with these ways to deal with behavior change in epidemic models, just as they aren’t happy with adaptive expectations in economic models. Especially in the last few years, quite a few papers have come out trying to apply the rational expectations framework to epidemiological models. This paper, for example, assumes that individuals receive utility from social contacts, but utility goes down if they become infected. Thus, individuals trade off utility from contacts against infection risk [2]. “Rational” individuals know the underlying SIR model and so are able to perfectly forecast epidemic paths conditional on their level of social contacts (see these notes for a very accessible explanation of this point).
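Stripped to its essentials (my own stylized rendering, not the paper's exact setup), each individual chooses a path of contacts $c_t$ to solve

$$
\max_{\{c_t\}} \sum_t \delta^t \left[\, u(c_t) - \kappa \Pr(\text{infection at } t) \,\right],
\qquad \Pr(\text{infection at } t) \approx \beta\, c_t\, I_t,
$$

with the rational expectations twist that the epidemic path $I_t$ must itself be generated by everyone's equilibrium contact choices.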

In the figure below, the solid line is the rational expectations equilibrium, in which the epidemic path optimally satisfies the contact-infection tradeoff. In other words, at all times individuals choose the number of social contacts that they have, taking the optimal level of risk. Now look at the dotted line (ignore the dashed one). This is what happens when individuals don’t respond at all, as in the baseline SIR model. Does this figure look familiar? It should: it looks a lot like the “flatten the curve” picture that helped convince several governments to impose lockdown measures in March 2020. Under these assumptions, though, lockdown was useless, as individuals would have flattened the curve by themselves. In some sense, this is the Swedish approach. I leave it to the reader to judge whether it was a good idea to provide policy recommendations based on this model.

[Figure: epidemic curves. Solid line: rational expectations equilibrium; dotted line: baseline SIR with no behavioral response; the dashed line can be ignored.]

In the last few months, the number of epidemiology papers written by economists has exploded [3]. The nice thing about models with rational expectations is that you cannot forget about behavior change. In a sense, you get it for free with the build-up of the model. The bad thing is that, in my opinion, this type of behavior change is clearly unrealistic. Even if real people had been able to act optimally at the onset of the COVID-19 pandemic, the scarcity of data would have prevented them from properly forecasting the epidemic trajectory. And I have strong doubts about individuals acting optimally in any case. Thus, let me end this blog post with the following plea.

Epidemiologists, please remember to introduce behavior change in your models. To be fair, the models that had the most policy impact were clearly unrealistic in not including any behavioral response. (From looking at the report, I assume that the Imperial study by Ferguson et al. did not have it, but I am not sure, as I could not find a full description of the model.) But please do not include behavior change in the way that economists mean it. In this recent paper on the HIV epidemic, published in a top economics journal, individuals decide whether to have protected sex by optimally trading off the reduced pleasure from using condoms against infection risk. Policy recommendations are drawn from it. Aside from all-too-easy ironies about agents maximizing a utility function before having sex [4], this completely ignores realistic elements such as social norms, decentralized information traveling in the social networks of infected people, altruism, etc. These are also key elements characterizing behavior change in the COVID-19 pandemic. These elements could certainly be included in “rational” models, but it is very hard when you have to respect intertemporal fixed-point conditions. Indeed, none of the at least 15 epidemiology papers by economists that I have seen so far departs from the baseline assumption of homogeneous households maximizing their own utility independently of social pressure. Such papers will come, each including one deviation from the baseline framework at a time, but most papers will provide policy recommendations based on the baseline. Instead, I hope epidemiologists will keep following the literature on behavior change that they have already developed – see below.

Economists, if you must build epidemic models, please accept that you can introduce behavior change in a “reduced-form” way [5]. Some of you are already doing that. This nice paper builds essentially an agent-based model with spatial features, leading to realistic outcomes such as local herd immunity. The authors model behavior change simply by assuming that the transmission rate decreases linearly with the rate of infections. I don’t think they could find a rational expectations equilibrium that is fully consistent with the spatial structure, at least without oversimplifying other aspects of the model. This other paper, modeling behavior change in essentially the same way [5], considers infection spillovers across US counties, with a very accurate calibration based on county-level daily infection data. Instead, papers that go full steam towards rational, forward-looking agents unavoidably ignore realistic aspects such as space. I understand that models with rational expectations are elegant and comparable, and that there is a wilderness of reduced-form behavior-change epidemic models that is difficult to navigate. But, at least for epidemic models, please explore various boundedly-rational, adaptive, “ad-hoc” ways to respond to infection risk: you have a universe of realistic assumptions at your fingertips.

And, if you enjoy being able to play with reduced-form assumptions without the fear of being shot down by a referee, please consider such assumptions for economic models, too. It is so interesting to explore the world of “backward-looking” reactions to the economic environment. In our COVID economics paper, for example, we have sophisticated consumption decisions that depend on “ad-hoc” estimates of permanent income. Having “smart” agents that react to their environment should not almost always mean having optimizing, forward-looking agents in a rational expectations equilibrium.

____________________________________________

Endnotes, or the corner of this blog post where I grumble about the state of economics, except in endnote 4, where I defend practice in economics from a misplaced criticism.

[1] I hate this use of the word “rational”. Here it means two things: that agents are able to maximize an objective function, and that they correctly guess what every other agent and the entire economy do. While I agree that maximizing an objective function is consistent with the notion of rationality, I think that guessing what other agents do is a matter of prediction. Rationality and prediction can conflict. Rational expectations are effectively “correct expectations”. But using the word “rational” is a great selling point, because it makes “boundedly rational” decision rules look suboptimal to many eyes.

[2] Many people argue that taking decisions under incentives and constraints is what defines “economics”. So epidemiological models in which agents maximize a utility function subject to infection risk are “economic-epidemiological models”. I really really dislike this use of the word “economics” and what it implies. Economics should be the study of the economy. Reaction to incentives under constraints should be a branch of psychology. Economics should be neutral to which psychological theory it uses to model human behavior. Using the word “economics” to mean reaction to incentives under constraints makes it sound like that is the only way to model human behavior to study the economy. It is not.

[3] Interestingly, I haven’t seen any epidemiologist write an economics paper. This is known as economic imperialism: with the hammer of rational choice, every other social science looks like a nail for an economist. After all, economics is the queen of the social sciences, no?

[4] Saying that it is unrealistic that individuals maximize utility somehow misses the point of rational choice theory. Maximizing utility is only a tool to make a point prediction about what individuals do given incentives and constraints. It is a very general way to say, for example, that out of risk of infection individuals will be more cautious. A boundedly rational rule could still be expressed as the optimization of a modified utility function. I personally find utility a convenient analytical device; my real problems with economic theory have to do with equilibrium.

[5] In the 70s, at the same time that economists started to care about rational expectations, they also started caring about “microfoundations”. Every decision rule needed to be rooted in first principles, namely so-called preferences, technology, and resource constraints. By contrast, a “reduced-form” assumption is a decision rule that is just postulated. For example, deriving decreases in the contact rate of a SIR model from maximizing a logarithmic utility function is consistent with microfoundations; simply postulating that contacts decrease linearly with the number of infectious individuals is not. While microfoundations are laudable in principle, they are often a straitjacket in practice. Many economists start with reduced-form expressions, and then reverse-engineer microfoundations. This is an art; too often it does not matter if the microfoundations are just made up without being based on empirical evidence, as long as they are consistent with the axioms of decision theory.

This paper is exemplary of the class of epidemic models by economists. To capture behavioral response, the authors assume a non-linear form for the infection rate, as in the 1976 paper mentioned above. But they justify it from first principles of economic theory. “We assume that all agents receive stochastic shocks z that we interpret as economic needs. The shocks are drawn from a time-invariant distribution F(z) with support z ∈ [0, ∞). […] Facing risk of infection during an excursion, Susceptibles optimally choose to satisfy a given need z only if the benefit exceeds the expected cost of taking an excursion.” In practice, agents go shopping only if z is larger than an exogenously postulated threshold that depends on the number of infectious individuals. By further assuming that the CDF of the stochastic shocks is z/(1+z), the authors obtain the functional form of the SIR model that they wanted. They will have fewer problems with referees, as they apparently comply with academic social norms, but I find it hard to see the value added of such a build-up, at least in this case. (Note that I think that, other than that, it is a pretty good paper, especially in the way it is calibrated to data.)
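To see the reverse engineering at work (in my own notation): if the threshold need is $z^*_t$, increasing in the number of infectious individuals, the fraction of Susceptibles taking an excursion is

$$
\Pr(z > z^*_t) = 1 - F(z^*_t) = 1 - \frac{z^*_t}{1+z^*_t} = \frac{1}{1+z^*_t},
$$

so that with $z^*_t \propto I_t$ the infection term becomes proportional to $S_t I_t/(1+\alpha I_t)$, which is precisely the non-linear incidence of the 1976 paper.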

The usefulness of qualitative ABMs in economics: An example

I think it is uncontroversial that, compared to standard economic theory, Agent-Based Models (ABMs) describe human behavior and market dynamics more realistically [1]. This enhanced realism gives ABMs the potential to provide more accurate quantitative forecasts, once we figure out how to use them for prediction. However, if the goal of a model is more qualitative, for example to elucidate a theoretical mechanism, is realism useful?

Many economists would say that it is not, and that too much realism may even be counterproductive. For example, to present his Nobel-winning theory of asymmetric information (the Market for Lemons), George Akerlof did not need boundedly rational agents and a detailed depiction of market exchanges. The standard setup, with rational utility-maximizing agents and market equilibrium, allowed a transparent exposition of the issue of asymmetric information. I think this is a fair point; however, which level of realism should be assumed in general qualitative models is mostly a matter of taste. If the modeler likes to highlight some economic force in a way that does not depend on people’s bounded rationality or on nitty-gritty market details, then the assumptions of standard economic theory are okay. If the modeler wants instead to explain some phenomenon as the outcome of dynamically interacting boundedly-rational heterogeneous agents, an ABM may be a more natural choice. In some situations, it may be the best choice.

Our paper “Residential income segregation: A behavioral model of the housing market”, with Jean-Pierre Nadal and Annick Vignes, just published in JEBO (Journal of Economic Behavior & Organization), is in my opinion a good example. In this paper, we study the relations between income inequality, segregation and house prices, and explore which policies best deal with these issues. Most urban economists address these problems using spatial equilibrium models. These models are solved by assuming that individuals in each income category experience the same utility all over the city; the resulting prices determine segregation. In our ABM, agents behave according to fast-and-frugal heuristics, and individual interactions dynamically determine prices and segregation patterns.

First of all, our approach provides simpler narratives. For instance, to explain why the rich live in the fanciest locations of a city, spatial equilibrium models need to assume that the rich care about city amenities more than the poor do. In our ABM, this is simply explained by rich buyers bidding up prices until the poor can no longer afford to buy there.
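A deliberately crude toy version of this mechanism (invented here for illustration, much simpler than the model in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
incomes = rng.lognormal(0.0, 1.0, n)      # heterogeneous buyer incomes
quality = np.sort(rng.random(n))[::-1]    # location quality, best first

# Locations are sold from best to worst; every remaining buyer bids a fixed
# fraction of income, so the highest bidder wins and exits the market.
remaining = list(incomes)
occupant_income = []
for q in quality:
    i = int(np.argmax(remaining))          # the richest remaining buyer bids most
    occupant_income.append(remaining.pop(i))

# Emergent segregation: occupant income declines with location quality,
# with no assumption that the rich value amenities more than the poor do.
print(occupant_income[:3], occupant_income[-3:])
```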

Additionally, in our ABM it is straightforward to include as much heterogeneity as we need, as we do not have to solve for equilibrium. This is really useful, for example, to study the effect of income inequality on segregation. In accordance with empirical evidence, we find that stronger inequality increases segregation. However, it also decreases average prices over the city. Indeed, with stronger income inequality fewer buyers bid more, while most buyers bid less: the global effect is negative. Finally, we explore whether subsidies or taxes are better at mitigating income segregation. According to our ABM, subsidies are better, because they directly target the poor, increasing their purchasing power. Taxes instead hit the rich, but all benefits go to the middle class, with no effect on the poor. Modeling heterogeneity is key.

Finally, from a technical point of view, a standard critique from economists is that the reliance on numerical simulations makes ABMs less suited to clarifying theoretical mechanisms. This is true to some extent. For example, the results in the paragraph above have been obtained by simulating the ABM [2]. Nonetheless, we did solve parts of our ABM analytically, gaining insights into the causal mechanisms within the model and into its non-linearities. Maths and ABMs are not incompatible; the maths used to solve ABMs is just a bit different from the maths of optimization and fixed-point analysis more commonly used in economic theory.

In sum, I think that our paper is a good example of how even a qualitative ABM can be useful in economics, to provide more realistic narratives and to easily deal with heterogeneity. [3]

 

[1] Excluding some situations in which sophisticated agents interact strategically, such as Google auctions, where standard economic theory may be a more literal description of reality.

[2] To ensure full reproducibility of our results, we have put the code to generate all figures online on Zenodo, a CERN repository for open science. Sharing code is an increasingly common practice in the ABM community; hopefully it will become the norm soon.

[3] For a version of this post with the figures from the paper, you can take a look at the Twitter thread starting from this link.

What is equilibrium in economics and when is it (not) useful

Equilibrium is the most widespread assumption across all subfields of economic theory. It means different things in different subfields, but all equilibrium concepts have a common meaning and purpose, with the same pros and cons. In this post I will argue that the different way in which equilibrium is treated is the distinctive feature of complexity economics, narrowly defined. (This post is mostly methodological. In this blog I will alternate actual research and methodology, always pointing to concrete examples when talking about methodology.)

What equilibrium means in economics

Before talking about what equilibrium is, it is useful to say what it is not. First, equilibrium does not necessarily imply stationarity. Indeed, many equilibrium concepts are dynamic and so for example it is possible to have chaotic equilibria. Conversely, stationary states need not be equilibria. Second, equilibrium in economics has nothing to do with statistically balanced flows, as used in many natural sciences. Third, equilibrium is independent of rationality, if rationality just means choosing the optimal action given available information (I will come back to this).

Equilibrium in economics can generally be thought of as a fixed point in function space, in which beliefs, planned actions and outcomes are mutually consistent. Let me elaborate on this. Unlike particles, economic agents can think, and so they have beliefs about the states of the economy. Behavioral rules, which can be fully or boundedly rational, map these beliefs into planned actions. Finally, the outcomes resulting from the combined actions of all agents may let each agent realize their planned actions, or may force some agents to choose an action that was not planned. Equilibrium outcomes are such that agents – at least on average – always choose the action that was planned given their beliefs and behavioral rules. In other words, beliefs and planned actions match outcomes.
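Schematically, in my own notation: beliefs $b$ are mapped into planned actions $a = g(b)$ by behavioral rules, combined actions are mapped into outcomes $o = H(a)$, and an equilibrium is a belief profile that reproduces itself:

$$
b^* = H\big(g(b^*)\big).
$$

All the equilibrium concepts below are instances of this fixed-point condition.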

A few examples should clarify this concept. Perhaps the most famous equilibrium is the Walrasian one. This is usually described as demand=supply, but there is more to it. In a market with one or multiple goods, agents have beliefs about the goods’ prices, and through some behavioral rule these beliefs determine the quantities that agents try to buy or sell (planned actions). Aggregating these quantities determines the outcomes – the differences between demand and supply for each good. If there is excess demand or excess supply, some agents buy or sell more (or less) than they planned. Instead, in a Walrasian equilibrium agents have beliefs about prices that make them buy or sell quantities that “clear” the market, i.e. demand=supply. In this way, all agents realize their plans.
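In the fixed-point language above, a Walrasian equilibrium is a price vector $p^*$ at which excess demand vanishes:

$$
Z(p^*) \equiv D(p^*) - S(p^*) = 0,
$$

where $D$ and $S$ aggregate the quantities that agents plan to buy and sell at those prices.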

When strategic interactions are important, economists use game theory to model interdependent choices. In game theory, players have beliefs about what their opponents will do and plan actions according to these beliefs and some behavioral rule. For example, if players are fully rational, their behavioral rule is to select the action that maximizes their payoff given their beliefs. In a Nash equilibrium all players’ actions and beliefs are mutually consistent, so no agent can improve her payoff by switching to another action. But agents could be boundedly rational, also playing, with some smaller probability, actions that do not maximize their payoff. In this case it is possible, for example, to define a Quantal Response Equilibrium, in which again beliefs and planned actions match outcomes.
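In the standard logit version of Quantal Response Equilibrium, for instance, each player $i$ mixes over actions with probabilities

$$
\sigma_i(a) = \frac{\exp\!\big(\lambda\, \bar{u}_i(a, \sigma_{-i})\big)}{\sum_{a'} \exp\!\big(\lambda\, \bar{u}_i(a', \sigma_{-i})\big)},
$$

a fixed point in which the opponents' mixed strategies $\sigma_{-i}$ that players noisily best-respond to (with $\lambda$ indexing how close they are to exact payoff maximization) coincide with the strategies actually played.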

All equilibrium concepts above are static, but it is straightforward to include a temporal dimension. (Beliefs over time are called expectations.) For example, in many macroeconomic models agents are forward-looking, e.g. they plan how much to consume in each future period of their life. These consumption decisions depend on future interest rates: in periods when interest rates are high, agents may prefer saving to consuming, so as to earn higher interest and afford higher consumption in the future. In a rational expectations equilibrium [1], the expectations for future interest rates are on average correct, so that again beliefs and planned actions (consumption decisions) match outcomes (interest rates). The assumption of rational expectations places no restriction on macroeconomic dynamics: the economy may reach a stationary state, but it may also follow limit cycles or chaos.
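The canonical optimality condition behind this story is the consumption Euler equation,

$$
u'(c_t) = \beta\, \mathbb{E}_t\!\left[ (1 + r_{t+1})\, u'(c_{t+1}) \right],
$$

where rational expectations require that the expectation $\mathbb{E}_t$ is taken under the distribution of interest rates that the model itself generates.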

Many more equilibrium concepts have been proposed in economics, and new ones keep being introduced, but all equilibria share the same rationale. For example, search and matching models are used to go beyond the Walrasian equilibrium concept. When applied to the labor market, these models assume that workers and firms engage in a costly search for a good match. This potentially difficult search process may explain involuntary unemployment, which could not be explained if labor demand=labor supply, as in Walrasian models. Yet, the equilibrium of search and matching models can still be viewed in the same way as in the examples above. Workers have beliefs about future vacancy rates, which determine how difficult it is to find a job, and firms have beliefs about future unemployment rates, which determine how difficult it is to fill a vacancy. These beliefs determine the minimum wage to accept or offer, or how long to search (planned actions), typically following a rational behavioral rule. Finally, the combined decisions of workers and firms lead to outcomes, namely unemployment and vacancy rates. Again, in equilibrium beliefs, planned actions and outcomes are mutually consistent.
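The standard workhorse here is a Cobb-Douglas matching function: with $u$ unemployed workers and $v$ vacancies, the number of matches per period is

$$
m(u, v) = \mu\, u^{\alpha} v^{1-\alpha}, \qquad
f(\theta) = \frac{m}{u} = \mu\, \theta^{1-\alpha}, \qquad
q(\theta) = \frac{m}{v} = \mu\, \theta^{-\alpha},
$$

where $\theta = v/u$ is labor market tightness, $f$ the job-finding rate that workers care about, and $q$ the vacancy-filling rate that firms care about.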

Pros and cons of equilibrium

If equilibrium has been a key concept in economic theory for more than a century, there must be some good reasons. The first reason, I think, is that modeling out-of-equilibrium behavior is harder than modeling equilibrium behavior. What is a realistic way to model what happens when beliefs, planned actions and outcomes are systematically inconsistent? (I give a possible answer at the end.) Equilibrium is then an incredibly useful simplification that makes it possible to abstract away from this problem. Economic theorists are often interested in adding more and more realistic features about how the economy works to their models, and by assuming equilibrium they keep their models tractable. In addition, contemporary economics is becoming more and more empirical. Many applied economists are happy to just build a model that accounts for some property of the data, and building models with equilibrium is a transparent way to highlight the relevant theoretical mechanisms.

A second reason for the success of equilibrium is that time averages of beliefs, planned actions and outcomes may approximate equilibrium, which would then be a useful point prediction. An example comes from my research on the game of Matching Pennies. If this game is played repeatedly, under some learning algorithms the players will never converge to a Nash equilibrium. However, it is easy to show that time-averaged play is close to equilibrium behavior [2]. Something similar has been observed experimentally.
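Here is a minimal illustration under one particular learning rule (softmax choice over running payoff estimates, just one of many possibilities, and not the specific algorithm studied in my paper):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = np.zeros((2, 2))      # payoff estimates: one row per player, one column per action
T, lam, eta = 20000, 2.0, 0.1
freq = np.zeros(2)        # how often each player plays action 1

for t in range(T):
    # Softmax (logit) action choice from the current payoff estimates
    probs = np.exp(lam * Q)
    probs /= probs.sum(axis=1, keepdims=True)
    a = [rng.choice(2, p=probs[i]) for i in range(2)]
    payoff = 1 if a[0] == a[1] else -1        # the matcher wins on equal pennies
    for i, r in enumerate([payoff, -payoff]):
        Q[i, a[i]] += eta * (r - Q[i, a[i]])  # update only the chosen action
    freq += a

print(freq / T)   # time-averaged play stays close to the Nash mixing of (0.5, 0.5)
```

Instantaneous play keeps cycling, but the long-run frequencies hover around the equilibrium mix.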

A third reason is that by assuming equilibrium many variables are determined endogenously, that is, within the model. This makes it possible to consider non-trivial interdependencies, which economists call general equilibrium effects. An example comes from a nice paper by Cravino and Levchenko that I recently read. In this paper the authors build an equilibrium model to investigate how much multinational corporate control affects international business cycle transmission. Assuming that parent companies are hit by a “shock” in one country, the authors look at the aggregate effects on other countries where affiliate companies operate. Interestingly, the effect of the shocks is amplified if workers in the other countries are less willing to change how many hours they work. This general equilibrium effect is due to the interconnections between the goods and labor markets, captured by assuming equilibrium.

Despite the advantages of equilibrium assumptions, I think there are two main shortcomings. The first is that, in my opinion, little of what happens in the real world is precisely described by equilibrium. If one is interested in quantitative models, forcing the model to be in equilibrium is a strong misspecification, even if some aspects of reality are reasonably approximated by equilibrium. Of course, many equilibrium models are shown to fit the data, but most analyses are based on in-sample fitting and so could be prone to overfitting.

The second shortcoming is more practical. In some cases solving for equilibrium is technically challenging, and this prevents including some realistic assumptions and fully embracing heterogeneity. In the words of Kaplan and Violante in the Journal of Economic Perspectives: “Macroeconomics is about general equilibrium analysis. Dealing with distributions while at the same time respecting the aggregate consistency dictated by equilibrium conditions can be extremely challenging.” Kaplan and Violante propose macroeconomic models named HANK (Heterogeneous Agent New Keynesian), but the way they deal with heterogeneity is extremely stylized. In addition, I think that one of the main reasons why insights from behavioral economics are not routinely added to economic models – in macroeconomics but also in other fields – is that it is technically harder to solve for equilibrium if one departs from full rationality. However, heterogeneity and bounded rationality are key to building serious quantitative models (real people are heterogeneous and boundedly rational).

In sum, I think that assuming equilibrium can be really useful if models are used for qualitative reasoning, but it is an obstacle for quantitative analyses.

Complexity economics and equilibrium

My favorite narrow definition of complexity economics is making economic models that are not solved by assuming equilibrium. Rather, the modeler postulates the behavioral rules that each agent will follow and then just lets the system evolve over time. This is what happens in Agent-Based Models (ABMs), often represented as computer programs, or in Heterogeneous Agent Models (HAMs), typically represented as dynamical systems. In either case, beliefs and planned actions need not match outcomes. In some cases they might, perhaps after an initial transient, but this is not a primary concern of the modeler. I think that assuming equilibrium is a strong top-down constraint imposed on the system. ABMs and HAMs let outcomes emerge in a bottom-up way without imposing equilibrium constraints, which I think is more in line with a complex systems view of the economy.
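To make the contrast concrete, here is a tiny toy market in this spirit (entirely made up for illustration): firms produce based on adaptively learned price beliefs, the realized price comes from a demand curve, and no market-clearing condition is ever imposed; the system just evolves.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 50, 0.3
beliefs = rng.uniform(0.5, 1.5, n)     # heterogeneous initial price beliefs

for t in range(100):
    supply = beliefs.sum()             # planned actions: produce more if price expected high
    price = 100.0 / supply             # outcome: inverse demand sets the realized price
    beliefs += lam * (price - beliefs) # recursive belief update from the realized outcome
    if t % 20 == 0:
        print(t, round(price, 3))
```

Here beliefs happen to converge to the price they generate, i.e. to an equilibrium, after a transient; with other behavioral rules they need not.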

Is this useful? I think that the main advantages mirror the shortcomings of equilibrium models. Because one does not have to solve for equilibrium, it is very easy to include any form of heterogeneity and bounded rationality. If one also believes that out-of-equilibrium behavior better describes real economic agents, ABMs and HAMs seem more promising than equilibrium models for quantitative analyses. With the increasing availability of large datasets, we may be able to show this explicitly in the upcoming years. Another advantage is that not assuming equilibrium may lead to more natural descriptions of some problems: for an example, see the housing market ABM in my paper with Jean-Pierre Nadal and Annick Vignes.

The main problems of not assuming equilibrium also mirror the main advantages of doing so. First, being forced to model out-of-equilibrium behavior in each submodule of the model makes ABMs computationally very expensive. Second, it is easy to overlook interdependencies and to take too many variables as exogenous. Third, if beliefs, planned actions and outcomes are systematically inconsistent this may lead to mechanistic behavior that is as unrealistic as equilibrium. For example, in this very nice paper by Gualdi et al., for some parameter settings the ABM economy undergoes a sequence of booms and busts determined by consumers and firms systematically failing to coordinate on equilibrium prices (see first paragraph of Section 5.2). While this may be a realistic description of some economic crises, it seems unlikely that economic agents would systematically fail to recognize the discrepancy between beliefs and outcomes.

I think that the problem of what happens when beliefs and planned actions systematically do not match outcomes can be tackled in ABMs by modeling learning in a sensible way, perhaps including models of agents learning how to learn. In this way, agents may systematically be wrong but in many different ways, and so be unable to find the equilibrium. This view, I think, best describes economic reality.

In sum, complexity economics models are not solved by assuming equilibrium, and this also has its pros and cons. We will see over the upcoming years if the pros outweigh the cons.

_________________________________________

I would like to thank everyone for your interest in this blog: my first post received way more online attention than I expected. Hope you will find my posts interesting! And please give me feedback — I wrote this post with the hope that a natural scientist with just a vague knowledge of economics could understand the basic idea; if you are such a scientist, let me know if I succeeded!

_________________________________________

[1] I find the name “rational expectations” very misleading. Rational expectations equilibria have nothing to do with rationality, rather with the assumption that expectations match outcomes, which does not necessarily imply rationality.

[2] It is not always true that time averages correspond to equilibrium behavior. For example, if the players learn using fictitious play this is not true. And one always has to check ergodicity when using time averages.