Behavior change in economic and epidemic models

This post is for epidemiologists, to help them understand what economists mean when they say that epidemic models should be “forward-looking”. And it is for economists, whom I try to persuade that incorporating behavior change in an “ad-hoc” fashion is just fine. I argue that all differences boil down to the type of mathematics that the two disciplines typically use – economists are used to “fixed-point mathematics”, epidemiologists to “recursive mathematics”. All in all, behavior change is incorporated by default in economic models, although in a highly unrealistic way; by contrast, epidemiologists need to remind themselves to explicitly introduce behavior change, but when they do so they have the flexibility to make it much more realistic.

(When I talk about economists, I mean the vast majority of economic theorists, that is, economists that mostly reach their conclusions by writing mathematical models, as opposed to analyzing the data without a strong theoretical prior. When I talk about epidemiologists, I mean the ones that I know – mostly coming from the complex systems and network science communities. In this post I express strong opinions, but I try to be factual and fair; if you think I mischaracterized something, please let me know and I’ll be happy to revise.)

I would argue that modeling how humans respond to changes in their environment is as important in epidemiology as it is in economics. Recessions induce people to be cautious with spending out of fear that they could lose their job, in the same way that pandemics induce people to limit their social contacts out of fear of infection. Humans care about their health at least as much as they care about their economic well-being, devoting at least as much attention to nonpharmaceutical interventions by the government as to monetary and fiscal policies. Yet, economists have been obsessed with modeling how people react to government policy, while epidemiologists have paid comparatively little attention to it. Finding out why scientific conventions in the two fields became so different is a super interesting epistemological question.

Let’s start with how economists deal with behavior change. Suppose that households’ income is 100 and taxes are normally 30% of that, leaving disposable income at 70. Suddenly, the government cuts taxes to 15%, leaving households with a disposable income of 85. To repay the deficit it created, two years later the government raises taxes to 45%, reducing households’ disposable income to 55. It then repeats this policy every four years: in years 4, 8, 12, … it reduces taxes to 15% of income, and in years 6, 10, 14, … it raises them to 45% of income. Households want to smooth their consumption over time. If their behavior does not change, they fail, as their consumption is 85 in years 0-2, 4-6, 8-10, … and 55 in years 2-4, 6-8, 10-12, …

A simple way to model behavior change in this setting is to assume that households form expectations on government policy adaptively. After a few years, they learn to anticipate the pattern of taxes, and keep their consumption at 70. Indeed, they save when taxes are low, and spend those savings in periods when taxes are high. If government policy changes, say taxes increase or decrease every four years instead of every two, households take time to adapt to the new policy. If government policy changes frequently, households get their tax estimates wrong most of the time and systematically overconsume or underconsume.
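To make this concrete, here is a minimal sketch of adaptive expectations in this tax example. It is my own toy illustration: the gain parameter and the rule that households consume their expected disposable income are assumptions for exposition, not taken from any specific model.

```python
import numpy as np

def tax_rate(t):
    """Taxes alternate in two-year blocks: 15% in years 0-1, 4-5, ...; 45% in years 2-3, 6-7, ..."""
    return 0.15 if (t // 2) % 2 == 0 else 0.45

def simulate(T=200, gain=0.1, income=100.0):
    tau_hat = 0.15                 # initial belief about the tax rate
    assets = 0.0
    beliefs, consumption = [], []
    for t in range(T):
        c = income * (1.0 - tau_hat)               # consume expected disposable income
        disposable = income * (1.0 - tax_rate(t))
        assets += disposable - c                   # save when taxes are low, dissave when high
        beliefs.append(tau_hat)
        consumption.append(c)
        # adaptive expectations: nudge the belief toward the observed tax rate
        tau_hat += gain * (tax_rate(t) - tau_hat)
    return np.array(beliefs), np.array(consumption), assets
```

After a transient, the estimated tax rate hovers around the true average of 30%, so consumption stays close to 70; with a policy that changes faster than beliefs can adjust, households systematically over- or underconsume.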

Economists have never been happy about agents being systematically wrong. Since the 70s, models with adaptive expectations have been replaced by models with so-called “rational expectations”. Rational agents [1], the argument goes, would discover even hard-to-predict patterns in government policy, and replace naïve agents that are unable to do so. Rational agents are “forward-looking” in the sense that they know the equations that drive policy. Therefore, they are able to make consumption decisions in year t based on government policy in year t+1. What if these consumption decisions impact government policy, too?

A rational expectations equilibrium is an infinite sequence of consumption decisions and government policies that are consistent with one another. Finding the equilibrium amounts to finding a fixed point in the (infinite-dimensional) space of consumption and policy sequences. Discovering such a fixed point turns out to be easier if the modeler assumes that households maximize a utility function. Using the mathematics of intertemporal optimization and Bellman equations, the modeler can find these sequences. I call this approach “fixed-point mathematics”.

In contrast, the learning process based on adaptive expectations is simply a difference equation in which households update their beliefs based on past tax values. Variables in year t are only determined based on variables in years t-1, t-2, … I call this approach “recursive mathematics”, and argue that it makes it much easier to include realistic assumptions.

Let’s come to behavior change in epidemiological models. These review articles show that there are quite a few papers trying to incorporate behavior change in basic SIR models. This article from 1976 and some subsequent articles consider non-linear variations of the basic SIR model, capturing for example the idea that a high number of infected makes susceptible individuals more cautious, lowering the transmission rate. The same idea is applied in this paper modeling the COVID-19 pandemic. The authors assume that individuals reduce social contacts when the number of deaths rises; because deaths occur with a delay with respect to infections, this leads to oscillatory dynamics in the number of infections as individuals ease or tighten social distancing. This nice paper assumes that awareness about diseases is transmitted in a social network, but fades with time. Again, this has clear implications for disease dynamics.
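A minimal sketch of this death-feedback mechanism, in the spirit of the papers above but with my own purely illustrative parameter choices: transmission falls with recently observed daily deaths, which lag infections.

```python
def sir_with_behavior(k, beta0=0.3, gamma=0.1, mu=0.01, lag=14, days=400):
    """Discrete-time SIR where contacts fall when recent daily deaths are high.
    k is the strength of the behavioral response (k = 0 recovers plain SIR)."""
    S, I, R, D = 0.999, 0.001, 0.0, 0.0
    I_hist, I_path = [I], []
    for t in range(days):
        # deaths observed today reflect infections `lag` days ago
        observed_deaths = mu * gamma * I_hist[max(0, t - lag)]
        beta = beta0 / (1.0 + k * observed_deaths)   # fear of death lowers transmission
        new_inf = beta * S * I
        resolved = gamma * I
        S -= new_inf
        I += new_inf - resolved
        R += (1.0 - mu) * resolved
        D += mu * resolved
        I_hist.append(I)
        I_path.append(I)
    return I_path, S + I + R + D
```

Because the feedback operates on lagged deaths, a strong response can ease and tighten repeatedly, producing the oscillatory dynamics mentioned above; in any case the epidemic peak is lower than in the no-response baseline.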

All these ways to deal with behavior change are reminiscent of the adaptive expectations framework for learning about government fiscal policy. Indeed, these approaches are rooted in recursive mathematics, which epidemiologists coming from biology or physics are well versed in.

Of course, economists aren’t happy with these ways to deal with behavior change in epidemic models, just as they aren’t happy with adaptive expectations in economic models. Especially in the last few years, quite a few papers have come out that try to apply the rational expectations framework to epidemiological models. This paper, for example, assumes that individuals receive utility from social contacts, but utility goes down if they become infected. Thus, individuals trade off utility from contacts against infection risk [2]. “Rational” individuals know the underlying SIR model and so are able to perfectly forecast epidemic paths conditional on their level of social contacts (see these notes for a very accessible explanation of this point).

In the figure below, the solid line is the rational expectations equilibrium, in which the epidemic path optimally satisfies the contact-infection tradeoff. In other words, at all times individuals choose the number of social contacts that they have, taking the optimal level of risk. Now look at the dotted line (ignore the dashed one). This is what happens when individuals don’t respond at all, as in the baseline SIR model. Does this figure look familiar? It should: it really looks like the “flatten the curve” picture that helped convince several governments to impose lockdown measures in March 2020. Under these assumptions, though, lockdown was useless, as individuals would have flattened the curve by themselves. In some sense, this is the Swedish approach. I leave it to the reader to judge whether it was a good idea to provide policy recommendations based on this model.

[Figure: epidemic paths – rational expectations equilibrium (solid line) vs. no behavioral response (dotted line).]

In recent months, the number of epidemiology papers written by economists has exploded [3]. The nice thing about models with rational expectations is that you cannot forget about behavior change. In a sense, you get it for free with the build-up of the model. The bad thing is that, in my opinion, this type of behavior change is clearly unrealistic. Even if real people had been able to act optimally at the onset of the COVID-19 pandemic, the scarcity of data would have prevented them from properly forecasting the epidemic trajectory. And I have strong doubts about individuals acting optimally in any case. Thus, let me end this blog post with the following plea.

Epidemiologists, please remember to introduce behavior change in your models. To be fair, the models that had the most policy impact were clearly unrealistic in not including any behavioral response. (From looking at the report, I assume that the Imperial study by Ferguson et al. did not have it, but I am not sure, as I could not find a full description of the model.) But please do not include behavior change in the way that economists mean it. In this recent paper on the HIV epidemic, published in a top economics journal, individuals optimally decide whether to have protected sex, trading off the reduced pleasure from using condoms against infection risk. Policy recommendations are drawn from it. Aside from all-too-easy ironies about agents maximizing a utility function before having sex [4], this completely ignores realistic elements such as social norms, decentralized information traveling in social networks of infected people, altruism, etc. These are also key elements characterizing behavior change in the COVID-19 pandemic. These elements could certainly be included in “rational” models, but it is very hard when you have to respect intertemporal fixed-point conditions. Indeed, none of the at least 15 epidemiology papers by economists that I’ve seen so far departs from the baseline assumption of homogeneous households maximizing their own utility independently of social pressure. These papers will come, introducing one deviation from the baseline framework at a time, but most papers will provide policy recommendations based on the baseline. Instead, I hope epidemiologists will keep following the literature on behavior change that they already developed – see below.

Economists, if you must build epidemic models, please accept that you can introduce behavior change in a “reduced-form” way [5]. Some of you are already doing that. This nice paper builds essentially an agent-based model with spatial features, leading to realistic outcomes such as local herd immunity. The authors model behavior change simply by assuming that the transmission rate decreases linearly with the rate of infections. I don’t think they could find a rational expectations equilibrium that is fully consistent with the spatial structure, at least without oversimplifying other aspects of the model. This other paper, modeling behavior change essentially in the same way [5], considers infection spillovers across US counties, with a very accurate calibration based on county-level daily infection data. By contrast, papers that go full steam towards rational, forward-looking agents unavoidably ignore realistic aspects such as space. I understand that models with rational expectations are elegant and comparable, and that there is a wilderness of reduced-form behavior-change epidemic models that is difficult to navigate. But, at least for epidemic models, please explore various boundedly-rational, adaptive, “ad-hoc” ways to respond to infection risk: you have a universe of realistic assumptions at your fingertips.

And, if you enjoy being able to play with reduced-form assumptions without the fear of being shot down by a referee, please consider such assumptions for economic models, too. It is so interesting to explore the world of “backward-looking” reactions to the economic environment. In our COVID economics paper, for example, we have sophisticated consumption decisions that depend on “ad-hoc” estimates of permanent income. Having “smart” agents that react to their environment need not mean having optimizing, forward-looking agents in a rational expectations equilibrium.


Endnotes, or the corner of this blog post where I grumble about the state of economics – except in endnote 4, where I defend a practice in economics against a misplaced criticism.

[1] I hate this use of the word “rational”. Here it means two things: that agents are able to maximize an objective function, and that they correctly guess what every other agent and the entire economy do. While I agree that maximizing an objective function is consistent with the notion of rationality, I think that guessing what other agents do is a matter of prediction. Rationality and prediction can be in tension. Rational expectations are effectively “correct expectations”. But using the word “rational” is a great selling point, because it makes “boundedly rational” decision rules look suboptimal to many eyes.

[2] Many people argue that taking decisions under incentives and constraints is what defines “economics”. So epidemiological models in which agents maximize a utility function subject to infection risk are “economic-epidemiological models”. I really really dislike this use of the word “economics” and what it implies. Economics should be the study of the economy. Reaction to incentives under constraints should be a branch of psychology. Economics should be neutral to which psychological theory it uses to model human behavior. Using the word “economics” to mean reaction to incentives under constraints makes it sound like that is the only way to model human behavior to study the economy. It is not.

[3] Interestingly, I haven’t seen any epidemiologist write an economics paper. This is known as economic imperialism: with the hammer of rational choice, every other social science looks like a nail for an economist. After all, economics is the queen of the social sciences, no?

[4] Saying that it is unrealistic that individuals maximize utility somehow misses the point of rational choice theory. Maximizing utility is only a tool to make a point prediction about what individuals do given incentives and constraints. It is a very general way to say, for example, that out of fear of infection individuals will be more cautious. A boundedly rational rule could still be expressed as the optimization of a modified utility function. I personally find utility a convenient analytical device; my real problems with economic theory have to do with equilibrium.

[5] In the 70s, at the same time that economists started to care about rational expectations, they also started caring about “microfoundations”. Every decision rule needed to be rooted in first principles, namely so-called preferences, technology, and resource constraints. By contrast, a “reduced form” assumption is a decision rule that is just postulated. For example, deriving decreases in the contact rate of a SIR model from maximizing a logarithmic utility function is consistent with microfoundations; simply postulating that contacts decrease linearly with the number of infectious individuals is not. While microfoundations are laudable in principle, they are often a straitjacket in practice. Many economists start with reduced-form expressions, and then reverse-engineer microfoundations. This is an art; too often it does not matter if microfoundations are just made up without being based on empirical evidence, as long as they are consistent with axioms of decision theory.

This paper is exemplary of the class of epidemic models by economists. To capture behavioral response, the authors assume a non-linear form for the infection rate, as in the 1976 paper mentioned above. But they justify it from first principles of economic theory. “We assume that all agents receive stochastic shocks z that we interpret as economic needs. The shocks are drawn from a time-invariant distribution F(z) with support z ∈ [0, ∞). […] Facing risk of infection during an excursion, Susceptibles optimally choose to satisfy a given need z only if the benefit exceeds the expected cost of taking an excursion.” In practice, agents go shopping only if z is larger than an exogenously postulated level depending on the number of infectious individuals. By further assuming that the CDF of the stochastic shocks is z/(1+z), the authors obtain the functional form of the SIR model that they wanted. They will have fewer problems with referees, as they apparently comply with academic social norms, but I find it hard to see the value added of such a build-up, at least in this case. (That said, I think it is otherwise a pretty good paper, especially in the way it is calibrated to data.)
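For what it is worth, the arithmetic behind that functional form is simple. In my own notation, and assuming for illustration that the postulated threshold is proportional to the number of infectious individuals:

```latex
% An excursion with need z is taken only if z exceeds the threshold z^*(I).
% With F(z) = z/(1+z), the fraction of excursions still taken is
\Pr\big[z > z^*(I)\big] \;=\; 1 - F\big(z^*(I)\big)
  \;=\; 1 - \frac{z^*(I)}{1+z^*(I)} \;=\; \frac{1}{1+z^*(I)} .
% Taking z^*(I) = kI turns the standard incidence \beta S I into the
% saturating form
\frac{\beta S I}{1 + kI},
% i.e., the kind of non-linear incidence used since the 1976 paper.
```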

A complex systems take on the economics of COVID-19

Hibernating a complex system is a formidable task: no wonder that human hibernation is still science fiction! The economy, which is a system at least as complex as a human being, is no exception. Yet, perhaps for the first time, we face the problem of how to put the economy into hibernation in the least disruptive way. Indeed, to stop the spread of COVID-19, we need to reduce economic activity to a minimum for as long as necessary. At the same time, to avoid further suffering due to poverty and unemployment, we need to jump-start the global economy as soon as the pandemic is over. As this is a first, economic theory offers little guidance on how to effectively hibernate and then restart the economy. In this post, I will argue that Agent-Based Models are the best tool to address this issue, as they represent the complexity of the economy in a more faithful way than traditional equilibrium models.

If the economy weren’t a complex system, hibernating it could be straightforward. Imagine an economy with households-employees, firms, banks, a government and a central bank. Suppose that at a certain time all firms shut down (except essential ones such as health care, food, utilities, transport, telecommunications). Households stop buying all non-essential goods and services, firms stop producing and paying their employees. All loan and mortgage repayments are suspended and banks also shut down. The government and central bank provide all households with a basic income, which they use to consume the essential goods and services that are still produced [1]. Once the pandemic is over, firms reopen, households go back to work, banks’ loans and mortgages are repaid, fiscal and monetary policies go back to normal. If this scenario were plausible, we would face a few months of hibernation, and then the economy would restart as if nothing had happened. While the practical difficulties of implementing such a plan would be enormous, conceptually it would be quite simple.

Unfortunately, it’s not that easy. The economy is an interconnected web of work, trade and financial linkages, where beliefs, hysteresis and lags play a key role. Let me mention a few examples of what could happen when restrictions are lifted.

(1) Pessimistic expectations. Households may hold pessimistic expectations about the state of the economy, and thus reduce their consumption out of precautionary motives.

(2) Frictions in re-hiring. If firms lay off workers so that they can receive unemployment insurance, re-establishing work relations with them may be difficult, in particular if demand drops due to households’ pessimistic expectations.

(3) Lags in supply chains. In supply chain management, the devil is in the details. To have firms produce at full steam once restrictions are lifted would require an enormous coordination effort, both nationally and internationally. The currently asynchronous response to the health crisis across sectors and countries suggests that shipment of intermediate goods could face substantial delays. So, most manufacturing firms would have to remain closed for longer.

(4) Credit markets. Workers in those firms would remain unemployed for longer, depressing aggregate consumption and potentially being unable to repay their mortgages. Firms unable to operate would fail to repay their loans. Banks would not open new credit lines facing much higher risk of bankruptcy.

(5) Stock markets. Stock markets would crash due to a combination of pessimistic beliefs and real problems, leading to lower consumption through wealth effects and lower credit through the financial accelerator mechanism.

All these effects would be magnified if the economy was not put into hibernation in the first place.

None of these five effects is explicitly included in the model by McKibbin and Fernando that international organizations are using to estimate the economic impacts of the COVID-19 pandemic. This is a dynamic stochastic general equilibrium model with 24 countries and regions, 6 aggregate sectors, a representative household for each country and a government. In this model, households and firms behave optimally given their beliefs about current and future economic outcomes, and their beliefs are consistent with outcomes (rational expectations) [2]. McKibbin and Fernando model the impact of the pandemic along five dimensions: (a) reduction in labor supply due to illness, caregiving and school closures; (b) increase in aggregate equity risk premia; (c) disruptions to supply chains at the 6-sector level, averaged over a quarter (e.g., reduction in the supply of goods from “mining” to “durable manufacturing” over three months); (d) shocks to consumer demand, differentially across sectors, during the lockdown; (e) increase in government expenditure to compensate for economic losses. With these effects, economic activity would shrink by up to 10% in 2020, depending on countries and scenarios. By 2021, it would largely return to 2019 levels. [3]

I argue that effects (1) to (5) can potentially reduce output by much more, and more permanently, as they impact the structure of the economy at a more fundamental level than the sector-aggregate transitory shocks (a) to (e). It should not be surprising that many analyses and policy proposals (e.g., see here and here) are aimed at tackling the “microeconomic” effects (1) to (5), by discouraging layoffs, guaranteeing most of the income of workers, providing long-term loans at no interest to firms to help with cash flows, and providing liquidity to banks. Some proposals even consider having the government pay firms for maintenance costs, utilities, interest and other costs. Unsurprisingly, these policies are very expensive, so it would be ideal to make them as targeted as possible. At the same time, it would be great to know which mix of policies aimed at addressing effects (1) to (5) is most effective.

Unfortunately, it is impossible to use the McKibbin and Fernando model for this goal, as it lacks most of the heterogeneity, networks and detailed time structure that would be necessary. Mainstream economics has thought about all these effects, but one at a time, and often not embedded in a macroeconomic model. This is not a criticism: as mentioned at the beginning, this situation is new, and a model cannot include everything. However, I think that standard macroeconomic models will have a hard time including these effects, as respecting equilibrium conditions with heterogeneous households, firms and banks who have very different balance sheets is mathematically and computationally intractable. The analyses and policy proposals mentioned above come out of the intuition of economists, rather than from quantitative models.

Macroeconomic Agent-Based Models (ABMs) could include microeconomic effects much more easily, as they are simply solved recursively without the need to satisfy equilibrium constraints. For example, the ABM by Caiani et al. explicitly models balance sheets of firms and banks, so it could be used to test policies aimed at providing liquidity. The Keynes meets Schumpeter ABM developed in Sant’Anna by my new colleagues, in its various incarnations, can be used to test the effect of policies aimed at keeping workers employed in firms during the pandemic, at preventing pessimistic expectations, at avoiding financial crises induced by firm bankruptcies.  While the above are theoretical models that are not directly calibrated on real-world data (unlike McKibbin and Fernando), Poledna et al. are the first to build an ABM that is calibrated on real-world data and used for forecasting. As Poledna et al. represent the full population of households and firms, one could test policies that target individual firms depending on their liquidity shortages (link in Italian).

Results may come too late to inform the current policy debate, as policy makers need to make decisions in a few weeks. However, modeling the economic effects of the COVID-19 pandemic would be useful at least academically, for our understanding of the economy under extreme circumstances. It would also be useful in case there is a second wave of the COVID-19 pandemic and we need to hibernate the economy again. Finally, theoretical guidance on how to restart the economy after hibernation could be useful in the future should we need to put similar measures in place, e.g. in the face of climate risks.

I think that complexity economics and agent-based modeling, by being particularly good at capturing heterogeneity, networks, and non-linear dynamics, have a good shot at providing insights into the current economic crisis. Having an important role in the policy debate would be a great signal for the maturity of the field.


[1] This is clearly a caricature of the economy. It does not consider, for example, that many service workers can work effectively from remote, and that certain factories cannot completely shut down as some machinery can be damaged if switched off (e.g., industrial furnaces).

[2] McKibbin and Fernando claim that 70% of firms do not follow rational expectations, but rather “rule-of-thumb” behavior (see the description of the model in a 2018 paper). However, non-rational-expectations behavior here means adjusting slowly toward rational expectations (see Eqs. 13 and 14 in the appendix). Likewise, a fraction of households consume a fixed fraction of their income, irrespective of their expectations (Eq. 20). None of these modeling assumptions allows for animal spirits and pessimistic expectations.

[3] The authors admit that scenarios could be much worse, but it is unclear if their model can endogenously produce worse scenarios.

The usefulness of qualitative ABMs in economics: An example

I think it is uncontroversial that, compared to standard economic theory, Agent-Based Models (ABMs) describe human behavior and market dynamics more realistically [1]. This enhanced realism gives ABMs the potential to provide more accurate quantitative forecasts, once we figure out how to use them for prediction. However, if the goal of a model is more qualitative, for example to elucidate a theoretical mechanism, is realism useful?

Many economists would say that it is not, and that too much realism may even be counterproductive. For example, to expound his Nobel-winning theory of asymmetric information (the Market for Lemons), George Akerlof did not need boundedly rational agents and a detailed depiction of market exchanges. The standard setup, with rational utility-maximizing agents and market equilibrium, allowed a transparent exposition of the issue of asymmetric information. I think this is a fair point; however, which level of realism should be assumed in general qualitative models is mostly a matter of taste. If the modeler likes to highlight some economic force in a way that does not depend on people’s bounded rationality or on the nitty-gritty market details, then the assumptions of standard economic theory are okay. If the modeler wants instead to explain some phenomenon as the outcome of dynamically interacting boundedly-rational heterogeneous agents, an ABM may be a more natural choice. In some situations, it may be the best choice.

Our paper “Residential income segregation: A behavioral model of the housing market”, with Jean-Pierre Nadal and Annick Vignes, just published in JEBO (Journal of Economic Behavior and Organization), is in my opinion a good example. In this paper, we study the relations between income inequality, segregation and house prices, and explore which policies best deal with these issues. Most urban economists address these problems using spatial equilibrium models. These models are solved by assuming that individuals in each income category experience the same utility all over the city; the resulting prices determine segregation. In our ABM, agents behave according to fast-and-frugal heuristics, and individual interactions dynamically determine prices and segregation patterns.

First of all, our approach provides simpler narratives. For instance, to explain why the rich live in the fanciest locations of a city, spatial equilibrium models need to assume that the rich care about city amenities more than the poor do. In our ABM, this is simply explained by rich buyers bidding up the prices until the poor cannot afford buying there.
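A toy calculation (mine, far simpler than our actual ABM, with hypothetical numbers) shows the mechanism: with identical preferences over locations and a fast-and-frugal rule of bidding a fixed fraction of income, income sorting emerges from prices alone.

```python
# Hypothetical setup: 20 rich (income 100) and 20 poor (income 30) buyers
# compete for 40 locations ranked from most to least desirable; everyone
# shares the same ranking and bids half their income.
incomes = [100] * 20 + [30] * 20
bids = sorted((0.5 * y for y in incomes), reverse=True)

# Best locations go to the highest bidders, and the winning bid sets the price.
prices = list(bids)                  # prices[k] = price of the k-th best location
rich_in_center = sum(1 for p in prices[:20] if p == 50.0)
poor_priced_out = all(0.5 * 30 < p for p in prices[:20])
```

No assumption that the rich care more about amenities is needed: the poor are priced out of the center simply because their maximum bid (15) is below the price the rich bid up (50).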

Additionally, in our ABM it is straightforward to include as much heterogeneity as we need, as we do not have to solve for equilibrium. This is really useful, for example, to study the effect of income inequality on segregation. In accordance with empirical evidence, we find that stronger inequality increases segregation. However, it also decreases average prices over the city. Indeed, with stronger income inequality fewer buyers bid more, while most buyers bid less: the global effect is negative. Finally, we explore whether subsidies or taxes are better at mitigating income segregation. According to our ABM, subsidies are better, because they directly target the poor, increasing their purchasing power. Taxes instead hit the rich, but all benefits go to the middle class, with no effect on the poor. Modeling heterogeneity is key.

Finally, from a technical point of view, a standard critique from economists is that the reliance on numerical simulations makes ABMs less suited to clarify theoretical mechanisms. This is true to some extent. For example, the results in the paragraph above have been obtained by simulating the ABM [2]. Nonetheless, we did solve parts of our ABM analytically, giving insights on the causal mechanisms within the model and on non-linearities. Maths and ABMs are not incompatible; the maths used to solve ABMs is just a bit different from that of optimization and fixed-point analysis, more commonly used in economic theory.

In sum, I think that our paper is a good example of how even a qualitative ABM can be useful in economics, to provide more realistic narratives and to easily deal with heterogeneity. [3]


[1] Excluding some situations in which sophisticated agents interact strategically, such as Google auctions, where standard economic theory may be a more literal description of reality.

[2] To ensure full reproducibility of our results, we have put the code to generate all figures online on Zenodo, a CERN repository for open science. Sharing code is an increasingly common practice in the ABM community; hopefully it will soon become the norm.

[3] For a version of this post with the figures from the paper, you can take a look at the Twitter thread starting from this link.

Bank of England conference on big data and machine learning

I recently presented our work on big housing data at the Bank of England conference on “Modelling with Big Data and Machine Learning”. This has been a super-interesting conference where I learned a lot. Now that the slides of the workshop have been uploaded online, I thought I would write a blog post to share something of what I learned. I’ll also take this chance to write about how big data are related to this blog and have the potential to influence theoretical economic models.

The first session of the conference was about nowcasting. I particularly liked the talk by Xinyuan Li, a PhD student at London Business School. In her job market paper, she asks if Google information is useful for nowcasting even when other macroeconomic time series are available. Indeed, most papers showing that Google Trends data improve nowcasting accuracy of, say, the unemployment rate, do not check if this improvement still holds once the researcher considers series of payrolls, industrial production, capacity utilization, etc. Li combines macroeconomic and Google Trends time series in a state-of-the-art dynamic factor model and shows that Google Trends add little, if any, nowcasting accuracy. However, if one increases the number of associated Google Trends time series by using Google Correlate, a tool that finds the Google searches most correlated with a given series, nowcasting accuracy improves. So under some conditions Google information is indeed useful.

The first keynote speaker was Domenico Giannone, from the New York Fed. The question in his paper is whether predictive models of economic variables should be dense or sparse. In a sparse model only a few predictors are important, while in a dense model most predictors matter. To answer this question it is not enough to estimate a LASSO model and count how many coefficients “survive”: for LASSO to be well-specified, the correct model must be sparse. The key idea of the paper is to allow for sparsity, without assuming it, and let the data decide. This is done via a “spike and slab” model, which contains two elements: a parameter q that quantifies the probability that a coefficient is non-zero, and a parameter γ that shrinks the coefficients. The same predictive power can in principle be achieved by including only a few coefficients or by keeping all coefficients but shrinking them. In a Bayesian setting, if the posterior distribution is concentrated at high values of q (and so low values of γ), the model should be dense. This is what happens in the figure below in five out of six datasets in micro, macro and finance: yellow means a high value for the posterior, and only in the micro 1 case is it high for q ≈ 0. So in most cases a significant fraction of predictors is useful for forecasting, leading to an illusion of sparsity.
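A minimal numerical way to see why a dense-but-shrunk model can predict as well as a selected sparse one (again my own sketch, not the paper’s Bayesian spike-and-slab; the data-generating process and tuning constants are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = rng.normal(scale=0.15, size=p)   # dense truth: many small coefficients
y = X @ beta_true + rng.normal(size=n)
Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

def rmse(beta):
    return np.sqrt(np.mean((Xte @ beta - yte) ** 2))

# "dense" strategy: keep all coefficients but shrink them (ridge, closed form)
lam = 10.0
beta_ridge = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p), Xtr.T @ ytr)

# "sparse" strategy: fit OLS, then keep only the 5 largest coefficients
beta_ols, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
beta_sparse = np.zeros(p)
keep = np.argsort(-np.abs(beta_ols))[:5]
beta_sparse[keep] = beta_ols[keep]

print(rmse(beta_ridge), rmse(beta_sparse))  # shrinking everything predicts better here
```

When the truth is dense, discarding the many small coefficients throws away most of the signal, while shrinkage keeps it at the cost of some bias; this is the intuition behind letting the data choose between high q with strong shrinkage and genuine sparsity.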

The most thought-provoking speech in the panel discussion on “Opportunities and risks using big data and machine learning” was again by Giannone. What he said is best summarized in a paper that everyone interested in time series forecasting with economic big data should read. His main point is that macroeconomists have had to deal with “big data” since the birth of national accounting and business cycle measurement. State-of-the-art nowcasting and forecasting techniques that he jointly developed at the New York Fed include a multitude of time series at different frequencies, such as the ones shown in the figure below. These series are highly collinear and rise and fall together, as shown in the heat map in the horizontal plane. According to Giannone, apart from a few exceptions, big data coming from the internet have little chance of improving over carefully collected data from established national statistical agencies.

On a different note, in a following Methodology session I found out about a very interesting technique: Shapley regressions. Andreas Joseph from the Bank of England talked about the analogy between Shapley values in game theory and in machine learning. In cooperative game theory Shapley values quantify how much every player contributes to the collective payoff. A recent paper advanced the idea of applying the same formalism to machine learning. Players become predictors and Shapley values quantify the contribution of each predictor. While there exist several ways to quantify the importance of predictors in linear models, Shapley values extend nicely to potentially highly non-linear models. His colleague Marcus Buckmann presented an application to financial crisis forecasting, using data back to 1870 (see figure below). Interestingly, global and domestic credit contribute a lot to forecasting, while current account and broad money are not so important. In general, Shapley regressions might help with the interpretability of machine learning “black boxes”.
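The game-theoretic definition can be made concrete in a few lines. The sketch below (my own minimal version, not the Bank of England implementation) treats predictors as players and the R² of an OLS fit on each subset of predictors as the coalition payoff, then computes exact Shapley values by enumerating coalitions:

```python
import numpy as np
from itertools import combinations
from math import factorial

rng = np.random.default_rng(2)
n = 500
x1, x2, x3 = rng.normal(size=(3, n))
y = 2 * x1 + 1 * x2 + 0 * x3 + rng.normal(size=n)  # x3 is irrelevant
X = np.column_stack([x1, x2, x3])

def r2(cols):
    """Coalition payoff: R^2 of an OLS fit on the subset of predictors `cols`."""
    if not cols:
        return 0.0
    Z = X[:, list(cols)]
    beta, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    resid = y - y.mean() - Z @ beta
    return 1 - resid.var() / y.var()

p = X.shape[1]
shap = np.zeros(p)
for j in range(p):
    others = [k for k in range(p) if k != j]
    for size in range(p):
        for S in combinations(others, size):
            w = factorial(size) * factorial(p - size - 1) / factorial(p)
            shap[j] += w * (r2(S + (j,)) - r2(S))  # weighted marginal contribution

print(shap)  # x1 contributes most, x3 is close to zero
```

By the efficiency property, the Shapley values sum exactly to the R² of the full model; exact enumeration costs 2^p fits, which is why practical implementations for many predictors rely on sampling approximations.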

The last session I’d like to write about is the one on text analytics. Eleni Kalamara, a PhD student at King’s College London, presented her work on “making text count”. The general goal of her project is to test whether text from UK newspapers can proxy for sentiment and uncertainty and help predict macroeconomic variables. What I found most interesting was the comparison of 13 different dictionaries that turn text into sentiment and uncertainty indicators. Given such a proliferation of metrics, it seems very useful to compare them systematically. Another interesting talk in the same session was given by Paul Soto. In his job market paper “breaking the word bank”, he used Word2Vec to find words related to “uncertainty” in transcripts of banks’ conference calls. Word2Vec is a machine learning algorithm that finds a vector representation of words, taking into account both syntax and semantics. The figure below shows a two-dimensional projection of the vector space; words related to uncertainty are highlighted in yellow to the right. In his paper, Soto shows that banks with higher idiosyncratic uncertainty are less likely to give loans and more likely to increase their liquidity.
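To give a feel for the Word2Vec step: with trained embeddings one ranks vocabulary words by cosine similarity to a seed word such as “uncertainty”. The vectors below are hand-made toy values so the example is self-contained; a real application would load embeddings trained on the conference-call corpus (e.g. with gensim):

```python
import numpy as np

# toy 3-dimensional "embeddings" (invented for illustration)
emb = {
    "uncertainty": np.array([0.9, 0.1, 0.0]),
    "risk":        np.array([0.8, 0.2, 0.1]),
    "volatility":  np.array([0.7, 0.1, 0.2]),
    "loan":        np.array([0.0, 0.9, 0.3]),
    "deposit":     np.array([0.1, 0.8, 0.4]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

seed = emb["uncertainty"]
ranked = sorted((w for w in emb if w != "uncertainty"),
                key=lambda w: -cosine(emb[w], seed))
print(ranked)  # "risk" and "volatility" rank above "loan" and "deposit"
```

The words closest to the seed form the expanded uncertainty dictionary, which can then be counted in each transcript to build a bank-level uncertainty measure.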

There were a lot of other great talks. For example, Thomas Renault from Sorbonne showed how to detect financial market manipulation—in particular, pump and dump schemes—from Twitter. Luca Onorante from the European Central Bank demonstrated how to select the most relevant Google Trends in a context of Bayesian Model Averaging. Emanuele Ciani from the Bank of Italy built on a method first introduced by Jon Kleinberg to predict which agents would most benefit from policies, nicely combining ideas from prediction and from causal inference. For the many other interesting talks, please check the program or look at the slides.

So, what do big data have to do with complexity economics? This conference was purely about statistical models. My sense is that economic theorists are not responding to big data as much as empirical economists. True, heterogeneous agent models that use micro evidence to discriminate between different macro models producing the same macro outcomes are increasingly popular, but I don’t think they quite exploit the power of big data. On the other hand, large-scale “microsimulation” Agent-Based Models (ABMs) that are directly fed with data and solved forward without imposing equilibrium constraints seem more promising for exploiting the big data opportunities. A nice example of this is the ongoing work by Sebastian Poledna and coauthors on “Economic forecasting with an agent-based model”, exploiting comprehensive datasets for the Austrian economy. I plan to work on prediction with ABMs too during my postdoc funded by the James S. McDonnell Foundation — better out-of-sample forecasting performance would be a compelling justification for the enhanced realism of ABMs, which comes at the cost of other features considered important in mainstream theoretical models.

What is equilibrium in economics and when it is (not) useful

Equilibrium is the most widespread assumption across all subfields of economic theory. It means different things in different subfields, but all equilibrium concepts have a common meaning and purpose, with the same pros and cons. In this post I will argue that the different way in which equilibrium is treated is the distinctive feature of complexity economics, narrowly defined. (This post is mostly methodological. In this blog I will alternate actual research and methodology, always pointing to concrete examples when talking about methodology.)

What equilibrium means in economics

Before talking about what equilibrium is, it is useful to say what it is not. First, equilibrium does not necessarily imply stationarity. Indeed, many equilibrium concepts are dynamic and so for example it is possible to have chaotic equilibria. Conversely, stationary states need not be equilibria. Second, equilibrium in economics has nothing to do with statistically balanced flows, as used in many natural sciences. Third, equilibrium is independent of rationality, if rationality just means choosing the optimal action given available information (I will come back to this).

Equilibrium in economics can generally be thought of as a fixed point in function space, in which beliefs, planned actions and outcomes are mutually consistent. Let me elaborate. Unlike particles, economic agents can think, and so they have beliefs about the state of the economy. Behavioral rules, which can be fully or boundedly rational, map these beliefs into planned actions. Finally, the outcomes resulting from the combined actions of all agents may let each agent realize their planned actions, or may force some agents to choose actions that were not planned. Equilibrium outcomes are such that agents – at least on average – always choose the action that was planned given their beliefs and behavioral rules. In other words, beliefs and planned actions match outcomes.

A few examples should clarify this concept. Perhaps the most famous equilibrium is the Walrasian one. This is usually described as demand=supply, but there is more to it. In a market with one or more goods, agents have beliefs about the goods’ prices, and through some behavioral rule these beliefs determine the quantities that agents try to buy or sell (planned actions). Aggregating these quantities determines outcomes – the difference between demand and supply for each good. If there is excess demand or excess supply, some agents buy or sell more (or less) than they planned. In a Walrasian equilibrium, instead, agents hold beliefs about prices that make them buy or sell quantities that “clear” the market, i.e. demand=supply. In this way, all agents realize their plans.
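In a one-good example the equilibrium can be computed directly. The demand and supply curves below are illustrative assumptions of mine; bisection on excess demand finds the price at which all plans are realized:

```python
def demand(p):
    return 10.0 / p        # quantity buyers plan to buy at price p (assumed curve)

def supply(p):
    return 2.0 * p         # quantity sellers plan to sell at price p (assumed curve)

# bisect on excess demand: price rises when demand exceeds supply
lo, hi = 0.01, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if demand(mid) - supply(mid) > 0:
        lo = mid           # excess demand: the clearing price is higher
    else:
        hi = mid           # excess supply: the clearing price is lower
p_star = 0.5 * (lo + hi)
print(p_star)              # ≈ sqrt(5) ≈ 2.236, where demand(p*) = supply(p*)
```

At p* = √5 ≈ 2.236 planned purchases equal planned sales, so beliefs (the price), planned actions (quantities) and outcomes (realized trades) are mutually consistent.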

When strategic interactions are important, economists use game theory to model interdependent choices. In game theory players have beliefs about what their opponents will do and plan actions according to these beliefs and some behavioral rule. For example, if players are fully rational their behavioral rule is to select the action that maximizes their payoff given their beliefs. In a Nash equilibrium all players’ actions and beliefs are mutually consistent, so no agent can improve her payoff by switching to another action. But agents could be boundedly rational, playing also, with some smaller probability, actions that do not maximize their payoff. In this case it is for example possible to define a Quantal Response Equilibrium, in which again beliefs and planned actions match outcomes.
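A logit Quantal Response Equilibrium can be computed by damped fixed-point iteration: each player's mixed strategy is a softmax, with precision λ, of expected payoffs given the other's strategy. The sketch below uses symmetric Matching Pennies (my choice for illustration), where the QRE happens to coincide with the Nash mix of 1/2; with asymmetric payoffs the two would differ:

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoffs (zero-sum game)
lam = 2.0                                  # precision: lam -> infinity recovers Nash

def logit_response(u):
    """Boundedly rational mixed strategy: softmax of expected payoffs."""
    e = np.exp(lam * (u - u.max()))
    return e / e.sum()

p = np.array([0.8, 0.2])                   # row player's mix (arbitrary start)
q = np.array([0.5, 0.5])                   # column player's mix
for _ in range(2000):                      # damped iteration toward the fixed point
    p_new = logit_response(A @ q)          # row's quantal response to q
    q_new = logit_response(-A.T @ p)       # column's payoffs are -A
    p = 0.9 * p + 0.1 * p_new
    q = 0.9 * q + 0.1 * q_new

print(p, q)  # both converge to [0.5, 0.5]: beliefs and planned actions match outcomes
```

At the fixed point each player's noisy best response reproduces exactly the strategy the opponent believes she is playing, which is the mutual-consistency requirement described above.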

All the equilibrium concepts above are static, but it is straightforward to include a temporal dimension. (Beliefs over time are called expectations.) For example, in many macroeconomic models agents are forward-looking, e.g. they plan how much to consume in each future period of their life. These consumption decisions depend on future interest rates: in periods when interest rates are high, agents may prefer saving to consuming, so as to earn higher interest and afford higher consumption in the future. In a rational expectations equilibrium [1], the expectations for future interest rates are on average correct, so that again beliefs and planned actions (consumption decisions) match outcomes (interest rates). The assumption of rational expectations places no restriction on macroeconomic dynamics: they may reach a stationary state, but may also follow limit cycles or chaos.

Many more equilibrium concepts have been proposed in economics, and new ones keep being introduced, but all equilibria share the same rationale. For example, search and matching models are used to go beyond the Walrasian equilibrium concept. When applied to the labor market, these models assume that workers and firms engage in a costly search for a good match. This potentially difficult search process may explain involuntary unemployment, which could not be explained if labor demand=labor supply, as in Walrasian models. Yet the equilibrium of search and matching models can still be viewed in the same way as in the examples above. Workers have beliefs about future vacancy rates, which determine how difficult it is to find a job, and firms have beliefs about future unemployment rates, which determine how difficult it is to fill a vacancy. These beliefs determine the minimum wage each side will accept or offer, and how long to search (planned actions), typically following a rational behavioral rule. Finally, the combined decisions of workers and firms lead to outcomes, namely unemployment and vacancy rates. Again, in equilibrium beliefs, planned actions and outcomes are mutually consistent.

Pros and cons of equilibrium

If equilibrium has been a key concept in economic theory for more than a century, there must be some good reasons. The first, I think, is that modeling out-of-equilibrium behavior is harder than modeling equilibrium behavior. What is a realistic way to model what happens when beliefs, planned actions and outcomes are systematically inconsistent? (I give a possible answer at the end.) Equilibrium is then an incredibly useful simplification that makes it possible to abstract away from this problem. Economic theorists are often interested in adding ever more realistic features of how the economy works to their models, and by assuming equilibrium they keep these models tractable. In addition, contemporary economics is becoming more and more empirical. Many applied economists are happy to build a model that accounts for some property of the data, and building models with equilibrium is a transparent way to highlight the relevant theoretical mechanisms.

A second reason for the success of equilibrium is that time averages of beliefs, planned actions and outcomes may approximate equilibrium, which would then be a useful point prediction. An example that comes from my research is the game of Matching Pennies. If this game is played repeatedly, under some learning algorithms the players will never converge to a Nash equilibrium. However, it is easy to show that time averaged play is close to equilibrium behavior [2]. Something similar has been observed experimentally.
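The Matching Pennies point can be checked in a short simulation. The learning rule below is exponential weights (“Hedge”), which is my illustrative choice rather than necessarily the algorithms in the paper: period-by-period play keeps cycling, yet the time-averaged frequency of each action approaches the Nash mix of 1/2:

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player wins when actions match
eta = 0.1                                  # learning rate
rng = np.random.default_rng(3)
w1 = np.zeros(2)                           # row player's log-weights over actions
w2 = np.zeros(2)                           # column player's log-weights
plays = []

def mix(w):
    """Mixed strategy proportional to exp(weights)."""
    e = np.exp(w - w.max())
    return e / e.sum()

for t in range(20000):
    a1 = rng.choice(2, p=mix(w1))          # sample actions from current mixes
    a2 = rng.choice(2, p=mix(w2))
    w1 += eta * A[:, a2]                   # reward each row action vs realized a2
    w2 += eta * (-A[a1, :])                # column's payoffs are the negative of A
    plays.append(a1)

print(np.mean(plays))  # time-averaged play stays close to the Nash mix of 0.5
```

The per-period mixed strategies spiral around the equilibrium rather than converging to it, but the empirical frequencies average out the cycles, which is why equilibrium can still be a useful point prediction.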

A third reason is that by assuming equilibrium many variables are determined endogenously, that is, within the model. This makes it possible to consider non-trivial interdependencies, which economists call general equilibrium effects. An example comes from a nice paper by Cravino and Levchenko I recently read. In this paper the authors build an equilibrium model to investigate how much multinational corporate control affects international business cycle transmission. Assuming that parent companies are hit by a “shock” in one country, the authors look at aggregate effects on other countries where affiliate companies operate. Interestingly, the effect of the shocks is amplified if workers in the other countries are less willing to change how many hours they work. This general equilibrium effect is due to the interconnections between the goods and labor markets, captured by assuming equilibrium.

Despite the advantages of equilibrium assumptions, I think there are two main shortcomings. The first is that, in my opinion, little of what happens in the real world is precisely described by equilibrium. If one is interested in quantitative models, forcing the model to be in equilibrium is a strong mis-specification, even if some aspects of reality are reasonably approximated by equilibrium. Of course many equilibrium models are shown to fit the data, but most analyses are based on in-sample fitting and so could be prone to overfitting.

The second shortcoming is more practical. In some cases solving for equilibrium is technically challenging, and this prevents including some realistic assumptions and fully embracing heterogeneity. In the words of Kaplan and Violante in the Journal of Economic Perspectives, “Macroeconomics is about general equilibrium analysis. Dealing with distributions while at the same time respecting the aggregate consistency dictated by equilibrium conditions can be extremely challenging.” Kaplan and Violante propose macroeconomic models named HANK (Heterogeneous Agent New Keynesian), but the way they deal with heterogeneity is extremely stylized. In addition, I think that one of the main reasons why insights from behavioral economics are not routinely added to economic models – in macroeconomics but also in other fields – is that it is technically harder to solve for equilibrium if one departs from full rationality. However, heterogeneity and bounded rationality are key to building serious quantitative models (real people are heterogeneous and boundedly rational).

In sum, I think that assuming equilibrium can be really useful if models are used for qualitative reasoning, but it is an obstacle for quantitative analyses.

Complexity economics and equilibrium

My favorite narrow definition of complexity economics is making economic models that are not solved by assuming equilibrium. Rather, the modeler postulates the behavioral rules that each agent will follow and then just lets the system evolve over time. This is what happens in Agent-Based Models (ABMs), often represented as computer programs, or in Heterogeneous Agent Models (HAMs), typically represented as dynamical systems. In either case, beliefs and planned actions need not match outcomes. In some cases they might, perhaps after an initial transient, but this is not a primary concern of the modeler. I think that assuming equilibrium is a strong top-down constraint imposed on the system. ABMs and HAMs let outcomes emerge in a bottom-up way without imposing equilibrium constraints, which I think is more in line with a complex systems view of the economy.
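A minimal contrast with the Walrasian logic: instead of solving demand=supply, the modeler posits a behavioral price-adjustment rule and just runs the system forward. The curves and adjustment speed below are illustrative assumptions of mine; here the dynamics happen to settle near the clearing price, but nothing in the method imposes that:

```python
def demand(p): return 10.0 / p    # assumed demand curve
def supply(p): return 2.0 * p     # assumed supply curve

p = 10.0                          # start far from the market-clearing price
path = [p]
for _ in range(200):
    excess = demand(p) - supply(p)
    p = max(0.01, p + 0.05 * excess)   # bottom-up rule: raise price on excess demand
    path.append(p)

print(path[0], path[-1])  # converges toward the clearing price sqrt(5) ≈ 2.236
```

In richer ABMs each submodule (consumption, production, matching) gets a rule of this kind, and whether the system settles, cycles or crashes is an output of the model rather than an assumption.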

Is this useful? I think that the main advantages mirror the shortcomings of equilibrium models. Because one does not have to solve for equilibrium, it is very easy to include any form of heterogeneity and bounded rationality. If one also believes that out-of-equilibrium behavior better describes real economic agents, ABMs and HAMs seem more promising than equilibrium models for quantitative analyses. With the increasing availability of large datasets, we may be able to show this explicitly in the upcoming years. Another advantage is that not assuming equilibrium may lead to more natural descriptions of some problems: for an example, see the housing market ABM in my paper with Jean-Pierre Nadal and Annick Vignes.

The main problems of not assuming equilibrium also mirror the main advantages of doing so. First, being forced to model out-of-equilibrium behavior in each submodule of the model makes ABMs computationally very expensive. Second, it is easy to overlook interdependencies and to take too many variables as exogenous. Third, if beliefs, planned actions and outcomes are systematically inconsistent this may lead to mechanistic behavior that is as unrealistic as equilibrium. For example, in this very nice paper by Gualdi et al., for some parameter settings the ABM economy undergoes a sequence of booms and busts determined by consumers and firms systematically failing to coordinate on equilibrium prices (see first paragraph of Section 5.2). While this may be a realistic description of some economic crises, it seems unlikely that economic agents would systematically fail to recognize the discrepancy between beliefs and outcomes.

I think that the problem of what happens when beliefs and planned actions systematically do not match outcomes can be tackled in ABMs by modeling learning in a sensible way, perhaps including models of agents learning how to learn. In this way, agents may systematically be wrong but in many different ways, and so be unable to find the equilibrium. This view, I think, best describes economic reality.

In sum, complexity economics models are not solved by assuming equilibrium, and this also has its pros and cons. We will see over the upcoming years if the pros outweigh the cons.


I would like to thank everyone for your interest in this blog: my first post received way more online attention than I expected. Hope you will find my posts interesting! And please give me feedback — I wrote this post with the hope that a natural scientist with just a vague knowledge of economics could understand the basic idea; if you are such a scientist, let me know if I succeeded!


[1] I find the name “rational expectations” very misleading. Rational expectations equilibria have nothing to do with rationality, rather with the assumption that expectations match outcomes, which does not necessarily imply rationality.

[2] It is not always true that time averages correspond to equilibrium behavior. For example, if the players learn using fictitious play this is not true. And one always has to check ergodicity when using time averages.