Final report of my JSMF fellowship

I recently finished my JSMF fellowship at Sant’Anna Pisa and took up a position as a researcher at CENTAI. Submitting the final report to JSMF gave me a chance to reflect on the last three years, so I thought I would share it here as a somewhat autobiographical note for this blog, which I am unfortunately not maintaining as much as I would like.

_______________________________________________________________

Thanks to the unique flexibility that the JSMF fellowship provides, during my 3-year postdoctoral period I have been able to translate my relatively vague research proposal into a precise agenda. This happened through collaborations with people from several institutions, and while adapting my plans to contingent events such as the Covid-19 pandemic. In the spirit of the fellowship, I broadened my research areas from theoretical to data-driven modeling.

My JSMF project is titled “A theory of prediction for economic agent-based models”. Agent-based models (ABMs) are computational representations of complex systems in which individual agents interact following simple behavioral rules, and non-obvious patterns emerge from their interactions. I started working on ABMs during my PhD, with stylized models in game theory, but I wanted to make my ABMs more data-driven. Moreover, I was interested in understanding when ABMs outperform traditional economic models in out-of-sample prediction. In such a situation, ABMs would be a more accurate description of reality than traditional models, which would justify their use for both scientific understanding and policy advice and make them more widely accepted. In the bigger picture, in my view using ABMs rather than traditional models would lead to a better representation of the economy as a complex system.

My initial plans were mostly about the “theory” of data-driven ABMs. I wanted to understand the theoretical conditions under which ABMs outperform traditional models in forecasting, mostly by using synthetic data generated by other models as ground truth, before attempting to compare predictions in the real world. This has been the focus of Ref. [1]. A key problem for prediction with ABMs is that many variables of individual agents are unobserved, or latent. To the extent that the ABM dynamics depend on the values of these latent individual-level variables, the model cannot produce reliable forecasts unless they are estimated precisely. Using a specific ABM, we learned some lessons about the features that make it possible to estimate latent variables. First, the amount of stochasticity in the ABM must be commensurate with data availability: if the model has many stochastic elements whose outcomes cannot be observed, it is very difficult to write a computationally tractable likelihood function that enables precise estimates of latent variables. Second, the model should be continuous where possible, keeping discrete elements only when discreteness is crucial for the mechanisms that the ABM represents. This work [1] paves the way to a research agenda to make agent-based models “learnable”, i.e. such that their latent variables can be estimated from real-world data, and thus amenable to forecasting.

Further zooming into the structure of ABMs, I started thinking of ABMs as dynamic causal networks in which nodes correspond to the values of variables at a given time step and links indicate a dependency in the computer code describing the ABM. For instance, when z(t) <- x(t) + y(t-1), the causal network has links from x(t) and y(t-1) to z(t), for all time steps t. Together with colleagues at Sant’Anna Pisa (my JSMF host institution), we developed a programming language that makes it possible to automatically derive the dynamic causal network of an ABM from the model code as it is executed [8]. In work in progress, we are using this causal network formalism to classify simulation models into a taxonomy with several dimensions, such as how stochastic, discrete, interactive, heterogeneous, and complex a model is [9]. This taxonomy is not restricted to ABMs, as it extends to all simulation models. As a first goal, we hope to use this causal network to see which features make ABMs unique; we are the first to analyze this from a formal, rather than conceptual, point of view. This is useful in several respects. First, it makes it possible to open the “black box” of an ABM. Usual methods treat the ABM as a black box that takes inputs and produces outputs [2, 11], but since we know the code of the model it is a big waste of information not to use it. Moreover, building the causal network of the ABM makes it possible to replace certain parts, or the entire model, with machine learning metamodels, which may be easier to deal with in the presence of latent variables [1,10]. Putting all the pieces together, I am getting closer to understanding the theoretical conditions under which ABMs can be used for forecasting.
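To make the causal-network idea concrete, here is a minimal, purely illustrative Python sketch (this is not the language we developed in [8]; the tracer class and its interface are hypothetical): every assignment records links from the (variable, time) pairs it reads to the (variable, time) pair it writes.

```python
import networkx as nx

class CausalTracer:
    """Toy tracer: nodes are (variable, time) pairs, links are code dependencies."""

    def __init__(self):
        self.graph = nx.DiGraph()

    def assign(self, target, t, value, reads):
        """Record that `target` at time t depends on each (name, time) pair in `reads`."""
        for name, t_read in reads:
            self.graph.add_edge((name, t_read), (target, t))
        return value

# Example corresponding to z(t) <- x(t) + y(t-1), run for a few time steps.
tracer = CausalTracer()
x, y, z = [1.0], [2.0], [0.0]
for t in range(1, 5):
    x.append(tracer.assign("x", t, x[t-1] + 1.0, reads=[("x", t-1)]))
    y.append(tracer.assign("y", t, 0.5 * x[t], reads=[("x", t)]))
    z.append(tracer.assign("z", t, x[t] + y[t-1], reads=[("x", t), ("y", t-1)]))

print(tracer.graph.number_of_nodes(), "nodes,", tracer.graph.number_of_edges(), "links")
```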

My postdoctoral research also involved applications of data-driven ABMs. This line of work started with the Covid-19 pandemic. Stuck at home during the first lockdown, in spring 2020, my co-authors and I started working intensely on macroeconomic ABMs with an industry-to-industry input-output structure. We wanted to forecast the economic impacts of lockdowns, both at the aggregate level and across specific industries, and also to provide policy recommendations on which industries should be closed to minimize economic harm and maximize health benefits. In an early paper representing the UK economy [3], we predicted a 21.5% reduction in UK GDP in spring 2020, two months before the official release stating a 22.1% contraction. Our forecast was much closer to reality than the one by the Bank of England (around 30%) and the median forecast by several commercial banks and institutions (around 16%). At the policy level, we recommended against closing manufacturing industries, because they are relatively “upstream”, in the sense that they provide outputs that are necessary inputs for other industries, and at the same time do not involve as many face-to-face contacts as more “downstream” industries such as entertainment and food. Our paper was widely circulated within the UK Treasury.

Building on this paper [3], I proposed to join forces with a team of epidemiologists to create an integrated epidemic-economic ABM that could address the most debated epidemic-economic tradeoffs [6]. This project took more than two years, but in the end we came up with what we think is the most granular and data-driven epidemic-economic model to date. We represent the New York metropolitan area, simulating the mobility and consumption decisions of a synthetic population of half a million individuals that closely resembles the real population. Mobility decisions are obtained from a privacy-preserving algorithm that reads individual-level mobility traces extracted from cell phone data and associates them with synthetic individuals. Households may reduce consumption for fear of infection as the number of Covid-related deaths increases. We find several results, including that epidemic-economic tradeoffs affect low-income individuals more than high-income ones, and that mandated government closures involve tradeoffs similar to those of spontaneous consumption avoidance due to fear of infection.

A last line of research on the applications of data-driven ABMs, which is still in progress, is about housing markets and climate change [13]. We obtained access to a very rich dataset comprising all properties, transactions and mortgages in the Miami area, and we used it to initialize an ABM in which households buy and sell houses. In this ABM, buyers may avoid the properties most at risk of sea level rise, which brings down their value. We reproduce interesting patterns of climate gentrification, such as prices increasing in low-income but relatively high-elevation areas such as Little Haiti and decreasing in high-income, low-elevation areas such as Miami Beach, because of a flow of affluent individuals from low-elevation areas at high risk of sea level rise to safer ones. We plan to use this model to test several climate adaptation strategies and to study scenarios under different climate pathways.

Finally, in addition to the new research lines that the JSMF fellowship enabled me to start, I had the chance to conclude papers on game theory [4, 14] and business cycle synchronization [7].

Following such an ambitious and wide-ranging research agenda has only been possible thanks to the unique characteristics of the JSMF fellowship. First of all, I greatly benefited from interacting with multiple coauthors coming from different backgrounds, many of whom I met when searching for a host institution. As the JSMF fellowship is not tied to a host institution, the search period is a great opportunity for finding new collaborators. Second, because I did not have to adhere to a strict reporting schedule, I had the flexibility to adapt my research to the circumstances, such as the Covid-19 pandemic, enriching my initial plans. Third, the 3-year period of the fellowship gave me time and independence to build my own research agenda. Fourth, the generous research budget made it possible to organize an international workshop on the topics of my fellowship, better connecting with the community working on data-driven ABMs, which I believe is in a great position to bridge theoretical and empirical approaches across disciplines [12].

Thank you for giving me the opportunity to pursue this research line. In my view, many postdoctoral programs around the world should follow in the footsteps of the JSMF Postdoctoral Fellowship.

___________________________________________________

[1] Monti, C., Pangallo, M., Morales, G. D. F., & Bonchi, F. (2023a). On learning agent-based models from data. arXiv:2205.05052.

[2] Borgonovo, E., Pangallo, M., Rivkin, J., Rizzo, L., & Siggelkow, N. (2022). Sensitivity analysis of agent-based models: a new protocol. Computational and Mathematical Organization Theory, 28(1), 52-94.

[3] Pichler, A., Pangallo, M., del Rio-Chanona, R. M., Lafond, F., & Farmer, J. D. (2022). Forecasting the propagation of pandemic shocks with a dynamic input-output model. Journal of Economic Dynamics and Control, 144, 104527.

[4] Heinrich, T., Jang, Y., Mungo, L., Pangallo, M., Scott, A., Tarbush, B., & Wiese, S. (2023). Best-response dynamics, playing sequences, and convergence to equilibrium in random games. arXiv:2101.04222.

[5] Loberto, M., Luciani, A., & Pangallo, M. (2022). What Do Online Listings Tell Us about the Housing Market? International Journal of Central Banking, 18(4), 325.

[6] Pangallo, M., Aleta, A., Chanona, R., Pichler, A., Martín-Corral, D., Chinazzi, M., Lafond, F., Ajelli, M., Moro, E., Moreno, Y., Vespignani, A., & Farmer, J.D. (2022). The unequal effects of the health-economy tradeoff during the COVID-19 pandemic. arXiv:2212.03567.

[7] Pangallo, M. (2023). Synchronization of endogenous business cycles. arXiv:2002.06555.

[8] Comparing causal networks extracted from model code and derived from time series. In preparation.

[9] Quantifying features of simulation models directly from model code. In preparation.

[10] Learning agent-based models through graph neural networks. In preparation.

[11] Pangallo, M., Giachini, D., & Vandin, A. (2023c). Statistical Model Checking of NetLogo Models. In preparation.

[12] Prediction and understanding in data-driven agent-based models. In preparation.

[13] Pangallo, M., Coronese, M., Lamperti, F., Cervone, G., & Chiaromonte, F. (2023d). Climate change attitudes in a data-driven agent-based model of the housing market. In preparation.

[14] Best-response dynamics in multiplayer network games. In preparation.

The complexity economics view of substitution

A key problem in economics is how much firms can substitute specific inputs in order to carry out production. This issue has become more relevant now than in the past few decades. First, during the Covid-19 pandemic certain industries completely shut down, depriving some downstream industries of critical inputs. Now, a crucial policy question is how well European economies can get by without Russian gas in light of the Ukraine-Russia war. An influential policy report recently provided an answer to this question using a state-of-the-art general equilibrium model. The report contains an interesting discussion of substitutability. It distinguishes between the “engineering view” of substitutability at the very microeconomic level, according to which lack of inputs that are technologically necessary can completely stop production in specific plants, and the “economic view”, according to which there is more substitutability at the macroeconomic level thanks to the ability of firms to find other suppliers, quickly change their production technologies to rely less on missing inputs, and so on.

In this blog post I argue that these mechanisms of substitution are only implicitly included in general equilibrium models, and that the complexity economics view of substitution provides an alternative that merges the engineering and economic views by explicitly representing the engineering constraints in production and how they can be relaxed. I discuss this at a general level and then give a rudimentary example of what I mean by discussing some work that my coauthors and I did on what we call the “partially binding Leontief” production function.

Everyone agrees that the economy has a remarkable capacity to adapt. The policy report mentioned above gives some examples. When China implemented an export embargo on rare earths against Japan in 2010, Japanese firms found ways to use fewer rare earths in production or to substitute them altogether. When an oil pipeline reaching Germany was shut down due to oil contamination in 2019, German firms found ways to import oil through other channels. In World War II, because the US was cut off from its rubber supply, American firms developed synthetic rubber.

How are these substitution mechanisms captured in state-of-the-art general equilibrium models? The starting point is the nested CES production function. To make stuff (or to provide a service) you need several inputs. First, you need “primary factors”. These are mainly labor, capital (e.g. machines) and land. Second, you need “intermediate goods”, which are used up in production. For instance, if you want to produce steel, you use iron and electricity as intermediate goods, but you probably also use restaurant services such as the canteen where employees have lunch. The nested CES production function aggregates intermediate goods into a composite intermediate good, primary factors into a composite primary factor, and then combines the composite intermediate good and the composite primary factor to determine the level of production. These composites are also called “nests”.

CES stands for “Constant Elasticity of Substitution”. A nice property of this production function is that you can specify a given value for the elasticity of substitution, which measures the ease with which firms substitute their inputs. In typical calibrations, such as the one used in the policy report, you specify some values for the elasticity of substitution across intermediates, across primary factors, and between the intermediate composite and the primary composite. This means, among other things, that all intermediate inputs can be substituted equally well. Sticking with the example above, you can substitute iron and electricity with restaurant services. If the elasticity of substitution between intermediates and factors is sufficiently high, you may also easily substitute iron and electricity with labor or land. These elasticities of substitution are usually calibrated based on a combination of econometric studies and plausibility arguments.
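For concreteness, a standard two-level nested CES production function of the kind described above can be written as follows (the notation is generic, not the one used in the policy report):

$$
Y=\Big[\alpha\, M^{\frac{\sigma-1}{\sigma}}+(1-\alpha)\, V^{\frac{\sigma-1}{\sigma}}\Big]^{\frac{\sigma}{\sigma-1}},
\qquad
M=\Big[\sum_i \beta_i\, x_i^{\frac{\sigma_M-1}{\sigma_M}}\Big]^{\frac{\sigma_M}{\sigma_M-1}},
\qquad
V=\Big[\sum_f \gamma_f\, v_f^{\frac{\sigma_V-1}{\sigma_V}}\Big]^{\frac{\sigma_V}{\sigma_V-1}},
$$

where the $x_i$ are intermediate inputs, the $v_f$ are primary factors, $M$ and $V$ are the two composites (the “nests”), and $\sigma_M$, $\sigma_V$ and $\sigma$ are the elasticities of substitution across intermediates, across factors, and between the two composites. In the limit $\sigma \to 0$ one recovers the Leontief (no substitution) case, while $\sigma \to \infty$ means perfect substitutability.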

This is what I mean when I say that the standard approach to substitution implicitly incorporates engineering constraints and the way they can be relaxed. By assuming a level of substitutability that is in between zero and infinite, CES production functions implicitly capture the idea that substitution is possible but not so easy. One can play with the elasticity of substitution parameter depending on the question at hand. For instance, because technological change takes time, substitution is easier in the long run than in the short run, so it is reasonable to assume higher elasticities when one is concerned with long run responses of the economy. (It is also reasonable to assume higher elasticities at higher aggregation levels.)

To be fair, CES production functions are about the best you can do in a world with limited real-world data and little information about production processes, and if you want to work with mathematically elegant models that allow for easily interpretable results and closed-form solutions.

But the stakes are high and policy makers need to be sure about the quantitative reliability of macroeconomic models! According to old-style “Leontief” models that do not allow substitution, a 30% reduction in gas imports could cause up to a 30% reduction in German GDP. By contrast, the policy report that uses a state-of-the-art general equilibrium model predicts up to a 3% reduction, because it assumes that gas can be substituted by other inputs. Whichever result turns out to be closer to the truth (we may never know if a gas import ban is never enacted), in my view policy decisions should be based on models that incorporate real-world data in a much more granular way and whose predictive performance is tested on past episodes.

For instance, imagine a model of the economy with 629 firms, each representing a 4-digit NACE industry. As an example, consider industry 2420, “Manufacture of tubes, pipes, hollow profiles and related fittings, of steel”. One could consult engineers working in plants classified in this industry (and in all other 628 industries) and get detailed information about the physical processes that take place, which inputs are absolutely necessary and in which ratios, which alternatives can be considered for which inputs, and how long it would take to come up with replacements. This information would be incorporated into a dynamic model in which firms buy inputs, replenish or use up their inventories, and produce and sell outputs over time. In this way, users of the model can introduce a shock and explicitly see which input bottlenecks are created in the short run and in the long run and which cascading effects can occur (e.g. some industry stops production, and this leads other industries to stop production as well), and obtain a reliable estimate of the overall economic impact that is explicitly based on the industrial structure of the economy. The empirical performance of this model would be tested against several historical episodes in which some inputs became unavailable.
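To make the idea concrete, here is a deliberately stripped-down sketch in Python (three toy industries instead of 629, made-up coefficients): firms hold inventories of their inputs, production is capped by the scarcest critical input, and shutting one industry down propagates downstream as inventories run out.

```python
import numpy as np

# Toy 3-industry network (illustrative numbers, not the real 55- or 629-industry data).
# a[i, j] = units of input j needed per unit of output of industry i.
a = np.array([[0.0, 0.2, 0.1],
              [0.3, 0.0, 0.2],
              [0.2, 0.1, 0.0]])
critical = a > 0.15                      # which inputs count as critical (arbitrary threshold)

n = len(a)
demand = np.ones(n)                      # target output per step, normalized to 1
inventory = 5.0 * a * demand[:, None]    # start with about 5 steps' worth of each input
capacity = np.ones(n)                    # 1.0 = fully open, 0.0 = shut down

output = []
for t in range(30):
    if t == 5:
        capacity[1] = 0.0                # industry 1 is locked down at t = 5
    # Production is capped by capacity and by inventories of *critical* inputs only.
    x = np.minimum(demand, capacity)
    for i in range(n):
        for j in range(n):
            if critical[i, j]:
                x[i] = min(x[i], inventory[i, j] / a[i, j])
    # Inputs are used up; suppliers deliver in proportion to their own output.
    usage = a * x[:, None]
    deliveries = a * demand[:, None] * x[None, :]
    inventory = np.maximum(inventory + deliveries - usage, 0.0)
    output.append(x.copy())

print(np.round(np.array(output)[::5], 2))  # industries 0 and then 2 halt in a cascade
```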

My coauthors and I built a model like that to assess the economic effects of the Covid-19 pandemic on the UK economy. We conducted a survey of industry analysts to determine which inputs were critical for production in a short time frame. We asked this question for each of 55 2-digit NACE industries, for each of 55 inputs. See the answers in the figure below: a column denotes an industry and the corresponding rows its inputs. Blue colors indicate critical inputs, red and white non-critical inputs (red is an intermediate case of important inputs). Most industry-input pairs are rated as non-critical (2,338 ratings), whereas only 477 are rated as critical and 365 as important. Electricity and Gas (D35) is rated most frequently as a critical input in the production of other industries (by almost 60% of industries). Also frequently rated as critical are Land Transport (H49) and Telecommunications (J61). At the same time, many manufacturing industries (NACE codes starting with C) stand out as relying on a large number of critical inputs. For example, around 27% of the inputs to Manufacture of Coke and Refined Petroleum Products (C19) and to Manufacture of Chemicals (C20) are rated as critical.

[Figure: survey ratings of input criticality. Columns are the 55 NACE industries, rows their inputs; blue cells mark critical inputs, red cells important inputs, white cells non-critical inputs.]

Using these data as a starting point, our model assumes that partial lack of any critical input proportionally stops production because of fixed technological recipes (as in Leontief models), while lack of non-critical inputs does not stop production (which is why we call our production function the partially binding Leontief). Our model is dynamic, i.e. it produces time series of production in all industries that take into account depletion of inventory stocks and cascading effects. In the figure below one can see that when the lockdown starts, production in certain industries decreases immediately, while other industries stop production when they run out of critical inputs, and this makes other industries stop production as well.

[Figure: simulated production by industry over time after the start of the lockdown, showing immediate drops in some industries and delayed, cascading shutdowns in others as critical inputs run out.]
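In the spirit of the production function described above (my notation here is simplified relative to the paper), the production capacity of industry $i$ at time $t$ is constrained only by its critical inputs:

$$
x_i^{\text{cap}}(t) \;=\; \min\!\Big( L_i(t),\; \min_{j \in \mathcal{C}_i} \frac{S_{ij}(t)}{a_{ij}} \Big),
$$

where $L_i(t)$ is the production level allowed by available labor, $\mathcal{C}_i$ is the set of inputs rated as critical for industry $i$ in the survey, $S_{ij}(t)$ is industry $i$'s inventory of input $j$, and $a_{ij}$ is the amount of input $j$ needed per unit of output. Non-critical inputs do not enter the minimum, so their scarcity never halts production; actual production is then the smaller of this capacity and demand.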

We also checked the predictive performance of our model, making an out-of-sample forecast of the reduction in UK GDP that turned out to be more accurate than competing estimates.

Our approach should be viewed as a first step in a line of research that tries to explicitly incorporate engineering and technological details to build a realistic macroeconomic model of the production side of the economy. First of all, our survey is still too aggregate: remaining at the level of 55 2-digit industries makes it impossible to pin down technological details that differ across firms within any 2-digit industry. Next, our model does not consider prices, which were not a major factor during the Covid-19 pandemic but appear more important in the current situation. Moreover, we do not allow substitution with imported goods. On a higher level, imprecise assumptions at the micro level may well lead to large errors at the macro level (in machine learning parlance, models that follow our approach may have very little bias but a lot of variance, while general equilibrium models that use uniform elasticities of substitution may have more bias but less variance).

Despite these shortcomings, we consider our approach an example of the complexity economics view of substitution: we use “a lot” of data [1] to initialize industry-input-level substitutability in a non-equilibrium dynamic model that produces macroeconomic results by explicitly aggregating from technologically micro-founded production units, generating cascading effects and reliable forecasts (at least for the pandemic episode). There is still a lot to do, but I view this as an exciting area of research that the complexity economics community is already focusing on.

[Thanks to François Lafond and Doyne Farmer for comments.]

[1] Our survey of industry analysts produced 3025 data points, which can be compared to the 4 aggregate elasticities that are qualitatively calibrated from data in the policy report.

Behavior change in economic and epidemic models

This post is for epidemiologists, to help them understand what economists mean when they say that epidemic models should be “forward-looking”. And it is for economists, to try to persuade them that incorporating behavior change in an “ad-hoc” fashion is just fine. I argue that all differences boil down to the type of mathematics that the two disciplines typically use – economists are used to “fixed-point mathematics”, epidemiologists to “recursive mathematics”. All in all, behavior change is incorporated by default in economic models, although in a highly unrealistic way; on the contrary, epidemiologists need to remind themselves to explicitly introduce behavior change, but when they do so they have the flexibility to make it much more realistic.

(When I talk about economists, I mean the vast majority of economic theorists, that is, economists that mostly reach their conclusions by writing mathematical models, as opposed to analyzing the data without a strong theoretical prior. When I talk about epidemiologists, I mean the ones that I know – mostly coming from the complex systems and network science communities. In this post I express strong opinions, but I try to be factual and fair; if you think I mischaracterized something, please let me know and I’ll be happy to revise.)

I would argue that modeling how humans respond to changes in their environment is as important in epidemiology as it is in economics. Recessions induce people to be cautious with spending out of fear that they could lose their job, in the same way that pandemics induce people to limit their social contacts out of fear of infection. Humans care about health as much as they care about their economic well-being, devoting at least as much attention to nonpharmaceutical interventions by the government as to monetary and fiscal policies. Yet economists have been obsessed with modeling how people react to government policy, while epidemiologists have paid comparatively less attention to it. Finding out why scientific conventions in the two fields became so different is a super interesting epistemological question.

Let’s start with how economists deal with behavior change. Suppose that households’ income is 100 and taxes are normally 30% of that, so disposable income is 70. Suddenly, the government cuts taxes to 15%, leaving households with a disposable income of 85. To repay the deficit it created, two years later the government raises taxes to 45%, reducing households’ disposable income to 55. It then repeats this policy every four years: in years 4, 8, 12, … it reduces taxes to 15% of income, and in years 6, 10, 14, … it raises them to 45% of income. Households want to smooth their consumption over time. If their behavior does not change, they fail, as their consumption is 85 in years 0-2, 4-6, 8-10, … and 55 in years 2-4, 6-8, 10-12, …

A simple way to model behavior change in this setting is to assume that households adaptively form expectations about government policy. After a few years, they learn to anticipate the pattern of taxes and keep their consumption at 70: they save when taxes are low and draw on their savings when taxes are high. If government policy changes, say taxes increase or decrease every four years instead of every two years, households take time to adapt to the new policy. If government policy changes frequently, households get their tax estimates wrong most of the time and systematically overconsume or underconsume.

Economists have never been happy about agents being systematically wrong. Since the 70s, models with adaptive expectations have been replaced by models with so-called “rational expectations”. Rational agents [1], the argument goes, would discover even hard-to-predict patterns in government policy, and replace naïve agents that are unable to do so. Rational agents are “forward-looking” in the sense that they know the equations that drive policy. Therefore, they are able to make consumption decisions in year t based on government policy in year t+1. What if these consumption decisions impact government policy, too?

A rational expectations equilibrium is an infinite sequence of consumption decisions and government policies that are consistent with one another. Finding the equilibrium amounts to finding a fixed point in the (infinite-dimensional) space of consumption and policy sequences. Discovering such a fixed point turns out to be easier if the modeler assumes that households maximize a utility function. Using the mathematics of intertemporal optimization and Bellman equations, the modeler can find these sequences. I call this approach “fixed-point mathematics”.

In contrast, the learning process based on adaptive expectations is simply a difference equation in which households update their beliefs based on past tax values. Variables in year t are only determined based on variables in years t-1, t-2, … I call this approach “recursive mathematics”, and argue that it makes it much easier to include realistic assumptions.
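As a minimal illustration of what I mean by recursive mathematics, here is a sketch of the tax example above in Python (the smoothing rule and its 4-year window are illustrative choices of mine; negative savings means borrowing):

```python
from collections import deque

# Backward-looking consumption smoothing: households consume their estimate of
# "permanent" disposable income, here a moving average of the last four years observed.
income = 100.0
history = deque(maxlen=4)      # last four observed disposable incomes
savings = 0.0

for year in range(12):
    tax = 0.15 if (year % 4) < 2 else 0.45       # 15% in years 0-1, 4-5, ...; 45% otherwise
    disposable = income * (1 - tax)
    history.append(disposable)
    consumption = sum(history) / len(history)    # adaptive estimate of permanent income
    savings += disposable - consumption          # save in low-tax years, dissave in high-tax years
    print(year, round(disposable, 1), round(consumption, 1), round(savings, 1))

# After a few years consumption settles at 70; every variable in year t depends
# only on years t-1, t-2, ... -- no fixed-point condition is ever imposed.
```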

Let’s come to behavior change in epidemiological models. These review articles show that there are quite a few papers trying to incorporate behavior change into basic SIR models. This article from 1976 and some subsequent articles consider non-linear variations of the basic SIR model, capturing for example the idea that a high number of infected individuals makes susceptible individuals more cautious, lowering the transmission rate. The same idea is applied in this paper modeling the COVID-19 pandemic. The authors assume that individuals reduce social contacts when the number of deaths rises; because deaths occur with a delay with respect to infections, this leads to oscillatory dynamics in the number of infections as individuals ease or tighten social distancing. This nice paper assumes that awareness about diseases is transmitted in a social network, but fades with time. Again, this has clear implications for disease dynamics.
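As a toy illustration of the death-feedback mechanism just described (all parameters are made up, not taken from any of the papers above), here is a discrete-time SIR in Python where the transmission rate falls as lagged deaths rise:

```python
from collections import deque

# SIR with behavior change: transmission drops when recent deaths are high.
# Deaths lag infections, so caution arrives late and infections can come in waves.
N = 1_000_000
S, I = N - 100.0, 100.0
beta0, gamma, ifr, lag = 0.30, 0.10, 0.01, 18   # baseline transmission, recovery, fatality, lag (days)
fear = 50.0                                     # deaths/day at which contacts are halved
pipeline = deque([0.0] * lag, maxlen=lag)       # new infections waiting to resolve

history = []
for day in range(500):
    daily_deaths = ifr * pipeline[0]            # infections from `lag` days ago resolve today
    beta = beta0 / (1.0 + daily_deaths / fear)  # behavioral response: fewer contacts when deaths rise
    new_inf = beta * S * I / N
    S, I = S - new_inf, I + new_inf - gamma * I
    pipeline.append(new_inf)
    history.append(I)

# With these made-up parameters the delayed feedback typically produces successive
# waves of infection rather than a single peak, as individuals ease and tighten distancing.
```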

All these ways to deal with behavior change are reminiscent of the adaptive expectations framework of learning about government fiscal policy. Indeed, these approaches are rooted in recursive mathematics, which epidemiologists coming from biology or physics are well versed in.

Of course, economists aren’t happy with these ways to deal with behavior change in epidemic models, just as they aren’t happy with adaptive expectations in economic models. Especially in the last few years, quite a few papers have come out that try to apply the rational expectations framework to epidemiological models. This paper, for example, assumes that individuals receive utility from social contacts, but utility goes down if they become infected. Thus, individuals trade off utility from contacts against infection risk [2]. “Rational” individuals know the underlying SIR model and so are able to perfectly forecast epidemic paths conditional on their level of social contacts (see these notes for a very accessible explanation of this point).

In the figure below, the solid line is the rational expectations equilibrium, in which the epidemic path optimally satisfies the contact-infection tradeoff. In other words, at all times individuals choose the number of social contacts that they have, taking the optimal level of risk. Now look at the dotted line (ignore the dashed one). This is what happens when individuals don’t respond at all, as in the baseline SIR model. Does this figure look familiar? It should: it really looks like the “flatten the curve” picture that contributed to convincing several governments to impose lockdown measures in March 2020. Under these assumptions, though, lockdown was useless, as individuals would have flattened the curve by themselves. In some sense, this is the Swedish approach. I leave it to the reader to judge whether it was a good idea to provide policy recommendations based on this model.

[Figure: epidemic curves of infections over time. Solid line: rational expectations equilibrium; dotted line: no behavioral response (baseline SIR); dashed line: ignored in the discussion.]

In the last few months, the number of epidemiology papers written by economists has exploded [3]. The nice thing about models with rational expectations is that you cannot forget about behavior change. In a sense, you get it for free with the build-up of the model. The bad thing is that, in my opinion, this type of behavior change is clearly unrealistic. Even if real people had been able to act optimally at the onset of the COVID-19 pandemic, the scarcity of data would have prevented them from properly forecasting the epidemic trajectory. And I have strong doubts about individuals acting optimally in any case. Thus, let me end this blog post with the following plea.

Epidemiologists, please remember to introduce behavior change in your models. To be fair, the models that had the most policy impact were clearly unrealistic in not including any behavioral response. (From looking at the report, I assume that the Imperial study by Ferguson et al. did not have it, but I am not sure, as I could not find a full description of the model.) But please do not include behavior change in the way that economists mean it. In this recent paper on the HIV epidemic, published in a top economics journal, individuals decide whether to have protected sex by optimally trading off the reduced pleasure from using condoms against infection risk. Policy recommendations are drawn from it. Aside from all-too-easy ironies about agents maximizing a utility function before having sex [4], this completely ignores realistic elements such as social norms, decentralized information traveling in social networks of infected people, altruism, etc. These are also key elements characterizing behavior change in the COVID-19 pandemic. These elements could certainly be included in “rational” models, but it is very hard when you have to respect intertemporal fixed-point conditions. Indeed, none of the at least 15 epidemiology papers by economists that I’ve seen so far departs from the baseline assumption of homogeneous households maximizing their own utility independently of social pressure. Such papers will come, each introducing one deviation from the baseline framework at a time, but most papers will provide policy recommendations based on the baseline. Instead, I hope epidemiologists will keep following the literature on behavior change that they already developed – see below.

Economists, if you must build epidemic models, please accept that you can introduce behavior change in a “reduced-form” way [5]. Some of you are already doing that. This nice paper builds essentially an agent-based model with spatial features, leading to realistic outcomes such as local herd immunity. The authors model behavior change simply by assuming that the transmission rate decreases linearly with the rate of infections. I don’t think they could find a rational expectations equilibrium that is fully consistent with the spatial structure, at least without oversimplifying other aspects of the model. This other paper, modeling behavior change essentially in the same way [5], considers infection spillovers across US counties, with a very accurate calibration based on county-level daily infection data. Instead, papers that go full steam towards rational, forward-looking agents unavoidably ignore realistic aspects such as space. I understand that models with rational expectations are elegant and comparable, and that there is a wilderness of reduced-form behavior-change epidemic models that is difficult to navigate. But, at least for epidemic models, please explore various boundedly-rational, adaptive, “ad-hoc” ways to respond to infection risk: you have a universe of realistic assumptions at your fingertips.

And, if you enjoy being able to play with reduced-form assumptions without the fear of being shot down by a referee, please consider such assumptions for economic models, too. It is so interesting to explore the world of “backward-looking” reactions to the economic environment. In our COVID economics paper, for example, we have sophisticated consumption decisions that depend on “ad-hoc” estimates of permanent income. Having “smart” agents that react to their environment should not almost always mean having optimizing and forward-looking agents in a rational expectations equilibrium.

____________________________________________

Endnotes, or the corner of this blog post where I grumble about the state of economics, except in endnote 4, where I defend practice in economics from a misplaced criticism.

[1] I hate this use of the word “rational”. Here it means two things: that agents are able to maximize an objective function, and that they correctly guess what every other agent and the entire economy do. While I agree that maximizing an objective function is consistent with the notion of rationality, I think that guessing what other agents do is a matter of prediction. Rationality and prediction can be at odds. Rational expectations are effectively “correct expectations”. But using the word “rational” is a great selling point, because it makes “boundedly rational” decision rules look suboptimal to many eyes.

[2] Many people argue that taking decisions under incentives and constraints is what defines “economics”. So epidemiological models in which agents maximize a utility function subject to infection risk are “economic-epidemiological models”. I really really dislike this use of the word “economics” and what it implies. Economics should be the study of the economy. Reaction to incentives under constraints should be a branch of psychology. Economics should be neutral to which psychological theory it uses to model human behavior. Using the word “economics” to mean reaction to incentives under constraints makes it sound like that is the only way to model human behavior to study the economy. It is not.

[3] Interestingly, I haven’t seen any epidemiologist write an economics paper. This is known as economic imperialism: with the hammer of rational choice, every other social science looks like a nail for an economist. After all, economics is the queen of the social sciences, no?

[4] Saying that it is unrealistic that individuals maximize utility somehow misses the point of rational choice theory. Maximizing utility is only a tool to make a point prediction about what individuals do given incentives and constraints. It is a very general way to say, for example, that out of risk of infection individuals will be more cautious. A boundedly rational rule could still be expressed as the optimization of a modified utility function. I personally find utility a convenient analytical device; my real problems with economic theory have to do with equilibrium.

[5] In the 70s, at the same time that economists started to care about rational expectations, they also started caring about “microfoundations”. Every decision rule needed to be rooted in first principles, namely so-called preferences, technology, and resource constraints. By contrast, a “reduced form” assumption is a decision rule that is just postulated. For example, deriving decreases in the contact rate of a SIR model from maximizing a logarithmic utility function is consistent with microfoundations; simply postulating that contacts decrease linearly with the number of infectious individuals is not. While microfoundations are laudable in principle, they are often a straitjacket in practice. Many economists start with reduced-form expressions and then reverse-engineer microfoundations. This is an art; too often it does not matter whether the microfoundations are just made up without being based on empirical evidence, as long as they are consistent with the axioms of decision theory.

This paper is exemplary in the class of epidemic models by economists. To capture behavioral response, the authors assume a non-linear form for the infection rate, as in the 1976 paper mentioned above. But they justify it from first principles of economic theory. “We assume that all agents receive stochastic shocks z that we interpret as economic needs. The shocks are drawn from a time-invariant distribution F(z) with support z ∈ [0, ∞). […] Facing risk of infection during an excursion, Susceptibles optimally choose to satisfy a given need z only if the benefit exceeds the expected cost of taking an excursion.” In practice, agents go shopping only if z is larger than an exogenously postulated level that depends on the number of infectious individuals. By further assuming that the CDF of the stochastic shocks is z/(1+z), the authors obtain the functional form of the SIR model that they wanted. They will have fewer problems with referees, as they apparently comply with academic social norms, but I find it hard to see the value added of such a build-up, at least in this case. (That said, I think it is otherwise a pretty good paper, especially in the way it is calibrated to data.)

A complex systems take on the economics of COVID-19

Hibernating a complex system is a formidable task: no wonder that human hibernation is still science fiction! The economy, which is a system at least as complex as a human being, is no exception. Yet, perhaps for the first time, we face the problem of how to put the economy into hibernation in the least disruptive way. Indeed, to stop the spread of COVID-19, we need to reduce economic activity to a minimum for as long as necessary. At the same time, to avoid further suffering due to poverty and unemployment, we need to jump-start the global economy as soon as the pandemic is over. As this is a first, economic theory offers little guidance on how to effectively hibernate and then restart the economy. In this post, I will argue that Agent-Based Models are the best tool to address this issue, as they represent the complexity of the economy in a more faithful way than traditional equilibrium models.

If the economy weren’t a complex system, hibernating it could be straightforward. Imagine an economy with households-employees, firms, banks, a government and a central bank. Suppose that at a certain time all firms shut down (except essential ones such as health care, food, utilities, transport, telecommunications). Households stop buying all non-essential goods and services, firms stop producing and paying their employees. All loan and mortgage repayments are suspended and banks also shut down. The government and central bank provide all households with a basic income, which they use to consume the essential goods and services that are still produced [1]. Once the pandemic is over, firms reopen, households go back to work, banks’ loans and mortgages are repaid, and fiscal and monetary policies go back to normal. If this scenario were plausible, we would face a few months of hibernation, and then the economy would restart as if nothing had happened. While the practical difficulties with implementing such a plan would be enormous, conceptually it would be quite simple.

Unfortunately, it’s not that easy. The economy is an interconnected web of work, trade and financial linkages, where beliefs, hysteresis and lags play a key role. Let me mention a few examples of what could happen when restrictions are lifted.

(1) Pessimistic expectations. Households may hold pessimistic expectations about the state of the economy, and thus reduce their consumption out of precautionary motives.

(2) Frictions in re-hiring. If firms lay off workers so that they can receive unemployment insurance, re-establishing work relations with them may be difficult, in particular if demand drops due to households’ pessimistic expectations.

(3) Lags in supply chains. In supply chain management, the devil is in the details. To have firms produce at full steam once restrictions are lifted would require an enormous coordination effort, both nationally and internationally. The currently asynchronous response to the health crisis across sectors and countries suggests that shipment of intermediate goods could face substantial delays. So, most manufacturing firms would have to remain closed for longer.

(4) Credit markets. Workers in those firms would remain unemployed for longer, depressing aggregate consumption and potentially being unable to repay their mortgages. Firms unable to operate would fail to repay their loans. Banks would not open new credit lines facing much higher risk of bankruptcy.

(5) Stock markets. Stock markets would crash due to a combination of pessimistic beliefs and real problems, leading to lower consumption through wealth effects and lower credit through the financial accelerator mechanism.

All these effects would be magnified if the economy was not put into hibernation in the first place.

None of these five effects is explicitly included in the model by McKibbin and Fernando that international organizations are using to estimate the economic impacts of the COVID-19 pandemic. This is a dynamic stochastic general equilibrium model with 24 countries and regions, 6 aggregate sectors, a representative household for each country and a government. In this model, households and firms behave optimally given their beliefs about current and future economic outcomes, and their beliefs are consistent with outcomes (rational expectations) [2]. McKibbin and Fernando model the impact of the pandemic in five dimensions: (a) reduction in labor supply due to illness, caregiving and school closures; (b) increase in aggregate equity risk premia; (c) disruptions to supply chains at the 6-sector level, averaged over a quarter (e.g., reduction in the supply of goods from “mining” to “durable manufacturing” over three months); (d) shocks to consumer demand, differentially across sectors, during the lockdown; (e) increase in government expenditure to compensate for economic losses. With these effects, economic activity would fall by up to 10% in 2020, depending on countries and scenarios. By 2021, it would largely return to 2019 levels. [3]

I argue that effects (1) to (5) can potentially reduce output by much more, and more permanently, as they impact the structure of the economy at a more fundamental level than the sector-aggregate transitory shocks (a) to (e). It should not be surprising that many analyses and policy proposals (e.g., see here and here) are aimed at tackling the “microeconomic” effects (1) to (5), by discouraging layoffs, guaranteeing most of the income of workers, providing long-term loans at no interest to firms to help with cash flows, and providing liquidity to banks. Some proposals even consider having the government pay firms for maintenance costs, utilities, interest and other costs. Unsurprisingly, these policies are very expensive, so it would be ideal to make them as targeted as possible. At the same time, it would be great to know which mix of policies aimed at addressing effects (1) to (5) is most effective.

Unfortunately, it is impossible to use the McKibbin and Fernando model for this goal, as it lacks most of the heterogeneity, networks and detailed time structure that would be necessary. Mainstream economics has thought about all these effects, but one at a time, and often not embedded in a macroeconomic model. This is not a criticism: as mentioned at the beginning, this situation is new, and a model cannot include everything. However, I think that standard macroeconomic models will have a hard time including these effects, as respecting equilibrium conditions with heterogeneous households, firms and banks who have very different balance sheets is mathematically and computationally intractable. The analyses and policy proposals mentioned above come out of the intuition of economists, rather than from quantitative models.

Macroeconomic Agent-Based Models (ABMs) could include microeconomic effects much more easily, as they are simply solved recursively without the need to satisfy equilibrium constraints. For example, the ABM by Caiani et al. explicitly models balance sheets of firms and banks, so it could be used to test policies aimed at providing liquidity. The Keynes meets Schumpeter ABM developed in Sant’Anna by my new colleagues, in its various incarnations, can be used to test the effect of policies aimed at keeping workers employed in firms during the pandemic, at preventing pessimistic expectations, at avoiding financial crises induced by firm bankruptcies.  While the above are theoretical models that are not directly calibrated on real-world data (unlike McKibbin and Fernando), Poledna et al. are the first to build an ABM that is calibrated on real-world data and used for forecasting. As Poledna et al. represent the full population of households and firms, one could test policies that target individual firms depending on their liquidity shortages (link in Italian).

Results may come too late to inform the current policy debate, as policy makers need to make decisions in a few weeks. However, modeling the economic effects of the COVID-19 pandemic would be useful at least academically, for our understanding of the economy under extreme circumstances. It would also be useful in case there is a second wave of the COVID-19 pandemic and we need to hibernate the economy again. Finally, theoretical guidance on how to restart the economy after hibernation could be useful in the future should we need to put similar measures in place, e.g. in the face of climate risks.

I think that complexity economics and agent-based modeling, by being particularly good at capturing heterogeneity, networks, and non-linear dynamics, have a good shot at providing insights into the current economic crisis. Having an important role in the policy debate would be a great signal for the maturity of the field.

_______

[1] This is clearly a caricature of the economy. It does not consider, for example, that many service workers can work effectively from home, and that certain factories cannot completely shut down, as some machinery can be damaged if switched off (e.g., industrial furnaces).

[2] McKibbin and Fernando claim that 70% of firms do not follow rational expectations, but rather “rule-of-thumb” behavior (see the description of the model in a 2018 paper). However, non-rational-expectations behavior means adjusting slowly to rational expectations (see Eqs. 13 and 14 in the appendix). Likewise, a fraction of households consume a fixed fraction of their income, irrespective of their expectations (Eq. 20). None of these modeling assumptions allows for animal spirits and pessimistic expectations.

[3] The authors admit that scenarios could be much worse, but it is unclear if their model can endogenously produce worse scenarios.

Starting a new chapter

I’m happy to announce that I’ve joined the Institute of Economics and EMbeDS at the Sant’Anna School of Advanced Studies in Pisa as a JSMF postdoctoral fellow. I had been awarded a JSMF fellowship in September 2018: these are amazing grants that fund 2- to 3-year projects, leaving almost complete freedom to grantees both in terms of research projects and host institutions. My position will last until December 2022.

I will spend part of my time following the research lines that I started developing during my PhD. But I will mainly focus on a new research line: understanding the potential of Agent-Based Models for time series forecasting. ABMs are much more detailed than more traditional equation-based models, yet their additional flexibility has rarely been used to provide more accurate quantitative predictions. Showing that ABMs can produce more accurate forecasts would be particularly useful for convincing economists, who tend to be skeptical about the value added of ABMs. It would explicitly demonstrate that ABMs can give more reliable answers to key policy questions than traditional models, because they more realistically represent the workings of the economy. There is already some research in this direction: I predict that it will grow even more in the next few years.

Researchers at the Institute of Economics at Sant’Anna have pioneered the use of Agent-Based Models in economics. Recently, Sant’Anna was awarded funding from the Italian Ministry of Research to establish a “Department of Excellence” called EMbeDS (Economics and Management in the era of Data Science). This has led to a cluster of hires of statisticians, computer scientists and econometricians, and to the acquisition of several large datasets. This focus on big data, which is further reinforced by the interactions with computer scientists at the University of Pisa, is really useful for my project, as I believe that ABMs can be successful in forecasting only if they can be calibrated starting from micro data. Thus, the combined expertise on economics, agent-based modeling and data science makes Sant’Anna an ideal choice for my fellowship. The environment in Sant’Anna is also extremely informal, friendly and collaborative!

Last spring, I also visited a few other research centers that I was considering as host institutions. I learned so much by interacting with researchers in those centers, and I have already started collaborating with a few of them on some exciting new projects! I really hope to make as many of these collaborations as possible happen.

Finally, I will remain linked to the complexity group at INET Oxford as an associate member, planning on visiting once a term. Some exciting projects are starting, and I especially hope to contribute to the ones on business cycles. INET Oxford has been an exceptional place to do my PhD, and I’m really grateful to everyone there. You won’t get rid of me!

(I’ve unfortunately neglected this blog over the last few months. Looking for this position while writing my PhD thesis has absorbed all my energies. Now that I’ve started my postdoc I hope to come back blogging with a certain regularity.)

When Does One of the Central Ideas in Economic Theory Work?

This blog post, written in collaboration with Doyne Farmer and Torsten Heinrich, was originally published on the blog of Rebuilding Macroeconomics.

The concept of equilibrium is central to economics. It is one of the core assumptions in the vast majority of economic models, including models used by policymakers on issues ranging from monetary policy to climate change, trade policy and the minimum wage.  But is it a good assumption?

In a newly published Science Advances paper, we investigate this question in the simple framework of games, and show that when the game gets complicated this assumption is problematic. If these results carry over from games to economics, this raises deep questions about economic models and when they are useful to understand the real world.

Kids love to play noughts and crosses, but when they are about 8 years old they learn that there is a strategy for the second player that always results in a draw. This strategy is what is called an equilibrium in economics.  If all the players in the game are rational they will play an equilibrium strategy.

In economics, the word rational means that the player can evaluate every possible move and explore its consequences to their endpoint and choose the best move. Once kids are old enough to discover the equilibrium of noughts and crosses they quit playing because the same thing always happens and the game is boring. One way to view this is that, for the purposes of understanding how children play noughts and crosses, rationality is a good behavioral model for eight year olds but not for six year olds.

In a more complicated game like chess, rationality is never a good behavioral model.  The problem is that chess is a much harder game, hard enough that no one can analyze all the possibilities, and the usefulness of the concept of equilibrium breaks down. In chess no one is smart enough to discover the equilibrium, and so the game never gets boring. This illustrates that whether or not rationality is a sensible model of the behavior of real people depends on the problem they have to solve. If the problem is simple, it is a good behavioral model, but if the problem is hard, it may break down.

Theories in economics nearly universally assume equilibrium from the outset. But is this always a reasonable thing to do? To get insight into this question, we study when equilibrium is a good assumption in games. We don’t just study games like noughts and crosses or chess, but rather all possible games of a certain type (called normal form games).

We literally make up games at random and have two simulated players play them to see what happens.  The simulated players use strategies that do a good job of describing what real people do in psychology experiments. These strategies are simple rules of thumb, like doing what has worked well in the past or picking the move that is most likely to beat the opponent’s recent moves.

We demonstrate that the intuition about noughts and crosses versus chess holds up in general, but with a new twist. When the game is simple enough, rationality is a good behavioral model:  players easily find the equilibrium strategy and play it. When the game is more complicated, whether or not the strategies will converge to equilibrium depends on whether or not the game is competitive.

If the game is not competitive, or the incentives of the players are aligned, players are likely to find the equilibrium strategy, even if the game is complicated. But when the game is competitive and it gets complicated, they are unlikely to find the equilibrium. When this happens their strategies keep changing over time, usually chaotically, and they never settle down to the equilibrium. In these cases equilibrium is a poor behavioral model.

A key insight from the paper is that cycles in the logical structure of the game influence the convergence to equilibrium. We analyze what happens when both players are myopic, and play their best response to the last move of the other player. In some cases this results in convergence to equilibrium, where the two players settle on their best move and play it again and again forever.

However, in other cases the sequence of moves never settles down and instead follows a best reply cycle, in which the players’ moves keep changing but periodically repeat – like the movie “Groundhog Day” – over and over again. When a game has best reply cycles, convergence to equilibrium becomes less likely. Using this result we are able to derive quantitative formulas for when the players of the game will converge to equilibrium and when they won’t, and show explicitly that in complicated and competitive games cycles are prevalent and convergence to equilibrium is unlikely.
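For readers who like to tinker, the flavor of this analysis can be reproduced in a few lines of code. The sketch below is not the methodology of the paper: it simply draws random two-player payoff matrices with a tunable correlation between the players’ payoffs (negative correlation mimicking a competitive game), runs myopic best-reply dynamics, and records whether play settles on a fixed action profile or enters a best reply cycle. All function names and parameter choices are mine, for illustration only.

```python
import numpy as np

def random_game(n, rho, rng):
    # Draw the two payoff matrices with correlation rho between corresponding
    # entries: rho < 0 mimics a competitive game, rho > 0 aligned incentives.
    # (Illustrative construction, not the one used in the paper.)
    cov = [[1.0, rho], [rho, 1.0]]
    draws = rng.multivariate_normal([0.0, 0.0], cov, size=(n, n))
    return draws[:, :, 0], draws[:, :, 1]  # payoffs of players A and B

def best_reply_dynamics(payoff_a, payoff_b, max_steps=1000, seed=0):
    """Myopic best-reply dynamics: players alternate, each best-responding to
    the opponent's last move. Returns 'equilibrium' if play settles on a fixed
    action profile, 'cycle' if a profile repeats without settling."""
    rng = np.random.default_rng(seed)
    n = payoff_a.shape[0]
    b = int(rng.integers(n))                     # B's random initial move
    seen = {}
    for step in range(max_steps):
        a = int(np.argmax(payoff_a[:, b]))       # A best-responds to B's last move
        b = int(np.argmax(payoff_b[a, :]))       # B best-responds to A's new move
        if (a, b) in seen:
            # Revisiting a profile: a fixed point if it repeats immediately,
            # otherwise a best reply cycle.
            return "equilibrium" if seen[(a, b)] == step - 1 else "cycle"
        seen[(a, b)] = step
    return "undecided"

rng = np.random.default_rng(42)
for rho in (-0.9, 0.0, 0.9):
    games = [random_game(10, rho, rng) for _ in range(200)]
    outcomes = [best_reply_dynamics(pa, pb) for pa, pb in games]
    print(f"rho = {rho:+.1f}:", {o: outcomes.count(o) for o in set(outcomes)})
```

Running this, competitive games (negative correlation) should produce cycles far more often than games with aligned incentives, in line with the intuition described above.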

When the strategies of the players do not converge to a Nash equilibrium, they perpetually change in time. In many cases the learning trajectories do not follow a periodic cycle, but rather fluctuate chaotically. For the learning rules we study, the players never converge to any sort of “intertemporal equilibrium”, in the sense that their expectations do not match the outcomes of the game even in a statistical sense. In the cases in which the learning dynamics are highly chaotic, no player can easily forecast the other player’s strategies, which makes it plausible that this mismatch between expectations and outcomes persists over time.

Are these results relevant for macroeconomics? Can we expect insights that hold at the small scale of strategic interactions between two players to also be valid at much larger scales?

While our theory does not directly map to more general settings, many economic scenarios – buying and selling in financial markets, innovation strategies in competing firms, supply chain management – are complicated and competitive. This raises the possibility that some important theories in economics may be inaccurate: challenging the behavioral assumption of equilibrium also challenges the predictions of models built on it. In this case, new approaches are required that explicitly simulate the behavior of economic agents and take into account the fact that real people are not good at solving complicated problems.

The usefulness of qualitative ABMs in economics: An example

I think it is uncontroversial that, compared to standard economic theory, Agent-Based Models (ABMs) describe human behavior and market dynamics more realistically [1]. This enhanced realism gives ABMs the potential to provide more accurate quantitative forecasts, once we figure out how to use them for prediction. However, if the goal of a model is more qualitative, for example to elucidate a theoretical mechanism, is realism useful?

Many economists would say that it is not, and too much realism may even be counterproductive. For example, to present his Nobel-winning theory of asymmetric information (the Market for Lemons), George Akerlof did not need boundedly rational agents and a detailed depiction of market exchanges. The standard setup, with rational utility-maximizing agents and market equilibrium, allowed a transparent exposition of the issue of asymmetric information. I think this is a fair point; however, which level of realism should be assumed in general qualitative models is mostly a matter of taste. If the modeler likes to highlight some economic force in a way that does not depend on people’s bounded rationality or on the nitty-gritty market details, then the assumptions of standard economic theory are okay. If the modeler wants instead to explain some phenomenon as the outcome of dynamically interacting boundedly rational, heterogeneous agents, an ABM may be a more natural choice. In some situations, it may be the best choice.

Our paper “Residential income segregation: A behavioral model of the housing market”, with Jean-Pierre Nadal and Annick Vignes, just published in JEBO (Journal of Economic Behavior and Organization), is in my opinion a good example. In this paper, we study the relations between income inequality, segregation and house prices, and explore which policies best deal with these issues. Most urban economists address these problems using spatial equilibrium models. These models are solved by assuming that individuals in each income category experience the same utility all over the city; the resulting prices determine segregation. In our ABM, agents behave according to fast-and-frugal heuristics, and individual interactions dynamically determine prices and segregation patterns.

First of all, our approach provides simpler narratives. For instance, to explain why the rich live in the fanciest locations of a city, spatial equilibrium models need to assume that the rich care about city amenities more than the poor do. In our ABM, this is simply explained by rich buyers bidding up the prices until the poor cannot afford to buy there.
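To make the mechanism concrete, here is a deliberately minimal toy simulation, written for this post and not taken from the paper: incomes are drawn at random, every buyer bids the same fixed fraction of their income, and locations are filled from the most to the least attractive. Even with identical preferences for amenities, the rich end up in the fanciest locations simply because they outbid everyone else.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of the bidding narrative above (not the model in the paper).
n_buyers, n_locations = 1000, 10
incomes = rng.lognormal(mean=10, sigma=0.8, size=n_buyers)   # unequal incomes
capacity = n_buyers // n_locations                           # identical capacities
bids = 0.3 * incomes              # rule-of-thumb bid: a fixed share of income
prices = np.zeros(n_locations)
residents = [[] for _ in range(n_locations)]

# Locations are ranked by attractiveness (0 = fanciest). Buyers are processed
# from the highest to the lowest bid, and each takes the best location with
# space left, so the rich fill the fanciest locations first.
for buyer in np.argsort(bids)[::-1]:
    for loc in range(n_locations):
        if len(residents[loc]) < capacity:
            residents[loc].append(incomes[buyer])
            prices[loc] = max(prices[loc], bids[buyer])  # price set by the highest bid
            break

for loc in range(n_locations):
    print(f"location {loc}: price {prices[loc]:,.0f}, "
          f"mean income {np.mean(residents[loc]):,.0f}")
```

The resulting price and mean-income gradients are the toy counterparts of the segregation patterns that the full ABM generates endogenously.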

Additionally, in our ABM it is straightforward to include as much heterogeneity as we need, as we do not have to solve for equilibrium. This is really useful, for example, to study the effect of income inequality on segregation. In accordance with empirical evidence, we find that stronger inequality increases segregation. However, it also decreases average prices over the city. Indeed, with stronger income inequality a few buyers bid more, while most buyers bid less: the global effect is negative. Finally, we explore whether subsidies or taxes are better at mitigating income segregation. According to our ABM, subsidies are better, because they directly target the poor, increasing their purchasing power. Taxes instead hit the rich, but all the benefits go to the middle class, with no effect on the poor. Modeling heterogeneity is key.

Finally, from a technical point of view, a standard critique from economists is that the reliance on numerical simulations makes ABMs less suited to clarifying theoretical mechanisms. This is true to some extent: for example, the results in the paragraph above have been obtained by simulating the ABM [2]. Nonetheless, we did solve parts of our ABM analytically, which gives insight into the causal mechanisms within the model and into its non-linearities. Maths and ABMs are not incompatible; the maths needed to solve ABMs is just a bit different from the optimization and fixed-point analysis more commonly used in economic theory.

In sum, I think that our paper is a good example of how even a qualitative ABM can be useful in economics, to provide more realistic narratives and to easily deal with heterogeneity. [3]

 

[1] Excluding some situations in which sophisticated agents interact strategically, such as Google auctions, where standard economic theory may be a more literal description of reality.

[2] To ensure full reproducibility of our results, we have put the code to generate all figures online on Zenodo, a CERN repository for open science. Sharing code is an increasingly common practice in the ABM community; hopefully it will become the norm soon.

[3] For a version of this post with the figures from the paper, you can take a look at the Twitter thread starting from this link.

Bank of England conference on big data and machine learning

I recently presented our work on big housing data at the Bank of England conference on “Modelling with Big Data and Machine Learning”. This was a super-interesting conference where I learned a lot. Now that the slides of the workshop have been uploaded online, I thought I would write a blog post to share some of what I learned. I’ll also take this chance to write about how big data relate to the themes of this blog and have the potential to influence theoretical economic models.

The first session of the conference was about nowcasting. I particularly liked the talk by Xinyuan Li, a PhD student at London Business School. In her job market paper, she asks whether Google information is useful for nowcasting even when other macroeconomic time series are available. Indeed, most papers showing that Google Trends data improve nowcasting accuracy of, say, the unemployment rate do not check whether this improvement still holds once the researcher considers series such as payrolls, industrial production, capacity utilization, and so on. Li combines macroeconomic and Google Trends time series in a state-of-the-art dynamic factor model and shows that Google Trends add little, if any, nowcasting accuracy. However, if one increases the number of associated Google Trends time series by using Google Correlate, a tool that finds the Google searches most correlated with a given series, nowcasting accuracy improves. So under some conditions Google information is indeed useful.

The first keynote speaker was Domenico Giannone, from the New York Fed. The question in his paper is whether predictive models of economic variables should be dense or sparse. In a sparse model only a few predictors are important, while in a dense model most predictors matter. To answer this question it is not enough to estimate a LASSO model and count how many coefficients “survive”. Indeed, for LASSO to be well specified, the correct model must be sparse. The key idea of the paper is to allow for sparsity, without assuming it, and let the data decide. This is done via a “spike and slab” model that contains two elements: a parameter q that quantifies the probability that a coefficient is non-zero, and a parameter γ that shrinks the coefficients. The same predictive power can in principle be achieved by including only a few coefficients or by keeping all coefficients but shrinking them. In a Bayesian setting, if the posterior distribution is concentrated at high values of q (and so low values of γ) it means that the model should be dense. This is what happens in the figure below, in five out of six datasets in micro, macro and finance. Yellow means a high value for the posterior, and only in the micro 1 case is it high for q ≈ 0. So in most cases a significant fraction of predictors is useful for forecasting, leading to an “illusion of sparsity”.
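In symbols, a stylized spike-and-slab prior for the regression coefficients looks as follows; the notation is generic and chosen to match the verbal description above (here γ is a scale, so smaller γ means stronger shrinkage), and it may differ from the paper’s exact specification.

```latex
\beta_i \sim
\begin{cases}
0 & \text{with probability } 1-q,\\[2pt]
\mathcal{N}\!\left(0,\, \gamma^{2}\sigma^{2}\right) & \text{with probability } q,
\end{cases}
\qquad i = 1,\dots,k .
```

A sparse model corresponds to a posterior concentrated at low q, while a dense-but-shrunk model corresponds to high q combined with low γ; placing a prior on (q, γ) and inspecting the posterior lets the data choose between the two.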

The most thought-provoking speech in the panel discussion on “Opportunities and risks using big data and machine learning” was again by Giannone. What he said is best summarized in a paper that everyone interested in time series forecasting with economic big data should read. His main point is that macroeconomists have had to deal with “big data” since the birth of national accounting and business cycle measurement. State-of-the-art nowcasting and forecasting techniques that he co-developed at the New York Fed include a multitude of time series at different frequencies, such as the ones shown in the figure below. These series are highly collinear and rise and fall together, as shown in the heat map in the horizontal plane. According to Giannone, apart from a few exceptions, big data coming from the internet have little chance of improving over carefully collected data from established national statistical agencies.

On a different note, in a later Methodology session I found out about a very interesting technique: Shapley regressions. Andreas Joseph from the Bank of England talked about the analogy between Shapley values in game theory and in machine learning. In cooperative game theory, Shapley values quantify how much each player contributes to the collective payoff. A recent paper advanced the idea of applying the same formalism to machine learning: players become predictors and Shapley values quantify the contribution of each predictor. While there exist several ways to quantify the importance of predictors in linear models, Shapley values extend nicely to potentially highly non-linear models. His colleague Marcus Buckmann presented an application to financial crisis forecasting, using data going back to 1870 (see figure below). Interestingly, global and domestic credit contribute a lot to forecasting, while the current account and broad money are not so important. In general, Shapley regressions might help with the interpretability of machine learning “black boxes”.
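For reference, the Shapley value of predictor i can be written as below, reading predictors as players and letting v(S) denote the predictive performance of a model that uses only the predictors in the subset S (the exact performance measure and conventions used in the Shapley regression work may differ):

```latex
\phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,\left(|N|-|S|-1\right)!}{|N|!}
\left[\, v\!\left(S \cup \{i\}\right) - v(S) \,\right],
```

so each predictor is credited with its average marginal contribution over all possible orders in which predictors could be added to the model.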

The last session I’d like to write about is the one on text analytics. Eleni Kalamara, a PhD student at King’s College, presented her work on “making text count”. The general goal of her project is to see whether text from UK newspapers proxies sentiment and uncertainty and is useful to predict macroeconomic variables. What I found most interesting was the comparison of 13 different dictionaries that turn text into sentiment and uncertainty indicators. Given such a proliferation of metrics, it seems very useful to systematically compare them. Another interesting talk in the same session was given by Paul Soto. In his job market paper “Breaking the Word Bank”, he used Word2Vec to find words related to “uncertainty” in transcripts of banks’ conference calls. Word2Vec is a machine learning algorithm that finds a vector representation of words, taking into account both syntax and semantics. The figure below shows a two-dimensional projection of the vector space; words related to uncertainty are highlighted in yellow on the right. In his paper, Soto shows that banks with higher idiosyncratic uncertainty are less likely to give loans and more likely to increase their liquidity.
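To give a sense of how easy this kind of exercise is to set up, the sketch below trains a Word2Vec model with the gensim library (assuming gensim ≥ 4) and asks for the words closest to “uncertainty”. This is of course not Soto’s actual pipeline, and `transcripts` is a hypothetical stand-in for a real corpus of conference call transcripts.

```python
from gensim.models import Word2Vec

# Stand-in corpus: in a real application this would be many thousands of
# tokenized sentences from banks' conference call transcripts.
transcripts = [
    ["we", "face", "considerable", "uncertainty", "about", "loan", "demand"],
    ["regulatory", "headwinds", "create", "uncertainty", "for", "our", "outlook"],
    # ... many more sentences
]

# Train word vectors: words appearing in similar contexts end up close to
# each other in the vector space.
model = Word2Vec(sentences=transcripts, vector_size=100, window=5,
                 min_count=1, workers=4, seed=0)

# Words whose vectors are closest (by cosine similarity) to "uncertainty".
print(model.wv.most_similar("uncertainty", topn=10))
```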

There were a lot of other great talks. For example, Thomas Renault from the Sorbonne showed how to detect financial market manipulation – in particular, pump and dump schemes – from Twitter. Luca Onorante from the European Central Bank demonstrated how to select the most relevant Google Trends in a Bayesian Model Averaging framework. Emanuele Ciani from the Bank of Italy built on a method first introduced by Jon Kleinberg to predict which agents would most benefit from policies, nicely combining ideas from prediction and from causal inference. For the many other interesting talks, please check the program or look at the slides.

So, what do big data have to do with complexity economics? This conference was purely about statistical models. My sense is that economic theorists are not responding to big data as much as empirical economists are. True, heterogeneous agent models that use micro evidence to discriminate between different macro models producing the same macro outcomes are increasingly popular, but I don’t think they quite exploit the power of big data. On the other hand, large-scale “microsimulation” Agent-Based Models (ABMs) that are directly fed with data and solved forward without imposing equilibrium constraints seem more promising for exploiting the big data opportunities. A nice example of this is the ongoing work by Sebastian Poledna and coauthors on “Economic forecasting with an agent-based model”, exploiting comprehensive datasets for the Austrian economy. I plan to work on prediction with ABMs too during my postdoc funded by the James S. McDonnell Foundation – better out-of-sample forecasting performance would be a compelling motivation for the enhanced realism of ABMs, which comes at the cost of other features that are considered important in mainstream theoretical models.

What is equilibrium in economics and when is it (not) useful

Equilibrium is the most widespread assumption across all subfields of economic theory. It means different things in different subfields, but all equilibrium concepts share a common rationale and purpose, with the same pros and cons. In this post I will argue that the way equilibrium is treated is the distinctive feature of complexity economics, narrowly defined. (This post is mostly methodological. In this blog I will alternate between actual research and methodology, always pointing to concrete examples when talking about methodology.)

What equilibrium means in economics

Before talking about what equilibrium is, it is useful to say what it is not. First, equilibrium does not necessarily imply stationarity. Indeed, many equilibrium concepts are dynamic and so for example it is possible to have chaotic equilibria. Conversely, stationary states need not be equilibria. Second, equilibrium in economics has nothing to do with statistically balanced flows, as used in many natural sciences. Third, equilibrium is independent of rationality, if rationality just means choosing the optimal action given available information (I will come back to this).

Equilibrium in economics can generally be thought of as a fixed point in function space, in which beliefs, planned actions and outcomes are mutually consistent. Let me elaborate on this. Unlike particles, economic agents can think, and so have beliefs about the state of the economy. Behavioral rules, which can be fully or boundedly rational, map these beliefs into planned actions. Finally, the outcomes resulting from the combined actions of all agents may let each agent realize their planned actions, or may force some agents to choose an action that was not planned. Equilibrium outcomes are such that agents – at least on average – always choose the action that was planned given their beliefs and behavioral rules. In other words, beliefs and planned actions match outcomes.

A few examples should clarify this concept. Perhaps the most famous equilibrium is the Walrasian one. This is usually described as demand=supply, but there is more to it than that. In a market with one or multiple goods, agents have beliefs about the goods’ prices, and through some behavioral rule these beliefs determine the quantities that agents try to buy or sell (planned actions). Aggregating up these quantities determines the outcomes – the differences between demand and supply for each good. If there is excess demand or excess supply, some agents buy or sell more (or less) than what they planned. Instead, in a Walrasian equilibrium agents have beliefs about prices that make them buy or sell quantities that “clear” the market, i.e. demand=supply. In this way, all agents realize their plans.
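In symbols (my notation, introduced just for this example): let d_{a,g}(p) and s_{a,g}(p) be the quantities of good g that agent a plans to buy and sell when they believe the price vector will be p. A Walrasian equilibrium is a price vector p* at which aggregate excess demand vanishes for every good,

```latex
z_g(p^{*}) \;=\; \sum_{a}\left[\, d_{a,g}(p^{*}) - s_{a,g}(p^{*}) \,\right] \;=\; 0
\qquad \text{for every good } g,
```

so that the quantities everyone planned to trade, given their beliefs about prices, are exactly the quantities they end up trading.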

When strategic interactions are important, economists use game theory to model interdependent choices. In game theory players have beliefs about what their opponents will do and plan actions according to these beliefs and some behavioral rule. For example, if players are fully rational their behavioral rule is to select the action that maximizes their payoff given their beliefs. In a Nash equilibrium all players’ actions and beliefs are mutually consistent, so no agent can improve her payoff by switching to another action. But agents could be boundedly rational, also playing, with some smaller probability, actions that do not maximize their payoff. In this case it is possible, for example, to define a Quantal Response Equilibrium, in which again beliefs and planned actions match outcomes.
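As one concrete specification (the logit form, written in my notation), player i plays action a with probability

```latex
\sigma_i(a) \;=\;
\frac{\exp\!\left(\lambda\, \bar{u}_i(a, \sigma_{-i})\right)}
     {\sum_{a'} \exp\!\left(\lambda\, \bar{u}_i(a', \sigma_{-i})\right)},
```

where \bar{u}_i(a, \sigma_{-i}) is the expected payoff of action a given the other players’ mixed strategies \sigma_{-i}, and the equilibrium is the fixed point of this map over all players. In the limit λ → ∞ players best-respond exactly, recovering Nash equilibrium, while λ = 0 gives uniformly random play.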

All equilibrium concepts above are static, but it is straightforward to include a temporal dimension. (Beliefs over time are called expectations.) For example, in many macroeconomic models agents are forward-looking, e.g. they plan how much to consume in each future period of their life. These consumption decisions depend on future interest rates: in periods when interest rates are high, agents may prefer saving to consuming, so as to earn higher interest and afford higher consumption in the future. In a rational expectations equilibrium [1], the expectations of future interest rates are on average correct, so that again beliefs and planned actions (consumption decisions) match outcomes (interest rates). The assumption of rational expectations places no restriction on macroeconomic dynamics: the dynamics may reach a stationary state, but may also follow limit cycles or chaos.
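Formally, rational expectations require the agents’ subjective expectations to coincide with the model-consistent conditional expectation (again my notation, with \mathcal{I}_t the information available at time t):

```latex
\hat{E}_t\!\left[x_{t+1}\right] \;=\; E\!\left[\, x_{t+1} \,\middle|\, \mathcal{I}_t \,\right],
\qquad\text{so that}\qquad
E\!\left[\, x_{t+1} - \hat{E}_t\!\left[x_{t+1}\right] \,\middle|\, \mathcal{I}_t \,\right] \;=\; 0 ,
```

that is, forecast errors can be large, but they cannot be systematically predictable from information the agents already have.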

Many more equilibrium concepts have been proposed in economics, and new ones keep being introduced, but all equilibria share the same rationale. For example, search and matching models are used to go beyond the Walrasian equilibrium concept. When applied to the labor market, these models assume that workers and firms engage in a costly search for a good match. This potentially difficult search process may explain involuntary unemployment, which could not be explained if labor demand=labor supply, as in Walrasian models. Yet, the equilibrium of search and matching models can still be viewed in the same way as in the examples above. Workers have beliefs about future vacancy rates, which determine how difficult it is to find a job, and firms have beliefs about future unemployment rates, which determine how difficult it is to fill a vacancy. These beliefs determine the lowest wage a worker will accept or the wage a firm will offer, as well as how long to search (planned actions), typically following a rational behavioral rule. Finally, the combined decisions of workers and firms lead to outcomes, namely unemployment and vacancy rates. Again, in equilibrium beliefs, planned actions and outcomes are mutually consistent.

Pros and cons of equilibrium

If equilibrium has been a key concept in economic theory for more than a century, there must be some good reasons. The first reason, I think, is that modeling out-of-equilibrium behavior is harder than modeling equilibrium behavior. What is a realistic way to model what happens when beliefs, planned actions and outcomes are systematically inconsistent? (I give a possible answer at the end.) Equilibrium is then an incredibly useful simplification that makes it possible to abstract away from this problem. Economic theorists are often interested in adding more and more realistic features of how the economy works to their models, and by assuming equilibrium they keep their models tractable. In addition, contemporary economics is becoming more and more empirical. Many applied economists are happy to just build a model that accounts for some property of the data, and building models with equilibrium is a transparent way to highlight the relevant theoretical mechanisms.

A second reason for the success of equilibrium is that time averages of beliefs, planned actions and outcomes may approximate equilibrium, which would then be a useful point prediction. An example from my research is the game of Matching Pennies. If this game is played repeatedly, under some learning algorithms the players will never converge to a Nash equilibrium. However, it is easy to show that time-averaged play is close to equilibrium behavior [2]. Something similar has been observed experimentally.
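A quick way to see this numerically is the sketch below, which is not the learning algorithm studied in my paper: both players update their mixed strategies in Matching Pennies with a simple exponential-weights rule. Per-period strategies keep cycling around the mixed equilibrium, but their time average should end up close to 50/50.

```python
import numpy as np

# Matching Pennies: row player wants to match, column player wants to mismatch.
# Payoff matrix of the row player (the column player gets the negative).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def hedge_selfplay(T=5000, seed=0):
    """Both players adapt their mixed strategies with exponential weights.
    Per-period play keeps cycling around the mixed equilibrium, but the
    time-averaged strategy should end up close to the 50/50 Nash equilibrium."""
    rng = np.random.default_rng(seed)
    w_row = rng.uniform(0.5, 1.5, size=2)   # asymmetric start, away from 50/50
    w_col = rng.uniform(0.5, 1.5, size=2)
    avg_row = np.zeros(2)
    for t in range(T):
        p = w_row / w_row.sum()             # row player's current mixed strategy
        q = w_col / w_col.sum()             # column player's current mixed strategy
        avg_row += p
        eta = 1.0 / np.sqrt(t + 1.0)        # decreasing step size (no-regret regime)
        w_row *= np.exp(eta * (A @ q))      # reinforce row actions that do well vs q
        w_col *= np.exp(eta * (-(p @ A)))   # reinforce column actions that do well vs p
        w_row /= w_row.sum()                # renormalize for numerical stability
        w_col /= w_col.sum()
    return p, avg_row / T

last_play, time_average = hedge_selfplay()
print("last-period strategy of the row player:", np.round(last_play, 3))
print("time-averaged strategy of the row player:", np.round(time_average, 3))
```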

A third reason is that by assuming equilibrium many variables are determined endogenously, that is, within the model. This makes it possible to consider non-trivial interdependencies, called general equilibrium effects by economists. An example comes from a nice paper by Cravino and Levchenko that I recently read. In this paper the authors build an equilibrium model to investigate how much multinational corporate control affects international business cycle transmission. Assuming that parent companies are hit by a “shock” in one country, the authors look at the aggregate effects on other countries where affiliate companies operate. Interestingly, the effect of the shocks is amplified if workers in the other countries are less willing to change how many hours they work. This general equilibrium effect is due to the interconnections between the goods and labor markets, which are captured by assuming equilibrium.

Despite the advantages of equilibrium assumptions, I think there are two main shortcomings. The first is that, in my opinion, little of what happens in the real world is precisely described by equilibrium. If one is interested in quantitative models, forcing the model to be in equilibrium is a strong misspecification, even if some aspects of reality are reasonably approximated by equilibrium. Of course many equilibrium models are shown to fit the data, but most analyses are based on in-sample fitting and so could be prone to overfitting.

The second shortcoming is more practical. In some cases solving for equilibrium is technically challenging, and this prevents including some realistic assumptions and fully embracing heterogeneity. In the words of Kaplan and Violante in the Journal of Economic Perspectives: “Macroeconomics is about general equilibrium analysis. Dealing with distributions while at the same time respecting the aggregate consistency dictated by equilibrium conditions can be extremely challenging.” Kaplan and Violante propose macroeconomic models named HANK (Heterogeneous Agent New Keynesian), but the way they deal with heterogeneity is extremely stylized. In addition, I think that one of the main reasons why insights from behavioral economics are not routinely added to economic models – in macroeconomics but also in other fields – is that it is technically harder to solve for equilibrium if one departs from full rationality. However, heterogeneity and bounded rationality are key to building serious quantitative models (real people are heterogeneous and boundedly rational).

In sum, I think that assuming equilibrium can be really useful if models are used for qualitative reasoning, but it is an obstacle for quantitative analyses.

Complexity economics and equilibrium

My favorite narrow definition of complexity economics is making economic models that are not solved by assuming equilibrium. Rather, the modeler postulates the behavioral rules that each agent will follow and then just lets the system evolve over time. This is what happens in Agent-Based Models (ABMs), often represented as computer programs, or in Heterogeneous Agent Models (HAMs), typically represented as dynamical systems. In either case, beliefs and planned actions need not match outcomes. In some cases they might, perhaps after an initial transient, but this is not a primary concern of the modeler. I think that assuming equilibrium is a strong top-down constraint imposed on the system. ABMs and HAMs let outcomes emerge in a bottom-up way without imposing equilibrium constraints, which I think is more in line with a complex systems view of the economy.

Is this useful? I think that the main advantages mirror the shortcomings of equilibrium models. Because one does not have to solve for equilibrium, it is very easy to include any form of heterogeneity and bounded rationality. If one also believes that out-of-equilibrium behavior better describes real economic agents, ABMs and HAMs seem more promising than equilibrium models for quantitative analyses. With the increasing availability of large datasets, we may be able to show this explicitly in the upcoming years. Another advantage is that not assuming equilibrium may lead to more natural descriptions of some problems: for an example, see the housing market ABM in my paper with Jean-Pierre Nadal and Annick Vignes.

The main problems of not assuming equilibrium also mirror the main advantages of doing so. First, being forced to model out-of-equilibrium behavior in each submodule of the model makes ABMs computationally very expensive. Second, it is easy to overlook interdependencies and to take too many variables as exogenous. Third, if beliefs, planned actions and outcomes are systematically inconsistent this may lead to mechanistic behavior that is as unrealistic as equilibrium. For example, in this very nice paper by Gualdi et al., for some parameter settings the ABM economy undergoes a sequence of booms and busts determined by consumers and firms systematically failing to coordinate on equilibrium prices (see first paragraph of Section 5.2). While this may be a realistic description of some economic crises, it seems unlikely that economic agents would systematically fail to recognize the discrepancy between beliefs and outcomes.

I think that the problem of what happens when beliefs and planned actions systematically do not match outcomes can be tackled in ABMs by modeling learning in a sensible way, perhaps including models of agents learning how to learn. In this way, agents may systematically be wrong but in many different ways, and so be unable to find the equilibrium. This view, I think, best describes economic reality.

In sum, complexity economics models are not solved by assuming equilibrium, and this also has its pros and cons. We will see over the upcoming years if the pros outweigh the cons.

_________________________________________

I would like to thank everyone for your interest in this blog: my first post received way more online attention than I expected. Hope you will find my posts interesting! And please give me feedback — I wrote this post with the hope that a natural scientist with just a vague knowledge of economics could understand the basic idea; if you are such a scientist, let me know if I succeeded!

_________________________________________

[1] I find the name “rational expectations” very misleading. Rational expectations equilibria have nothing to do with rationality, rather with the assumption that expectations match outcomes, which does not necessarily imply rationality.

[2] It is not always true that time averages correspond to equilibrium behavior. For example, if the players learn using fictitious play this is not true. And one always has to check ergodicity when using time averages.

Complexity Economics

Welcome to my research blog! I have always found reading other people’s research blogs tremendously useful, as blogs give unique perspectives on aspects of research that do not show up in papers. This blog is my perspective as a junior scientist on the research topics I am passionate about – economics and complex systems – as well as on general topics in science and on careers in research.

My blog is about complexity economics, broadly defined as the application of complex systems methods in economics. In complex systems the whole is more than the sum of its parts, and complex systems scientists investigate how collective behavior emerges from interactive individual components. Practically speaking, complex systems science is a collection of computational and mathematical methods that are applied across the natural and social sciences.  This leads to the broad characterization of complexity economics as a theoretical and empirical focus in economics on networks, non-linear dynamics, adaptation, learning and heterogeneity.

I favor a narrower definition that applies to economic theory. According to this definition, complexity economics is economic modeling without equilibrium. I will write a separate blog post about what equilibrium means in economics, why economists make equilibrium assumptions and what building non-equilibrium models means. Here I only want to stress that equilibrium is a top-down constraint imposed on the economic system. A complex systems view of economics would rather suggest to take a bottom-up approach, for example using agent-based models. This is what makes the (narrowly-defined) complexity economics approach non-mainstream in economics, and perhaps “heterodox”.

I aim my blog at both complex systems scientists – mathematicians, physicists, biologists, computer scientists, etc. – and economists. To this end, I will try to avoid jargon and explain basic things that may be obvious in a field but not clear in others. This for example applies to my research. Every time I publish a paper I plan to write a blog post that describes the paper’s contribution in general terms. But my goal is also to discuss other people’s research, and general topics across economics and science. I would also like to give my opinion as a PhD student and (in the future) postdoc about careers in research, e.g. whether interdisciplinarity pays off or whether the economics job market chokes risky and innovative research.

I will not talk about politics. While I am deeply worried about recent trends across the world, I want to keep this blog only about science, and will only discuss political issues in a positive way (jargon alert: see link). I will also probably not talk about technical aspects of research such as coding and visualization. While this is my day-to-day job and I like having good code and good figures, I am neither a professional developer nor a visual designer, so other people would do a better job at talking about this.

I hope to write a blog post on average every month or two. There are excellent blogs in economics and in complex systems; I hope this blog will contribute to linking the two fields – making economists more aware of what it practically means to take a complex systems approach to economics, and making interdisciplinary scientists more aware of what economists are really doing and why they do it the way they do. This blog will not be the usual critique of economics from a natural sciences perspective, but will rather illustrate mainstream economists’ point of view while promoting an alternative approach.