When Does One of the Central Ideas in Economic Theory Work?

This blog post, written in collaboration with Doyne Farmer and Torsten Heinrich, was originally published on the blog of Rebuilding Macroeconomics.

The concept of equilibrium is central to economics. It is one of the core assumptions in the vast majority of economic models, including models used by policymakers on issues ranging from monetary policy to climate change, trade policy and the minimum wage.  But is it a good assumption?

In a newly published Science Advances paper, we investigate this question in the simple framework of games, and show that when the game gets complicated this assumption is problematic. If these results carry over from games to economics, this raises deep questions about economic models and when they are useful to understand the real world.

Kids love to play noughts and crosses, but when they are about eight years old they learn that there is a strategy for the second player that guarantees at least a draw. This strategy is what is called an equilibrium in economics. If all the players in the game are rational they will play an equilibrium strategy.

In economics, the word rational means that the player can evaluate every possible move, explore its consequences to the end of the game, and choose the best move. Once kids are old enough to discover the equilibrium of noughts and crosses they quit playing, because the same thing always happens and the game is boring. One way to view this is that, for the purposes of understanding how children play noughts and crosses, rationality is a good behavioral model for eight-year-olds but not for six-year-olds.

In a more complicated game like chess, rationality is never a good behavioral model.  The problem is that chess is a much harder game, hard enough that no one can analyze all the possibilities, and the usefulness of the concept of equilibrium breaks down. In chess no one is smart enough to discover the equilibrium, and so the game never gets boring. This illustrates that whether or not rationality is a sensible model of the behavior of real people depends on the problem they have to solve. If the problem is simple, it is a good behavioral model, but if the problem is hard, it may break down.

Theories in economics nearly universally assume equilibrium from the outset. But is this always a reasonable thing to do? To get insight into this question, we study when equilibrium is a good assumption in games. We don't just study games like noughts and crosses or chess, but rather all possible games of a certain type (called normal form games).

We literally make up games at random and have two simulated players play them to see what happens.  The simulated players use strategies that do a good job of describing what real people do in psychology experiments. These strategies are simple rules of thumb, like doing what has worked well in the past or picking the move that is most likely to beat the opponent’s recent moves.

We demonstrate that the intuition about noughts and crosses versus chess holds up in general, but with a new twist. When the game is simple enough, rationality is a good behavioral model:  players easily find the equilibrium strategy and play it. When the game is more complicated, whether or not the strategies will converge to equilibrium depends on whether or not the game is competitive.

If the game is not competitive, meaning that the incentives of the players are aligned, players are likely to find the equilibrium strategy, even if the game is complicated. But when the game is competitive and complicated, they are unlikely to find the equilibrium. When this happens their strategies keep changing over time, usually chaotically, and they never settle down to the equilibrium. In these cases equilibrium is a poor behavioral model.

A key insight from the paper is that cycles in the logical structure of the game influence the convergence to equilibrium. We analyze what happens when both players are myopic, and play their best response to the last move of the other player. In some cases this results in convergence to equilibrium, where the two players settle on their best move and play it again and again forever.

However, in other cases the sequence of moves never settles down and instead follows a best reply cycle, in which the players’ moves keep changing but periodically repeat – like the movie “Groundhog Day” – over and over again. When a game has best reply cycles, convergence to equilibrium becomes less likely. Using this result we are able to derive quantitative formulas for when the players of the game will converge to equilibrium and when they won’t, and to show explicitly that in complicated and competitive games cycles are prevalent and convergence to equilibrium is unlikely.
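The intuition is easy to reproduce numerically. Below is a minimal Python sketch (my own illustration, not the code behind the paper) that draws a random two-player game and follows myopic best replies until they either settle on a pure equilibrium or revisit an earlier state, i.e. enter a best reply cycle. One can mimic more competitive games by drawing the two payoff matrices with negative correlation.

```python
import numpy as np

def best_reply_dynamics(n_actions=10, T=200, seed=0):
    """Follow myopic best replies in a random two-player game.

    Payoff matrices are filled with i.i.d. standard normal draws; each
    player in turn best-responds to the other's last move.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n_actions, n_actions))   # row player's payoffs
    B = rng.normal(size=(n_actions, n_actions))   # column player's payoffs

    i, j = rng.integers(n_actions, size=2)        # random initial moves
    path = [(int(i), int(j))]
    for _ in range(T):
        i = int(np.argmax(A[:, j]))               # row player's best reply to j
        j = int(np.argmax(B[i, :]))               # column player's best reply to i
        if (i, j) == path[-1]:                    # fixed point: a pure Nash equilibrium
            return path, "converged to equilibrium"
        if (i, j) in path:                        # state revisited: a best reply cycle
            return path + [(i, j)], "best reply cycle"
        path.append((i, j))
    return path, "undecided"

path, outcome = best_reply_dynamics(n_actions=20, seed=3)
print(outcome, "after", len(path), "joint moves")
```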

When the strategies of the players do not converge to a Nash equilibrium, they perpetually change in time. In many cases the learning trajectories do not follow a periodic cycle, but rather fluctuate around chaotically. For the learning rules we study, the players never converge to any sort of “intertemporal equilibrium”, in the sense that their expectations do not match the outcomes of the game even in a statistical sense. For the cases in which learning dynamics are highly chaotic, no player can easily forecast the other player’s strategies, making it realistic that this mismatch between expectations and outcomes persists over time.

Are these results relevant for macroeconomics? Can we expect insights that hold at the small scale of strategic interactions between two players to also be valid at much larger scales?

While our theory does not directly map to more general settings, many economic scenarios – buying and selling in financial markets, innovation strategies in competing firms, supply chain management – are complicated and competitive. This raises the possibility that some important theories in economics may be inaccurate: if the behavioral assumption of equilibrium fails, the predictions built on it are also called into question. In this case, new approaches are required that explicitly simulate the behavior of economic agents and take into account the fact that real people are not good at solving complicated problems.

The usefulness of qualitative ABMs in economics: An example

I think it is uncontroversial that, compared to standard economic theory, Agent-Based Models (ABMs) describe human behavior and market dynamics more realistically [1]. This enhanced realism gives ABMs the potential to provide more accurate quantitative forecasts, once we figure out how to use them for prediction. However, if the goal of a model is more qualitative, for example to elucidate a theoretical mechanism, is realism useful?

Many economists would say that it is not, and that too much realism may even be counterproductive. For example, to present his Nobel-winning theory of asymmetric information (the Market for Lemons), George Akerlof did not need boundedly rational agents and a detailed depiction of market exchanges. The standard setup, with rational utility-maximizing agents and market equilibrium, allowed a transparent exposition of the issue of asymmetric information. I think this is a fair point; however, which level of realism should be assumed in general qualitative models is mostly a matter of taste. If the modeler wants to highlight some economic force in a way that does not depend on people’s bounded rationality or on nitty-gritty market details, then the assumptions of standard economic theory are fine. If the modeler wants instead to explain some phenomenon as the outcome of dynamically interacting boundedly rational heterogeneous agents, an ABM may be a more natural choice. In some situations, it may be the best choice.

Our paper “Residential income segregation: A behavioral model of the housing market”, with Jean-Pierre Nadal and Annick Vignes, just published in JEBO (Journal of Economic Behavior and Organization), is in my opinion a good example. In this paper, we study the relations between income inequality, segregation and house prices, and explore which policies best deal with these issues. Most urban economists address these problems using spatial equilibrium models. These models are solved by assuming that individuals in each income category experience the same utility all over the city; the resulting prices determine segregation. In our ABM, agents behave according to fast-and-frugal heuristics, and individual interactions dynamically determine prices and segregation patterns.

First of all, our approach provides simpler narratives. For instance, to explain why the rich live in the fanciest locations of a city, spatial equilibrium models need to assume that the rich care about city amenities more than the poor do. In our ABM, this is simply explained by rich buyers bidding up prices until the poor can no longer afford to buy there.
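As a purely illustrative toy sketch in Python (my own, and much simpler than the model in the paper), one can see the mechanism at work: if every buyer bids the same share of their income and the best locations go to the highest bidders, income sorting across locations emerges without any assumption about differing tastes for amenities.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
incomes = rng.lognormal(mean=10, sigma=0.8, size=n)  # heterogeneous buyer incomes
attractiveness = rng.uniform(size=n)                 # one location per buyer, varying desirability

bids = 0.3 * incomes                                 # every buyer bids the same share of income
buyers_by_bid = np.argsort(bids)[::-1]               # highest bidders first
locations_by_quality = np.argsort(attractiveness)[::-1]

resident_income = np.empty(n)
resident_income[locations_by_quality] = incomes[buyers_by_bid]  # best locations go to the highest bids

top = attractiveness >= np.quantile(attractiveness, 0.75)
bottom = attractiveness <= np.quantile(attractiveness, 0.25)
# Ratio well above 1: the rich cluster in the most attractive areas.
print(resident_income[top].mean() / resident_income[bottom].mean())
```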

Additionally, in our ABM it is straightforward to include as much heterogeneity as we need, as we do not have to solve for equilibrium. This is really useful, for example, to study the effect of income inequality on segregation. In accordance with empirical evidence, we find that stronger inequality increases segregation. However, it also decreases average prices over the city. Indeed, with stronger income inequality a few rich buyers bid more, while most buyers bid less: the overall effect on prices is negative. Finally, we explore whether subsidies or taxes are better at mitigating income segregation. According to our ABM, subsidies are better, because they directly target the poor, increasing their purchasing power. Taxes instead hit the rich, but all the benefits go to the middle class, with no effect on the poor. Modeling heterogeneity is key.

Finally, from a technical point of view, a standard critique from economists is that the reliance on numerical simulations makes ABMs less suited to clarifying theoretical mechanisms. This is true to some extent. For example, the results in the paragraph above were obtained by simulating the ABM [2]. Nonetheless, we did solve parts of our ABM analytically, which gives insight into the causal mechanisms within the model and into its non-linearities. Maths and ABMs are not incompatible; the maths needed to solve ABMs is just a bit different from that of optimization and fixed-point analysis, which is more commonly used in economic theory.

In sum, I think that our paper is a good example of how even a qualitative ABM can be useful in economics, to provide more realistic narratives and to easily deal with heterogeneity. [3]

 

[1] Excluding some situations in which sophisticated agents interact strategically, such as Google auctions, where standard economic theory may be a more literal description of reality.

[2] To ensure full reproducibility of our results, we have put the code to generate all figures online on Zenodo, a CERN repository for open science. Sharing code is an increasingly common practice in the ABM community; hopefully it will become the norm soon.

[3] For a version of this post with the figures from the paper, you can take a look at the Twitter thread starting from this link.

Bank of England conference on big data and machine learning

I recently presented our work on big housing data at the Bank of England conference on “Modelling with Big Data and Machine Learning”. It was a super-interesting conference where I learned a lot. Now that the slides of the workshop have been uploaded online, I thought I would write a blog post to share some of what I learned. I’ll also take this chance to write about how big data relate to this blog and have the potential to influence theoretical economic models.

The first session of the conference was about nowcasting. I particularly liked the talk by Xinyuan Li, a PhD student at London Business School. In her job market paper, she asks whether Google information is useful for nowcasting even when other macroeconomic time series are available. Indeed, most papers showing that Google Trends data improve nowcasting accuracy of, say, the unemployment rate do not check whether this improvement still holds once the researcher also considers series such as payrolls, industrial production, capacity utilization, etc. Li combines macroeconomic and Google Trends time series in a state-of-the-art dynamic factor model and shows that Google Trends add little, if any, nowcasting accuracy. However, if one increases the number of associated Google Trends time series by using Google Correlate, a tool that finds the Google searches most correlated with a given series, nowcasting accuracy improves. So under some conditions Google information is indeed useful.
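For readers unfamiliar with the setup, a generic dynamic factor model of the kind used for nowcasting can be written as follows (my notation, not necessarily the exact specification in Li's paper); the Google Trends series are simply appended to the panel $x_t$:

\[
x_t = \Lambda f_t + \varepsilon_t, \qquad f_t = \Phi f_{t-1} + u_t,
\]

where $x_t$ stacks the standardized macroeconomic (and Google Trends) series, $f_t$ is a small vector of latent common factors, and the nowcast of a target variable is its projection on the estimated factors.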

The first keynote speaker was Domenico Giannone, from the New York Fed. The question in his paper is whether predictive models of economic variables should be dense or sparse. In a sparse model only a few predictors are important, while in a dense model most predictors matter. To answer this question it is not enough to estimate a LASSO model and count how many coefficients “survive”. Indeed, for LASSO to be well specified, the correct model must be sparse. The key idea of the paper is to allow for sparsity, without assuming it, and let the data decide. This is done via a “spike and slab” model, which contains two elements: a parameter q that quantifies the probability that a coefficient is non-zero, and a parameter γ that shrinks the coefficients. The same predictive power can in principle be achieved by including only a few coefficients or by keeping all coefficients but shrinking them. In a Bayesian setting, if the posterior distribution is concentrated at high values of q (and so low values of γ), it means that the model should be dense. This is what happens in the figure below, in five out of six datasets in micro, macro and finance. Yellow means a high value of the posterior, and only in the case of micro 1 is it high for q ≈ 0. So in most cases a significant fraction of predictors is useful for forecasting, leading to an illusion of sparsity.
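One common way of writing a spike-and-slab prior for each regression coefficient $\beta_j$ is (my notation, not necessarily the paper's exact parametrization):

\[
\beta_j \sim (1 - q)\,\delta_0 + q\, N(0, \gamma^2),
\]

so that $q$ is the prior probability that a coefficient is non-zero and $\gamma$ controls how strongly the non-zero coefficients are shrunk; a posterior concentrated at high $q$ and low $\gamma$ thus favors a dense but heavily shrunk model.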

The most thought-provoking speech in the panel discussion on “Opportunities and risks using big data and machine learning” was again by Giannone. What he said is best summarized in a paper that everyone interested in time series forecasting with economic big data should read. His main point is that macroeconomists have had to deal with “big data” since the birth of national accounting and business cycle measurement. The state-of-the-art nowcasting and forecasting techniques he co-developed at the New York Fed include a multitude of time series at different frequencies, such as the ones shown in the figure below. These series are highly collinear and rise and fall together, as shown in the heat map in the horizontal plane. According to Giannone, apart from a few exceptions, big data coming from the internet have little chance of improving over carefully collected data from established national statistical agencies.

On a different note, in a following Methodology session I found out about a very interesting technique: Shapley regressions. Andreas Joseph from the Bank of England talked about the analogy between Shapley values in game theory and in machine learning. In cooperative game theory Shapley values quantify how much every player contributes to the collective payoff. A recent paper advanced the idea of applying the same formalism to machine learning. Players become predictors and Shapley values quantify the contribution of each predictor. While there exist several ways to quantify the importance of predictors in linear models, Shapley values extend nicely to potentially highly non-linear models. His colleague Marcus Buckmann presented an application to financial crisis forecasting, using data back to 1870 (see figure below). Interestingly, global and domestic credit contribute a lot to forecasting, while current account and broad money are not so important. In general, Shapley regressions might help with the interpretability of machine learning “black boxes”.
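For concreteness, the Shapley value of a predictor averages its marginal contribution over all subsets of the other predictors. Here is a brute-force Python sketch (illustrative only, not the Bank of England implementation), where the "payoff" of a coalition of predictors would in practice be some measure of predictive performance:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a value function defined on sets of players."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))  # marginal contribution of i
        phi[i] = total
    return phi

# Toy example: an additive value function (e.g. hypothetical R^2 shares),
# for which the Shapley values simply recover each predictor's own share.
contributions = {"credit": 0.30, "current_account": 0.05, "broad_money": 0.02}
value = lambda S: sum(contributions[p] for p in S)
print(shapley_values(list(contributions), value))
```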

The last session I’d like to write about is the one on text analytics. Eleni Kalamara, a PhD student at King’s College, presented her work on “making text count”. The general goal of her project is to see whether text from UK newspapers proxies sentiment and uncertainty and is useful for predicting macroeconomic variables. What I found most interesting was the comparison of 13 different dictionaries that turn text into sentiment and uncertainty indicators. Given such a proliferation of metrics, it seems very useful to compare them systematically. Another interesting talk in the same session was given by Paul Soto. In his job market paper “Breaking the Word Bank”, he used Word2Vec to find words related to “uncertainty” in transcripts of banks’ conference calls. Word2Vec is a machine learning algorithm that finds a vector representation of words that captures both syntactic and semantic regularities. The figure below shows a two-dimensional projection of the vector space; words related to uncertainty are highlighted in yellow to the right. In his paper, Soto shows that banks with higher idiosyncratic uncertainty are less likely to give loans and more likely to increase their liquidity.
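As a rough illustration of the kind of query involved (not Soto's actual pipeline), with a recent version of the gensim library one can train Word2Vec on tokenized transcripts and ask for the nearest neighbours of "uncertainty":

```python
from gensim.models import Word2Vec

# Toy stand-in for a corpus of tokenized conference-call transcripts;
# in practice `sentences` would hold thousands of tokenized documents.
sentences = [
    ["loan", "growth", "remains", "uncertain", "given", "regulatory", "risk"],
    ["we", "see", "considerable", "uncertainty", "around", "credit", "demand"],
    ["liquidity", "buffers", "were", "increased", "amid", "an", "unclear", "outlook"],
] * 200  # repeat so the toy model has something to fit

# Skip-gram Word2Vec; hyperparameters here are arbitrary, for illustration only.
model = Word2Vec(sentences, vector_size=50, window=5, min_count=2, sg=1, epochs=20)
print(model.wv.most_similar("uncertainty", topn=10))  # nearest neighbours in embedding space
```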

There were a lot of other great talks. For example, Thomas Renault from the Sorbonne showed how to detect financial market manipulation – in particular, pump and dump schemes – from Twitter. Luca Onorante from the European Central Bank demonstrated how to select the most relevant Google Trends in a Bayesian Model Averaging setting. Emanuele Ciani from the Bank of Italy built on a method first introduced by Jon Kleinberg to predict which agents would benefit most from policies, nicely combining ideas from prediction and causal inference. For the many other interesting talks, please check the program or look at the slides.

So, what do big data have to do with complexity economics? This conference was purely about statistical models. My sense is that economic theorists are not responding to big data as much as empirical economists are. True, heterogeneous agent models that use micro evidence to discriminate between different macro models producing the same macro outcomes are increasingly popular, but I don’t think they quite exploit the power of big data. On the other hand, large-scale “microsimulation” Agent-Based Models (ABMs) that are directly fed with data and solved forward without imposing equilibrium constraints seem more promising for exploiting the opportunities of big data. A nice example of this is the ongoing work by Sebastian Poledna and coauthors on “Economic forecasting with an agent-based model”, which exploits comprehensive datasets for the Austrian economy. I plan to work on prediction with ABMs too during my postdoc funded by the James S. McDonnell Foundation – better out-of-sample forecasting performance would be a compelling motivation for the enhanced realism of ABMs, which comes at the cost of other features that are considered important in mainstream theoretical models.

What is equilibrium in economics and when is it (not) useful

Equilibrium is the most widespread assumption across all subfields of economic theory. It means different things in different subfields, but all equilibrium concepts have a common meaning and purpose, with the same pros and cons. In this post I will argue that the different way in which equilibrium is treated is the distinctive feature of complexity economics, narrowly defined. (This post is mostly methodological. In this blog I will alternate actual research and methodology, always pointing to concrete examples when talking about methodology.)

What equilibrium means in economics

Before talking about what equilibrium is, it is useful to say what it is not. First, equilibrium does not necessarily imply stationarity. Indeed, many equilibrium concepts are dynamic and so for example it is possible to have chaotic equilibria. Conversely, stationary states need not be equilibria. Second, equilibrium in economics has nothing to do with statistically balanced flows, as used in many natural sciences. Third, equilibrium is independent of rationality, if rationality just means choosing the optimal action given available information (I will come back to this).

Equilibrium in economics can generally be thought of as a fixed point in function space, in which beliefs, planned actions and outcomes are mutually consistent. Let me elaborate on this. Unlike particles, economic agents can think, and so have beliefs about the states of the economy. Behavioral rules, which can be fully or boundedly rational, map these beliefs into planned actions. Finally, the outcomes resulting from the combined actions of all agents may let each agent realize their planned actions, or may force some agents to choose an action that was not planned. Equilibrium outcomes are such that agents – at least on average – always choose the action that was planned given their beliefs and behavioral rules. In other words, beliefs and planned actions match outcomes.
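Schematically (my notation, just to fix ideas): let $b$ denote beliefs, let $a = f(b)$ be the planned actions produced by the behavioral rule $f$, and let $o = G(a)$ be the outcomes generated by the combined actions of all agents. Equilibrium then amounts to a consistency, or fixed point, condition

\[
b^* = \mathbb{E}\big[\, G\!\big(f(b^*)\big) \,\big],
\]

where the expectation allows beliefs to be consistent with outcomes only on average.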

A few examples should clarify this concept. Perhaps the most famous equilibrium is the Walrasian one. This is usually described as demand=supply, but there is more to it. In a market with one or multiple goods, agents have beliefs about the prices of the goods, and through some behavioral rule these beliefs determine the quantities that agents try to buy or sell (planned actions). Aggregating these quantities determines outcomes – the difference between demand and supply for each good. If there is excess demand or excess supply, some agents buy or sell more (or less) than they planned. In a Walrasian equilibrium, instead, agents have beliefs about prices that make them buy or sell quantities that “clear” the market, i.e. demand=supply. In this way, all agents realize their plans.
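In the standard textbook notation (not specific to any particular model), if $D(p)$ and $S(p)$ denote aggregate demand and supply at the price vector $p$, a Walrasian equilibrium price $p^*$ satisfies

\[
z(p^*) \equiv D(p^*) - S(p^*) = 0,
\]

so that at $p^*$ every agent can carry out exactly the trades they planned.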

When strategic interactions are important, economists use game theory to model interdependent choices. In game theory players have beliefs about what their opponents will do and plan actions according to these beliefs and some behavioral rule. For example, if players are fully rational their behavioral rule is to select the action that maximizes their payoff given their beliefs. In a Nash equilibrium all players’ actions and beliefs are mutually consistent, so no agent can improve her payoff by switching to another action. But agents could be boundedly rational, also playing, with some smaller probability, actions that do not maximize their payoff. In this case it is possible, for example, to define a Quantal Response Equilibrium, in which again beliefs and planned actions match outcomes.
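In standard notation (textbook definitions, not specific to any paper): a Nash equilibrium requires each player's action to be a best response to the others' actions, while a logit Quantal Response Equilibrium only requires that better actions be played with higher probability,

\[
u_i(a_i^*, a_{-i}^*) \ge u_i(a_i, a_{-i}^*) \ \ \text{for all } a_i,
\qquad
\sigma_i(a_i) = \frac{\exp\!\big(\lambda\, \bar u_i(a_i, \sigma_{-i})\big)}{\sum_{a_i'} \exp\!\big(\lambda\, \bar u_i(a_i', \sigma_{-i})\big)},
\]

where $\bar u_i(a_i, \sigma_{-i})$ is the expected payoff of action $a_i$ against the opponents' mixed strategies and $\lambda$ measures how close players are to full rationality (Nash behavior is recovered as $\lambda \to \infty$).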

All equilibrium concepts above are static, but it is straightforward to include a temporal dimension. (Beliefs over time are called expectations.) For example, in many macroeconomic models agents are forward-looking, e.g. they plan how much to consume in each future period of their life. These consumption decisions depend on future interest rates: in periods when interest rates are high, agents may prefer saving to consuming, so as to earn higher interest and afford higher consumption in the future. In a rational expectations equilibrium [1], the expectations of future interest rates are on average correct, so that again beliefs and planned actions (consumption decisions) match outcomes (interest rates). The assumption of rational expectations places no restriction on macroeconomic dynamics: the dynamics may reach a stationary state, but may also follow limit cycles or chaos.
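A textbook example (not tied to any specific model) is the consumption Euler equation

\[
u'(c_t) = \beta\, \mathbb{E}_t\!\left[ (1 + r_{t+1})\, u'(c_{t+1}) \right],
\]

where rational expectations means that the operator $\mathbb{E}_t$ agents use when planning coincides with the distribution of $r_{t+1}$ that the model itself generates.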

Many more equilibrium concepts have been proposed in economics, and new ones keep being introduced, but all equilibria share the same rationale. For example, search and matching models are used to go beyond the Walrasian equilibrium concept. When applied to the labor market, these models assume that workers and firms engage in a costly search for a good match. This potentially difficult search process may explain involuntary unemployment, which could not be explained if labor demand=labor supply, as in Walrasian models. Yet, the equilibrium of search and matching models can still be viewed in the same way as in the examples above. Workers have beliefs about future vacancy rates, which determine how difficult it is to find a job, and firms have beliefs about future unemployment rates, which determine how difficult it is to fill a vacancy. These beliefs determine the lowest wage to accept or offer, or how long to search (planned actions), typically following a rational behavioral rule. Finally, the combined decisions of workers and firms lead to outcomes, namely unemployment and vacancy rates. Again, in equilibrium beliefs, planned actions and outcomes are mutually consistent.

Pros and cons of equilibrium

If equilibrium has been a key concept in economic theory for more than a century, there must be some good reasons. The first reason, I think, is that modeling out-of-equilibrium behavior is harder than modeling equilibrium behavior. What is a realistic way to model what happens when beliefs, planned actions and outcomes are systematically inconsistent? (I give a possible answer at the end.) Equilibrium is then an incredibly useful simplification that makes it possible to abstract away from this problem. Economic theorists are often interested in adding more and more realistic features of how the economy works to their models, and by assuming equilibrium they keep their models tractable. In addition, contemporary economics is becoming more and more empirical. Many applied economists are happy to just build a model that accounts for some property of the data, and building models with equilibrium is a transparent way to highlight the relevant theoretical mechanisms.

A second reason for the success of equilibrium is that time averages of beliefs, planned actions and outcomes may approximate equilibrium, which would then be a useful point prediction. An example that comes from my research is the game of Matching Pennies. If this game is played repeatedly, under some learning algorithms the players will never converge to a Nash equilibrium. However, it is easy to show that time averaged play is close to equilibrium behavior [2]. Something similar has been observed experimentally.
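As a minimal illustration (using exponential-weights learning for convenience, which is not necessarily one of the learning algorithms studied in the paper), one can simulate repeated Matching Pennies and check that, although play keeps cycling, the time-averaged strategies approach the mixed equilibrium of one half on each action:

```python
import numpy as np

# Matching Pennies payoffs for the row player (the column player gets the negative).
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

def exp_weights_play(T=50000):
    """Repeated Matching Pennies under exponential-weights learning."""
    w_row = np.array([1.0, 3.0])      # asymmetric starting weights, away from equilibrium
    w_col = np.array([2.0, 1.0])
    avg_row = np.zeros(2)
    avg_col = np.zeros(2)
    for t in range(T):
        eta = 1.0 / np.sqrt(t + 1)    # decreasing step size
        p_row = w_row / w_row.sum()
        p_col = w_col / w_col.sum()
        avg_row += p_row
        avg_col += p_col
        w_row *= np.exp(eta * (A @ p_col))      # reward actions that do well against the opponent's mix
        w_col *= np.exp(eta * (-A.T @ p_row))
        w_row /= w_row.sum()                    # renormalize for numerical stability
        w_col /= w_col.sum()
    return avg_row / T, avg_col / T

print(exp_weights_play())  # both time averages end up close to the mixed equilibrium (0.5, 0.5)
```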

A third reason is that by assuming equilibrium many variables are determined endogenously, that is, within the model. This makes it possible to consider non-trivial interdependencies, which economists call general equilibrium effects. An example comes from a nice paper by Cravino and Levchenko I recently read. In this paper the authors build an equilibrium model to investigate how much multinational corporate control affects international business cycle transmission. Assuming that parent companies are hit by a “shock” in one country, the authors look at aggregate effects on other countries where affiliate companies operate. Interestingly, the effect of the shocks is amplified if workers in the other countries are less willing to change how many hours they work. This general equilibrium effect is due to the interconnections between the goods and labor markets, which are captured by assuming equilibrium.

Despite the advantages of equilibrium assumptions, I think there are two main shortcomings. The first is that, in my opinion, little of what happens in the real world is precisely described by equilibrium. If one is interested in quantitative models, forcing the model to be in equilibrium is a strong mis-specification, even if some aspects of reality are reasonably approximated by equilibrium. Of course many equilibrium models are shown to fit the data, but most analyses are based on in-sample fitting and so could be prone to overfitting.

The second shortcoming is more practical. In some cases solving for equilibrium is technically challenging, and this prevents including some realistic assumptions and fully embracing heterogeneity. In the words of Kaplan and Violante in the Journal of Economic Perspectives: “Macroeconomics is about general equilibrium analysis. Dealing with distributions while at the same time respecting the aggregate consistency dictated by equilibrium conditions can be extremely challenging.” Kaplan and Violante propose macroeconomic models named HANK (Heterogeneous Agent New Keynesian), but the way they deal with heterogeneity is extremely stylized. In addition, I think that one of the main reasons why insights from behavioral economics are not routinely added to economic models – in macroeconomics but also in other fields – is that it is technically harder to solve for equilibrium if one departs from full rationality. However, heterogeneity and bounded rationality are key to building serious quantitative models (real people are heterogeneous and boundedly rational).

In sum, I think that assuming equilibrium can be really useful if models are used for qualitative reasoning, but it is an obstacle for quantitative analyses.

Complexity economics and equilibrium

My favorite narrow definition of complexity economics is making economic models that are not solved by assuming equilibrium. Rather, the modeler postulates the behavioral rules that each agent will follow and then just lets the system evolve over time. This is what happens in Agent-Based Models (ABMs), often represented as computer programs, or in Heterogeneous Agent Models (HAMs), typically represented as dynamical systems. In either case, beliefs and planned actions need not match outcomes. In some cases they might, perhaps after an initial transient, but this is not a primary concern of the modeler. I think that assuming equilibrium is a strong top-down constraint imposed on the system. ABMs and HAMs let outcomes emerge in a bottom-up way without imposing equilibrium constraints, which I think is more in line with a complex systems view of the economy.

Is this useful? I think that the main advantages mirror the shortcomings of equilibrium models. Because one does not have to solve for equilibrium, it is very easy to include any form of heterogeneity and bounded rationality. If one also believes that out-of-equilibrium behavior better describes real economic agents, ABMs and HAMs seem more promising than equilibrium models for quantitative analyses. With the increasing availability of large datasets, we may be able to show this explicitly in the upcoming years. Another advantage is that not assuming equilibrium may lead to more natural descriptions of some problems: for an example, see the housing market ABM in my paper with Jean-Pierre Nadal and Annick Vignes.

The main problems of not assuming equilibrium also mirror the main advantages of doing so. First, being forced to model out-of-equilibrium behavior in each submodule of the model makes ABMs computationally very expensive. Second, it is easy to overlook interdependencies and to take too many variables as exogenous. Third, if beliefs, planned actions and outcomes are systematically inconsistent this may lead to mechanistic behavior that is as unrealistic as equilibrium. For example, in this very nice paper by Gualdi et al., for some parameter settings the ABM economy undergoes a sequence of booms and busts determined by consumers and firms systematically failing to coordinate on equilibrium prices (see first paragraph of Section 5.2). While this may be a realistic description of some economic crises, it seems unlikely that economic agents would systematically fail to recognize the discrepancy between beliefs and outcomes.

I think that the problem of what happens when beliefs and planned actions systematically do not match outcomes can be tackled in ABMs by modeling learning in a sensible way, perhaps including models of agents learning how to learn. In this way, agents may systematically be wrong but in many different ways, and so be unable to find the equilibrium. This view, I think, best describes economic reality.

In sum, complexity economics models are not solved by assuming equilibrium, and this also has its pros and cons. We will see over the upcoming years if the pros outweigh the cons.

_________________________________________

I would like to thank everyone for your interest in this blog: my first post received way more online attention than I expected. Hope you will find my posts interesting! And please give me feedback — I wrote this post with the hope that a natural scientist with just a vague knowledge of economics could understand the basic idea; if you are such a scientist, let me know if I succeeded!

_________________________________________

[1] I find the name “rational expectations” very misleading. Rational expectations equilibria have nothing to do with rationality, rather with the assumption that expectations match outcomes, which does not necessarily imply rationality.

[2] It is not always true that time averages correspond to equilibrium behavior. For example, if the players learn using fictitious play this is not true. And one always has to check ergodicity when using time averages.

Complexity Economics

Welcome to my research blog! I have always found reading other people’s research blogs tremendously useful, as blogs give unique perspectives on aspects of research that do not show up in papers. This blog is my perspective as a junior scientist on the research topics I am passionate about – economics and complex systems – as well as on general topics in science and on careers in research.

My blog is about complexity economics, broadly defined as the application of complex systems methods in economics. In complex systems the whole is more than the sum of its parts, and complex systems scientists investigate how collective behavior emerges from interactive individual components. Practically speaking, complex systems science is a collection of computational and mathematical methods that are applied across the natural and social sciences.  This leads to the broad characterization of complexity economics as a theoretical and empirical focus in economics on networks, non-linear dynamics, adaptation, learning and heterogeneity.

I favor a narrower definition that applies to economic theory. According to this definition, complexity economics is economic modeling without equilibrium. I will write a separate blog post about what equilibrium means in economics, why economists make equilibrium assumptions and what building non-equilibrium models means. Here I only want to stress that equilibrium is a top-down constraint imposed on the economic system. A complex systems view of economics would rather suggest to take a bottom-up approach, for example using agent-based models. This is what makes the (narrowly-defined) complexity economics approach non-mainstream in economics, and perhaps “heterodox”.

I aim my blog at both complex systems scientists – mathematicians, physicists, biologists, computer scientists, etc. – and economists. To this end, I will try to avoid jargon and explain basic things that may be obvious in one field but not clear in others. This applies, for example, to my own research. Every time I publish a paper I plan to write a blog post that describes the paper’s contribution in general terms. But my goal is also to discuss other people’s research, and general topics across economics and science. I would also like to give my opinion as a PhD student and (in the future) postdoc about careers in research, e.g. whether interdisciplinarity pays off or whether the economics job market chokes off risky and innovative research.

I will not talk about politics. While I am deeply worried about recent trends across the world, I want to keep this blog only about science, and will only discuss political issues in a positive way (jargon alert: see link). I will also probably not talk about technical aspects of research such as coding and visualization. While this is my day-to-day job and I like having good code and good figures, I am neither a professional developer nor a visual designer, so other people would do a better job at talking about this.

I hope to write a blog post on average every one or two months. There are excellent blogs in economics and in complex systems; I hope this blog will contribute to linking the two fields – making economists more aware of what it practically means to take a complex systems approach to economics, and making interdisciplinary scientists more aware of what economists are really doing and why they do it the way they do. This blog will not be the usual critique of economics from a natural sciences perspective, but will rather illustrate mainstream economists’ point of view while promoting an alternative approach.