Economic models are broken, and economists have wildly different ideas about how to fix them

Addressing the problems.
Image: Reuters/Shannon Stapleton

Ten years after the global financial crisis, economists are still puzzling over how they (mostly) failed to predict such a massive crash. Given all the data and experience at their disposal, how did they miss something so consequential?

In a new paper (pdf) entitled “Where Modern Macroeconomics Went Wrong,” Nobel laureate Joseph Stiglitz of Columbia University lays much of the blame on the models used to understand the economy. These Dynamic Stochastic General Equilibrium (DSGE) models have become increasingly popular among macroeconomists, central bankers, and other analysts.

According to Stiglitz, for an economic model to be useful it should be able to provide insights into the common features of economic downturns and help inform policy responses to them. Better still, the models should be able to predict a crisis. The standard DSGE model is a “poor basis” for policy decisions, he writes, and more than a few tweaks are needed to improve it. 

“The core of the failings of the DSGE model can be traced to the attempt, decades ago, to reconcile macroeconomics with micro-economics,” writes Stiglitz. Here, Stiglitz challenges one of the primary appeals of DSGE models: their “micro foundations.” This means that all models are built up from the decisions of an individual or “representative agent.” These models generally assume that individuals act to maximize their utility “over an infinite lifetime without borrowing constraints,” he writes.

As a result, the models don’t typically capture how people, companies, and markets actually respond to changes in their circumstances or incentives. This critique has been made before (pdf), and Stiglitz describes three main problems.

First, the models haven’t been good enough at predicting economic trends, particularly around crises, because they are built to detect short-term fluctuations rather than large shocks. Second, they don’t sufficiently incorporate the significant influence of the finance industry, because the models are better at incorporating information about individuals than about institutions. Third, DSGE models assume that shocks are caused by external factors, and don’t account for the fact that some crises arise from within the economic system itself.

Ultimately, Stiglitz suggests that models with micro-economic foundations should be replaced with simpler alternatives, like the ones Robert Shiller used before 2007 to warn about the US housing bubble.

But instead of scrapping efforts to use micro insights to model the macro economy, researchers at the Bank of England suggest doubling down on a data-heavy approach. In a recent paper (pdf), they also acknowledge the problems with modern economic models. They say that machine learning could address some of these shortfalls by taking advantage of the increasingly large amounts of “micro and high-frequency data” available to central banks and regulators, such as transactions between financial institutions and detailed household consumption patterns.

The authors of the central bank’s paper explain that macroeconomic modelling often takes a “deductive” approach: starting from a set of theoretical assumptions and deriving the most general conclusions possible. Machine learning could instead encourage an “inductive” approach, in which economists analyze vast amounts of data to detect and investigate underlying patterns. For example, artificial neural networks may find unknown interactions between different variables, which could then be used to build more accurate economic models. Economists could make a “consistent transition between a micro and macro view of the economy” this way, the researchers write.
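To make the deductive-versus-inductive distinction concrete, here is a minimal, purely illustrative sketch (not from either paper, and using synthetic data rather than a real neural network): a model restricted to an assumed additive form misses a relationship driven by an interaction between two variables, while a model that includes a candidate interaction term discovered in the data fits it almost perfectly. The variable names and the interaction itself are invented for the example.

```python
# Illustrative sketch only: synthetic data where the true relationship is an
# interaction (y = x1 * x2) that no purely additive model can capture.
import random

random.seed(0)

# 300 synthetic "observations" of two economic-style variables.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(300)]
ys = [x1 * x2 for x1, x2 in data]

def fit_and_score(features, ys, steps=2000, lr=0.1):
    """Least-squares fit by plain gradient descent; returns mean squared error."""
    n, k = len(features), len(features[0])
    w = [0.0] * k
    for _ in range(steps):
        grad = [0.0] * k
        for row, y in zip(features, ys):
            err = sum(wi * xi for wi, xi in zip(w, row)) - y
            for j in range(k):
                grad[j] += 2 * err * row[j] / n
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return sum((sum(wi * xi for wi, xi in zip(w, row)) - y) ** 2
               for row, y in zip(features, ys)) / n

# "Deductive" model: an assumed additive form, intercept + x1 + x2.
additive = [[1.0, x1, x2] for x1, x2 in data]
# "Inductive" model: also tries an interaction term suggested by the data.
interacting = [[1.0, x1, x2, x1 * x2] for x1, x2 in data]

print(fit_and_score(additive, ys))     # large error: additive form misses the pattern
print(fit_and_score(interacting, ys))  # near zero: the interaction term explains it
```

In practice the point of the neural-network approach the researchers describe is that such interactions would not have to be guessed and added by hand; the model would surface them from the data itself.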

However, there are still large barriers to machine learning really changing economic models. The Bank of England’s researchers point to a few. One is that most AI algorithms don’t explain how particular outputs are generated from particular inputs, making their results difficult to interpret—this is why some refer to algorithmic systems as “black boxes.” Another is that many machine-learning techniques don’t account for the flow of time, which could lead to either too much or too little focus on certain types of information. And algorithms are still created by humans, so their code is susceptible to our mistakes and biases.

While Stiglitz recommends taking a step back from micro-economic models, and researchers at the Bank of England suggest a deeper dive into detailed data, it could be that a combination of the two will produce more effective economic models than the ones that have performed so poorly in recent years.