Lars Hansen is an expert on the future. Or, more precisely, what we don’t know about the future. The University of Chicago economist is famous for his work on uncertainty in the macroeconomy and financial markets (and the links between them). He won the Nobel Prize in 2013 for his work in this area.
His insights are particularly relevant now, as the world feels more uncertain than ever: technology is changing work as we know it, the fractious political climate adds a layer of unpredictability to everything, and despite booming economic indicators, many people feel left behind. Economists use models—mathematical descriptions that attempt to quantify what might happen in the future—to offer guidance on what to expect and how to plan. They are the road maps to navigating an uncertain future.
Hansen, as his name reveals, has deep Scandinavian roots. His great-great-grandparents settled in Utah, where his family lived for generations. Hansen says he was not a great high school student, but once at college at Utah State he applied himself and excelled. He went on to earn his PhD in economics at the University of Minnesota, a program notorious for using arcane mathematical methods.
Over the course of his career Hansen developed economic models that are widely used by economists and financial engineers, and his groundbreaking work focuses on the role that uncertainty plays in models. It is very technical, which is one reason why his work is less known to non-economists compared with the other members of his class of Nobel economic laureates, Robert Shiller and Eugene Fama. But to experts who try to make sense of the economy and financial markets, the tools Hansen has developed are invaluable. As the increasingly sprawling and complex financial world appears to make less sense, Hansen’s work helps us understand how to think about the future.
An introvert with a childhood speech impediment he worked to overcome, Hansen never relished the spotlight. But winning the Nobel prize changed his life. He says on his website, “Because of the prize, I suddenly began receiving more attention. I tell people, I’m the same person I was before the prize. But somehow you’re treated differently, as if your IQ just jumped by 40 points. So that part of the experience was interesting along some dimensions. I go to public events now and people want to talk to me, but before, they were happy to ignore me. And that’s okay. I think it’s an opportunity, too.”
Following the global financial crisis, many are skeptical that economic models, intended to offer clarity, provide much value. Some think they may even do harm. How does Hansen feel about this? Quartz asked him about how to make sense of the uncertainty around us, and the limits of what we can measure and, ultimately, know.
Quartz: “Uncertainty” is a big, broad concept. How do you break it down?
Hansen: From a modeling perspective, I find it advantageous to distinguish three components of uncertainty.
First, uncertainty within a model, which I and others call “risk.” My favorite example of a so-called “risk model” is rolling dice or flipping coins: we don’t know outcomes but we know probabilities. This component is the one we typically feature in economic analyses.
Second, there is uncertainty about which among a collection of possible models is the best one, which I call “ambiguity.” Take the idea of secular stagnation. Some models say we may be in a situation of permanent secular stagnation, whereby we had these wonderful economic growth rates in the past, but going forward we just can’t expect that to occur. Others might argue that, yes, we’ve had a short-term downturn, and obviously the financial crisis was a bad experience, although we’re already seeing a recovery out of it. These represent two different viewpoints—or models—but we’re not sure what’s the right one. Maybe we’re in secular stagnation or maybe we’re not. In some sense both are wrong, but you have to assess that.
Third, there is uncertainty about how models might be flawed or misspecified. This component is the most difficult to wrestle with, but may also be the most important one when understanding decision making. All models, by their nature, are wrong because they are necessarily simplifications and abstractions. But some provide valuable insights and guidance for making smart decisions. Assessing how models might be wrong is an important part of uncertainty analysis.
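The three components can be made concrete with a toy coin-flipping sketch. This is purely illustrative and not drawn from Hansen's work; the coins, bias values, and function names are invented for the example.

```python
import random

random.seed(0)

# Risk: a single known model with known probabilities (a fair coin).
# We don't know the next outcome, but we know the odds exactly.
def fair_coin():
    return random.random() < 0.5

# Ambiguity: we entertain several candidate models (coins with
# different biases) and are unsure which one generates the data.
candidate_biases = [0.4, 0.5, 0.6]

# Misspecification: the true process may lie outside every candidate
# model -- here, a coin whose bias drifts slowly over time, which
# none of the fixed-bias candidates can capture.
def drifting_coin(t):
    return random.random() < 0.5 + 0.002 * t

# Under pure risk we can compute expectations exactly; under
# ambiguity we can only bound them across the candidate models.
expected_heads_range = (min(candidate_biases), max(candidate_biases))
print(expected_heads_range)  # (0.4, 0.6)
```

The point of the sketch: risk lives inside one model, ambiguity lives across a set of models, and misspecification lives outside the set entirely.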
Do these three components impact markets in different ways?
Usually when we talk about financial markets, we use the term “risk.” So empirical finance people go out and measure risk premia [the price financial markets put on risk], the press talks about markets being risk-averse or the like, and all of this uses the language of the first component of uncertainty.
In a more complex environment, investors have to struggle with forming the right viewpoint going forward. Is now a good time to make investments that will have productive payouts over the next 10 or 15 years? They have to speculate about what’s going on with the macroeconomy.
Can you give me an example?
One example I like to use is to suppose we are thinking about economic growth. We are uncertain about technological progress or secular stagnation or the like. And you ask yourself, am I in a good or bad growth state? One model tells you growth is persistent and the other tells you it is not very persistent. If I am in bad times, what I fear is persistence. If I am in good times, what I fear is a lack of persistence, because I want the good times to carry on.
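The persistence example can be sketched with two toy forecasting models that differ only in how strongly current growth carries forward. This is an illustrative AR(1)-style simplification, not one of Hansen's actual models; `rho`, `trend`, and `forecast` are invented names.

```python
def forecast(current_growth, rho, trend=0.02, horizon=10):
    """Expected growth path when growth reverts toward a long-run
    trend with persistence rho (a toy AR(1)-style sketch)."""
    path = []
    g = current_growth
    for _ in range(horizon):
        # Each period, growth is a mix of last period's growth
        # (weight rho) and the long-run trend (weight 1 - rho).
        g = rho * g + (1 - rho) * trend
        path.append(round(g, 4))
    return path

bad_times = -0.01  # current growth well below trend

persistent = forecast(bad_times, rho=0.9)  # bad state lingers
transient = forecast(bad_times, rho=0.2)   # quick reversion to trend

print(persistent[0])  # -0.007: still negative next period
print(transient[0])   # 0.014: nearly back to trend next period
```

If the economy is in a bad state, the high-persistence model is the frightening one; in a good state, the low-persistence model is. Not knowing which model is right is exactly the "ambiguity" component described above.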
Or, if I am someone deciding whether to go to graduate school, I am picking something I have a passion for, but I also want employment opportunities in the future. Part of what I wrestle with is how to make guesses about job prospects.
And once I change the story so that people aren’t sure what the right model is (or more generally, the right viewpoint or perspective), and their perception of models changes over time, it adds a new dynamic to how markets behave.
There is so much uncertainty about the future economy and yet not a lot of volatility in markets. Can the way we measure risk explain why?
The question is over what horizons market measures of volatility capture uncertainty. A lot of the important uncertainties don’t get resolved very quickly, and so as a consequence you don’t see big responses in market volatility. If you’re looking at things that are going to play out over decades, I am not sure how quickly that will translate into the VIX [a measure of risk a few months ahead]; financial measures of volatility only get a piece of the uncertainty we are talking about. They are interesting barometers but there are big pieces they will not pick up on.
Do you think the financial crisis made people less confident in their models?
Yes, I hope it did! It should have made macroeconomists less confident, too. I hope we all learned a lesson.
How can we manage the fragility that uncertainty creates?
“Manage” is a strong term. I wish I knew. Using economic models in naïve ways was a mistake going into the financial crisis for both macroeconomists and financial analysts. Hopefully we’ve learned that lesson. We must try to think more about the broader consequences of uncertainty and how to be more sensible about it.
It is also important to have government policies that don’t unnecessarily add to uncertainty or exacerbate it. It can be counterproductive to leave the private sector speculating about what the government will do next; making that part of the environment more predictable helps.
Many people critique economic models for being unrealistic, especially about how they model people’s behavior. Does this mean economic models need a serious overhaul?
I am not advocating chucking out and throwing away models wholesale. I want to replace models with improved models that offer powerful and tractable insights. A lot of earlier-generation macro models by construction had uncertainty play a minor role. I believe it is important to push away from that notion and make uncertainty a first-order concern.
A lot of the criticism of models I’ve seen is too superficial. Of course, a model is a simplification or abstraction, and not a perfect description of reality. When are those mistakes consequential to the question at hand? When do those mistakes impact the quantitative outcome? Models are always wrong, but they guide our thinking. How do we use them in a smart way?
The hope is that big data and machine learning will offer even more accurate risk measurements, for both companies and individuals, who before couldn’t quantify risks precisely. Could this mean we can expect less uncertainty in markets in the future?
Data seldom, if ever, speaks for itself. To use data effectively requires valid and revealing conceptual frameworks for understanding and interpreting patterns in data. Uncertainty as confronted by decision makers necessarily includes challenges for how best to use data in insightful ways.
When economists do policy, they like to do counterfactual analysis: they study what would happen hypothetically if this policy were in place rather than this other one. If you are looking at policy, especially in this dynamic context, you push away from where the data is. You have to do that through economic analysis. You can’t let the data speak along all dimensions. You need an underlying intellectual framework that helps you do this policy or counterfactual analysis.
Acknowledging uncertainty is hard for many people, especially policymakers. Lately it seems like, outside of academic economics, adherence to one kind of economic model to solve every problem is a litmus test of ideological purity. Are we doomed forever to see leaders ignore complexity and feign certainty for political gains?
This is a terrific question that I wish I had a great answer to. I do worry when political leaders want fully confident narratives to justify the policies they implement. This seeming confidence entails a pretense of knowledge that is not supported by scientific evidence.
One could argue that if the policy to be implemented is a wise one, why not defend it with an ex-post narrative, as a version of Plato’s noble falsehood, to gain public confidence in it? This, of course, begs the question of how to determine the wise course of action to begin with. Moreover, the excessive confidence in our projected understanding over the longer term erodes public trust in the messages from policymakers and from their advisors.
The best hope is for an informed public to accept the fact that although our knowledge base may be limited on some important matters of concern, we may still be able to construct sensible policies that acknowledge the underlying uncertainty in our understanding. The resulting policy analyses seek courses of action that remain prudent over the potential models or perspectives that have conceptual and empirical credibility.
Complexity also comes into play. It may well be that even though the social and economic problems we confront are complex, the wise course of action is simple and avoids adding to the uncertainties confronted by the private sector.
More recently, you have been thinking about another source of long-term uncertainty: climate change. How can your work help us understand the problem better, and what we can do about it?
Potential climate change both impacts and is impacted by economic activity. While there is a substantial body of evidence documenting these interactions, there remain substantial limits to our quantitative knowledge of how this interaction will play out over time. My research is in its early stages, but I am optimistic that we can progress as we wrestle with climate uncertainty.
One naïve and unproductive reaction to our work is a fear that a lack of precise knowledge will imply inaction until we have a more complete and precise quantification. This is not an implication of decision theory under uncertainty. The possibility of very bad outcomes can suffice to justify addressing the problem now, when it may be less costly to do so. My hope is that a more exposed and realistic treatment of uncertainty can contribute to productive policymaking.