If you have a few thousand dollars to spare and have tired of investing in index funds, you might try something a bit more speculative: the futures market. Here, investors agree to buy a commodity (say, gold or coffee) at a future date at a pre-arranged price. If the price goes up between the initial investment and that date, the investor keeps the profits; if it goes down, the investor eats the loss, which, thanks to leverage, can exceed the money they started with.
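That payoff can be sketched in a few lines of Python. This is a toy illustration with made-up prices, and it ignores the margin requirements, contract sizes, and fees that real futures trading involves:

```python
def futures_payoff(agreed_price, price_at_delivery, quantity):
    """Profit (or loss, if negative) on a long futures position:
    you committed to buy `quantity` units at `agreed_price`,
    so you gain or lose the difference at delivery."""
    return (price_at_delivery - agreed_price) * quantity

# Hypothetical example: agree today to buy 10 oz of gold at $1,900/oz.
print(futures_payoff(1900, 1950, 10))  # price rose to $1,950: profit of 500
print(futures_payoff(1900, 1850, 10))  # price fell to $1,850: loss of -500
```

The whole bet rides on `price_at_delivery`, the one number nobody knows in advance, which is exactly where the trouble starts.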
Simple, right? Except for one big factor: Humans are terrible at predicting the future.
Futures markets exist (and have existed for millennia) because they help producers gauge the price of their commodities and stabilize markets that could otherwise be turned upside down by random events like weather or the discovery of a new vein of a desirable mineral.
But for investors, putting money into futures markets can be pretty risky. It turns out that individuals simply aren’t that good at looking at how much a commodity costs and anticipating how that could change (though as more people invest in futures markets, those markets, and the predictions they embody, become more stable and reliable).
Today, investors in futures markets have a little boost from tools and indices that inform their decisions with the help of extensive historical data. But uncertainty about the future, and our inability to reliably anticipate what comes next, permeates the financial markets. Do it well and you can get rich on stocks or prevent a recession. Do it poorly and you could go broke, or throw the world into financial turmoil, depending on how much power you have and just how wrong you are.
The problem, of course, extends to realms beyond finance. And unless you are somehow clairvoyant, you might find yourself grappling with your own psychology as you try to envision a future version of the world. Here are a few of the major reasons why we find predicting the future so darn difficult.
Biases? Goodness, do we have them. We’re overly optimistic that the things we want to happen will actually happen. We erroneously base our predictions on our past experiences. When we get new information, we often assume it fits into what we already believe to be true. We notice sudden changes but miss gradual ones, especially those that unfold over generations. We think bad things will happen, just not to us; in fact, we don’t care much about them if they won’t.
“We think that however we feel and whatever we believe is going to be how we feel and what we believe forever,” says Susan Weinschenk, the chief behavioral scientist at the Team W, a consulting and training company. “We don’t take into account how, as we get older or things are different, we change our minds.”
Computers are very good at taking lots of information and giving us the takeaway or gist of it. Humans, not so much.
Consider the ways we predict the weather. According to an article from Slate, in order to determine whether your town will receive snowfall (and how much), meteorologists have to take into account: the temperature on the ground; the temperature in the atmosphere; the “snow-liquid ratio” (how much snow a given amount of liquid precipitation will produce); and how the movements of high- and low-pressure systems will combine with other atmospheric forces like the lake effect. It’s so complex that it seems beyond the ability of any human to fully understand.
Add in the factor of time, and we somehow get even worse, in weather but also in general. While meteorologists’ five-day forecasts are about 90% accurate, accuracy drops to around 50% for a 10-day forecast.
“Humans are very bad at understanding statistical trends and long-term changes,” political psychologist Conor Seyle told BBC Future. Even so-called experts aren’t much better—economists in particular are notoriously terrible at predicting a recession. (In fact, some argue that being an “expert” in something makes you worse at predictions than generalists.)
But most of the time, we don’t even pay attention to the data. Instead, we tend to rely on what we think we know when we make decisions. In one 2008 study, researchers watching real-time scans of participants’ brains could see the moment a decision was made, usually about 10 seconds before the participants themselves realized they had decided something. In other words, we rely on our “guts” to make decisions instead of using data to inform them. (Ironically, it’s precisely because algorithms lack our preconceived notions that people worry algorithmic decisions lack humanity.)
We find it difficult to anticipate which of the forces at play in any given situation will take more precedence than the others. For example, in 1911, Thomas Edison predicted that the homes of the future would be replete with steel furniture. It was a decent guess—the material was durable, inexpensive, and ubiquitous. But he forgot that humans don’t really like steel in their homes, both physically (it’s uncomfortable to sit on) and aesthetically (doesn’t produce that cozy vibe).
We’re bad at guessing which forces are the most powerful in part because we’re pretty good at estimating, Weinschenk says. “Most of the time, our estimates are accurate enough to keep us alive and propagating the species. If you were doing a lot of higher-level computation, you would need more brain power,” she adds—that would require too much glucose in our brains, which already absorb 20% of our bodies’ calories. So who cares about picking out exactly the thing that got you there? “Good enough is good enough,” Weinschenk says.
Weinschenk doesn’t have much hope for us to get better at predicting—just being aware of our biases doesn’t suddenly help us put them aside, she says. But we can rely more on technology to help us see past our own failings to get a better glimpse of what the future might hold.
Some other suggestions: Embrace uncertainty. Hold forecasters to account, especially those who predict badly. Use expert predictions as one data point among many to imagine what the world of the future might be like.
Or, if you care about making accurate predictions, maybe just don’t make any at all.