AI does not have enough experience to handle the next market crash

Artificial chaos.
Image: Reuters/Kim Kyung-Hoon

Artificial intelligence is increasingly used to make decisions in financial markets. Fund managers empower AI to make trading decisions, frequently by identifying patterns in financial data. The more data that AI has, the more it learns. And with the financial world producing data at an ever-increasing rate, AI should be getting better. But what happens if the data the AI encounters isn’t normal or represents an anomaly?
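To make the idea concrete, here is a deliberately simplified sketch of the kind of pattern-following rule that trading systems generalize. All names and parameters are illustrative, not any fund's actual model:

```python
# Hypothetical sketch: a moving-average crossover, the simplest kind of
# pattern learned from historical prices. Window sizes are arbitrary.

def moving_average(prices, window):
    """Trailing mean of the last `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short=5, long=20):
    """Return +1 (buy), -1 (sell), or 0 (hold) from a crossover rule."""
    if len(prices) < long:
        return 0  # not enough history to form a view
    fast = moving_average(prices, short)
    slow = moving_average(prices, long)
    if fast > slow:
        return 1
    if fast < slow:
        return -1
    return 0

# A steadily rising series produces a buy signal; a falling one, a sell.
rising = [100 + 0.5 * t for t in range(30)]
falling = [100 - 0.5 * t for t in range(30)]
```

The rule works only insofar as the future resembles the history it was fit to, which is precisely the assumption the rest of this article questions.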

Globally, around 10 times more data (pdf) was generated in 2017 than in 2010. This means the best-quality data is also highly concentrated in the recent past—a world that has been running on cheap money, supplied by central banks through purchases of safe securities, which is not a “normal” state for the market. This has had a number of effects, from causing a rise in “zombie” firms to creating generational lows in volatility to encouraging unusually large corporate buybacks (pdf).

With so much of the data concentrated in this era, AI might not know what a “normal” market actually looks like. Robert Kaplan, president of the Federal Reserve Bank of Dallas, recently outlined in an essay some of the market extremes that exist today, cautioning that growing imbalances in the economy could increase the risk of a rapid adjustment. Among them:

  • US stock market capitalization is now around 135% of GDP, the highest since 2000;
  • Corporate debt is at record highs;
  • Trading volume for 2017 on the New York Stock Exchange is down 51% from 2007, while the NYSE market capitalization is up 28%;
  • Volatility is at record lows: the US market has gone 12 months without a 3% correction.

Today’s volatility is “extraordinarily unusual,” Kaplan noted. Cheap credit makes markets less volatile: when credit is easy, a company can rely on the promise of cheap debt to support itself, so the value of its equity becomes less volatile. Markets have experienced periods of low volatility before—and each time they have ended with a shock. If this current period ends violently, AI trained on predictable central-bank money flows will be unable to reconcile what it sees in new data with what it’s been trained on.
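A toy sketch (our assumptions, purely illustrative) of why that reconciliation fails: a model that calibrates “normal” from low-volatility history will classify routine moves in a higher-volatility regime as extreme.

```python
# Illustrative sketch of distribution shift: an anomaly detector calibrated
# on a calm, low-volatility regime treats ordinary days in a turbulent
# regime as outliers. All parameters are invented for illustration.
import random
import statistics

random.seed(42)

# "Training" data: daily returns from a calm regime (0.5% stdev).
calm = [random.gauss(0, 0.005) for _ in range(1000)]
mu = statistics.mean(calm)
sigma = statistics.stdev(calm)

def is_anomaly(ret, threshold=3.0):
    """Flag a return more than `threshold` stdevs from the calm-regime mean."""
    return abs(ret - mu) / sigma > threshold

# New regime: the same market, but volatility has quadrupled (2% stdev).
turbulent = [random.gauss(0, 0.02) for _ in range(1000)]

calm_flags = sum(is_anomaly(r) for r in calm)
turbulent_flags = sum(is_anomaly(r) for r in turbulent)
# Almost nothing in the calm sample is flagged; in the turbulent one,
# a large share of perfectly ordinary days look like anomalies.
```

Whether such a system freezes, liquidates, or keeps trading on stale assumptions depends entirely on failsafes its designers may never have tested against a regime change.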

The Financial Stability Board, an international body based in Basel, Switzerland, set up by the G20 in the aftermath of the last financial crisis, recently studied (pdf) the potential impacts of AI and machine learning on financial stability. One of the risks highlighted was the increased use of AI by hedge funds and market makers. Because AI is so effective at optimizing complex systems, its use can further tighten trading parameters that are vital for market stability, such as how much capital a bank holds relative to its outstanding trading positions.

As AI’s use in financial markets grows, it will play a role, perhaps a critical one, in the next market correction as an era of low volatility, high debt, and cheap money comes to an end. AI will need sufficient data across a long enough timespan for its models to adapt to new market conditions without overreacting.

The question is, if and when a shock comes and an entirely unfamiliar situation arises, what will the financial AIs do? As the financial system gets more interconnected, AI could spread the impact of extreme shocks faster, making the entire system less stable during a shock event. This is particularly true if funds share data sources and AI strategies, and a shock then hits one of those shared data sources.
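A toy simulation (assumptions ours) of why shared strategies amplify a shock: if many funds run the same rule off the same data feed, one bad print triggers simultaneous, correlated selling.

```python
# Illustrative sketch: N funds share one data feed and one stop-loss rule.
# A single shocked data point makes them all sell at once, adding selling
# pressure to the very move that triggered it. Thresholds are invented.

N_FUNDS = 100

def fund_decision(latest_return, stop_loss=-0.03):
    """Every fund runs the identical rule: dump the position on a big drop."""
    return "sell" if latest_return <= stop_loss else "hold"

# Normal day: the shared feed prints -1%. Nobody sells.
normal = [fund_decision(-0.01) for _ in range(N_FUNDS)]

# Shock day: the shared feed prints -5%. Every fund sells simultaneously.
shocked = [fund_decision(-0.05) for _ in range(N_FUNDS)]
```

With heterogeneous rules and independent data, the selling would be staggered; homogeneity turns a single shock into a synchronized move.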

Consider the example of a data shock in the case of self-driving cars. When Google was training its self-driving car on the streets of Mountain View, California, the car rounded a corner and encountered a woman in a wheelchair, waving a broom, chasing a duck. The car hadn’t encountered this before, so it stopped and waited. When a Tesla driving on autopilot failed to recognize a truck turning across its path on the highway, it kept going. In both cases the situation was unfamiliar—but one system had a failsafe and the other simply failed.

AI simply isn’t good in situations it doesn’t yet recognize.