In the 1990s, researchers at business schools became fascinated with the question of why so many large, seemingly dominant companies were being supplanted by startups. The incumbents had more money, more staff, and more know-how—what were they doing wrong?
The most famous answer came from Clayton Christensen, a Harvard Business School professor, in the form of a theory he called “disruptive innovation.” The term took on a life of its own: though Christensen meant something more specific, “disruption” came to describe the risk that incumbents would be felled by younger, nimbler competitors.
That risk has faded over the last 20 years. Since about the year 2000, disruption, or what the economist Joseph Schumpeter called “creative destruction,” has become less and less common in the US economy, according to a recent working paper by researchers at the Boston University School of Law.
To study disruption, the researchers analyzed the chance that one of the top four firms in an industry would fall out of the top four the next year. They found that this “displacement” rate increased in the 1990s but has declined since 2000, suggesting that for the past two decades the risk that leading firms would be disrupted has been falling.
“The top firms increased their investment in software and other intangibles to a much greater degree around 2000,” said James Bessen, one of the study’s authors. “We find that those investments by the top firms reduce their risk of being disrupted.” (Disclosure: Bessen and I wrote an article together about this phenomenon in 2018.)
Bessen and his new co-authors link the decline in disruption not to spending on third-party software but to investments by companies in building their own—like Walmart and Amazon creating logistics and inventory software. These systems make it easier for very large companies to manage themselves, the researchers contend, and because they are proprietary, their benefits don’t spread to other firms in the same way technologies typically do.
The result is that the benefits of digital technology have not been spreading as widely as one would expect, creating a wide gap in digital capabilities between companies.
“It’s stunning,” James Manyika, chairman of the McKinsey Global Institute, says of the digital divide between companies within the same industry. “Part of that is access to technology, access to people who can deploy it.” The failure of digital capabilities to spread more widely through the economy is fueling inequality, he says.
However, while big US firms may be avoiding disruption by investing in software, they’re not necessarily all that innovative. Bessen and his colleagues found that R&D spending by large firms was not significantly linked to the decline in disruption.
“There have been eras in American history where large firms have been innovative,” but now is not one of them, says Heather Boushey, president and CEO of the Washington Center for Equitable Growth, an economic policy think tank.
US corporations like GE, AT&T, and DuPont once funded major R&D operations that did basic and applied science. Their research labs helped bridge the gap between academic research and practical problems. Although corporate R&D spending has increased in recent decades, most of it now goes to near-term, incremental projects like product development rather than to scientific research.
“When the big US industrial companies decided they didn’t need to do R&D anymore, we lost something,” says Pierre Azoulay, an economist at MIT who studies innovation.
There are many potential reasons for the decline of the corporate R&D lab, but one is the fact that large firms have found other ways to avoid competition—including investments in software.
Twenty-five years after Christensen coined the term “disruptive innovation,” the big question at business schools has been reversed. Scholars are no longer puzzling over why startups are able to disrupt incumbents, but why they aren’t.