Who do you think would become more successful: a young scientist who received an important grant early in her career or one who just missed out on receiving that same grant?
This question might seem like “a no-brainer,” says Dashun Wang, an associate professor of management and organizations at the Kellogg School. Many of us assume that success breeds success—and that failure, especially an early career setback, is a sign of more trouble to come.
Then again, those who subscribe to the adage that “what doesn’t kill you makes you stronger” might suspect that the unsuccessful scientists actually benefited from their early setback.
“The idea that one gets stronger through failure is the kind of stock advice that people may tell themselves in difficult times,” says Kellogg strategy professor Benjamin F. Jones. “But is there any truth to it?”
A new paper from Wang, Jones, and Kellogg postdoctoral researcher Yang Wang finds that the optimists are right: early failure can actually breed later success. Scientists who narrowly missed out on an important grant from the National Institutes of Health (NIH) ended up publishing more successful papers than those who narrowly qualified for the grant. Over the long run, “the losers ended up being better,” Wang says.
The team’s analysis suggests that the act of failing itself may have pushed the frustrated scientists to improve. What didn’t kill them made them stronger.
It’s a hopeful discovery for Wang, who jokes that he considers himself an expert in this area, due to his “extensive experience of failure.” Indeed, he has been turned down for many grant applications himself—which, it turns out, may not be such a liability after all.
The team studied a type of NIH grant called the R01. Their data set included all 778,219 R01 applications submitted to the NIH between 1990 and 2005.
They settled on R01 grants because they’re NIH’s oldest and most common grant type and hugely important to early-career researchers in the biomedical sciences. At some universities, receiving one of these grants—worth an average of $1.3 million—can put a young scholar on a sure path toward tenure.
NIH’s evaluation process also made the R01 a good type of grant to study. When a researcher submits a grant application to the NIH, it is reviewed by a panel of experts and assigned a numerical score. Then, depending on how much funding is available, NIH determines a cutoff point—say, applications that score in the top 15 percent are funded, and the rest are not.
For the authors, this meant it was easy to determine which grants fell just short of receiving funding (they called these “near miss” grants) and which managed to squeak past the cutoff point (they called these “narrow wins”).
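The classification the authors describe can be sketched in a few lines of code. Below is a minimal, hypothetical illustration—the applicant names, scores, cutoff, and comparison window are invented for illustration, not NIH data: only applications within a small window on either side of the funding cutoff are labeled, so the two groups are nearly identical except for funding.

```python
# Hypothetical sketch of the near-miss / narrow-win classification.
# Scores, cutoff, and window are invented for illustration -- not NIH data.

def classify(score, cutoff, window=2.0):
    """Label an application relative to the funding cutoff.

    Higher score = better evaluation. Applications at or above the
    cutoff are funded; only applications within `window` points of
    the cutoff are kept, so the two groups are closely comparable.
    """
    if cutoff <= score < cutoff + window:
        return "narrow win"
    if cutoff - window <= score < cutoff:
        return "near miss"
    return None  # too far from the cutoff to be comparable

applications = [
    ("A", 86.5), ("B", 84.9), ("C", 92.0),  # (applicant, score)
    ("D", 83.7), ("E", 78.0),
]
cutoff = 85.0

groups = {name: classify(score, cutoff) for name, score in applications}
print(groups)
# A: narrow win, B: near miss, C: None (clear win), D: near miss, E: None
```

Dropping the clear winners and clear losers is what makes the comparison meaningful: near the cutoff, which side an applicant lands on is close to a coin flip.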
Then, they compared the scientists in the near-miss and narrow-win groups. The two sets of scientists were, across a variety of measures, remarkably similar—“identical twins,” Wang says, from a scientific-career perspective. They had been in the field for the same amount of time when they submitted their grant application and had published about the same number of papers, garnering roughly the same share of citations.
In other words, the only meaningful difference in their careers at that point was that the narrow winners received more than $1 million from NIH. “Now the question is, ‘Well, how big of a difference does it make ten years later?’” Wang explains.
To figure out just how much of a difference these early successes or setbacks made to a scientific career, the researchers traced the careers of 623 near-miss and 561 narrow-win scientists.
Notably, it turned out that the two groups published at similar rates over the next 10 years—not what you’d expect, given that narrow winners got an early leg up from their NIH grant funding. Even more surprising, scientists in the near-miss group were actually more likely to produce “hit” papers (that is, papers that cracked the top 5 percent of citations in their field and year). In the five years after they applied for NIH funding, 16.1% of papers produced by scientists in the near-miss group were hits, compared with 13.3% for the narrow-win group.
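The “hit paper” definition above can be made concrete with a short sketch. The citation counts below are invented for illustration; the idea is simply to find the citation threshold that marks the top 5 percent of papers within one field and year.

```python
# Hypothetical sketch of the "hit paper" definition: a paper is a hit
# if its citation count falls in the top 5% for its field and year.
# All citation numbers below are invented for illustration.

def hit_threshold(citations, top_share=0.05):
    """Smallest citation count that places a paper in the top `top_share`."""
    ranked = sorted(citations, reverse=True)
    k = max(1, int(len(ranked) * top_share))  # size of the top slice
    return ranked[k - 1]

# Citations for 20 imaginary papers from one field-year.
field_year_citations = [3, 7, 1, 52, 9, 4, 11, 2, 6, 8,
                        15, 5, 0, 210, 13, 7, 2, 4, 9, 6]

threshold = hit_threshold(field_year_citations)
hits = [c for c in field_year_citations if c >= threshold]
print(threshold, hits)  # with 20 papers, the top 5% is the single best paper
```

Comparing hit rates within field and year matters because citation norms vary widely across disciplines and over time.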
Next, the researchers wanted to pin down exactly why the near-miss group outperformed the narrow-win group in the end. This wasn’t easy to do, given all the complicated factors that influence a scientific career.
The first and most significant hypothesis the team examined was that failing to receive an NIH grant had a “screening effect”—essentially, it acted as a barrier that weeded out weaker scholars from the profession, meaning that, over time, those members of the near-miss group who stuck it out were the strongest scientists.
On the face of it, there appeared to be some merit to this idea: the team observed some attrition within the near-miss group in the aftermath of an unsuccessful grant application. Scientists who failed to receive an R01 grant had a 12.6% chance of disappearing from the NIH grant system for the next decade, a good indication that they had stopped pursuing a research career altogether.
For a fairer comparison, the team repeated their analysis after removing the narrow-win scientists whose papers were least likely to become hits. Specifically, they removed the bottom 12.6% of narrow winners—the same share as had left the near-miss group through attrition—so that they were comparing what they assumed to be the highest performers in each group.
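That trimming step can be sketched directly. The hit rates below are invented for illustration; the point is just to drop the weakest-performing share of narrow winners, mirroring the attrition observed among near misses, before comparing the two groups.

```python
# Hypothetical sketch of the attrition adjustment: drop the weakest
# 12.6% of narrow winners (by hit rate) before comparing groups.
# The hit rates below are invented for illustration.

ATTRITION_RATE = 0.126  # share of near-miss scientists who left the field

def trim_bottom(hit_rates, share=ATTRITION_RATE):
    """Remove the lowest-performing `share` of scientists."""
    ranked = sorted(hit_rates)
    n_drop = round(len(ranked) * share)
    return ranked[n_drop:]

narrow_win_hit_rates = [0.02, 0.05, 0.08, 0.10, 0.11,
                        0.13, 0.14, 0.16, 0.18, 0.22]

trimmed = trim_bottom(narrow_win_hit_rates)
print(len(trimmed))  # 10 scientists -> drop round(1.26) = 1 -> 9 remain
```

If the trimmed narrow winners still underperform the near misses, then attrition alone cannot account for the gap—which is what the study found.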
But attrition alone, the team found, could not explain the success of the near-miss scientists: even after the adjustment, the near misses still published more hit papers than the narrow winners.
Wang and Jones tested a number of other explanations: maybe, they reasoned, scientists from the near-miss group did better because they sought more influential collaborators, changed institutions, began to study a different topic, or moved into a “hot” area of research.
When they crunched the numbers, they found that there was some evidence that near-miss scientists had begun to study “hot topics,” but this, too, wasn’t enough to explain the overall performance gap.
With all of these alternative explanations ruled out, the team was left to conclude that failure itself might be the cause of the performance gap between the near-miss and narrow-win groups.
In other words, with no clear external factor that can explain the disappointed scientists’ success, it’s reasonable to think that the experience of adversity made them better in the end—confirming the conventional wisdom that “what doesn’t kill you makes you stronger.”
Jones sees that result as highly encouraging. “The advice to persevere is common,” he says. “But the idea that you take something valuable from the loss—and are better for it—is surprising and inspiring.”
Wang says there is more he wants to know about the power of failure. Is the effect limited to the sciences, or do people who face setbacks in other fields benefit too? Is there another explanation for the performance gap that wasn’t testable from the available data? (Maybe, he jokes, everyone from the near-miss group simply decided to get up half an hour earlier each day. “There’s no way for me to know if that’s what happened,” he says.)
To Wang, there is something profound in the idea that failure can, paradoxically, lead to success. It’s a reminder to him, and everyone, not to give up.
“I use this insight a lot these days, because, as I mentioned, I’m kind of a daily failure,” he says. (Editor’s note: Wang’s status as a “daily failure” cannot be confirmed by external sources.) If he struggles at something, he knows there’s a chance he will actually become better at it than “the alternate-universe Dashun” who succeeded—as long as he perseveres.
“Failure is devastating,” he says, “and it can also fuel people.”
This article was previously published in Kellogg Insight. It was republished with permission of the Kellogg School of Management.