In 2016, psychologist Dana Carney made quite a splash in the research community. Carney, along with her colleagues Amy Cuddy and Andy Yap, was well-known for their study linking “power poses”—like standing confidently with your hands on your hips—to “powerful” hormonal changes: increases in testosterone and decreases in cortisol. After the study was published in 2010, other researchers tried and failed to replicate their results, and Carney began to have doubts about the conclusions of her own study. In 2016, Carney posted a short letter on her website, titled simply “My position on ‘Power Poses.’” She wastes no time getting to the point: “As evidence has come in over these past 2+ years, my views have updated to reflect the evidence. As such, I do not believe that ‘power pose’ effects are real.”
Psychologists took to Facebook groups and listservs to discuss Carney’s revelation. Not only had she disavowed her own research, but she had also laid out, in detail, the methodological errors she believed could have skewed the original results. “There were 150 Facebook comments on a post where people were like, ‘This is amazing—we should do this more.’ And that turned into the Loss of Confidence project,” says Julia Rohrer, a psychologist and lead author of a new paper collecting psychology researchers’ confessions about the shortfalls of their own work.
Like experts in any field, scientists make mistakes. Sometimes, even when their methods are sound, their conclusions are just flat-out wrong. But it’s daunting to admit fallibility, especially in a field like science, where one’s reputation (and job stability) depends heavily on the quality of one’s research findings. And with the public’s growing mistrust of science, any admission of culpability may only fuel skeptics’ criticisms. “Everybody is hiding something, or knows something they shouldn’t disclose,” says Rohrer. “No one wants to be the first person at work to say, ‘I think I screwed that up; I did something wrong.'”
But being wrong is just a step along the path to being right. There’s no shortage of research on work-related failure, which suggests that a fear of failure stifles growth and creativity, and that fostering a culture of tolerance for failure in the workplace encourages transparency. (A note to the researchers behind those studies: if you no longer stand by your results, now would be a good time to bring that up.) That means that building an acceptance of failure into workplace culture—whether that workplace is a small company, a huge corporation, or a research field—could have a positive effect on workers’ health and happiness.
Openly sharing failures might also accelerate workplace progress. Analyzing failure can unveil new directions; for instance, consider that Bubble Wrap was originally a failed type of housing insulation, but was then used to protect packages. Science, in particular, benefits from knowing what hasn’t worked in the past, so that researchers can focus their collective efforts on leads that might be more successful in teaching us something new.
And that’s precisely what Rohrer and the Loss of Confidence participants are hoping for: that destigmatizing mistakes will improve the overall quality of research in their field. On the project’s website, they encourage researchers to publicize their mistakes to “potentially help prevent other researchers from wasting resources conducting [studies] that may be unlikely to succeed” or to “highlight the need for further research on a topic.”
Currently, academia has few avenues for researchers to admit their mistakes, and the project is intended to change that. So between December 2017 and July 2018, the Loss of Confidence project collected statements like Carney’s, detailing researchers’ issues with their own previously published papers. Six statements have been compiled in a paper posted to PsyArXiv, a repository for psychology pre-prints.
Some reported that other researchers convinced them their own work was faulty; others say that, in retrospect, they overlooked key variables that could have skewed their results. Most were written in clinical, cold language, plainly pointing out faults without defensiveness or guilt, but some were especially self-flagellating. “I now think most of the conclusions drawn in this article were absurd on their face,” writes Tal Yarkoni about a paper he co-authored in 2005. “I also now think the kinds of theoretical explanations I proposed in the paper were ludicrous in their simplicity and naiveté—so results would have told us essentially nothing even if they were statistically sound.” It’s still too early to know what consequences the psychologists’ statements might hold, as the pre-print was just released last week, but Rohrer says that she’s seen an outpouring of positive responses from fellow psychologists, along with scientists from other fields.
Now that several researchers have spoken out, Rohrer sees encouraging signs that others may follow suit. Many have expressed interest in submitting, but were afraid of being the first, or were dissuaded from doing so by co-authors who weren’t ready to admit culpability. “Many people told me, ‘[This statement] totally applies to my old study!'” she says. “This is a start so that other people can see, ‘Ok, I’m not the only one who messed up.'”