Mindfulness is having a moment. The practice, which involves consciously focusing on and recognizing your thoughts and emotions, has been found to reduce stress, improve grades, increase compassion, aid weight loss, and bolster resilience. Above all, mindfulness is considered a strong option for treating depression.
But a paper published this month in PLoS ONE should dampen some of the exuberance over mindfulness’ seemingly miraculous effects. Researchers from McGill University in Montreal, Canada, analyzed papers on mindfulness and found that, given the sample sizes of the studies, it was statistically unlikely that so many of them would uncover strong positive results. Smaller studies have lower statistical power, meaning they are less likely to detect a genuine effect even when one exists. Yet a large number of the papers studied—far more than is probable—managed to produce statistically significant findings.
The researchers conducted statistical analysis on 124 published mindfulness papers and determined that, based on the sample size and statistical power of the studies, around 68 of the trials would be expected to show positive results. Instead, 109 of the trials concluded that mindfulness therapy was effective.
The authors called this proportion—roughly 88% positive, against an expected 55%—“concerning.” Though the statistical analysis is not enough to definitively prove reporting bias, it suggests that bias may “be a driving force” behind the published record.
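The size of the mismatch can be made concrete with a quick binomial check. This is a simplified sketch, not the paper’s actual method (the authors estimated each trial’s power individually); it treats every trial as having the same average chance of a positive result and asks how surprising 109 positives out of 124 would be.

```python
from math import comb

# Figures reported from the PLoS ONE analysis: 124 trials,
# ~68 expected to show positive results given their statistical
# power, 109 actually reporting positive results.
n_trials = 124
expected_positive = 68
observed_positive = 109

# Simplifying assumption: each trial independently yields a positive
# result with the same average probability (~0.55).
p = expected_positive / n_trials

# One-sided binomial tail: probability of seeing at least 109
# positives by chance alone under that assumption.
p_value = sum(
    comb(n_trials, k) * p**k * (1 - p) ** (n_trials - k)
    for k in range(observed_positive, n_trials + 1)
)
print(f"P(at least {observed_positive} positives) = {p_value:.1e}")
```

Under this rough model, 109 positives sits more than seven standard deviations above the expected 68, so the tail probability is vanishingly small—illustrating why the authors found chance an implausible explanation.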
The researchers also looked at 21 trials that were registered, meaning their trial objectives and planned methodology were published before the research was conducted. Of these, none specified which variable would be used to determine success, and 13 (62%) were still unpublished 30 months after the trial was completed.
None of this implies that the evidence behind mindfulness is false, but it does suggest publication bias has created a skewed picture of the field. It seems that whereas journals are happy to publish studies showing a positive effect from mindfulness, research that finds no strong effect is more likely to remain hidden in academics’ filing cabinets.
The authors also noted that trials with negative results were often obscured by caveats or “spin.” “When negative results are reported, they are often ‘spun’ so that they appear to be equivocal or even positive findings,” they wrote.
Mindfulness is far from the only subject where publication bias is thought to create a skewed picture of the research. Several scientific fields have recently been rocked by a replication crisis, where researchers were unable to recreate major findings, and publication bias is considered partly to blame.
The authors of the PLoS ONE paper on mindfulness argue that journals should commit to publishing studies when they are pre-registered, before the research has been conducted, which would prevent the bias in published results. Studies should also specifically identify which outcomes they will measure before collecting the data.
Study co-author Brett Thombs, a psychologist at McGill and at the Jewish General Hospital in Montreal, told Nature he believes mindfulness helps many people.
“I’m not against mindfulness,” he says. “I think that we need to have honestly and completely reported evidence to figure out for whom it works and how much.”