Optimistic cancer researchers overestimate reproducibility of preclinical studies


Wednesday, 05 July, 2017

Cancer scientists have been found to overestimate the extent to which high-profile preclinical studies can be successfully replicated, in a study that follows numerous reports exploring biomedicine’s so-called reproducibility crisis.

Over the last 10 to 15 years, concerns have mounted that some of the techniques and practices used in biomedical research lead to inaccurate assessments of a drug's clinical promise. Given that not all studies reproduce, McGill University's Jonathan Kimmelman and his team wondered whether cancer experts could at least sniff out which studies would not easily replicate.

The findings, published in the journal PLOS Biology, are based on a survey in which both experts and novices were asked to predict whether mouse experiments from six prominent preclinical cancer studies, each being replicated by the Reproducibility Project: Cancer Biology, would reproduce the effects observed in the original studies. On average, respondents forecast a 75% probability of replicating statistical significance and a 50% probability of reproducing the same effect size as in the original study. Yet by these criteria, none of the six replications already completed by the Reproducibility Project reproduced the originally reported results.

One possible explanation for the optimism, according to the researchers, is that cancer scientists place too much confidence in high-profile reports in their field. Another is that they underestimate the logistical and methodological difficulty independent laboratories face in repeating these experiments. The study therefore suggests there may be inefficiencies in the process by which science ‘self-corrects’, and raises the possibility that training could help scientists overcome cognitive biases that affect how they interpret scientific reports.

“This is the first study of its type, but it warrants further investigation to understand how scientists interpret major reports,” Kimmelman said. “I think there is probably good reason to think that some of the problems we have in science are not because people are sloppy at the bench, but because there is room for improvement in the way they interpret findings.”

