Half of researchers have reported trouble reproducing published findings: MD Anderson survey
This caused me to wonder how much of the education research traveling under the name of "classroom innovation" is replicable. Adam Marcus runs the Retraction Watch blog with Ivan Oransky.
Think about the point made toward the end of this piece: Too many people are not keeping real notebooks; their data are all in the computer. And those data are readily manipulable.
inBloom™ will be keeping student data.

The inBloom™ data integration and content search services enrich learning applications by connecting them to systems and information that currently live in a variety of different places and formats. . . .
Adam Marcus is the managing editor of Anesthesiology News, a monthly magazine for anesthesiologists. His freelance articles have appeared in Science, The Economist, The Christian Science Monitor, The Scientist, Birder's World, Sciam.com, and many other publications and web sites.
You can follow Retraction Watch, which the New York Times described as a watchdog blog that has made people sit up and take notice, with an e-mail subscription. They send out regular alerts on shocking fraud in current scientific research.
Although the Times did take note in that article, it seems incredible that the media pours out so much ink on testing irregularities (for many kids who shouldn't even be taking standardized tests) while ignoring fraud in cancer research.
by Adam Marcus
Readers of this blog -- and anyone who has been following the Anil Potti saga -- know that MD Anderson Cancer Center was the source of initial concerns about the reproducibility of the studies Potti and his supervisor, Joseph Nevins, were publishing in high-profile journals. So the Houston institution has a reputation for dealing with issues of data quality. (We can say that with a straight face even though one MD Anderson researcher, Bharat Aggarwal, has threatened to sue us for reporting on an institutional investigation into his work, and several corrections, withdrawals, and Expressions of Concern.)
We think, therefore, that it's worth paying attention to a new study in PLOS ONE, A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic, by a group of MD Anderson researchers. They found that about half of scientists at the prominent cancer hospital report being unable to reproduce data in at least one previously published study. The number approaches 60% for faculty members:
Of note, some of the non-repeatable data were published in well-known and respected journals including several high impact journals (impact factor >20).
To be sure, the paper isn't saying that half of all results are bogus (although it might sometimes feel that way). As the authors write:
We identified that over half of investigators have had at least one experience of not being able to validate previously reported data. This finding is very alarming as scientific knowledge and advancement are based upon peer-reviewed publications, the cornerstone of access to "presumed" knowledge. If the seminal findings from a research manuscript are not reproducible, the consequences are numerous. Some suspect findings may lead to the development of entire drug development or biomarker programs that are doomed to fail. As stated in our survey, some mentors will continue to pursue their hypotheses based on unreliable data, pressuring trainees to publish their own suspect data, and propagating scientific "myths". Sadly, when authors were contacted, almost half responded . . .
The researchers blame the problem on a familiar bugbear -- the pressure to publish:
Our survey also provides insight regarding the pressure to publish in order to maintain a current position or to promote one's scientific career. Almost one third of all trainees felt pressure to prove a mentor's hypothesis even when data did not support it. [emphasis added] This is an unfortunate dilemma, as not proving a hypothesis could be misinterpreted by the mentor as not knowing how to perform scientific experiments. Furthermore, many of these trainees are visiting scientists from outside the US who rely on their trainee positions to maintain visa status that affects themselves and their families in our country. This statement was observed in the "comments" section of our survey, and it was a finding that provided insight into the far-reaching consequences of the pressure to publish.
Of course, pressure to publish doesn't necessarily equate to bad data. Here the authors, led by post-doc Aaron Mobley, point to other reports of the rise in retractions and the role of misconduct in such removals, to suggest, gently, an explanation for their results:
Recently, the New York Times published an article about the rise of retracted papers in the past few years compared to previous decades. The article states that this larger number may simply be a result of increased availability and thus scrutiny of journal articles due to web access. Alternatively, the article highlighted that the increase in retractions could be due to something much worse; misconduct by investigators struggling to survive as scientists during an era of scarce funding. This latter explanation is supported by another study, which suggested that the most prevalent reason for retraction is misconduct. In their review of all retracted articles indexed in PubMed (over 2,000 articles) these authors discovered that 67.4% of retracted articles had been retracted due to misconduct. Regardless of the reasons for the irreproducible data, these inaccurate findings may be costing the scientific community, and the patients who count on its work, time, money, and more importantly, a chance to identify effective therapeutics and biomarkers based on sound preclinical work.
Whatever the reason, the issue of data integrity has nettled clinical oncology for years. One of the co-authors, Lee Ellis, co-authored a 2012 article in Nature lamenting the high "failure rate" of clinical cancer trials:
Unquestionably, a significant contributor to failure in oncology trials is the quality of published preclinical data. Drug development relies heavily on the literature, especially with regards to new targets and biology. Moreover, clinical endpoints in cancer are defined mainly in terms of patient survival, rather than by the intermediate endpoints seen in other disciplines (for example, cholesterol levels for statins). Thus, it takes many years before the clinical applicability of initial preclinical observations is known. The results of preclinical studies must therefore be very robust to withstand the rigours and challenges of clinical trials, stemming from the heterogeneity of both tumours and patients.
The newly published survey had an overall response rate of less than 18% -- a low mark the investigators attribute in part to fears among staff that the 20-question, online survey would not be anonymous, as promised. Questions in the survey included:
When you contacted the author, how was your inquiry received?
If you did not contact the authors of the original finding, why not?
If you have ever felt pressured to publish findings of which you had doubt, then by whom (mentor, lab chief, more advanced post-doc, other)?
If you perform an experiment 10 times, what percent of the time must a result be consistent for your lab to deem it reproducible?
Zwelling, the last author of the paper, told us that he believes misconduct as classically defined -- the "FFP" of fabrication, falsification and plagiarism -- is on the rise.
"My guess is that there is more now than there used to be because there can be. There are so many tricks you can do with computers. . . . The other thing that's a really huge issue is that too many people are not keeping real notebooks; their data are all in the computer."
And those data are readily manipulable, said Zwelling, who as research integrity officer at MD Anderson drafted a set of guidelines ("that were not widely adopted," he said) to require investigators to keep signed and dated lab notebooks.
Zwelling told us that he lays whatever sickness afflicts science squarely at the door of the academic edifice.
I blame this completely and totally on us. We haven't policed ourselves. There's such a de-emphasis on real quality and an emphasis on quantity. I would argue that it's a lack of ethics. The watchword used to be, I'm going to do right, I'm going to do good. Now the watchword is I'm going to do what I can get away with.
That's not unique to science, of course. Such an errant moral compass also applies today in politics, sports and any other discipline. But Zwelling said the problem in science -- and especially in the field of cancer research -- has caused good money to chase at best unproven projects with unjustifiable intensity.