Statistical analysis outlines level of data fabrication in some clinical trials

A statistical analysis of just over 5,000 randomised, controlled trials published in anaesthetic and general medical journals has found suspicious statistical patterns in some, raising concerns over the reliability of study outcomes.

The study, performed by John Carlisle, a consultant in the Department of Anaesthesia at Torbay Hospital, examined the distribution of data in 5,087 trials published in the Journal of the American Medical Association, the New England Journal of Medicine and six anaesthesiology journals over a 15-year period (2000 to 2015). Baseline summary data for continuous variables were taken from the papers, where ‘baseline’ was defined as a variable measured before the allocated intervention was initiated.

The distribution of probability (p) values was the primary outcome of the analysis, calculated, as the paper notes, ‘for differences between means, for individual variables and when combined within trials’. In a secondary analysis, Carlisle compared the p values of papers that had been retracted with those that had not.

The analysis used six methods to combine the p values into a single probability for each trial, which was then compared with the expected uniform distribution of data (using the Anderson–Darling test) and the central distribution of data (using the Kolmogorov–Smirnov test).
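As an illustration of what combining p values into a single trial-level probability looks like, the sketch below implements Stouffer's Z method, one common combination technique. It is a minimal, stdlib-only Python example and is not taken from Carlisle's paper, which applied six different combination methods; the function name and usage are hypothetical.

```python
from statistics import NormalDist

def stouffer_combined_p(pvalues):
    """Combine several per-variable p values into one trial-level p value
    using Stouffer's Z method: convert each p to a standard-normal z score,
    sum the scores, rescale by sqrt(n), and convert back to a probability."""
    nd = NormalDist()
    # Map each p value to a z score (small p -> large positive z).
    z_sum = sum(nd.inv_cdf(1 - p) for p in pvalues)
    combined_z = z_sum / len(pvalues) ** 0.5
    return 1 - nd.cdf(combined_z)

# Unremarkable baselines: p values near 0.5 combine to about 0.5.
print(stouffer_combined_p([0.5, 0.5, 0.5]))

# Baselines that match "too well" (p values all near 1) combine to a
# probability close to 1, the kind of extreme result flagged in the study.
print(stouffer_combined_p([0.99, 0.98, 0.97]))
```

In a screening analysis such as Carlisle's, a combined probability very close to either tail signals baseline data that are either more different or more similar across trial arms than chance alone would produce.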

After these analyses, Carlisle found that in 794 of the 5,087 trials studied (15.6%), there was a 1 in 10 probability of a more extreme distribution of data. Each journal assessed had the same proportion of trials with extreme p values, but not the same distribution of p values. Retracted trials were more likely to have an extreme distribution of data than those that had not been retracted. Additionally, some trials retracted for reasons other than data integrity showed evidence suggesting potentially corrupt or fabricated data.

Speaking to The Guardian, Carlisle said: “This raises serious questions about data in some studies. Innocent or not, the rate of error is worrying as we determine how to treat patients based upon this evidence.”

The discrepancies found in this study could result from fraud, unintentional error, correlation, stratified allocation or poor methodology. A comparison between the specialist anaesthetic journals and the non-specialist journals found no statistical difference; further work may determine whether these findings apply across all randomised, controlled trials.

According to the report featured in The Guardian, the editors of all the journals highlighted in this study were informed of the results. The specialist anaesthetic journals all confirmed they will be approaching the authors of the papers highlighted in the publication, and the non-specialist journal editors confirmed they will take the issue seriously, with Howard Bauchner, editor of the Journal of the American Medical Association, stating: “We receive numerous allegations about various issues related to the articles we publish. After we assess the validity of the allegation, we will determine next steps. We certainly believe authors have the right to respond to allegations that are important.”
