Study Reveals Bias In Published Drug Research

January 31st, 2009 -- A recent study published in the open-access journal PLoS Medicine (Rising et al.) found significant reporting bias in the published results of trials of new medicines.

The issue of bias in research is one to take very seriously. Physicians are routinely bombarded by the sales and marketing efforts of drug companies (have you ever been to a doctor's office and not seen a drug rep there?), who spend millions of dollars promoting their products. Despite this, one would hope that physicians still rely mostly on published research in guiding their clinical decisions. So, if the information they are receiving from the medical literature is not entirely accurate, the care being given to their patients could be affected.

Previous research has shown that there is selective reporting of results for specific types of medicines, such as anti-depressants. To extend this work, the researchers in this study, from the University of California, San Francisco, wanted to look at a broad range of drugs over an extended period of time. In order to do this, they compared trials in the FDA database associated with New Drug Applications (NDAs) to the results published in the medical literature for the same trials.

The FDA, which regulates and approves new drugs, requires that manufacturers submit all data regarding a drug, whether the data support the drug or not. However, there is no such requirement when it comes to publication in medical journals. Whether a drug trial gets published is essentially up to the researchers and the drug companies.

This is where what is called reporting bias can come into play. Reporting bias refers to the tendency for favorable results, meaning results which show a drug is effective, to be reported more often than unfavorable results. Since the FDA keeps records of every trial submitted with a drug application, the trials that are published can be compared against the FDA database to look for such bias.
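To make that comparison concrete, here is a minimal sketch in Python of how trials in a regulatory database could be cross-referenced against the published literature to compute publication rates by outcome. The trial IDs and outcomes below are invented for illustration; the study's actual matching was done by hand against literature databases and by contacting authors.

    # Hypothetical sketch: compare trials in an FDA-style registry
    # against the subset found in the published literature.
    # All trial IDs and outcomes are invented for illustration.
    fda_trials = {
        "T001": "favorable",
        "T002": "favorable",
        "T003": "favorable",
        "T004": "not favorable",
        "T005": "not favorable",
    }
    published_ids = {"T001", "T002", "T004"}  # hits from a literature search

    for outcome in ("favorable", "not favorable"):
        ids = [t for t, o in fda_trials.items() if o == outcome]
        rate = sum(t in published_ids for t in ids) / len(ids)
        print(f"{outcome}: {rate:.0%} of trials published")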

The team from UCSF did just that. Specifically, they reviewed all NDAs approved in 2001 and 2002 that were for drug compounds which had never been sold or marketed before, known as new molecular entities (NMEs). They chose to focus only on new drugs because physicians would be particularly interested in, and presumably influenced by, the medical literature on drugs that had never been used before.

For the two years in question, the researchers identified 33 NDAs for novel drug compounds. Associated with these drug applications were 164 efficacy trials, meaning studies designed to measure a drug's effectiveness (as opposed to its safety). Based on the information about the efficacy trials, the researchers then searched the medical literature databases for several years after the NDAs to identify the same trials in published journals. When necessary, they even went so far as to contact the trial researchers and drug companies to see if the results had been published anywhere.

What the group found was that out of the 164 trials, only about three quarters (128, or 78%) had been published. Looking at the drug applications as a whole, only about half of the NDAs had all of their associated efficacy data published, and two of the NDAs had no published data associated with them at all. When researchers were contacted about why some results were not published, they said things like, "The data are in my opinion very worthwhile. Efforts were made a number of times to work on publishing the data, but it was never possible to find a time [when both the investigator and company were available]."

When the UCSF researchers looked deeper into the data to identify predictors of whether results would be published, they found that a favorable primary outcome was a significant predictor of publication (primary outcomes are the measures used to assess a given drug's effectiveness).
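The kind of analysis behind such a finding can be sketched with a simple logistic regression. The data below are invented, and the study's actual model and covariates may differ; the point is only to show how a favorable outcome can be tested as a predictor of publication.

    # Hypothetical sketch: does a favorable primary outcome predict
    # publication? (1 = yes, 0 = no; data invented for illustration)
    import statsmodels.api as sm

    favorable = [1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]  # favorable primary outcome?
    published = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0]  # trial published?

    X = sm.add_constant(favorable)
    result = sm.Logit(published, X).fit(disp=False)
    print(result.params)  # a positive coefficient on the favorable term
                          # suggests an association with publication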

In addition, they found that the proportion of favorable outcomes was actually higher among published trials than in the NDA data as a whole, whereas almost half of the outcomes which were not favorable to the drug went unpublished (Figure 1; a rough check of these proportions follows the table). Also troubling is that the statistical significance, a measure of how likely a finding is to be due to chance, was changed for five outcomes between the NDA data and the published results. It is not clear why these data were changed for publication. Similarly, of 10 negative conclusions in the FDA database, nine were changed to favorable conclusions for publication in the medical literature.

Figure 1: Outcomes & Conclusions in NDA Trials vs Published Trials

                            NDA (164 Trials)   Published (128 Trials)   Not Published (36 Trials)
Favorable Outcomes                76%                   80%                      61%
Not Favorable Outcomes            18%                   15%                      28%
Favorable Conclusion              70%                   70%                      67%
Not Favorable Conclusion           4%                    3%                       8%

Notes: Favorable means that the outcome/conclusion supported the effectiveness of the drug being studied.
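For readers who want to check the gap in the outcomes row themselves, the percentages in Figure 1 can be turned back into approximate counts and compared with a standard significance test. The counts below are reconstructed from the rounded percentages (treating each trial as simply favorable or not), so this is a rough, illustrative check rather than the study's own analysis.

    # Rough check: reconstruct approximate counts from Figure 1 and
    # compare favorable-outcome rates in published vs unpublished trials.
    from scipy.stats import fisher_exact

    pub_fav = round(0.80 * 128)    # ~102 favorable among 128 published
    unpub_fav = round(0.61 * 36)   # ~22 favorable among 36 unpublished

    table = [[pub_fav, 128 - pub_fav], [unpub_fav, 36 - unpub_fav]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio ~ {odds_ratio:.2f}, p ~ {p_value:.3f}")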

While this study clearly shows a reporting bias, it is important to keep in mind that the underlying reasons for that bias were not determined. However, the researchers did find that for 80% of the publications, at least one author had an industry affiliation. In fact, only 4% of the publications included statements explicitly saying the authors had no conflicts of interest.

It is difficult to assess the impact this type of reporting bias has on clinical care and patients. It is certainly troubling; however, these drugs were approved by the FDA, which had access to the full data set. To fix this problem, some have suggested that the FDA make the data in its database easier to read and draw conclusions from. Others have suggested creating a drug trial registry that would hold the data from all trials and provide access to researchers and doctors.

Until such measures are implemented, it is worth keeping in mind that some drugs may not live up to their hype.

-- Rick Labuda