Reporting bias

In epidemiology, reporting bias is defined as "selective revealing or suppression of information" by subjects (for example, about past medical history, smoking, or sexual experiences).[1] In artificial intelligence research, the term reporting bias refers to people's tendency not to report all of the information available to them, typically leaving unsaid what they consider obvious or unremarkable.[2]

In empirical research, authors may under-report unexpected or undesirable experimental results, attributing them to sampling or measurement error, while placing more trust in expected or desirable results, even though these may be subject to the same sources of error. In this context, reporting bias can eventually lead to a status quo in which multiple investigators discover and discard the same results, and later experimenters justify their own reporting bias by observing that previous experimenters reported different results. Each incident of reporting bias can thus make future incidents more likely.[3]

Reporting biases in research

Research can contribute to knowledge only if it is communicated from investigators to the community. The generally accepted primary means of communication is "full" publication of the study methods and results in an article in a scientific journal. Investigators sometimes also present their findings at a scientific meeting, through an oral or poster presentation. These presentations enter the scientific record as brief "abstracts", which may or may not appear in publicly accessible documents, such as those held in libraries or on the World Wide Web.

Sometimes, investigators fail to publish the results of entire studies. The Declaration of Helsinki and other consensus documents have outlined the ethical obligation to make results from clinical research publicly available.

Reporting bias occurs when the dissemination of research findings is influenced by the nature and direction of the results, for instance in systematic reviews.[4] The term positive results is commonly used to describe a study finding that one intervention is better than another.

Various attempts have been made to counter the effects of reporting biases, including statistical adjustments to the results of published studies.[5] None of these approaches has proved satisfactory, however, and there is increasing acceptance that reporting biases must be tackled by establishing registers of controlled trials and by promoting good publication practice. Until these problems have been addressed, estimates of the effects of treatments based on published evidence may be biased.
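One of the statistical adjustments referred to above is Rosenthal's "file drawer" fail-safe N,[5] which estimates how many unpublished studies averaging null results would have to exist before a set of published, statistically significant findings became non-significant when combined. The Python sketch below is a minimal illustration of that calculation using hypothetical Z scores; it is not a recommended correction, and the threshold and inputs are assumptions for the example only.

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's file-drawer fail-safe N.

    Returns the number X of unpublished null-result studies (mean Z = 0)
    that would have to exist for the combined Stouffer Z of all k + X
    studies, sum(Z) / sqrt(k + X), to fall to z_alpha (one-tailed 0.05).
    """
    k = len(z_scores)
    sum_z = sum(z_scores)
    x = (sum_z / z_alpha) ** 2 - k
    return max(0.0, x)

# Hypothetical Z scores from five published trials, each individually significant.
published_z = [2.1, 2.5, 1.9, 2.8, 2.2]
print(round(fail_safe_n(published_z), 1))
```

A large fail-safe N suggests the published result is robust to the "file drawer" of missing studies; a small one suggests that even modest unpublished evidence could overturn it.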

Case study

Litigation brought by consumers and health insurers against Pfizer in 2004 over fraudulent sales practices in the marketing of the drug gabapentin revealed a comprehensive publication strategy that employed elements of reporting bias.[6] Spin was used to emphasize findings favorable to gabapentin and to explain away unfavorable findings. In this case, favorable secondary outcomes became the focus in place of the original primary outcome, which was unfavorable. Other changes in outcome reporting included the introduction of a new primary outcome, failure to distinguish between primary and secondary outcomes, and failure to report one or more protocol-defined primary outcomes.[7]

The decision to publish certain findings in certain journals is another strategy.[6] Trials with statistically significant findings were generally published more often in academic journals with higher circulation than trials with nonsignificant findings. The timing of trial publications was also managed: the company tried to optimize the interval between the release of two studies, and trials with nonsignificant findings were published in a staggered fashion so that two consecutive trials without salient findings would not appear in succession. Ghost authorship was also an issue: professional medical writers who drafted the published reports were not properly acknowledged.

As of 2014, ten years after the initial litigation, Pfizer was still settling fallout from the case.[8]

Types of reporting bias

Publication bias

The publication or nonpublication of research findings, depending on the nature and direction of the results. Although medical writers have acknowledged the problem of reporting biases for over a century,[9] it was not until the second half of the 20th century that researchers began to investigate its sources and size.[10]

Over the past two decades, evidence has accumulated that failure to publish research studies, including clinical trials testing intervention effectiveness, is pervasive.[10] Almost all failure to publish is due to failure of the investigator to submit;[11] only a small proportion of studies are not published because of rejection by journals.[12]

The most direct evidence of publication bias in the medical field comes from follow-up studies of research projects identified at the time of funding or ethics approval.[13] These studies have shown that "positive findings" are the principal factor associated with subsequent publication: researchers usually say they do not write up and submit their research for publication because they are "not interested" in the results (editorial rejection by journals is a rare cause of failure to publish).

Even those investigators who have initially published their results as conference abstracts are less likely to publish their findings in full unless the results are “significant”.[14] This is a problem because data presented in abstracts are frequently preliminary or interim results and thus may not be reliable representations of what was found once all data were collected and analyzed.[15] In addition, abstracts are often not accessible to the public through journals, MEDLINE, or easily accessed databases. Many are published in conference programs, conference proceedings, or on CD-ROM, and are made available only to meeting registrants.

The main factor associated with failure to publish is negative or null findings.[16] Controlled trials that are eventually reported in full are published more rapidly if their results are positive.[15] Publication bias leads to overestimates of treatment effect in meta-analyses, which in turn can lead doctors and decision makers to believe a treatment is more useful than it is.
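A toy simulation can make this mechanism concrete. The Python sketch below (with an assumed true effect size, trial size, and significance rule, all purely illustrative) generates many hypothetical two-arm trials, "publishes" only those with statistically significant results, and compares the average published effect with the true effect; the published average is noticeably inflated.

```python
import random
import statistics
from math import sqrt

random.seed(0)

TRUE_EFFECT = 0.2   # assumed true standardized effect
N_PER_ARM = 50      # assumed participants per trial arm
N_TRIALS = 2000

def simulate_trial():
    """Return (effect estimate, z statistic) for one hypothetical two-arm trial."""
    se = sqrt(2.0 / N_PER_ARM)                 # standard error of a difference in means (unit variance)
    estimate = random.gauss(TRUE_EFFECT, se)   # sampling error around the true effect
    return estimate, estimate / se

trials = [simulate_trial() for _ in range(N_TRIALS)]
published = [est for est, z in trials if z > 1.96]   # only "positive" trials reach publication

print("true effect:        ", TRUE_EFFECT)
print("mean of all trials: ", round(statistics.mean(est for est, _ in trials), 3))
print("mean of published:  ", round(statistics.mean(published), 3))
```

The same qualitative pattern is what inflates pooled estimates in meta-analyses restricted to published trials.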

It is now well established that more favorable efficacy results are associated with a study's source of funding, an association that is not explained by standard risk-of-bias assessments.[17]

Time lag bias

The rapid or delayed publication of research findings, depending on the nature and direction of the results. In a systematic review of the literature, Hopewell and her colleagues found that overall, trials with “positive results” (statistically significant in favor of the experimental arm) were published about a year sooner than trials with “null or negative results” (not statistically significant or statistically significant in favor of the control arm).[15]

Multiple (duplicate) publication bias

The multiple or singular publication of research findings, depending on the nature and direction of the results. Investigators may publish the same findings multiple times using a variety of patterns of "duplicate" publication.[18] Many duplicates appear in journal supplements, a literature that can be difficult to access. Positive results appear to be published in duplicate more often, which can lead to overestimates of a treatment effect.

Location bias

The publication of research findings in journals with different ease of access or levels of indexing in standard databases, depending on the nature and direction of the results. There is also evidence that, compared with negative or null results, statistically significant results are on average published in journals with higher impact factors,[19] and that publication in the mainstream (non-grey) literature is associated with a greater overall treatment effect than publication in the grey literature.[20]

Citation bias

The citation or non-citation of research findings, depending on the nature and direction of the results. Authors tend to cite positive results over negative or null results, and this has been established over a broad cross section of topics.[21][22][23][24] Differential citation may lead to a perception in the community that an intervention is effective when it is not, and it may lead to over-representation of positive findings in systematic reviews if those left uncited are difficult to locate.

Selective pooling of results in a meta-analysis is a form of citation bias that is particularly insidious in its potential to influence knowledge. To minimize bias, pooling of results from similar but separate studies requires an exhaustive search for all relevant studies. That is, a meta-analysis (or pooling of data from multiple studies) must always have emerged from a systematic review (not a selective review of the literature), even though a systematic review does not always have an associated meta-analysis.
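To see why selective pooling matters, consider the standard fixed-effect (inverse-variance) pooled estimate used in meta-analysis, in which each study's effect is weighted by the reciprocal of its variance. The Python sketch below uses hypothetical study results (all numbers are made up for illustration) to show how pooling only a favorably cited subset shifts the estimate relative to pooling everything an exhaustive search would have retrieved.

```python
def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Hypothetical study-level effects (e.g., mean differences) and their variances.
effects   = [0.45, 0.10, 0.50, -0.05, 0.40, 0.05]
variances = [0.04, 0.05, 0.06,  0.04, 0.05, 0.06]

# Pooling every study found by a systematic search vs. only the "favorable" ones that happened to be cited.
print(pooled_effect(effects, variances))
print(pooled_effect(effects[::2], variances[::2]))
```

With these made-up numbers, the selectively pooled estimate is nearly double the estimate from the full set of studies, even though no individual result has been altered.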

Language bias

The publication of research findings in a particular language, depending on the nature and direction of the results. There is a longstanding question about whether a language bias exists such that investigators publish their negative findings in non-English-language journals and reserve their positive findings for English-language journals. Some research has shown that language restrictions in systematic reviews can change the results of the review,[25] while other authors have not found evidence of such a bias.[26]

Knowledge reporting bias

The frequency with which people write about actions, outcomes, or properties is not a reflection of real-world frequencies or the degree to which a property is characteristic of a class of individuals. People write about only some parts of the world around them; much of the information is left unsaid.[2][27]
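As a purely illustrative sketch of this idea, the Python snippet below counts mentions of a few actions in a made-up four-sentence "corpus". The ordering of the counts reflects what writers found worth reporting rather than how often the actions occur in the real world, which is the core observation behind this sense of reporting bias.[2]

```python
from collections import Counter

# A tiny, made-up corpus: people tend to write about the remarkable, not the routine.
corpus = [
    "The victim was murdered last night.",
    "A man was murdered downtown.",
    "She was arrested after the robbery.",
    "He breathed a sigh of relief.",
]

actions = ["murdered", "arrested", "breathed", "blinked"]
reported = Counter({a: sum(a in sentence for sentence in corpus) for a in actions})

# In the real world "breathed" and "blinked" vastly outnumber "murdered",
# but the text-derived counts suggest the opposite ordering.
print(reported.most_common())
```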

Outcome reporting bias

The selective reporting of some outcomes but not others, depending on the nature and direction of the results.[28] A study may be published in full, but pre-specified outcomes omitted or misrepresented.[7][29] Efficacy outcomes that are statistically significant have a higher chance of being fully published compared to those that are not statistically significant.

Selective reporting of suspected or confirmed adverse treatment effects is an area of particular concern because of the potential for patient harm. In a study of adverse drug events submitted to Scandinavian drug licensing authorities, reports for published studies were less likely than those for unpublished studies to record adverse events (for example, 56% vs 77%, respectively, for Finnish trials involving psychotropic drugs).[30] Recent attention in the lay and scientific media to the failure to accurately report adverse events for drugs (e.g., selective serotonin reuptake inhibitors, rosiglitazone, rofecoxib) has resulted in additional publications, too numerous to review, indicating substantial selective outcome reporting (mainly suppression) of known or suspected adverse events.

References

  1. Porta M, ed. (5 June 2008). A Dictionary of Epidemiology. Oxford University Press. p. 275. ISBN 978-0-19-157844-1. Retrieved 27 March 2013.
  2. Gordon J, Van Durme B (2013). "Reporting bias and knowledge acquisition". Proceedings of the 2013 workshop on Automated knowledge base construction - AKBC '13. pp. 25–30. doi:10.1145/2509558.2509563. hdl:1802/28266. ISBN 978-1-4503-2411-3. S2CID 16567195.
  3. McGauran N, Wieseler B, Kreis J, Schüler YB, Kölsch H, Kaiser T (April 2010). "Reporting bias in medical research - a narrative review". Trials. 11 (1): 37. doi:10.1186/1745-6215-11-37. PMC 2867979. PMID 20388211.
  4. Higgins JP, Green S, eds. (24 November 2008). Cochrane Handbook for Systematic Reviews of Interventions. Wiley. ISBN 978-0-470-69951-5.
  5. Rosenthal R (1979). "The file drawer problem and tolerance for null results". Psychological Bulletin. 86 (3): 638–641. doi:10.1037/0033-2909.86.3.638.
  6. Vedula SS, Goldman PS, Rona IJ, Greene TM, Dickersin K (August 2012). "Implementation of a publication strategy in the context of reporting biases. A case study based on new documents from Neurontin litigation". Trials. 13 (136): 136. doi:10.1186/1745-6215-13-136. PMC 3439687. PMID 22888801.
  7. Vedula SS, Bero L, Scherer RW, Dickersin K (November 2009). "Outcome reporting in industry-sponsored trials of gabapentin for off-label use". The New England Journal of Medicine. 361 (20): 1963–71. doi:10.1056/NEJMsa0906126. PMID 19907043.
  8. Stempel J (2 June 2014). "Pfizer to pay $325 million in Neurontin settlement". Reuters. Retrieved 24 August 2014.
  9. "The Reporting of Unsuccessful Cases — Hand Disinfection in Out-Patient Clinics — Training of Parents in the Care of Children — Medical Notes". The Boston Medical and Surgical Journal. 161 (8): 263–266. 19 August 1909. doi:10.1056/nejm190908191610809.
  10. Dickersin K (2006). "Publication Bias: Recognizing the Problem, Understanding Its Origins and Scope, and Preventing Harm". Publication Bias in Meta-Analysis. pp. 9–33. doi:10.1002/0470870168.ch2. ISBN 978-0-470-87016-7.
  11. Godlee F, Dickersin K (2003). "Bias, Subjectivity, Chance, and Conflict of Interest in Editorial Decisions". In Godlee F, Jefferson T (eds.). Peer review in health sciences (2nd ed.). London: BMJ Books. pp. 91–117. hdl:10822/1004354. ISBN 978-0-7279-1685-3.
  12. Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, Hogan JW, et al. (June 2002). "Publication bias in editorial decision making". JAMA. 287 (21): 2825–8. doi:10.1001/jama.287.21.2825. PMID 12038924.
  13. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. (February 2010). "Dissemination and publication of research findings: an updated review of related biases". Health Technology Assessment. 14 (8): iii, ix–xi, 1–193. doi:10.3310/hta14080. PMID 20181324.
  14. Scherer RW, Langenberg P, von Elm E (April 2007). "Full publication of results initially presented in abstracts". The Cochrane Database of Systematic Reviews (2): MR000005. doi:10.1002/14651858.MR000005.pub3. PMID 17443628.
  15. Hopewell S, Clarke M, Stewart L, Tierney J (April 2007). "Time to publication for results of clinical trials". The Cochrane Database of Systematic Reviews. 2010 (2): MR000011. doi:10.1002/14651858.MR000011.pub2. PMC 7437393. PMID 17443632.
  16. Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K (January 2009). "Publication bias in clinical trials due to statistical significance or direction of trial results". The Cochrane Database of Systematic Reviews. 2010 (1): MR000006. doi:10.1002/14651858.MR000006.pub3. hdl:1893/22314. PMC 8276556. PMID 19160345.
  17. Lundh A, Lexchin J, Mintzes B, Schroll JB, Bero L (February 2017). "Industry sponsorship and research outcome" (PDF). The Cochrane Database of Systematic Reviews. 2017 (2): MR000033. doi:10.1002/14651858.MR000033.pub3. PMC 8132492. PMID 28207928.
  18. von Elm E, Poglia G, Walder B, Tramèr MR (February 2004). "Different patterns of duplicate publication: an analysis of articles used in systematic reviews". JAMA. 291 (8): 974–80. doi:10.1001/jama.291.8.974. PMID 14982913.
  19. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR (April 1991). "Publication bias in clinical research". Lancet. 337 (8746): 867–72. doi:10.1016/0140-6736(91)90201-y. PMID 1672966. S2CID 36570135.
  20. Hopewell S, McDonald S, Clarke M, Egger M (April 2007). "Grey literature in meta-analyses of randomized trials of health care interventions". The Cochrane Database of Systematic Reviews. 2010 (2): MR000010. doi:10.1002/14651858.MR000010.pub3. PMC 8973936. PMID 17443631.
  21. Gøtzsche PC (September 1987). "Reference bias in reports of drug trials". British Medical Journal. 295 (6599): 654–6. doi:10.1136/bmj.295.6599.654. PMC 1257776. PMID 3117277.
  22. Kjaergard LL, Gluud C (April 2002). "Citation bias of hepato-biliary randomized clinical trials". Journal of Clinical Epidemiology. 55 (4): 407–10. doi:10.1016/s0895-4356(01)00513-3. PMID 11927210.
  23. Schmidt LM, Gøtzsche PC (April 2005). "Of mites and men: reference bias in narrative review articles: a systematic review". The Journal of Family Practice. 54 (4): 334–8. PMID 15833223. Gale A131501020.
  24. Nieminen P, Rucker G, Miettunen J, Carpenter J, Schumacher M (September 2007). "Statistically significant papers in psychiatry were cited more often than others". Journal of Clinical Epidemiology. 60 (9): 939–46. doi:10.1016/j.jclinepi.2006.11.014. PMID 17689810.
  25. Pham B, Klassen TP, Lawson ML, Moher D (August 2005). "Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary". Journal of Clinical Epidemiology. 58 (8): 769–76. doi:10.1016/j.jclinepi.2004.08.021. PMID 16086467.
  26. Jüni P, Holenstein F, Sterne J, Bartlett C, Egger M (February 2002). "Direction and impact of language bias in meta-analyses of controlled trials: empirical study". International Journal of Epidemiology. 31 (1): 115–23. doi:10.1093/ije/31.1.115. PMID 11914306.
  27. Misra I, Lawrence Zitnick C, Mitchell M, Girshick R (2016). "Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels". 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2930–2939. arXiv:1512.06974. doi:10.1109/CVPR.2016.320. ISBN 978-1-4673-8851-1. S2CID 3039286.
  28. Sterne JA, Egger M, Moher D (2008). "Addressing Reporting Biases". Cochrane Handbook for Systematic Reviews of Interventions. pp. 297–333. doi:10.1002/9780470712184.ch10. ISBN 978-0-470-71218-4.
  29. Chan AW, Krleza-Jerić K, Schmid I, Altman DG (September 2004). "Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research". CMAJ. 171 (7): 735–40. doi:10.1503/cmaj.1041086. PMC 517858. PMID 15451835.
  30. Hemminki E (March 1980). "Study of information submitted by drug companies to licensing authorities". British Medical Journal. 280 (6217): 833–6. doi:10.1136/bmj.280.6217.833. PMC 1601011. PMID 7370687.