Am Fam Physician. 2013;88(7):466-467
Author disclosure: No relevant financial affiliations.
Purpose
In AFP Journal Club, three presenters review an interesting journal article in a conversational manner. These articles involve “hot topics” that affect family physicians or “bust” commonly held medical myths. The presenters give their opinions about the clinical value of the individual study discussed. The opinions reflect the views of the presenters, not those of AFP or the AAFP.
Article
Altwairgi AK, Booth CM, Hopman WM, Baetz TD. Discordance between conclusions stated in the abstract and conclusions in the article: analysis of published randomized controlled trials of systemic therapy in lung cancer. J Clin Oncol. 2012;30(28):3552–3557.
What is this study about?
Mark: This study reviewed all 114 randomized controlled trials of chemotherapy for lung cancer published between 2004 and 2009. The authors searched Medline, EMBASE, and the Cochrane database using the terms lung neoplasm, lung cancer, lung tumor, and lung carcinoma. They compared the conclusions stated in the abstract with the conclusions stated in the body of the paper. English-language, phase III clinical trials that evaluated only chemotherapy treatment were included in the final analysis.
One author collected all of the data using a structured data sheet; 60 of the 114 papers were scored a second time by two researchers to ensure the reliability of scoring. Using a previously developed scale, the conclusions in the abstracts were assigned to one of seven levels, ranging from “standard of care significantly better” to “new treatment significantly better.”1 The same scale was applied to the conclusions in the body of the papers. If the difference between a conclusion in the abstract and that in the body was two levels or greater, the conclusions were considered different.
What did they find?
Mark: The conclusions in the abstract were different from those in the body 10% of the time (11 papers). In nine of the 11 papers, the conclusion in the abstract was more positive regarding the study drug (as opposed to the standard treatment). Of the studies evaluated, 53 were published in high-profile journals, including The New England Journal of Medicine and Lancet. The rate of discordance did not differ between journals with high impact factors and those considered second-tier journals. In the papers evaluated by two researchers, their scoring was concordant in 51 out of 60 papers. This yielded an intraclass correlation coefficient of 0.92 (95% confidence interval [CI], 0.87 to 0.95) for the abstracts, and 0.90 (95% CI, 0.84 to 0.94) for the body of the articles. An intraclass correlation coefficient of 1 indicates perfect agreement between the scorers. Any differences in scoring were resolved by discussion.
Should we believe this study?
Mark: Yes. This is a generally well-done study that nonetheless has a few flaws. The main flaw is that the researchers presumably knew their hypothesis ahead of time. This could lead to bias when coding the data. Having someone unfamiliar with the hypothesis collect the data would have been one way to prevent this. However, possible bias was mitigated somewhat with the finding that there was concordant scoring in 51 of the 60 articles reviewed by two researchers. A second problem is the relatively small data set (114 articles). However, assuming their search was done well, this represents all of the randomized controlled trials of chemotherapy for lung cancer during the study period.
Jill: Although these results apply specifically to cancer chemotherapy, the same problem has been seen in other medical fields.2–4 In a 2004 study of major pharmacology journals, 19% of abstracts had qualitative inaccuracies, and 25% had quantitative inaccuracies.2 Another study of major journals in 1999 found that abstract and article data were inconsistent 18% (95% CI, 6% to 30%) to 68% (95% CI, 54% to 82%) of the time.3
Mark: A 2012 study concluded that almost one-fourth of randomized controlled trials about rheumatoid arthritis, osteoarthritis, or spondyloarthropathies had “misleading conclusions in the abstract, especially those with negative results.”5
Bob: The authors of journal and meeting abstracts are constrained by print space, which often leads to a sparse description of a study's methods and results. However, they should never publish information in the abstract that is not included in the body of the paper. Omission of important negative results or adverse events is equally unforgivable; we have covered this issue in previous Journal Clubs.6,7
As Mark and Jill note, inaccuracies in abstract reporting are a long-standing problem. To avoid these mistakes, a group of journal editors and researchers created the CONSORT (Consolidated Standards of Reporting Trials) standards for reporting clinical trials. The CONSORT statement, first released in 1996 and most recently updated in 2010, provides authors and editors with a checklist to produce clear, transparent, and accurate abstracts.8
What should the family physician do?
Jill: So, why are we going on about this? All of us flip through journals, look at the abstracts of articles of interest, and then move on. Unfortunately, as these studies point out, this is not good enough. Most of us simply do not have the time to read every article that interests us. But, instead of relying on abstracts, we need to find another way to keep up with the literature. There are many good (and not so good) services that review papers. Although these are a place to start, we should read the papers themselves before making major changes in our practices.
Main and EBM Points
Do not rely on article abstracts when deciding whether a therapy is effective. Abstracts often contain misleading information and conclusions.