Am Fam Physician. 2009;79(10):875-877
Author disclosure: Dr. Slawson is a consultant for John Wiley and Sons, Inc., publisher of Essential Evidence Plus.
A wide array of resources summarizing medical information is available, and physicians must carefully choose the most trustworthy sources. Treatment decisions should be based on the best available evidence, carefully critiqued for both relevance and validity. Paying particular attention to sources that use the Strength of Recommendation Taxonomy can help guide busy physicians to the most useful information sources.
Are you looking for an efficient way to keep up with important new research findings without working 80 hours per week? Is that bedside journal stack still creating anxiety and guilt? Do you often have trouble knowing which evidence to believe as the "best evidence," limiting your opportunities to change the way you practice? If you answered "yes" to at least two of these questions, you are not alone. What is the solution? Well-written, high-quality reviews of the best evidence can help us reduce our workload, trust new information, and change our practice accordingly. In doing so, we will remain great doctors with enough time left over for our families and friends. But how can we tell which reviews are "high quality" and of the "best evidence"?
Not All Reviews Are Created Equal
Reviews generally fall into two categories: summary reviews and systematic reviews. Summary (narrative) reviews traditionally appear in standard journals and textbooks and attempt to paint a broad picture of a particular topic (e.g., the diagnosis and treatment of thyroid disease). Although these reviews can be useful when a lot of information is needed on one topic, assessing their validity can be difficult. Because not all reviews are created equal, physicians should never assume a review is good simply because an "expert" wrote it. The validity of a review can suffer from the author's bias or an incomplete review of the available evidence. In fact, two studies evaluating the rigor of review articles reported that increasing expertise of the author correlated with lower methodologic quality of the review.1,2
Systematic reviews often target only one or two specific questions. Reviewers adhere to specific criteria (Table 1) when appraising the primary literature, a process that can take up to two years, depending on the amount of evidence to be reviewed. Meta-analyses, a specific type of systematic review, can quantitatively combine results from multiple trials, leading to recommendations that may not be supported by any single study. For example, only when data were summarized from multiple individual trials did the medical community confidently accept that beta blockers reduce mortality rates in patients with previous myocardial infarction. Although systematic reviews and meta-analyses are good at answering specific questions, they often provide limited clinical context for making complex diagnostic or therapeutic decisions.3
Table 1. Criteria for a High-Quality Review
- The methods used to search the literature are described. The search ideally should be performed by an independent source or by more than one person.
- The literature review is comprehensive, including at least one evidence-based resource (e.g., the Cochrane Central Register of Controlled Trials, Clinical Evidence). Databases in addition to Medline are used, as well as non-English language databases; this is especially important for information on complementary and alternative medicine, because many of these studies appear in the German and Chinese literature.
- Clear, a priori criteria are used to determine which articles to include in the review.
- The evidence is graded in some reasonable way. Reviews that have an evidence table listing each recommendation separately are best.
- Recommendations are made with patient-oriented evidence emphasized over disease-oriented evidence.
- Efforts are made to locate studies that may not have been published, especially those that report negative findings.
In the past few years, some journals have made a deliberate effort to ensure that summary reviews of a specific topic are based on the same high-quality criteria listed in Table 1. This means beginning with a thorough evidence report gathered by an independent source, followed by a critical synthesis of the best evidence, which the authors then summarize. Examples of these types of reviews include the evidence-based reviews in American Family Physician and the Journal of the American Board of Family Medicine, the Clinical Inquiries feature in American Family Physician and the Journal of Family Practice, and the reviews in Essential Evidence Plus (formerly InfoRetriever). All of these sources use the same evidence rating method, which is based on the Strength of Recommendation Taxonomy (SORT).4
Levels of Evidence Versus Strength of Recommendations
Authors of high-quality reviews should carefully critique the individual studies and assign each a level-of-evidence rating based on the validity of the research design. This is depicted on the x-axis of Figure 1: the farther one moves to the right, the stronger the evidence supporting the results (e.g., the results of a well-performed, large randomized trial involving 3,000 patients are more likely to represent reality than an expert's opinion based solely on unblinded personal experience with 30 patients). Many of the evidence rating systems currently used by specialty organizations base clinical recommendations on validity alone, the quality measured on the x-axis. However, clinical recommendations should be supported by both the validity and the relevance of the available evidence. The SORT system rates clinical recommendations on both factors.
Relevance is plotted on the y-axis: the farther one moves up the axis, the more patient-oriented the outcome (e.g., patients care more about living longer than about their blood sugar level). Before studies assessing patient-oriented outcomes showed that rosiglitazone (Avandia) does not reduce cardiovascular events, as many had expected it would, many specialty organizations recommended it as a first-line treatment based on its ability to improve β-cell function. Only by using reviews that rate evidence for both relevance and validity can physicians be assured that recommendations are most likely to represent the "best practice."
Conclusions
Basing management decisions on the best evidence, carefully critiqued for both relevance and validity, should be the goal of all physicians. A wide array of resources summarizing medical information exists, so carefully choosing the most trustworthy sources is critical. Using the criteria listed in Table 1 and paying particular attention to sources that use the SORT system can help guide busy physicians to the most useful information sources and, ultimately, help them provide the highest quality patient care.