I. Articles from American Family Physician
1. SERIES ON FINDING EVIDENCE AND PUTTING IT INTO PRACTICE
2. Strength of Recommendation Taxonomy (SORT): A Patient-Centered Approach to Grading Evidence in the Medical Literature
Article from the February 1, 2004, issue of American Family Physician that describes the SORT evidence rating system, which allows authors to rate individual studies or bodies of evidence. To see examples of SORT tables, please look at any review article in AFP.
3. How to Write an Evidence-Based Clinical Review Article
Article from the January 15, 2002, issue of American Family Physician that presents guidelines for writing an evidence-based clinical review article. NOTE: American Family Physician no longer includes ratings of evidence in the text for individual studies as described in "How to Write an Evidence-Based Clinical Review Article." Instead, we use the SORT evidence rating system to rate bodies of evidence for key clinical recommendations on diagnosis and treatment.
4. Editorial: Evidence-based Medicine—Common Misconceptions, Barriers, and Practical Solutions
Editorial from the September 15, 2018, issue of American Family Physician
II. Other Resources
1. WEBSITES FOR GENERAL PRINCIPLES OF EBM
Centre for Evidence-Based Medicine
Promotes evidence-based health care, and provides support and resources for the teaching and practice of evidence-based medicine.
Evidence-based Medicine Toolbox
Contains tools and resources for the teaching and practice of evidence-based medicine.
Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group
Provides information about grading quality of evidence and strength of recommendations.
JAMAevidence
Provides a collection of EBM resources including textbooks, podcasts, education guides, and glossaries.
2. WEBSITES FOR SOURCES OF EVIDENCE-BASED CLINICAL INFORMATION
Free Access
McMaster University’s compendium of pre-appraised evidence to support clinical decisions. Content is presented in a hierarchical way, with the highest level of available evidence listed first.
Agency for Healthcare Research and Quality
In particular, see AHRQ’s Effective Healthcare Reports on various clinical topics.
Note: many of these reports are published in AFP under the “Implementing AHRQ Effective Health Care Reviews” department collection.
Cochrane Database of Systematic Reviews
Free for abstracts only, which in most cases provide the key findings of interest. The complete review requires a subscription. The Cochrane database contains systematic reviews of narrowly focused clinical questions (e.g., “Colchicine for treating acute gout attacks”) as opposed to broad, general reviews of topics (e.g., “Management of an acute gout attack”).
Note: AFP publishes summaries of Cochrane abstracts in “Cochrane for Clinicians.”
Repository of evidence-based clinical practice guidelines, appraised using the National Academy of Medicine’s Standards for Trustworthy Clinical Practice Guidelines.
National Center for Complementary and Integrative Health
Although NCCIH has been criticized for political interference and questionable science, we include it in this list because there are few freely available comprehensive sources of information in this field.
Trip (Turning Research Into Practice)
Contains links to a wide range of journal articles, medical organization clinical guidelines, online medical references, and other sources. A limited version is freely available; additional content requires an annual subscription.
U.S. Preventive Services Task Force
Premier source of evidence-based, graded recommendations for clinical preventive services.
Note: AFP publishes Recommendations and Reports from the USPSTF as well as CME case studies in the Putting Prevention Into Practice series.
Subscription Required
Most of these are point-of-care clinical information and decision-support tools. These sites provide important background information, but authors should review the primary sources and cite those in the article.
Essential Evidence Plus
Includes POEMS (collections of patient-oriented evidence that matters).
Database of dietary supplements, natural medicines, and complementary, alternative, and integrative therapies.
To ensure an adequate search of your topic, we strongly recommend that you review several of the above sources in addition to performing a PubMed search. We recommend filtering your results on the main PubMed search results page by selecting Systematic Review under Article Type. You can also try the built-in filters of the Clinical Queries search page.
Sample Data Sources paragraph:
Data Sources: A PubMed search was completed in Clinical Queries using the key terms gout and hyperuricemia. The search included meta-analyses, randomized controlled trials, clinical trials, and reviews. Also searched were the Agency for Healthcare Research and Quality Effective Healthcare Reports, the Cochrane database, DynaMed, and Essential Evidence Plus. Search date: November 18, 2017.
3. EBM CALCULATORS
MEDICAL CALCULATORS
AFP has partnered with MDCalc as the resource for clinical calculators, such as the Centor Score (Modified/McIsaac) for Strep Pharyngitis, the CHA₂DS₂-VASc Score for Atrial Fibrillation Stroke Risk, the Revised Cardiac Risk Index for Pre-Operative Risk, and the Wells' Criteria for Pulmonary Embolism. MDCalc is available online and via smartphone app at MDCalc Medical Calculator.
The following online calculators perform statistical calculations, such as number needed to treat, conversion between odds and probability, positive and negative likelihood ratios, positive and negative predictive values, and post-test probability.
ClinCalc (free)
Relative risk or risk ratio from an odds ratio
Number needed to treat
EBM Toolbox (free)
Diagnostic test calculator (sensitivity, specificity, positive and negative likelihood ratios, positive and negative predictive values, post-test probability)
Randomized controlled trial calculator
Prospective study calculator
Case-control study calculator
Odds Ratio to NNT Converter
MedCalc (free)
Sensitivity, specificity, positive and negative likelihood ratios, positive and negative predictive values, post-test probability.
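For readers who want to see the arithmetic these tools perform, here is a minimal Python sketch (not code from any of the sites above) that computes common test characteristics from the four cells of a hypothetical 2×2 diagnostic table; the function name and counts are illustrative.

```python
# Minimal sketch: test characteristics from a 2x2 diagnostic table.
# tp/fp/fn/tn = true positives, false positives, false negatives, true negatives.

def diagnostic_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    lr_pos = sensitivity / (1 - specificity)   # LR+ for a positive test
    lr_neg = (1 - sensitivity) / specificity   # LR- for a negative test
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    return {"Sn": sensitivity, "Sp": specificity,
            "LR+": lr_pos, "LR-": lr_neg, "PV+": ppv, "PV-": npv}

# Hypothetical counts: 90 TP, 30 FP, 10 FN, 170 TN.
print(diagnostic_stats(90, 30, 10, 170))
# Sn 0.90, Sp 0.85, LR+ 6.0, LR- ~0.12, PV+ 0.75, PV- ~0.94
```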
American Family Physician and other family medicine journals use the Strength of Recommendation Taxonomy (SORT) system for rating bodies of evidence for key clinical recommendations. (For additional background, please see Strength of Recommendation Taxonomy (SORT): A Patient-Centered Approach to Grading Evidence in the Medical Literature, an explanatory article published in the February 1, 2004, issue of AFP, as well as this guidance for rating the strength of evidence.)
The SORT table is intended to highlight the three to seven most important recommendations for clinicians from a review article. Each recommendation is accompanied by a SORT rating of A, B, or C, as defined below, to emphasize interventions and approaches that improve patient-oriented outcomes (e.g., morbidity, mortality, quality of life) over disease-oriented evidence (e.g., biomarkers, surrogate endpoints). More details about creating a SORT table can be found in the Authors' Guide.
| STRENGTH OF RECOMMENDATION | DEFINITION |
| --- | --- |
| A | Recommendation based on consistent and good-quality patient-oriented evidence* |
| B | Recommendation based on inconsistent or limited-quality patient-oriented evidence* |
| C | Recommendation based on consensus, usual practice, expert opinion, disease-oriented evidence,** or case series for studies of diagnosis, treatment, prevention, or screening |

*Patient-oriented evidence measures outcomes that matter to patients: morbidity, mortality, symptom improvement, cost reduction, and quality of life.
**Disease-oriented evidence measures intermediate, physiologic, or surrogate endpoints that may or may not reflect improvements in patient outcomes (e.g., blood pressure, blood chemistry, physiologic function, and pathologic findings).
American Family Physician publishes a number of evidence-based medicine (EBM) features and departments in every issue, some of which include the following:
Note: See the AFP Journal Club Toolkit and MDCalc’s glossary of EBM terms for additional information on EBM terms and types of studies.
Unintentional bias is the result of using a weaker study design (e.g., a case series or observational study), not designing a study well (e.g., using too low a dose of the comparator drug), or not executing the study well (e.g., making it possible for participants or researchers to determine to which group they are assigned). Intentional bias also exists. Examples of study techniques that are designed to make a favorable result for the study drug more likely include a run-in phase using the active drug to identify compliant patients who tolerate the drug; per protocol rather than intention-to-treat analysis; and intentionally choosing too low a dose of the comparator drug or choosing an ineffective comparator drug.
Allocation concealment recently has been recognized as an important element of randomized controlled trial design. Allocation is concealed when neither the participants nor the researchers know or can predict to which group in a study (control or treatment) the patient is assigned. Allocation concealment takes place before the study begins, as patients are being assigned. Blinding or masking—concealing the study group assignment from those participating in the study—occurs after the study begins. Blinding should involve the patient, the physicians caring for the patient, and the researcher. It is particularly important that the persons assessing outcomes also are blinded to the patient’s study group assignment.
Individual findings from the history and physical examination often are not helpful in making a diagnosis. Usually, the physician has to consider the results of several findings as the probability of disease is revised. Clinical decision rules help make this process more objective, accurate, and consistent by identifying the best predictors of disease and combining them in a simple way to rule in or rule out a given condition. Examples include the Strep Score, the Ottawa Ankle Rules, scores for ruling out pulmonary embolism, and a variety of clinical rules to evaluate perioperative risk. Also see this Point-of-Care Guides clinical decision rule table.
In a large study, a small difference may be statistically significant but not necessarily clinically significant. For example, does a 1- or 2-point difference on a 100-point dementia scale matter to your patients? It is important to ask whether statistically significant differences also are clinically significant. Conversely, if a study finds no difference, it is important to ask whether it was large enough to detect a clinically important difference and if a difference actually existed. A study with too few patients is said to lack the power to detect a difference.
The P value tells us how likely it is that the difference between groups occurred by chance rather than because of an effect of treatment. For example, if the absolute risk reduction was 4% with P = .04, then if the study were repeated 100 times, a risk reduction this large would occur four times by chance alone. The confidence interval gives a range and is more clinically useful. A 95% confidence interval indicates that if the study were repeated 100 times, the study results would fall within this interval 95 times. For example, if a study found that a test was 80% specific with a 95% confidence interval of 74% to 85%, the specificity would fall between 74% and 85% in 95 of 100 repetitions of the study. In general, larger studies provide more precise estimates.
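To make the interval arithmetic concrete, here is a minimal Python sketch, using hypothetical event counts and the common normal (Wald) approximation, of how a 95% confidence interval around an absolute risk reduction is computed.

```python
# Minimal sketch: Wald confidence interval for an absolute risk reduction.
from math import sqrt
from statistics import NormalDist

def arr_confidence_interval(events_ctrl, n_ctrl, events_tx, n_tx, level=0.95):
    p_ctrl = events_ctrl / n_ctrl
    p_tx = events_tx / n_tx
    arr = p_ctrl - p_tx                          # absolute risk reduction
    se = sqrt(p_ctrl * (1 - p_ctrl) / n_ctrl + p_tx * (1 - p_tx) / n_tx)
    z = NormalDist().inv_cdf(0.5 + level / 2)    # ~1.96 for a 95% CI
    return arr, (arr - z * se, arr + z * se)

# Hypothetical trial: 40/200 events with control vs. 24/200 with treatment.
arr, (lo, hi) = arr_confidence_interval(40, 200, 24, 200)
print(f"ARR = {arr:.3f}, 95% CI {lo:.3f} to {hi:.3f}")  # ARR 0.080, CI ~0.009 to 0.151
```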
Disease-oriented evidence refers to the outcomes of studies that measure physiologic or surrogate markers of health. This would include things such as blood pressure, serum creatinine, glycohemoglobin, sensitivity and specificity, or peak flow. Improvements in these outcomes do not always lead to improvements in patient-oriented outcomes such as symptoms, morbidity, quality of life, or mortality.
External validity is the extent to which the results of a study can be generalized to other persons in other settings, with various conditions, especially "real world" circumstances. Internal validity is the extent to which a study measures what it is supposed to measure and the results can be attributed to the intervention of interest rather than a flaw in the research design; in other words, the degree to which one can draw valid conclusions about the causal effect of one variable on another.
People who volunteer for a clinical trial are generally healthier and have more favorable outcomes than those who do not. For example, when English women who volunteered for a mammography trial were compared with those who did not, the volunteers had half the overall mortality of those who stayed home. This volunteer effect is especially important in observational (nonrandomized) studies, and it may lead to better-than-expected outcomes in people who volunteer to participate, choose to take a medicine, or choose to exercise.
Were the participants analyzed in the groups to which they were assigned originally? This addresses what happens to participants in a study. Some participants might drop out because of adverse effects, have a change of therapy or receive additional therapy, move out of town, leave the study for a variety of reasons, or die. To minimize the possibility of bias in favor of either treatment, researchers should analyze participants based on their original treatment assignment regardless of what happens afterward. The intention-to-treat approach is conservative; if there is still a difference, the result is stronger and more likely to be because of the treatment. Per protocol analysis, which only analyzes the results for participants who complete the study, is more likely to be biased in favor of the active treatment.
When one screens for cancer, one will always detect cancers earlier. However, screening is beneficial only if the overall length of life increases, not just the time from diagnosis. Lead time is the interval between detection of disease by screening and when it would ordinarily be detected because of signs or symptoms. Lead time bias is the apparent benefit of screening that actually reflects only a longer duration of known disease, with no increase in lifespan. For a graphic representation of lead time bias, see figure 2 in Screening for Cancer: Concepts and Controversies.
In a study of cancer screening, a screening test is more likely to identify slower growing tumors than fast growing tumors, which may appear between screening intervals. In an observational study comparing screened with unscreened patients, this will make the outcomes appear better in the screening group, because the cancers detected have a more favorable prognosis. For a graphic representation of length time bias, see figure 3 in Screening for Cancer: Concepts and Controversies.
Likelihood ratios (LRs) correspond to the clinical impression of how well a test rules in or rules out a given disease. A test with a single cutoff for abnormal will have two LRs, one for a positive test (LR+) and one for a negative test (LR–). Tests with multiple cutoffs (i.e., very low, low, normal, high, very high) can have a different LR for each range of results. A test with an LR of 1.0 indicates that it does not change the probability of disease. The higher above 1 the LR is, the better it rules in disease (an LR greater than 10 is considered good). Conversely, the lower the LR is below 1, the better the test result rules out disease (an LR less than 0.1 is considered good).
Note: for additional information about likelihood ratios, see this comprehensive handout.
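As a worked illustration of the odds-based arithmetic behind likelihood ratios, here is a minimal Python sketch: convert the pretest probability to odds, multiply by the LR, and convert back. The probabilities and LRs are hypothetical.

```python
# Minimal sketch: updating disease probability with a likelihood ratio.

def post_test_probability(pretest_prob: float, lr: float) -> float:
    pretest_odds = pretest_prob / (1 - pretest_prob)  # probability -> odds
    post_odds = pretest_odds * lr                     # apply the LR
    return post_odds / (1 + post_odds)                # odds -> probability

# Hypothetical patient with a 30% pretest probability of disease:
print(post_test_probability(0.30, 6))    # positive test, LR+ = 6  -> ~0.72
print(post_test_probability(0.30, 0.1))  # negative test, LR- = 0.1 -> ~0.04
```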
The Choosing Wisely campaign has highlighted what it describes as low-value care: care that costs money and may even be harmful, but has not been shown to improve health outcomes in a clinically meaningful way compared with less costly or less potentially harmful alternatives. For example, screening electrocardiograms in patients at low risk of coronary artery disease do not improve outcomes or add to cardiovascular risk prediction beyond traditional risk factors.
A network meta-analysis (also known as a multiple-treatments meta-analysis) allows you to compare treatments directly (for example, head-to-head trials) and indirectly (for example, against a first-line treatment). This increases the number of comparisons available and may allow the development of decision tools for effective treatment prioritization.
In the past, most randomized trials were designed to prove that one intervention was more effective than another. Non-inferiority trials are designed to prove that a (usually new) intervention is not significantly worse than another. It is important to carefully examine the assumptions about what is significantly worse and what is not.
The absolute risk reduction (ARR) can be used to calculate the number needed to treat (NNT), which is the number of patients who need to be treated to prevent one additional bad outcome. For example, if the annual mortality is 20% in the control group and 10% in the treatment group, the ARR is 10% (20 − 10), and the NNT is 100% ÷ ARR (100 ÷ 10) = 10 per year. That is, for every 10 patients who are treated for one year, one additional death is prevented. The same calculation can be made for harmful events: the number needed to harm (NNH) is the number of patients who need to receive an intervention instead of the alternative for one additional patient to experience an adverse event. The NNH is calculated as 1/ARI, where ARI is the absolute risk increase. For example, if a drug causes serious bleeding in 2% of patients in the treatment group over one year compared with 1% in the control group, the ARI is 1% (2 − 1), and the NNH is 100% ÷ 1% = 100 per year of treatment.
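The arithmetic above is simple enough to restate in a few lines of code; this minimal Python sketch uses the example numbers from this entry.

```python
# Minimal sketch: NNT and NNH from event rates.

def nnt(control_event_rate: float, treatment_event_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treatment_event_rate
    return 1 / arr

def nnh(treatment_harm_rate: float, control_harm_rate: float) -> float:
    """Number needed to harm = 1 / absolute risk increase."""
    ari = treatment_harm_rate - control_harm_rate
    return 1 / ari

print(nnt(0.20, 0.10))  # ARR = 10% -> NNT = 10 per year of treatment
print(nnh(0.02, 0.01))  # ARI = 1%  -> NNH = 100 per year of treatment
```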
In an observational study of a drug or other treatment, the patient chooses whether or not to take the drug or to have the surgery being studied. This may introduce unintentional bias. For example, patients who choose to take hormone therapy probably are different from those who do not. Experimental studies, most commonly randomized controlled trials (RCTs), avoid this bias by randomly assigning patients to groups. The only difference between groups in a well-designed RCT is the treatment intervention, so it is more likely that differences between groups are caused by the treatment. When good observational studies disagree with good RCTs, the RCT should be trusted.
Observational studies usually report their results as odds ratios or relative risks. Both are measures of the size of an association between an exposure (e.g., smoking, use of a medication) and a disease or death. A relative risk of 1.0 indicates that the exposure does not change the risk of disease. A relative risk of 1.75 indicates that patients with the exposure are 1.75 times as likely to develop the disease, or have a 75% higher risk of disease. Odds ratios are a way to estimate relative risks in case-control studies, when the relative risks cannot be calculated directly. The odds ratio approximates the relative risk well when the disease is rare, but less well when the disease is common.
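Converting an odds ratio to an approximate relative risk requires the baseline risk in the unexposed group. The following minimal Python sketch uses one published approximation (the Zhang and Yu formula) with hypothetical inputs; it also demonstrates why the approximation is good when disease is rare and poorer when it is common.

```python
# Minimal sketch: odds ratio -> approximate relative risk (Zhang-Yu formula).

def odds_ratio_to_relative_risk(odds_ratio: float, baseline_risk: float) -> float:
    """baseline_risk is the event risk in the unexposed (control) group."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

print(odds_ratio_to_relative_risk(2.0, 0.05))  # rare disease:   RR ~ 1.90 (close to OR)
print(odds_ratio_to_relative_risk(2.0, 0.40))  # common disease: RR ~ 1.43 (OR overstates RR)
```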
Overdiagnosis occurs when a screening test detects a condition that is typically treated, but that in this case never would have become clinically apparent or caused symptoms. For example, screening with PSA often detects prostate cancers that are treated, but that never would have progressed to cause symptoms prior to death from another cause. For a graphic representation of overdiagnosis bias, see figure 4 in Screening for Cancer: Concepts and Controversies.
Overtreatment refers to treating when it is not indicated, or treating more aggressively than is warranted. For example, targeting a blood pressure of 120/80 in an average risk person or using antibiotics for acute bronchitis.
Patient-oriented evidence (POE) refers to outcomes of studies that measure things a patient would care about, such as improvement in symptoms, morbidity, quality of life, cost, length of stay, or mortality. Essentially, POE indicates whether use of the treatment or test in question helped a patient live a longer or better life. Any POE that would change practice is a POEM (patient-oriented evidence that matters).
Simple randomization does not guarantee balance in numbers during a trial. If patient characteristics change with time, early imbalances cannot be corrected. Permuted block randomization ensures balance over time: patients are enrolled in blocks of size 2m, and each block is randomized so that m patients are allocated to A and m to B.
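Here is a minimal Python sketch of the idea (function name and block size are illustrative): within each block of 2m patients, exactly m go to each arm, so the groups stay balanced as enrollment proceeds.

```python
# Minimal sketch: permuted block randomization with blocks of size 2m.
import random

def permuted_block_assignments(n_patients: int, m: int = 2) -> list[str]:
    assignments = []
    while len(assignments) < n_patients:
        block = ["A"] * m + ["B"] * m
        random.shuffle(block)        # randomize the order within the block
        assignments.extend(block)
    return assignments[:n_patients]

print(permuted_block_assignments(12, m=2))
# e.g. ['A', 'B', 'B', 'A', 'B', 'A', 'A', 'B', ...] -- balanced after every 4 patients
```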
Predictive values help interpret the results of tests in the clinical setting. The positive predictive value (PV+) is the percentage of patients with a positive or abnormal test who have the disease in question. The negative predictive value (PV–) is the percentage of patients with a negative or normal test who do not have the disease in question. Although the sensitivity and specificity of a test do not change as the overall likelihood of disease changes in a population, the predictive value does change. For example, the PV+ increases as the overall probability of disease increases, so a test that has a PV+ of 30% when disease is rare may have a PV+ of 90% when it is common. Similarly, the PV changes with a physician’s clinical suspicion that a disease is or is not present in a given patient.
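This prevalence dependence is easy to demonstrate with Bayes' theorem. The following minimal Python sketch uses a hypothetical test that is 90% sensitive and 90% specific.

```python
# Minimal sketch: how PV+ changes with disease prevalence for a fixed test.

def positive_predictive_value(sens: float, spec: float, prevalence: float) -> float:
    true_pos = sens * prevalence                 # diseased patients testing positive
    false_pos = (1 - spec) * (1 - prevalence)    # healthy patients testing positive
    return true_pos / (true_pos + false_pos)

for prevalence in (0.01, 0.10, 0.50):
    ppv = positive_predictive_value(0.90, 0.90, prevalence)
    print(f"prevalence {prevalence:.0%}: PV+ = {ppv:.0%}")
# prevalence 1%: PV+ = 8%; prevalence 10%: PV+ = 50%; prevalence 50%: PV+ = 90%
```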
Whenever an illness is suspected, physicians should begin with an estimate of how likely it is that the patient has the disease. This estimate is the pretest probability. After the patient has been interviewed and examined, the results of the clinical examination are used to revise this probability upward or downward to determine the post-test probability. Although usually implicit, this process can be made more explicit using results from epidemiologic studies, knowledge of the accuracy of tests, and Bayes’ theorem. The post-test probability from the clinical examination then becomes the starting point when ordering diagnostic tests or imaging studies and becomes a new pretest probability. After the results are reviewed, the probability of disease is revised again to determine the final post-test probability of disease.
A receiver operating characteristic (ROC) curve plots the true positive rate (percent of patients with disease who have a positive test) against the false positive rate (percent without disease who have a positive test) as the cutoff for what defines a positive test is varied. The area under this curve is 1.0 for a perfectly accurate test and 0.5 for a useless test, with higher values representing more accurate tests. The area under the ROC curve also corresponds to the likelihood that the test will correctly classify two randomly selected people, one with and one without disease. An example is the ROC curve for vaginal ultrasound as a test for uterine cancer, using different cutoffs for endometrial wall thickness (in mm) as abnormal.
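The following minimal Python sketch shows how the points of an ROC curve are generated: sweep the cutoff across the observed test values and compute the true and false positive rates at each. The thickness values and cancer labels are hypothetical.

```python
# Minimal sketch: generating ROC curve points by varying the test cutoff.

def roc_points(scores, labels):
    """Return (false positive rate, true positive rate) pairs, one per cutoff."""
    points = []
    for cutoff in sorted(set(scores), reverse=True):
        tp = sum(s >= cutoff and y for s, y in zip(scores, labels))
        fp = sum(s >= cutoff and not y for s, y in zip(scores, labels))
        tpr = tp / sum(labels)                    # sensitivity at this cutoff
        fpr = fp / (len(labels) - sum(labels))    # 1 - specificity at this cutoff
        points.append((fpr, tpr))
    return points

# Hypothetical endometrial thickness in mm and cancer status (1 = cancer).
scores = [3, 4, 5, 6, 8, 10, 12, 15]
labels = [0, 0, 0, 0, 1, 0, 1, 1]
print(roc_points(scores, labels))
```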
Studies often use relative risk reduction to describe results. For example, if mortality is 20% in the control group and 10% in the treatment group, there is a 50% relative risk reduction ([20 − 10] ÷ 20 = 50%). However, if mortality is 2% in the control group and 1% in the treatment group, this also represents a 50% relative risk reduction, although it is a very different clinical scenario. Absolute risk reduction is the difference between the event rates in the control and treatment groups. In the first example, the absolute risk reduction is 10%; in the second, it is 1%. Reporting absolute risk reduction is a less dramatic but more clinically meaningful way to convey results.
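A minimal Python sketch contrasting the two scenarios above makes the distinction explicit.

```python
# Minimal sketch: relative vs. absolute risk reduction for two scenarios.

def risk_reductions(control_rate: float, treatment_rate: float):
    arr = control_rate - treatment_rate   # absolute risk reduction
    rrr = arr / control_rate              # relative risk reduction
    return rrr, arr

print(risk_reductions(0.20, 0.10))  # RRR 50%, ARR 10% (NNT = 10)
print(risk_reductions(0.02, 0.01))  # RRR 50%, ARR  1% (NNT = 100)
```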
A run-in period is a brief period at the beginning of a trial before the intervention is applied. In some cases, run-in periods are appropriate (for example, to wean patients from a previously prescribed medication). However, run-in periods to assess compliance and ensure treatment responsiveness create a bias in favor of the treatment and reduce generalizability.
The number of patients in a study, called the sample size, determines how precisely a research question can be answered. There are two potential problems related to sample size. A large study can give a precise estimate of effect and find small differences between groups that are statistically significant, but that may not be clinically meaningful. On the other hand, a small study might not find a difference between groups (even though such a difference may actually exist and may be clinically meaningful) because it lacks statistical power. The “power” of a study takes various factors into consideration, such as sample size, to estimate the likelihood that the study will detect true differences between two groups.
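Formal power calculations make this concrete. The following minimal Python sketch uses the standard normal-approximation sample size formula for comparing two proportions; the event rates, alpha, and power are illustrative.

```python
# Minimal sketch: sample size per group to detect a difference between
# two proportions with a given power and two-sided alpha.
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.20, 0.10))  # roughly 200 patients per group
```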
Sensitivity is the percentage of patients with a disease who have a positive test for the disease in question. Specificity is the percentage of patients without the disease who have a negative test. Because it is unknown if the patient has the disease when the tests are ordered, sensitivity and specificity are of limited value. They are most valuable when very high (greater than 95%). A highly Sensitive test that is Negative tends to rule Out the disease (SnNOut), and a highly Specific test that is Positive tends to rule In the disease (SpPIn).
Also known as Cohen’s d, the standardized mean difference (SMD) is used to combine the results from studies using scales that have different lengths or sizes but are attempting to measure the same underlying parameter. For example, the 30-point Mini-Mental State Examination score and the 72-point Alzheimer’s Disease Assessment Scale–cog score are both measures of the severity of cognitive impairment. The SMD is calculated as the difference in the mean outcome between groups divided by the standard deviation. In general, an SMD less than 0.2 is not clinically significant, an SMD of 0.2 represents a small clinical effect, an SMD of 0.5 is a moderate effect, and an SMD of 0.8 or greater is a large effect.
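A minimal Python sketch of the calculation, using a pooled standard deviation; the group statistics below are hypothetical.

```python
# Minimal sketch: standardized mean difference (Cohen's d) with pooled SD.
from math import sqrt

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical example: a 4-point difference on a scale with pooled SD of 8
# yields an SMD of 0.5, a moderate effect.
print(cohens_d(mean1=24, sd1=8, n1=50, mean2=20, sd2=8, n2=50))  # 0.5
```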
Often, there are many studies of varying quality and size that address a clinical question. Systematic reviews can help evaluate the studies by posing a focused clinical question, identifying every relevant study in the literature, evaluating the quality of these studies by using predetermined criteria, and answering the question based on the best available evidence. Meta-analyses combine data from different studies; this should be done only if the studies were of good quality and were reasonably homogeneous (i.e., most had generally similar characteristics).
A visual analog scale asks participants to rate pain or some other subjective outcome on a scale, typically ranging from 0 to 100 points, where 0 is no pain and 100 is the worst possible pain imaginable. A difference of at least 10 points is the smallest change that is clinically noticeable or significant. Smaller differences may be statistically significant but are unlikely to be noticeable by patients.
Studies of treatments, whether the treatment is a drug, device, or other intervention, must be randomized controlled trials. Because most new, relevant medical information involves advances in treatment, these studies must withstand rigorous review.
Studies of diagnostic tests, whether in a laboratory or as part of the physical examination, must demonstrate that the test is accurate at identifying the disease when it is present, that the test does not identify the disease when it is not present, and that it works well over a wide spectrum of patients with and without the disease.
Only systematic reviews (overviews), including meta-analyses, will be considered.
The main threats to studies of prognosis are initial patient identification and loss to follow-up. Only prognosis studies that identify patients before they have the outcome of importance and follow up with at least 80 percent of patients are included.
Decision analysis involves choosing an action after formally and logically weighing the risks and benefits of the alternatives. Although all clinical decisions are made under conditions of uncertainty, this uncertainty decreases when the medical literature includes directly relevant, valid evidence. When the published evidence is scant, or less valid, uncertainty increases. Decision analysis allows physicians to compare the expected consequences of pursuing different strategies under conditions of uncertainty. In a sense, decision analysis is an attempt to construct POEMs artificially out of disease-oriented evidence.
Qualitative research uses nonquantitative methods to answer questions. While this type of research is able to investigate questions that quantitative research cannot, it is at risk for bias and error on the part of the researcher. Qualitative research findings will be reported if they are highly relevant, although specific conclusions will not be drawn from the results.
These are a broadly accepted set of nine criteria for establishing causality between an exposure and an effect or consequence. In general, the more criteria that are met, the more likely it is that the relationship is causal.
Information from Hill AB. The environment and disease: association or causation? Proc R Soc Med. 1965;58(5):295-300.
| TERM | ABBREVIATION | DEFINITION |
| --- | --- | --- |
| Sensitivity | Sn | Percentage of patients with disease who have a positive test for the disease in question |
| Specificity | Sp | Percentage of patients without disease who have a negative test for the disease in question |
| Predictive value (positive and negative) | PV+, PV− | Percentage of patients with a positive or negative test for a disease who do or do not have the disease in question |
| Pretest probability | | Probability of disease before a test is performed |
| Post-test probability | | Probability of disease after a test is performed |
| Likelihood ratio | LR | LR >1 indicates an increased likelihood of disease; LR <1 indicates a decreased likelihood of disease. The most helpful tests generally have a ratio of less than 0.2 or greater than 5. |
| Relative risk reduction | RRR | The percentage difference in risk or outcomes between treatment and control groups. Example: if mortality is 30% in controls and 20% with treatment, the RRR is (30 − 20)/30 = 33%. |
| Absolute risk reduction | ARR | The arithmetic difference in risk or outcomes between treatment and control groups. Example: if mortality is 30% in controls and 20% with treatment, the ARR is 30 − 20 = 10%. |
| Number needed to treat | NNT | The number of patients who need to receive an intervention instead of the alternative in order for one additional patient to benefit. The NNT is calculated as 1/ARR. Example: if the ARR is 4%, the NNT = 1/4% = 1/0.04 = 25. |
| 95 percent confidence interval | 95% CI | An estimate of certainty: it is 95% certain that the true value lies within the given range. A narrow CI is good. A CI that spans 1.0 calls into question the validity of the result. |
| Systematic review | | A type of review article that uses explicit methods to comprehensively analyze and qualitatively synthesize information from multiple studies |
| Meta-analysis | | A type of systematic review that uses rigorous statistical methods to quantitatively synthesize the results of multiple similar studies |
The following are “evidence-based medicine pointers” for analyzing research studies, culled from AFP’s Journal Club series, which ran from November 1, 2007, through May 15, 2015.
There are two major sections: Types of Studies and Key Concepts When Looking at Research Studies, organized as shown below. Within each section, key words are listed in bold.
Note: See also the EBM Glossaries and MDCalc's glossary of EBM terms for additional explanation of terms, studies, and statistical concepts.
TYPES OF STUDIES:
KEY CONCEPTS WHEN LOOKING AT RESEARCH STUDIES: