Other Keys to Interpreting Clinical Evidence


• Use a personal digital assistant (PDA) or computer online, with bookmarks for helpful EBM sources, to apply evidence at the point of care.

• Although full access to the Cochrane Database can be expensive, helpful summaries are available for free.

• Attend an information mastery or EBM workshop to solidify your grasp of basic concepts and application of this information to daily practice.

• Becoming a reviewer of Clinical Inquiries, HelpDesk Answers, or PURLs can consolidate your skills in EBM.

• Make it a habit to spend 10 minutes each day reviewing important systematic reviews or evidence-based summaries of relevance to your practice.

One of the major tasks in interpreting whether the results of a study should change practice is to determine whether all relevant patient-oriented outcomes were considered. It is important to distinguish among physiologic outcomes (e.g., serum calcium), intermediate outcomes (e.g., bone density), and patient-oriented outcomes (e.g., fractures). Whenever possible, practice decisions should be based on outcomes patients would deem important. For example, in a trial of hormone replacement therapy (HRT) for osteoporosis, a decrease in fracture incidence would be a more convincing outcome than a change in a physiologic parameter such as bone density. Likewise, all important harms (i.e., risks) and financial end points (i.e., costs and savings) should be reported. In a trial of a new antiresorptive agent, the rates of esophagitis, gastritis, and esophageal perforation may be important harms to detail, along with such measures as patient satisfaction, costs and savings of care, and global well-being.

When assessing the benefits and harms of such a new treatment, appropriate competing alternatives (including no treatment at all) should be compared. Typically, such a comparison may take the form of a "balance sheet," a table comparing each intervention in terms of benefits, harms, and economic end points. Many studies are randomized, placebo-controlled trials in which patients receive an active intervention or a placebo or sham intervention. Alternatively, a study may use an active comparator, an intervention already known to be effective. Each of these approaches has pros and cons, but the most important point to remember is that statistical significance in a single measure does not mean that all appropriate patient-oriented outcomes were considered.

When a study shows no effect, the question of power is raised. Put in simple terms, power is the ability of a study to detect the effect of an intervention; it depends on the number of patients in the study, the magnitude of the intervention's effect, and the variability of the effect from one subject to another. For some interventions, even a small effect may be important. For example, many nonpharmacologic treatments for hypertension (e.g., salt restriction) have relatively modest but important effects. Clinicians should generally be skeptical of small studies that show negative results.

Examining the confidence intervals is the easiest way to assess whether the study sample was too small to detect a clinically important difference; a sample that lacks statistical power yields wide confidence intervals.
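The link between sample size and confidence interval width can be sketched numerically. The snippet below uses the standard normal-approximation formula for the 95% confidence interval of an absolute risk difference; the trial numbers are entirely hypothetical and chosen only to show how the same event rates yield a wide interval (crossing zero) in a small trial and a narrow one in a large trial:

```python
import math

def risk_difference_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """95% CI for an absolute risk difference, normal approximation."""
    p1, p2 = events_tx / n_tx, events_ctl / n_ctl
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n_tx + p2 * (1 - p2) / n_ctl)
    return diff - z * se, diff + z * se

# Same event rates (15% vs. 30%), different sample sizes (hypothetical)
small = risk_difference_ci(3, 20, 6, 20)           # 20 patients per arm
large = risk_difference_ci(150, 1000, 300, 1000)   # 1000 patients per arm

# The small trial's interval spans zero ("no effect" cannot be excluded);
# the large trial's interval is narrow and excludes zero.
```

An interval that includes zero in a small trial does not show the intervention is ineffective; it may simply show the study was underpowered.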

Even when a study is positive or shows statistically significant results, it is important to consider whether the findings are clinically significant and applicable to your practice. For example, if a study showed a drug reduces the risk of heart attack by one in a million patients, we would probably be skeptical about its utility. Likewise, the finding that daily borscht reduces fractures in a study of Russian dockworkers may or may not be applicable in the United States. The acceptability of an intervention (e.g., electroconvulsive therapy for depression) may vary. Moreover, the ability to replicate the findings of a study done in a typical research setting is often reduced in real-world practice. An intervention for osteoporosis requiring daily injections may be demonstrated to be efficacious, but in the average practice setting, its effectiveness may be much more limited.
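The "one in a million" example can be made concrete with the number needed to treat (NNT), the reciprocal of the absolute risk reduction (ARR). This is a minimal sketch with hypothetical figures; the function names are illustrative, not from any library:

```python
def absolute_risk_reduction(control_risk, treated_risk):
    """ARR: the difference in event rates between control and treated groups."""
    return control_risk - treated_risk

def number_needed_to_treat(arr):
    """NNT: patients who must be treated to prevent one additional event."""
    return 1 / arr if arr > 0 else float("inf")

# Hypothetical drug reducing heart attack risk by one in a million
arr = absolute_risk_reduction(control_risk=2 / 1_000_000,
                              treated_risk=1 / 1_000_000)
nnt = number_needed_to_treat(arr)   # one million patients per event prevented
```

A statistically significant result with an NNT of one million is unlikely to be clinically meaningful, whereas the same statistical significance with an NNT of 20 might change practice.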

Clinicians frequently rely on the synthesis of many studies, rather than a single study, to change their practices. Such reviews can be systematic, in which rigorous attempts are made to uncover all studies, published and unpublished, in English and in other relevant languages, or they may be more limited reviews that consider only a portion of the published literature. Some use formal mathematical methods to combine the results of studies (i.e., meta-analysis), and others are qualitative and synthesize data according to an author's overall judgment. Common biases to consider related to published reviews include whether all sources of evidence were considered; how disparate results were combined; whether relevant patient-oriented outcomes were assessed; whether there was adequate attention to the quality of the studies and their generalizability; and whether the authors analyzed why differences in outcomes may have occurred, based on such factors as study design, population, and intervention. Published reviews, including systematic reviews and clinical guidelines, have become increasingly important tools for the busy clinician.

Clinicians may hone critical appraisal skills through involvement with local journal clubs or by working with the Family Physicians' Inquiries Network (www.fpin.org). Although it is important to understand basic concepts for interpreting medical literature, sifting through original research studies can be a tedious, impractical process for busy clinicians. Many practical EBM tools have emerged in recent years to help physicians quickly access comprehensive, expert reviews of published studies in the middle of a busy practice (Table 8-3). The ability to critique articles using a structured approach is facilitated by widely available worksheets and tools (see Web Resources). Although many taxonomies exist for levels of evidence, two of the most widely used are that of the Centre for Evidence-Based Medicine (CEBM) and the Strength of Recommendation Taxonomy used in this book, which is specifically tailored to family medicine (Ebell et al., 2004).
