These Journal Club worksheets can be used to appraise the evidence found during module 2: Acquire.
Tools to help you interpret the clinical and statistical significance of data reported in clinical research.
It is essential to critically evaluate any information you retrieve in order to ensure its quality. Applying the CAARP Test to the resources you find will help you make an objective evaluation and screen out unreliable information.
Currency
Accuracy
Authority
Relevance
Purpose
This handout provides a more detailed description of the criteria used for evaluating information sources, including websites.
Critical evaluation and appraisal are an essential part of the research process. You need to be able to identify where information came from, how valid it is, and how relevant it is to your particular context. The tools on this page will help you appraise the information you find.
To a certain extent, you will have to rely on your own knowledge of the subject and your critical thinking skills when evaluating research. Some of the criteria to keep in mind when appraising evidence are:
Quality
Trials that are randomised and double blind, to avoid selection and observer bias, and where we know what happened to most of the subjects in the trial.
Validity
Trials that mimic clinical practice, or could be used in clinical practice, with outcomes that make sense. For instance, in chronic disorders we want long-term, not short-term, trials. We are also interested in outcomes that are large, useful, and statistically very significant (p < 0.01: less than a 1 in 100 probability that a result this extreme would arise by chance alone).
Size
Trials (or collections of trials) that have large numbers of patients, to avoid being misled by the random play of chance. For instance, to be sure that a number needed to treat (NNT) of 2.5 really lies between 2 and 3, we need results from about 500 patients; if that NNT is above 5, we need data from thousands of patients (a rough calculation is sketched below).
These are the criteria on which we should judge evidence. For it to be strong evidence, it has to fulfil the requirements of all three criteria.
Source: Bandolier via the Oregon Health Sciences Library
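To see why trial size matters for the precision of an NNT, consider a minimal sketch in Python (hypothetical event rates, standard normal-approximation formulas): the NNT is the reciprocal of the absolute risk reduction (ARR), and its confidence interval comes from the reciprocals of the ARR's interval bounds.

```python
# A minimal sketch (hypothetical numbers) of why NNT precision depends
# on trial size: the NNT's confidence interval is derived from the CI
# of the absolute risk reduction (ARR), whose standard error shrinks
# as the number of patients grows.
import math

def nnt_with_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
    p_ctrl = events_ctrl / n_ctrl            # control-group event rate
    p_trt = events_trt / n_trt               # treatment-group event rate
    arr = p_ctrl - p_trt                     # absolute risk reduction
    se = math.sqrt(p_ctrl * (1 - p_ctrl) / n_ctrl
                   + p_trt * (1 - p_trt) / n_trt)
    arr_lo, arr_hi = arr - z * se, arr + z * se
    # NNT = 1/ARR; the CI bounds are the reciprocals of the ARR bounds, swapped
    return 1 / arr, 1 / arr_hi, 1 / arr_lo

# Same hypothetical event rates (80% vs 40%, so NNT = 2.5), two trial sizes
for n_per_arm in (50, 250):                  # 100 vs 500 patients in total
    nnt, lo, hi = nnt_with_ci(int(0.8 * n_per_arm), n_per_arm,
                              int(0.4 * n_per_arm), n_per_arm)
    print(f"{2 * n_per_arm} patients: NNT = {nnt:.1f} "
          f"(95% CI {lo:.1f} to {hi:.1f})")
```

With 100 patients this prints an NNT of 2.5 with a 95% CI of roughly 1.7 to 4.4; with 500 patients the interval narrows to about 2.1 to 3.1, in line with the figures quoted above.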
These tools can help your appraisal by providing you with a list of items to look for.
Are the results of this article valid?
1. Did the review explicitly address a sensible question?
The systematic review should address a focused question that specifies the patient problem, the exposure or intervention, and one or more outcomes. General reviews, which usually do not address specific questions, may be too broad to answer the clinical question for which you are seeking information.
2. Was the search for relevant studies detailed and exhaustive?
Researchers should conduct a thorough search of appropriate bibliographic databases; the databases and search strategies should be outlined in the methods section. They should also show evidence of searching for unpublished studies, for example by contacting experts in the field, and should check the reference lists of the articles they retrieve.
3. Were the primary studies of high methodological quality?
Researchers should evaluate the validity of each study included in the systematic review, using the same evidence-based practice (EBP) criteria applied when critically appraising individual studies. Differences in study results may be explained by differences in methodology and study design.
4. Were selection and assessments of the included studies reproducible?
More than one researcher should evaluate each study and decide on its validity and inclusion. Bias (systematic error) and mistakes (random error) are reduced when judgments are shared, and a third reviewer should be available to resolve disagreements.
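One common way review teams report how reproducible study selection was is a chance-corrected agreement statistic such as Cohen's kappa. The sketch below, using hypothetical include/exclude decisions from two reviewers, illustrates the calculation; it is an example of the statistic, not a prescribed step from the worksheet.

```python
# A minimal sketch of Cohen's kappa: chance-corrected agreement between
# two reviewers' include/exclude decisions. The decision lists are
# hypothetical.

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    # Expected agreement if both raters decided independently,
    # each at their own observed rate per category
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

reviewer_1 = ["include", "include", "exclude", "exclude", "include",
              "exclude", "include", "exclude", "exclude", "include"]
reviewer_2 = ["include", "exclude", "exclude", "exclude", "include",
              "exclude", "include", "exclude", "include", "include"]

print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # 0.60 here
```

Raw agreement here is 80%, but half of that would be expected by chance alone, so kappa is 0.60; this is why kappa is preferred to simple percentage agreement.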
Key issues for Systematic Reviews:
What are the results?
Were the results similar from study to study?
How similar were the point estimates?
Do confidence intervals overlap between studies?
What are the overall results of the review?
Were results weighted both quantitatively and qualitatively in summary estimates?
How precise were the results?
What is the confidence interval for the summary or cumulative effect size?
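As a rough illustration of these questions, the sketch below (hypothetical log risk ratios and standard errors) pools three studies with inverse-variance weights, reports the summary estimate with its 95% confidence interval (the review's precision), and computes Cochran's Q and I-squared, one common way to quantify how similar the study results are.

```python
# A minimal fixed-effect meta-analysis sketch with hypothetical data.
# Each study contributes an effect estimate (a log risk ratio) and its
# standard error; studies are weighted by inverse variance, and
# Cochran's Q / I^2 summarise between-study heterogeneity.
import math

# (log risk ratio, standard error) for three hypothetical studies
studies = [(-0.22, 0.12), (-0.35, 0.18), (-0.10, 0.15)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# 95% CI for the pooled log risk ratio, back-transformed to a risk ratio
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")

# Cochran's Q and I^2: I^2 near 0% suggests consistent studies;
# values above roughly 50% suggest substantial heterogeneity
q = sum(w * (est - pooled) ** 2 for (est, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f}, I^2 = {i_squared:.0f}%")
```

For these made-up studies the point estimates sit close together (I-squared of 0%), so pooling into a single summary estimate is reasonable; widely scattered estimates with non-overlapping confidence intervals would argue against a single number.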
More information on reading forest plots:
Ried K. Interpreting and understanding meta-analysis graphs: a practical guide. Aust Fam Physician. 2006 Aug;35(8):635-8. PMID: 16894442.
Greenhalgh T. Papers that summarise other papers (systematic reviews and meta-analyses). BMJ. 1997 Sep 13;315(7109):672-5. PMID: 9310574.
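The articles above explain how to interpret real forest plots; for readers who want to experiment, here is a minimal sketch drawn with matplotlib using hypothetical data. Each study is a horizontal line (its 95% CI) with a marker at the point estimate, and the dashed vertical line marks no effect (risk ratio = 1).

```python
# A minimal forest-plot sketch with hypothetical risk ratios and CIs.
import matplotlib.pyplot as plt

labels = ["Study A", "Study B", "Study C", "Pooled"]
estimates = [0.80, 0.65, 0.90, 0.78]
ci_low = [0.60, 0.45, 0.70, 0.67]
ci_high = [1.07, 0.94, 1.16, 0.91]

fig, ax = plt.subplots()
ys = list(range(len(labels)))[::-1]            # plot top to bottom
for y, est, lo, hi in zip(ys, estimates, ci_low, ci_high):
    ax.plot([lo, hi], [y, y], color="black")   # 95% confidence interval
    ax.plot(est, y, "s", color="black")        # point estimate
ax.axvline(1.0, linestyle="--", color="grey")  # line of no effect (RR = 1)
ax.set_yticks(ys)
ax.set_yticklabels(labels)
ax.set_xscale("log")                           # ratios are usually plotted on a log scale
ax.set_xlabel("Risk ratio")
plt.tight_layout()
plt.show()
```

In this made-up example, Studies A and C cross the line of no effect while the pooled estimate does not, which is exactly the pattern the Ried and Greenhalgh articles teach you to read.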
How can I apply the results to patient care?
Were all patient-important outcomes considered?
Did the review omit outcomes that could change decisions?
Are any postulated subgroup effects credible?
Were subgroup differences postulated before data analysis?
Were subgroup differences consistent across studies?
What is the overall quality of the evidence?
Were prevailing study design, size, and conduct reflected in a summary of the quality of evidence?
Are the benefits worth the costs and potential risks?
Does the cumulative effect size cross a test or therapeutic threshold?