CASP Checklists


Critical Appraisal Checklists

We offer a number of free downloadable checklists to help you perform critical appraisal more easily and accurately across a range of different study types.

The CASP checklists are easy to understand, but if you need any further guidance on how they are structured, take a look at our guide on how to use our CASP checklists.

CASP Randomised Controlled Trial Checklist


CASP Systematic Review Checklist

CASP Qualitative Studies Checklist

CASP Cohort Study Checklist

CASP Diagnostic Study Checklist

CASP Case Control Study Checklist

CASP Economic Evaluation Checklist

CASP Clinical Prediction Rule Checklist

Checklist Archive

  • CASP Randomised Controlled Trial Checklist 2018 fillable form
  • CASP Randomised Controlled Trial Checklist 2018


Copyright 2024 CASP UK - OAP Ltd. All rights reserved Website by Beyond Your Brand


CASP Tools for Case Control Studies

Posted on 1st May 2013 by Norah Essali


The CASP International Network (CASPin) [1] is a non-profit-making organisation for people promoting skills in finding, critically appraising and acting on the results of research papers (evidence).

CASPin provides many tools to help you read evidence systematically, and this specific tool will help you make sense of any case control study and assess its validity quickly and easily.

Appraising a case control study using this tool will take you a little over an hour (depending on how fast you read and how often you check Facebook while reading it). I highly recommend you try this out, and here’s a case control study (Use of caffeinated substances and risk of crashes in long distance drivers of commercial vehicles: case-control study) [2] to help you get started.

Link to resource

https://casp-uk.net/images/checklist/documents/CASP-Case-Control-Study-Checklist/CASP-Case-Control-Study-Checklist-2018-fillable-form.pdf [3]

[1] CASP International Network [Internet]. CASP; [cited 1 May 2013]. Available from: http://www.casp-uk.net/#!casp-international/c1zsi

[2] Sharwood LN, Elkington J, Meuleners L, Ivers R, Boufous S, Stevenson M. Use of caffeinated substances and risk of crashes in long distance drivers of commercial vehicles: case-control study [Internet]. BMJ 2013;346:f1140 [cited 1 May 2013]. Available from: http://www.bmj.com/content/346/bmj.f1140

[3] Anon. Making sense of the evidence about clinical effectiveness [Internet, PDF]. CASP; 14 October 2010 [cited 1 May 2013]. 5 p. Available from: https://casp-uk.net/images/checklist/documents/CASP-Case-Control-Study-Checklist/CASP-Case-Control-Study-Checklist-2018-fillable-form.pdf





Critical appraisal for medical and health sciences: 3. Checklists


Using the checklists


"The process of assessing and interpreting evidence by systematically considering its validity, results and relevance."

The checklists will help you consider these three areas as part of your critical appraisal. See the following tabs for an overview. 

Validity

There will be particular biases to look out for, depending on the study type.

For example, the checklists and guidance will help you to scrutinise: 

  • Was the study design appropriate for the research question?
  • How were participants selected? Has there been an attempt to minimise bias in this selection process?
  • Were potential ethical issues addressed? 
  • Was there any failure to account for subjects dropping out of the study?

Results

  • How were the data collected and analysed?
  • Are the results reliable?
  • Are the results statistically significant?

The following e-resources, developed by the University of Nottingham, may be useful when appraising quantitative studies:

  • Confidence intervals
  • Numbers Needed to Treat (NNT)
  • Relative Risk Reduction (RRR) and Absolute Risk Reduction (ARR)
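As a quick refresher on the arithmetic behind those resources, here is a minimal sketch (using made-up illustrative event counts, not data from any real trial) of how the ARR, RRR, NNT and a 95% confidence interval for the ARR are calculated:

```python
import math

# Hypothetical trial results (illustrative numbers only):
# 20/100 events in the control group, 10/100 in the treatment group.
control_events, control_n = 20, 100
treat_events, treat_n = 10, 100

cer = control_events / control_n   # control event rate = 0.20
eer = treat_events / treat_n       # experimental event rate = 0.10

arr = cer - eer    # absolute risk reduction = 0.10
rrr = arr / cer    # relative risk reduction = 0.50
nnt = 1 / arr      # number needed to treat = 10

# 95% confidence interval for the ARR (normal approximation)
se = math.sqrt(cer * (1 - cer) / control_n + eer * (1 - eer) / treat_n)
ci = (arr - 1.96 * se, arr + 1.96 * se)

print(f"ARR={arr:.2f}, RRR={rrr:.2f}, NNT={nnt:.0f}")
print(f"95% CI for ARR: ({ci[0]:.3f}, {ci[1]:.3f})")
```

A smaller NNT indicates a more effective treatment, and the width of the confidence interval shows how precise the ARR estimate is.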

Relevance

Finally, the checklists will assist you in determining:

  • Can you use the results in your situation?
  • How applicable are they to your patient or research topic?
  • Was the study well conducted?
  • Are the results valid and reproducible?
  • What do the studies tell us about the current state of science?

Where do I look for this information?

Most articles follow the IMRAD format: Introduction, Methods, Results and Discussion (Greenhalgh, 2014, p. 28), with an abstract at the beginning.

The table below shows where in the article you might look to answer your questions:

  • Greenhalgh, Trisha. How to read a paper: the basics of evidence-based medicine and healthcare.

Checklists and tools

  • AMSTAR checklist for systematic reviews
  • Cardiff University critical appraisal checklists
  • CEBM Critical appraisal worksheets
  • Scottish Intercollegiate Guidelines Network checklists
  • JBI Critical appraisal tools
  • CASP checklists

Checklists for different study types

  • Systematic Review
  • Randomised Controlled Trial (RCT)
  • Qualitative study
  • Cohort study
  • Case-control study
  • Case report
  • In vivo animal studies
  • In vitro studies
  • Grey literature


There are different checklists for different study types, as each is prone to different biases.

The following tabs will give you an overview of some of the different study types you will come across, the sort of questions you will need to consider with each, and the checklists you can use.

Not sure what type of study you're looking at? See the  Spotting the study design  guide from the Centre for Evidence-Based Medicine for more help.

What is a systematic review?

A review of a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant research, and to collect and analyse data from the studies that are included in the review. Statistical methods (meta-analysis) may or may not be used to analyse and summarise the results of the included studies.

From Cochrane Glossary

Some questions to ask when critically appraising a systematic review:

  • Do you think all the important, relevant studies were included?
  • Did the review’s authors do enough to assess quality of the included studies?
  • If the results of the review have been combined, was it reasonable to do so?

From: Critical Appraisal Skills Programme (2018). CASP Systematic Review Checklist. [online] Available at:  https://casp-uk.net/casp-tools-checklists/ . Accessed: 22/08/2018
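To make the question about combining results concrete, here is a minimal sketch (with made-up effect estimates, not from any real review) of fixed-effect, inverse-variance pooling, the simplest form of meta-analysis: each study's effect is weighted by the inverse of its variance, so more precise studies count for more.

```python
import math

# Hypothetical per-study effects: (log odds ratio, standard error).
# These numbers are invented purely for illustration.
studies = [(0.30, 0.20), (0.10, 0.15), (0.45, 0.30)]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled log OR: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"Pooled OR: {math.exp(pooled):.2f}")
```

Whether pooling like this is reasonable depends on how similar the studies are; that is exactly what the checklist question about combining results asks you to judge.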

Checklists you can use to critically appraise a systematic review:

What is a randomised controlled trial?

An experiment in which two or more interventions, possibly including a control intervention or no intervention, are compared by being randomly allocated to participants. In most trials one intervention is assigned to each individual, but sometimes assignment is to defined groups of individuals (for example, in a household) or interventions are assigned within individuals (for example, in different orders or to different parts of the body).

Some questions to ask when critically appraising RCTs:

  • Was the assignment of patients to treatments randomised?
  • Were patients, health workers and study personnel ‘blind’ to treatment, i.e. unable to tell who was in each group?
  • Were all of the patients who entered the trial properly accounted for at its conclusion? 
  • Were all participants analysed in the groups to which they were randomised, i.e. was an intention-to-treat analysis undertaken?

From: Critical Appraisal Skills Programme (2018).  CASP Randomised Controlled Trial Checklist . [online] Available at:  https://casp-uk.net/casp-tools-checklists/ . Accessed: 22/08/2018
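To illustrate why the intention-to-treat question matters, here is a minimal sketch (with hypothetical participant records invented for illustration) contrasting an intention-to-treat grouping with a per-protocol grouping when some participants cross over between arms:

```python
# Hypothetical trial records (illustrative only): each participant has the
# arm they were randomised to, the treatment actually received, and whether
# they experienced the outcome event.
participants = [
    {"randomised": "drug",    "received": "drug",    "event": False},
    {"randomised": "drug",    "received": "placebo", "event": True},   # crossed over
    {"randomised": "drug",    "received": "drug",    "event": False},
    {"randomised": "placebo", "received": "placebo", "event": True},
    {"randomised": "placebo", "received": "drug",    "event": False},  # crossed over
    {"randomised": "placebo", "received": "placebo", "event": True},
]

def event_rate(group):
    return sum(p["event"] for p in group) / len(group)

# Intention-to-treat: group by the arm participants were RANDOMISED to.
itt_drug = [p for p in participants if p["randomised"] == "drug"]
# Per-protocol (for contrast): group by the treatment actually RECEIVED.
pp_drug = [p for p in participants if p["received"] == "drug"]

print(f"ITT drug-arm event rate: {event_rate(itt_drug):.2f}")      # 0.33
print(f"Per-protocol drug event rate: {event_rate(pp_drug):.2f}")  # 0.00
```

Because the two groupings can give different event rates, an appraisal should check which analysis the authors actually performed, since only intention-to-treat preserves the benefits of randomisation.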

Checklists you can use to critically appraise an RCT:

What is a qualitative study?

Qualitative research is designed to explore the human elements of a given topic, where specific methods are used to examine how individuals see and experience the world...Qualitative methods are best for addressing many of the  why  questions that researchers have in mind when they develop their projects. Where quantitative approaches are appropriate for examining  who  has engaged in a behavior or  what  has happened and while experiments can test particular interventions, these techniques are not designed to explain why certain behaviors occur. Qualitative approaches are typically used to explore new phenomena and to capture individuals’ thoughts, feelings, or interpretations of meaning and process.

From Given, L. (2008)  The SAGE Encyclopedia of Qualitative Research Methods . Sage: London.

Some questions to ask when critically appraising a qualitative study:

  • What was the selection process and was it appropriate? 
  • Were potential ethical issues addressed, such as the potential impact of the researcher on the participants? Has anything been done to limit the effects of this?
  • Was the data analysis done using explicit, rigorous, and justified methods?

From: Critical Appraisal Skills Programme (2018).  CASP Qualitative Checklist . [online] Available at:  https://casp-uk.net/casp-tools-checklists/ . Accessed: 22/08/2018

Checklists you can use to critically appraise a qualitative study:

Watch the video for an example of how to critically appraise a qualitative study using the CASP checklist:

What is a cohort study?

An observational study in which a defined group of people (the cohort) is followed over time. The outcomes of people in subsets of this cohort are compared, to examine people who were exposed or not exposed (or exposed at different levels) to a particular intervention or other factor of interest. A prospective cohort study assembles participants and follows them into the future. A retrospective (or historical) cohort study identifies subjects from past records and follows them from the time of those records to the present. Because subjects are not allocated by the investigator to different interventions or other exposures, adjusted analysis is usually required to minimise the influence of other factors (confounders).

Some questions to ask when critically appraising a cohort study:

  • Have there been any attempts to limit selection bias or other types of bias?
  • Have the authors identified any confounding factors?
  • Are the results precise and reliable?

From: Critical Appraisal Skills Programme (2018).  CASP Cohort Study Checklist . [online] Available at:  https://casp-uk.net/casp-tools-checklists/ . Accessed: 22/08/2018

Checklists you can use to critically appraise a cohort study:

What is a case-control study?

A study that compares people with a specific disease or outcome of interest (cases) to people from the same population without that disease or outcome (controls), and which seeks to find associations between the outcome and prior exposure to particular risk factors. This design is particularly useful where the outcome is rare and past exposure can be reliably measured. Case-control studies are usually retrospective, but not always.

Some questions to ask  when critically appraising a case-control study:

  • Was the recruitment process appropriate? Is there any evidence of selection bias?
  • Have all confounding factors been accounted for?
  • How precise is the estimate of the effect? Were confidence intervals given?
  • Do you believe the results?

From: Critical Appraisal Skills Programme (2018). CASP Case Control Study Checklist. [online] Available at: https://casp-uk.net/casp-tools-checklists/. Accessed: 22/08/2018.
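As a worked illustration of the precision question, here is a minimal sketch (using an invented 2×2 table rather than data from any real study) of how a case-control odds ratio and its 95% confidence interval, via the standard error of the log odds ratio, are computed:

```python
import math

# Hypothetical 2x2 table (illustrative only):
#                 cases   controls
# exposed           30        20
# unexposed         70        80
a, b = 30, 20   # exposed cases, exposed controls
c, d = 70, 80   # unexposed cases, unexposed controls

odds_ratio = (a * d) / (b * c)   # (30*80)/(20*70) ~ 1.71

# 95% CI from the standard error of the log odds ratio (Woolf's method)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# With these illustrative counts the interval includes 1, so this
# association would not be statistically significant at the 5% level.
```

A wide interval, or one that crosses 1, is exactly the kind of imprecision the checklist question about confidence intervals is probing for.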

Checklists you can use to critically appraise a case-control study:

What is a case report?

A study reporting observations on a single individual.

Some questions to ask  when critically appraising a case report:

  • Is the researcher’s perspective clearly described and taken into account?
  • Are the methods for collecting data clearly described?
  • Are the methods for analysing the data likely to be valid and reliable?
  • Are quality control measures used?
  • Was the analysis repeated by more than one researcher to ensure reliability?
  • Are the results credible, and if so, are they relevant for practice? Are the results easy to understand?
  • Are the conclusions drawn justified by the results?
  • Are the findings of the study transferable to other settings?

From:  Roever and Reis (2015), 'Critical Appraisal of a Case Report', Evidence Based Medicine and Practice  Vol. 1 (1) 

Checklists you can use to critically appraise a case report:

  • CEBM critical appraisal of a case study

What are in vivo animal studies?

In vivo animal studies are experiments carried out using animals as models. These studies are usually pre-clinical, often bridging the gap between in vitro experiments (using cells or microorganisms) and research with human participants.

The ARRIVE guidelines provide suggested minimum reporting standards for in vivo experiments using animal models. You can use these to help you evaluate the quality and transparency of animal studies.

Some questions to ask when critically appraising in vivo studies:

  • Is the study/experimental design explained clearly?
  • Was the sample size clearly stated, with information about how sample size was decided?
  • Was randomisation used?
  • Who was aware of group allocation at each stage of the experiment?
  • Were outcome measures clearly defined and assessed?
  • Were the statistical methods used clearly explained?
  • Were all relevant details about the animals used in the experiment clearly outlined (species, strain and substrain, sex, age or developmental stage, and, if relevant, weight)?
  • Were experimental procedures explained in enough detail for them to be replicated?
  • Were the results clear, with relevant statistics included?

Adapted from:  The ARRIVE guidelines 2.0: author checklist

The ARRIVE guidelines 2.0: author checklist

While this checklist has been designed for authors to help while writing their studies, you can use the checklist to help you identify whether or not a study reports all of the required elements effectively.

SciRAP: evaluation of in vivo toxicity studies tool

The SciRAP method for evaluating the reliability of in vivo toxicity studies consists of criteria for evaluating both the reporting quality and the methodological quality of studies, separately. You can switch between evaluation of reporting quality and methodological quality.

Further guidance 

Hooijmans CR, Rovers MM, de Vries RB, Leenaars M, Ritskes-Hoitinga M, Langendam MW. SYRCLE's risk of bias tool for animal studies. BMC Med Res Methodol. 2014 Mar 26;14:43. doi: 10.1186/1471-2288-14-43. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4230647/

Kilkenny C, et al. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8:e1000412. doi:10.1371/journal.pbio.1000412. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1000412

Moermond CT, Kase R, Korkaric M, Ågerstrand M. CRED: Criteria for reporting and evaluating ecotoxicity data. Environ Toxicol Chem. 2016 May;35(5):1297-309. doi: 10.1002/etc.3259.

What are in vitro studies?

In vitro studies involve tests carried out outside of a living organism, usually involving tissues, organs or cells.

Some questions to ask when critically appraising in vitro studies:

  • Is there a clear and detailed description of the results, the test conditions and the interpretation of the results? 
  • Do the authors clearly communicate the limitations of the method/s used?
  • Do the authors use a validated method? 

Adapted from:  https://echa.europa.eu/support/registration/how-to-avoid-unnecessary-testing-on-animals/in-vitro-methods 

Guidance and checklists

SciRAP: evaluation of in vitro toxicity studies tool

The SciRAP method for evaluating the reliability of in vitro toxicity studies consists of criteria for evaluating both the reporting quality and the methodological quality of studies, separately. You can switch between evaluation of reporting quality and methodological quality.

Development and validation of a risk-of-bias tool for assessing in vitro studies conducted in dentistry: The QUIN

A checklist designed to support the evaluation of in vitro dentistry studies, although it can be used to assess risk of bias in other types of in vitro studies.

What is grey literature?

The term grey literature is used to describe a wide range of different information that is produced outside of traditional publishing and distribution channels, and which is often not well represented in indexing databases.

A widely accepted definition of grey literature in the scholarly community is:

"information produced on all levels of government, academia, business and industry in electronic and print formats not controlled by commercial publishing, i.e. where publishing is not the primary activity of the producing body."

From: Third International Conference on Grey Literature, 1997 (ICGL Luxembourg definition, 1997; expanded in New York, 2004).

You can find out more about grey literature and how to track it down here.

Some questions to ask when critically appraising grey literature:

  • Who is the author? Are they credible and do they have appropriate qualifications to speak to the subject?
  • Does the source have a clearly stated aim or brief, and does it meet this?
  • Does the source reference credible and authoritative sources?
  • Is any data collection valid and appropriate for its purpose?
  • Are any limits stated? e.g. missing data, or information outside the scope or resources of the project.
  • Is the source objective, or does the source support a viewpoint that could be biased?
  • Does the source have an identifiable date?
  • Is the source appropriate and relevant to the research area you have chosen?

Adapted from the AACODS checklist

AACODS Checklist

The AACODS checklist has been designed to support the evaluation and critical appraisal of grey literature. 

  • Last Updated: Nov 22, 2023 11:28 AM
  • URL: https://libguides.exeter.ac.uk/criticalappraisalhealth


Critical Appraisal Skills Programme (CASP) Tools

Oxford Centre for Triple Value Healthcare Ltd (n.d.). The Critical Appraisal Skills Programme: Making sense of evidence. Retrieved from organization website: http://www.casp-uk.net/

Description

These tools teach users to critically appraise different types of evidence. The program consists of seven critical appraisal tools to assess:

  • Systematic reviews
  • Randomized controlled trials (RCTs)
  • Qualitative research
  • Economic evaluation studies
  • Cohort studies
  • Case-control studies
  • Diagnostic test studies

Steps for Using Method/Tool

Each tool systematically guides users through questions in three main sections:

  • Is the study valid?
  • What are the results?
  • Will the results help locally?

Long, H. A., French, D. P., & Brooks, J. M. (2020). Optimising the value of the Critical Appraisal Skills Programme (CASP) tool for quality appraisal in qualitative evidence synthesis. Research Methods in Medicine & Health Sciences, 1 (1), 31–42. https://doi.org/10.1177/2632084320947559

Ma, L. L., Wang, Y. Y., Yang, Z. H., Huang, D., Weng, H., & Zeng, X. T. (2020). Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: What are they and which is better?. Military Medical Research, 7 (1), 7. https://doi.org/10.1186/s40779-020-00238-8

These summaries are written by the NCCMT to condense and to provide an overview of the resources listed in the Registry of Methods and Tools and to give suggestions for their use in a public health context. For more information on individual methods and tools included in the review, please consult the authors/developers of the original resources.




Handbook of Research Methods in Health Social Sciences, pp. 1027–1049

Critical Appraisal of Quantitative Research

  • Rocco Cavaleri,
  • Sameer Bhole &
  • Amit Arora
  • Reference work entry
  • First Online: 13 January 2019


Critical appraisal skills are important for anyone wishing to make informed decisions or improve the quality of healthcare delivery. A good critical appraisal provides information regarding the believability and usefulness of a particular study. However, the appraisal process is often overlooked, and critically appraising quantitative research can be daunting for both researchers and clinicians. This chapter introduces the concept of critical appraisal and highlights its importance in evidence-based practice. Readers are then introduced to the most common quantitative study designs and key questions to ask when appraising each type of study. These studies include systematic reviews, experimental studies (randomized controlled trials and non-randomized controlled trials), and observational studies (cohort, case-control, and cross-sectional studies). This chapter also provides the tools most commonly used to appraise the methodological and reporting quality of quantitative studies. Overall, this chapter serves as a step-by-step guide to appraising quantitative research in healthcare settings.

  • Critical appraisal
  • Quantitative research
  • Methodological quality
  • Reporting quality



Author information

Authors and affiliations.

School of Science and Health, Western Sydney University, Campbelltown, NSW, Australia

Rocco Cavaleri

Sydney Dental School, Faculty of Medicine and Health, The University of Sydney, Surry Hills, NSW, Australia

Sameer Bhole

School of Science and Health, Western Sydney University, Sydney, NSW, Australia

Discipline of Paediatrics and Child Health, Sydney Medical School, Sydney, NSW, Australia

Oral Health Services, Sydney Local Health District and Sydney Dental Hospital, NSW Health, Sydney, NSW, Australia

COHORTE Research Group, Ingham Institute of Applied Medical Research, Liverpool, NSW, Australia

Oral Health Services, Sydney Local Health District and Sydney Dental Hospital, NSW Health, Surry Hills, NSW, Australia


Corresponding author

Correspondence to Rocco Cavaleri .

Editor information

Editors and Affiliations

School of Science and Health, Western Sydney University, Penrith, NSW, Australia

Pranee Liamputtong


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this entry

Cite this entry

Cavaleri, R., Bhole, S., Arora, A. (2019). Critical Appraisal of Quantitative Research. In: Liamputtong, P. (eds) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_120

DOI : https://doi.org/10.1007/978-981-10-5251-4_120

Published : 13 January 2019

Publisher Name : Springer, Singapore

Print ISBN : 978-981-10-5250-7

Online ISBN : 978-981-10-5251-4



  • Open access
  • Published: 29 February 2020

Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better?

  • Lin-Lu Ma 1 ,
  • Yun-Yun Wang 1 , 2 ,
  • Zhi-Hua Yang 1 ,
  • Di Huang 1 , 2 ,
  • Hong Weng 1 &
  • Xian-Tao Zeng   ORCID: orcid.org/0000-0003-1262-725X 1 , 2 , 3 , 4  

Military Medical Research volume 7, Article number: 7 (2020)


Abstract

Methodological quality (risk of bias) assessment is an important step before a study is initiated or its results are used. Accurately judging the study type is therefore the first priority, and choosing the proper tool is equally important. In this review, we introduce methodological quality assessment tools for randomized controlled trials (individual and cluster), animal studies, non-randomized interventional studies (follow-up study, controlled before-and-after study, before-after/pre-post study, uncontrolled longitudinal study, interrupted time series study), cohort studies, case-control studies, cross-sectional studies (analytical and descriptive), observational case series and case reports, comparative effectiveness research, diagnostic studies, health economic evaluations, prediction studies (predictor finding study, prediction model impact study, prognostic prediction model study), qualitative studies, outcome measurement instruments (patient-reported outcome measure development, content validity, structural validity, internal consistency, cross-cultural validity/measurement invariance, reliability, measurement error, criterion validity, hypotheses testing for construct validity, and responsiveness), systematic reviews and meta-analyses, and clinical practice guidelines. Readers of this review can thus distinguish the types of medical studies and choose appropriate appraisal tools. In short, comprehensively mastering the relevant knowledge and carrying out repeated practice are basic requirements for correctly assessing methodological quality.

In the twentieth century, pioneering work by the distinguished professors Cochrane A [ 1 ], Guyatt GH [ 2 ], and Chalmers IG [ 3 ] led us into the evidence-based medicine (EBM) era. In this era, knowing how to search for, critically appraise, and use the best evidence is important. The systematic review and meta-analysis is the most used method for summarizing primary data scientifically [ 4 , 5 , 6 ] and is also the basis for developing clinical practice guidelines according to the Institute of Medicine (IOM) [ 7 ]. Hence, when performing a systematic review and/or meta-analysis, assessing the methodological quality of the primary studies on which it is based is important; naturally, it is also key to assess the methodological quality of the review itself before it is used. Quality includes internal and external validity, while methodological quality usually refers to internal validity [ 8 , 9 ]. Internal validity is also termed "risk of bias (RoB)" by the Cochrane Collaboration [ 9 ].

There are three types of tools: scales, checklists, and items [ 10 , 11 ]. In 2015, Zeng et al. [ 11 ] investigated methodological quality tools for randomized controlled trials (RCTs), non-randomized clinical intervention studies, cohort studies, case-control studies, cross-sectional studies, case series, diagnostic accuracy studies (also called "diagnostic test accuracy (DTA)" studies), animal studies, systematic reviews and meta-analyses, and clinical practice guidelines (CPGs). Since then, pre-existing tools may have changed and new tools may have emerged; moreover, research methods have developed further in recent years. Hence, it is necessary to systematically survey commonly used tools for assessing methodological quality, especially those for economic evaluation, clinical prediction rules/models, and qualitative studies. This narrative review therefore presents methodological quality (including "RoB") assessment tools for primary and secondary medical studies up to December 2019, and Table  1 presents their basic characteristics. We hope this review can help the producers, users, and researchers of evidence.

Tools for intervention studies

Randomized controlled trial (individual or cluster)

The first RCT was designed by Hill BA (1897–1991), and the design has remained the "gold standard" for experimental studies ever since [ 12 , 13 ]. Nowadays, the Cochrane risk of bias tool for randomized trials (introduced in 2008 and last edited on March 20, 2011) is the most commonly recommended tool for RCTs [ 9 , 14 ]; it is called "RoB". On August 22, 2019, the revised version of this tool (RoB 2.0, first introduced in 2016) was published [ 15 ]. The RoB 2.0 tool is suitable for individually-randomized, parallel-group, and cluster-randomized trials and can be found on the dedicated website https://www.riskofbias.info/welcome/rob-2-0-tool . It consists of five bias domains and shows major changes compared with the original Cochrane RoB tool (Table S 1 A-B presents the major items of both versions).

The Physiotherapy Evidence Database (PEDro) scale is a specialized methodological assessment tool for RCTs in physiotherapy [ 16 , 17 ]; it can be found at http://www.pedro.org.au/english/downloads/pedro-scale/ and covers 11 items (Table S 1 C). The Effective Practice and Organisation of Care (EPOC) Group is a Cochrane Review Group that has also developed a tool (the "EPOC RoB tool") for randomized trials of complex interventions. This tool has 9 items (Table S 1 D) and can be found at https://epoc.cochrane.org/resources/epoc-resources-review-authors . The Critical Appraisal Skills Programme (CASP) is part of the Oxford Centre for Triple Value Healthcare Ltd. (3 V) portfolio, which provides resources and learning and development opportunities to support the development of critical appraisal skills in the UK ( http://www.casp-uk.net/ ) [ 18 , 19 , 20 ]. The CASP checklist for RCTs consists of three sections involving 11 items (Table S 1 E). The National Institutes of Health (NIH) has also developed quality assessment tools, including one for controlled intervention studies (Table S 1 F) used to assess the methodological quality of RCTs ( https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools ).

The Joanna Briggs Institute (JBI) is an independent, international, not-for-profit research and development organization based in the Faculty of Health and Medical Sciences at the University of Adelaide, South Australia ( https://joannabriggs.org/ ). It develops many critical appraisal checklists covering the feasibility, appropriateness, meaningfulness, and effectiveness of healthcare interventions. Table S 1 G presents the JBI critical appraisal checklist for RCTs, which includes 13 items.

The Scottish Intercollegiate Guidelines Network (SIGN) was established in 1993 ( https://www.sign.ac.uk/ ). Its objective is to improve the quality of health care for patients in Scotland by reducing variation in practice and outcomes through the development and dissemination of national clinical guidelines containing recommendations for effective practice based on current evidence. It, too, has developed critical appraisal checklists for assessing the methodological quality of different study types, including RCTs (Table S 1 H).

In addition, the Jadad Scale [ 21 ], Modified Jadad Scale [ 22 , 23 ], Delphi List [ 24 ], Chalmers Scale [ 25 ], National Institute for Clinical Excellence (NICE) methodology checklist [ 11 ], Downs & Black checklist [ 26 ], and other tools summarized by West et al. in 2002 [ 27 ] are not commonly used or recommended nowadays.

Animal study

Before clinical trials begin, the safety and effectiveness of new drugs are usually tested in animal models [ 28 ], so animal studies are considered preclinical research of considerable importance [ 29 , 30 ]. Likewise, the methodological quality of animal studies needs to be assessed [ 30 ]. In 1999, the initial "Stroke Therapy Academic Industry Roundtable (STAIR)" recommended criteria for assessing the quality of stroke animal studies [ 31 ]; this tool is also called "STAIR". In 2009, the STAIR Group updated their criteria into the "Recommendations for Ensuring Good Scientific Inquiry" [ 32 ]. In addition, in 2004 Macleod et al. [ 33 ] proposed a 10-point tool based on STAIR to assess the methodological quality of animal studies, called "CAMARADES (The Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies)"; the "S" originally stood for "Stroke" and now stands for "Studies" ( http://www.camarades.info/ ). In the CAMARADES tool, each item is worth one point, giving a maximum total score of 10 points (Table S 1 J).

In 2008, the Systematic Review Center for Laboratory animal Experimentation (SYRCLE) was established in the Netherlands; in 2014, this team developed and released an RoB tool for animal intervention studies, SYRCLE's RoB tool, based on the original Cochrane RoB tool [ 34 ]. This tool contains 10 items and has become the most recommended tool for assessing the methodological quality of animal intervention studies (Table S 1 I).

Non-randomised studies

In clinical research, an RCT is not always feasible [ 35 ]; therefore, non-randomized designs remain important. In a non-randomized study (also called a quasi-experimental study), investigators control the allocation of participants into groups but do not attempt randomization [ 36 ]; follow-up studies belong to this category. Depending on whether a comparison is made, non-randomized clinical intervention studies can be divided into comparative and non-comparative sub-types. The Risk Of Bias In Non-randomised Studies - of Interventions (ROBINS-I) tool [ 37 ] is the preferentially recommended tool; it was developed to evaluate the risk of bias in estimating the comparative effectiveness (harm or benefit) of interventions in studies that did not use randomization to allocate units (individuals or clusters of individuals) into comparison groups. The JBI critical appraisal checklist for quasi-experimental studies (non-randomized experimental studies), which includes 9 items, is also suitable. Moreover, the methodological index for non-randomized studies (MINORS) [ 38 ] can be used; it contains 12 methodological items, of which the first 8 apply to both non-comparative and comparative studies, while the last 4 apply only to studies with two or more groups. Every item is scored from 0 to 2, giving an ideal maximum score of 16 for non-comparative studies and 24 for comparative studies. Table S 1 K-L-M presents the major items of these three tools.
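
The MINORS arithmetic described above can be sketched as follows. This is only an illustration: the item wording is omitted, and the function name and example scores are invented for the sketch, not part of the published tool.

```python
# Illustrative sketch of MINORS scoring (see Slim et al. for the
# authoritative item wording). Each item is scored:
# 0 = not reported, 1 = reported but inadequate, 2 = reported and adequate.

COMMON_ITEMS = 8       # items applied to all non-randomized studies
COMPARATIVE_ITEMS = 4  # extra items applied when two or more groups are compared

def minors_score(item_scores, comparative):
    """Sum per-item scores (0-2) and return (total, ideal maximum)."""
    expected = COMMON_ITEMS + (COMPARATIVE_ITEMS if comparative else 0)
    if len(item_scores) != expected:
        raise ValueError(f"expected {expected} items, got {len(item_scores)}")
    if any(s not in (0, 1, 2) for s in item_scores):
        raise ValueError("each MINORS item is scored 0, 1 or 2")
    return sum(item_scores), 2 * expected  # ideal maximum: 16 or 24

# A hypothetical comparative study (12 items):
score, maximum = minors_score([2, 2, 1, 2, 0, 2, 1, 2, 2, 1, 2, 2], comparative=True)
print(f"{score}/{maximum}")  # → 19/24
```

The returned pair makes the denominator explicit, which matters because a raw total is only comparable between studies appraised with the same item set.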

A non-randomized study with a separate control group may also be called a controlled clinical trial or a controlled before-and-after study. For this design, the EPOC RoB tool is suitable (see Table S 1 D); when using it, "random sequence generation" and "allocation concealment" should be scored "High risk", while the other items can be graded in the same way as for a randomized trial.

A non-randomized study without a separate control group may be a before-after (pre-post) study, a case series (uncontrolled longitudinal study), or an interrupted time series study. A case series describes a series of individuals who usually receive the same intervention, with no control group [ 9 ]. Several tools exist for assessing the methodological quality of case series studies. The latest was developed in 2012 by Moga C et al. [ 39 ] at the Canada Institute of Health Economics (IHE) using a modified Delphi technique; hence, it is also called the "IHE Quality Appraisal Tool" (Table S 1 N). The NIH also provides a quality assessment tool for case series studies, including 9 items (Table S 1 O). For interrupted time series studies, the "EPOC RoB tool for interrupted time series studies" is recommended (Table S 1 P). For before-after studies, we recommend the NIH quality assessment tool for before-after (pre-post) studies without a control group (Table S 1 Q).

In addition, for non-randomized intervention study, the Reisch tool (Check List for Assessing Therapeutic Studies) [ 11 , 40 ], Downs & Black checklist [ 26 ], and other tools summarized by Deeks et al. [ 36 ] are not commonly used or recommended nowadays.

Tools for observational studies and diagnostic study

Observational studies include cohort studies, case-control studies, cross-sectional studies, case series, case reports, and comparative effectiveness research [ 41 ]; they can be divided into analytical and descriptive studies [ 42 ].

Cohort study

Cohort studies include prospective, retrospective, and ambidirectional cohort studies [ 43 ]. Several tools are available for assessing the quality of a cohort study, such as the CASP cohort study checklist (Table S 2 A), the SIGN critical appraisal checklist for cohort studies (Table S 2 B), the NIH quality assessment tool for observational cohort and cross-sectional studies (Table S 2 C), the Newcastle-Ottawa Scale (NOS; Table S 2 D) for cohort studies, and the JBI critical appraisal checklist for cohort studies (Table S 2 E). However, the Downs & Black checklist [ 26 ] and the NICE methodology checklist for cohort studies [ 11 ] are not commonly used or recommended nowadays.

The NOS [ 44 , 45 ] arose from an ongoing collaboration between the University of Newcastle, Australia, and the University of Ottawa, Canada. Among the tools mentioned above, the NOS is the most commonly used nowadays, and it may be modified to suit a specific subject.
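
As a worked illustration of the NOS star system for cohort studies, the scale awards up to 4 stars for selection, 2 for comparability, and 3 for outcome, 9 stars in total. The tally below is a hedged sketch: the dictionary keys and function name are ours, and the scale itself prescribes no code or numeric cut-offs.

```python
# Hedged sketch of tallying Newcastle-Ottawa Scale (NOS) stars for a cohort study.
# Domain maxima follow the published scale: Selection <= 4, Comparability <= 2,
# Outcome <= 3 (9 stars in total).

NOS_MAX = {"selection": 4, "comparability": 2, "outcome": 3}

def nos_total(stars):
    """Validate per-domain star counts and return the total (0-9)."""
    for domain, awarded in stars.items():
        if not 0 <= awarded <= NOS_MAX[domain]:
            raise ValueError(f"{domain}: at most {NOS_MAX[domain]} star(s)")
    return sum(stars.values())

total = nos_total({"selection": 3, "comparability": 2, "outcome": 2})
print(total)  # → 7
```

Note that the NOS authors do not define a universal quality threshold for the total; reviewers who dichotomize "high" versus "low" quality at, say, 7 stars are applying their own convention.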

Case-control study

A case-control study selects participants based on the presence of a specific disease or condition and looks back for earlier exposures that may have led to the disease or outcome [ 42 ]. It has one advantage over the cohort study: the problem of participant "drop out" or "loss to follow-up" seen in cohort studies does not arise. Nowadays, several acceptable tools exist for assessing the methodological quality of a case-control study, including the CASP case-control study checklist (Table S 2 F), the SIGN critical appraisal checklist for case-control studies (Table S 2 G), the NIH quality assessment tool for case-control studies (Table S 2 H), the JBI critical appraisal checklist for case-control studies (Table S 2 I), and the NOS for case-control studies (Table S 2 J). Among them, the NOS for case-control studies is the most frequently used nowadays and may be modified by users.

In addition, the Downs & Black checklist [ 26 ] and the NICE methodology checklist for case-control study [ 11 ] are also not commonly used or recommended nowadays.

Cross-sectional study (analytical or descriptive)

A cross-sectional study provides a snapshot of a disease and other variables in a defined population at one point in time. It can be divided into analytical and purely descriptive types. A descriptive cross-sectional study merely describes the number of cases or events in a particular population at a point in time or during a period of time, whereas an analytical cross-sectional study can be used to infer relationships between a disease and other variables [ 46 ].

For assessing the quality of analytical cross-sectional studies, the NIH quality assessment tool for observational cohort and cross-sectional studies (Table S 2 C), the JBI critical appraisal checklist for analytical cross-sectional studies (Table S 2 K), and the Appraisal tool for Cross-Sectional Studies (AXIS tool; Table S 2 L) [ 47 ] are recommended. The AXIS tool, developed in 2016 and containing 20 items, addresses study design and reporting quality as well as the risk of bias in cross-sectional studies. Among these three tools, the JBI checklist is preferred.

A purely descriptive cross-sectional study is usually used to measure disease prevalence and incidence; hence, critical appraisal tools for analytical cross-sectional studies are not appropriate for it. Only a few quality assessment tools suit descriptive cross-sectional studies, such as the JBI critical appraisal checklist for studies reporting prevalence data [ 48 ] (Table S 2 M), the Agency for Healthcare Research and Quality (AHRQ) methodology checklist for assessing the quality of cross-sectional/prevalence studies (Table S 2 N), and Crombie's items for assessing the quality of cross-sectional studies [ 49 ] (Table S 2 O). Among them, the JBI tool is the newest.

Case series and case reports

Unlike the interventional case series mentioned above, case reports and observational case series are used to report novel occurrences of a disease or a unique finding [ 50 ]; hence, they belong to descriptive studies. There is only one tool for appraising them - the JBI critical appraisal checklist for case reports (Table S 2 P).

Comparative effectiveness research

Comparative effectiveness research (CER) compares real-world outcomes [ 51 ] of the alternative treatment options available for a given medical condition. Its key elements are the study of effectiveness (the effect in the real world), rather than efficacy (the ideal effect), and the comparison of alternative strategies [ 52 ]. In 2010, the Good Research for Comparative Effectiveness (GRACE) Initiative was established and developed principles to help healthcare providers, researchers, journal readers, and editors evaluate the inherent quality of observational comparative effectiveness studies [ 41 ]. In 2016, a validated assessment tool, the GRACE Checklist v5.0 (Table S 2 Q), was released for assessing the quality of CER.

Diagnostic study

Diagnostic studies, also called "diagnostic test accuracy (DTA)" studies, are used by clinicians to identify whether a condition exists in a patient, so that an appropriate treatment plan can be developed [ 53 ]. DTA studies have several unique design features that differ from standard interventional and observational evaluations. In 2003, Whiting et al. [ 53 , 54 ] developed a tool for assessing the quality of DTA studies, the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool; in 2011, the revised "QUADAS-2" tool (Table S 2 R) was launched [ 55 , 56 ]. Besides, the CASP diagnostic checklist (Table S 2 S), the SIGN critical appraisal checklist for diagnostic studies (Table S 2 T), the JBI critical appraisal checklist for diagnostic test accuracy studies (Table S 2 U), and the Cochrane risk of bias tool for diagnostic test accuracy (Table S 2 V) are also commonly used in this field.

Of these, the Cochrane risk of bias tool ( https://methods.cochrane.org/sdt/ ) is based on the QUADAS tool, while the SIGN and JBI tools are based on QUADAS-2. The QUADAS-2 tool is the first-choice recommendation. Other relevant tools, reviewed by Whiting et al. [ 53 ] in 2004, are no longer used nowadays.
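
QUADAS-2 appraises bias rather than computing accuracy, but it helps to recall the quantities a DTA study estimates from the 2×2 table of index test results against the reference standard. The sketch below is illustrative only; the counts are invented and the function is not part of any appraisal tool.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 diagnostic table
    (index test vs. reference standard)."""
    sensitivity = tp / (tp + fn)  # proportion of diseased correctly detected
    specificity = tn / (tn + fp)  # proportion of non-diseased correctly excluded
    return sensitivity, specificity

# Hypothetical counts: 90 true positives, 20 false positives,
# 10 false negatives, 80 true negatives.
sens, spec = diagnostic_accuracy(tp=90, fp=20, fn=10, tn=80)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Bias in patient selection, index test conduct, reference standard, or flow and timing (the four QUADAS-2 domains) distorts exactly these estimates, which is why the tool scrutinizes how the 2×2 counts were generated.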

Tools for other primary medical studies

Health economic evaluation

Health economic evaluation research comparatively analyses alternative interventions with regard to their resource use, costs, and health effects [ 57 ]. It focuses on identifying, measuring, valuing, and comparing resource use, costs, and benefit/effect consequences of two or more alternative intervention options [ 58 ]. Health economic studies are increasingly popular nowadays, and their methodological quality also needs to be assessed before use. The first tool for this purpose was developed by Drummond and Jefferson in 1996 [ 59 ], and many tools have since been developed based on Drummond's items or their revision [ 60 ], such as the SIGN critical appraisal checklist for economic evaluations (Table S 3 A), the CASP economic evaluation checklist (Table S 3 B), and the JBI critical appraisal checklist for economic evaluations (Table S 3 C). The NICE now retains only one methodology checklist, for economic evaluation (Table S 3 D).

However, we regard the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement [ 61 ] as a reporting tool rather than a methodological quality assessment tool, so we do not recommend it to assess the methodological quality of health economic evaluation.

Qualitative study

In healthcare, qualitative research aims to understand and interpret individual experiences, behaviours, interactions, and social contexts, and thus to explain phenomena of interest, such as the attitudes, beliefs, and perspectives of patients and clinicians; the interpersonal nature of caregiver-patient relationships; the illness experience; and the impact of human suffering [ 62 ]. Compared with those for quantitative studies, assessment tools for qualitative studies are fewer. Nowadays, the CASP qualitative research checklist (Table S 3 E) is the most frequently recommended tool for this purpose. Besides, the JBI critical appraisal checklist for qualitative research [ 63 , 64 ] (Table S 3 F) and the Quality Framework: Cabinet Office checklist for social research [ 65 ] (Table S 3 G) are also suitable.

Prediction studies

Clinical prediction studies include predictor finding (prognostic factor) studies, prediction model studies (development, validation, and extension or updating), and prediction model impact studies [ 66 ]. For predictor finding studies, the Quality In Prognosis Studies (QIPS) tool [ 67 ] can be used to assess methodological quality (Table S 3 H). For prediction model impact studies using a randomized comparative design, the tools for RCTs apply, especially RoB 2.0; for those using a non-randomized comparative design, the tools for non-randomized studies apply, especially ROBINS-I. For diagnostic and prognostic prediction model studies, the Prediction model Risk Of Bias ASsessment Tool (PROBAST; Table S 3 I) [ 68 ] and the CASP clinical prediction rule checklist (Table S 3 J) are suitable.

Text and expert opinion papers

Text and expert opinion-based evidence (also called “non-research evidence”) comes from expert opinions, consensus, current discourse, comments, and assumptions or assertions that appear in various journals, magazines, monographs and reports [ 69 , 70 , 71 ]. Nowadays, only the JBI has a critical appraisal checklist for the assessment of text and expert opinion papers (Table S 3 K).

Outcome measurement instruments

An outcome measurement instrument is a "device" used to collect a measurement. The term "instrument" is broad and can refer to a questionnaire (e.g. a patient-reported outcome such as quality of life), an observation (e.g. the result of a clinical examination), a scale (e.g. a visual analogue scale), a laboratory test (e.g. a blood test), or images (e.g. ultrasound or other medical imaging) [ 72 , 73 ]. Measurements can be subjective or objective, and either unidimensional (e.g. attitude) or multidimensional. Nowadays, only one tool, the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) Risk of Bias checklist [ 74 , 75 , 76 ] ( www.cosmin.nl/ ), is appropriate for assessing the methodological quality of outcome measurement instruments. Table S 3 L presents its major items, covering patient-reported outcome measure (PROM) development (Table S 3 LA), content validity (Table S 3 LB), structural validity (Table S 3 LC), internal consistency (Table S 3 LD), cross-cultural validity/measurement invariance (Table S 3 LE), reliability (Table S 3 LF), measurement error (Table S 3 LG), criterion validity (Table S 3 LH), hypotheses testing for construct validity (Table S 3 LI), and responsiveness (Table S 3 LJ).

Tools for secondary medical studies

Systematic review and meta-analysis

Systematic reviews and meta-analyses are popular methods for keeping up with the current medical literature [ 4 , 5 , 6 ]; their ultimate purpose and value lie in promoting healthcare [ 6 , 77 , 78 ]. A meta-analysis is a statistical process that combines the results of several studies and is commonly part of a systematic review [ 11 ]. Of course, critical appraisal is necessary before a systematic review or meta-analysis is used.

In 1988, Sacks et al. developed the first tool for assessing the quality of meta-analyses of RCTs, the Sacks Quality Assessment Checklist (SQAC) [ 79 ], and in 1991 Oxman and Guyatt developed another, the Overview Quality Assessment Questionnaire (OQAQ) [ 80 , 81 ]. To overcome the shortcomings of these two tools, A MeaSurement Tool to Assess systematic Reviews (AMSTAR) was developed from them in 2007 [ 82 ] ( http://www.amstar.ca/ ). However, the original AMSTAR instrument did not assess the risk of bias of non-randomised studies, and the expert group felt that revisions should address all aspects of the conduct of a systematic review. Hence, the new instrument for systematic reviews of randomised or non-randomised studies of healthcare interventions, AMSTAR 2, was released in 2017 [ 83 ]; Table S 4 A presents its major items.
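
AMSTAR 2 deliberately yields no numeric score; instead, reviewers derive an overall confidence rating from the number of critical flaws and non-critical weaknesses they identify. The function below is a minimal sketch of that published rating scheme (Shea et al. 2017); the function name and the idea of passing pre-counted weaknesses are ours, and deciding which items count as critical remains a judgment the tool leaves to the review team.

```python
def amstar2_confidence(critical_flaws, noncritical_weaknesses):
    """Map weakness counts to the AMSTAR 2 overall confidence rating."""
    if critical_flaws > 1:
        return "critically low"   # more than one critical flaw
    if critical_flaws == 1:
        return "low"              # one critical flaw, regardless of other weaknesses
    # no critical flaws:
    return "high" if noncritical_weaknesses <= 1 else "moderate"

print(amstar2_confidence(0, 0))  # → high
print(amstar2_confidence(0, 3))  # → moderate
print(amstar2_confidence(2, 0))  # → critically low
```

This is why two reviews with the same count of "No" answers can receive very different ratings: a single flaw in a critical domain outweighs several non-critical weaknesses.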

Besides, the CASP systematic review checklist (Table S 4 B), the SIGN critical appraisal checklist for systematic reviews and meta-analyses (Table S 4 C), the JBI critical appraisal checklist for systematic reviews and research syntheses (Table S 4 D), the NIH quality assessment tool for systematic reviews and meta-analyses (Table S 4 E), the Decision Support Unit (DSU) network meta-analysis (NMA) methodology checklist (Table S 4 F), and the Risk of Bias in Systematic Review (ROBIS) tool [ 84 ] (Table S 4 G) are all suitable. Among them, AMSTAR 2 is the most commonly used and ROBIS is the most frequently recommended.

Among these tools, AMSTAR 2 is suitable for assessing systematic reviews and meta-analyses of randomised or non-randomised interventional studies, the DSU NMA methodology checklist is for network meta-analyses, and ROBIS is for meta-analyses of interventional, diagnostic test accuracy, clinical prediction, and prognostic studies.

Clinical practice guidelines

Clinical practice guidelines (CPGs) are well integrated into the thinking of practicing clinicians and professional clinical organizations [ 85 , 86 , 87 ] and help incorporate scientific evidence into clinical practice [ 88 ]. However, not all CPGs are evidence-based [ 89 , 90 ], and their quality is uneven [ 91 , 92 , 93 ]. To date, more than 20 appraisal tools have been developed [ 94 ]. Among them, the Appraisal of Guidelines for Research and Evaluation (AGREE) instrument has the greatest potential to serve as a basis for developing an appraisal tool for clinical pathways [ 94 ]. The AGREE instrument was first released in 2003 [ 95 ] and updated to the AGREE II instrument in 2009 [ 96 ] ( www.agreetrust.org/ ). The AGREE II instrument is now the most recommended tool for CPGs (Table S 4 H).

Besides, based on AGREE II, the AGREE Global Rating Scale (AGREE GRS) instrument [ 97 ] was developed as a short-item tool to evaluate the quality and reporting of CPGs.

Discussion and conclusions

Currently, EBM is widely accepted, and the major attention of healthcare workers lies in "going from evidence to recommendations" [ 98 , 99 ]. Hence, critical appraisal of evidence before use is a key point in this process [ 100 , 101 ]. In 1987, Mulrow CD [ 102 ] pointed out that medical reviews needed to routinely use scientific methods to identify, assess, and synthesize information. Performing a methodological quality assessment is therefore necessary before a study is used. However, although more than 20 years have passed since the first tools emerged, many users still confuse methodological quality with reporting quality. Some have used reporting checklists to assess methodological quality, for example using the Consolidated Standards of Reporting Trials (CONSORT) statement [ 103 ] to assess the methodological quality of RCTs, or the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement [ 104 ] to assess the methodological quality of cohort studies. This phenomenon indicates that more universal education in clinical epidemiology is needed for medical students and professionals.

The development of methodological quality tools should follow the characteristics of different study types. In this review, we used "methodological quality", "risk of bias", "critical appraisal", "checklist", "scale", "items", and "assessment tool" to search the NICE, SIGN, Cochrane Library, and JBI websites, and, building on these terms, added "systematic review", "meta-analysis", "overview", and "clinical practice guideline" to search PubMed. Compared with our previous systematic review [ 11 ], we found that some tools are recommended and remain in use, some are used without recommendation, and some have been abandoned [ 10 , 29 , 30 , 36 , 53 , 94 , 105 , 106 , 107 ]. These tools provide a significant impetus for clinical practice [ 108 , 109 ].

In addition, compared with our previous systematic review [ 11 ], this review covers more tools, especially those developed after 2014 and the latest revisions, and we adjusted the classification of study types. First, in 2014 the NICE provided 7 methodology checklists, but it now retains and updates only the checklist for economic evaluation. Besides, the Cochrane RoB 2.0 tool, AMSTAR 2, the CASP checklists, and most of the JBI critical appraisal checklists are the newest revisions, while the NIH quality assessment tools, ROBINS-I, the EPOC RoB tool, the AXIS tool, the GRACE Checklist, PROBAST, the COSMIN Risk of Bias checklist, and ROBIS are all newly released tools. Second, we introduced tools for network meta-analysis, outcome measurement instruments, text and expert opinion papers, prediction studies, qualitative studies, health economic evaluation, and CER. Third, we classified interventional studies into randomized and non-randomized sub-types, further divided non-randomized studies into those with and without a control group, classified cross-sectional studies into analytical and purely descriptive sub-types, and classified case series into interventional and observational sub-types. This classification is more objective and comprehensive.

Obviously, the number of appropriate tools is largest for RCTs, followed by cohort studies; the JBI checklists have the widest applicable range [ 63 , 64 ], with CASP following closely. However, further efforts to develop appraisal tools remain necessary. For some study types, only one assessment tool is available, such as CER, outcome measurement instruments, text and expert opinion papers, case reports, and CPGs. For many other study types, such as overviews, genetic association studies, and cell studies, there is no proper assessment tool at all. Moreover, existing tools have not been fully accepted. How to develop well-accepted tools remains significant and important future work [ 11 ].

Our review can help professionals producing systematic reviews, meta-analyses, and guidelines, as well as evidence users, to choose the best tool when producing or using evidence; methodologists can also draw research topics for developing new tools from it. Most importantly, we must remember that all assessment tools are subjective, and the results of applying them are influenced by the user's skill and level of knowledge. Users must therefore receive formal training (relevant epidemiological knowledge is necessary) and maintain a rigorous academic attitude, and at least two independent reviewers should be involved in evaluation and cross-checking to avoid performance bias [ 110 ].
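Agreement between the two independent reviewers mentioned above is often quantified with Cohen's kappa. The review does not prescribe a particular statistic; the following is a minimal, self-contained sketch of that common choice, applied to hypothetical item-level risk-of-bias judgements.

```python
# Cohen's kappa for agreement between two independent reviewers'
# item-level judgements (e.g. "low" / "high" / "unclear" risk of bias).
# Minimal sketch; real reviews typically use a statistics package.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    if expected == 1.0:
        return 1.0  # both raters used a single identical category throughout
    return (observed - expected) / (1 - expected)

# Hypothetical judgements from two reviewers on six checklist items.
a = ["low", "low", "high", "unclear", "low", "high"]
b = ["low", "high", "high", "unclear", "low", "low"]
print(round(cohens_kappa(a, b), 3))  # prints 0.455
```

Values near 1 indicate strong agreement; disagreements flagged by a low kappa are exactly the items that cross-checking should resolve.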

Availability of data and materials

The data and materials used during the current review are all available in this review.

Abbreviations

AGREE GRS: AGREE Global rating scale

AGREE: Appraisal of guidelines for research and evaluation

AHRQ: Agency for healthcare research and quality

AMSTAR: A measurement tool to assess systematic reviews

AXIS: Appraisal tool for cross-sectional studies

CAMARADES: The collaborative approach to meta-analysis and review of animal data from experimental studies

CASP: Critical appraisal skills programme

CHEERS: Consolidated health economic evaluation reporting standards

CONSORT: Consolidated standards of reporting trials

COSMIN: Consensus-based standards for the selection of health measurement instruments

CPG: Clinical practice guideline

DSU: Decision support unit

DTA: Diagnostic test accuracy

EBM: Evidence-based medicine

EPOC: The effective practice and organisation of care group

GRACE: The good research for comparative effectiveness initiative

IHE: Canada institute of health economics

IOM: Institute of medicine

JBI: Joanna Briggs Institute

MINORS: Methodological index for non-randomized studies

NICE: National institute for clinical excellence

NIH: National institutes of health

NMA: Network meta-analysis

NOS: Newcastle-Ottawa scale

OQAQ: Overview quality assessment questionnaire

PEDro: Physiotherapy evidence database

PROBAST: The prediction model risk of bias assessment tool

PROM: Patient-reported outcome measure

QUIPS: Quality in prognosis studies

QUADAS: Quality assessment of diagnostic accuracy studies

RCT: Randomized controlled trial

RoB: Risk of bias

ROBINS-I: Risk of bias in non-randomised studies - of interventions

ROBIS: Risk of bias in systematic review

SIGN: The Scottish intercollegiate guidelines network

SQAC: Sack's quality assessment checklist

STAIR: Stroke therapy academic industry roundtable

STROBE: Strengthening the reporting of observational studies in epidemiology

SYRCLE: Systematic review center for laboratory animal experimentation

Stavrou A, Challoumas D, Dimitrakakis G. Archibald Cochrane (1909-1988): the father of evidence-based medicine. Interact Cardiovasc Thorac Surg. 2013;18(1):121–4.


Group E-BMW. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420–5.


Levin A. The Cochrane collaboration. Ann Intern Med. 2001;135(4):309–12.


Lau J, Ioannidis JP, Schmid CH. Summing up evidence: one answer is not always enough. Lancet. 1998;351(9096):123–7.

Clarke M, Chalmers I. Meta-analyses, multivariate analyses, and coping with the play of chance. Lancet. 1998;351(9108):1062–3.

Oxman AD, Schunemann HJ, Fretheim A. Improving the use of research evidence in guideline development: 8. Synthesis and presentation of evidence. Health Res Policy Syst. 2006;4:20.

Zhang J, Wang Y, Weng H, Wang D, Han F, Huang Q, et al. Management of non-muscle-invasive bladder cancer: quality of clinical practice guidelines and variations in recommendations. BMC Cancer. 2019;19(1):1054.

Campbell DT. Factors relevant to the validity of experiments in social settings. Psychol Bull. 1957;54(4):297–312.

Higgins J, Green S. Cochrane handbook for systematic reviews of interventions version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011.


Juni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ. 2001;323(7303):42–6.


Zeng X, Zhang Y, Kwong JS, Zhang C, Li S, Sun F, et al. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review. J Evid Based Med. 2015;8(1):2–10.


A Medical Research Council Investigation. Treatment of pulmonary tuberculosis with streptomycin and Para-aminosalicylic acid. Br Med J. 1950;2(4688):1073–85.

Armitage P. Fisher, Bradford Hill, and randomization. Int J Epidemiol. 2003;32(6):925–8.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Sterne JAC, Savovic J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898.

Maher CG, Sherrington C, Herbert RD, Moseley AM, Elkins M. Reliability of the PEDro scale for rating quality of randomized controlled trials. Phys Ther. 2003;83(8):713–21.

Shiwa SR, Costa LO, Costa Lda C, Moseley A, Hespanhol Junior LC, Venancio R, et al. Reproducibility of the Portuguese version of the PEDro scale. Cad Saude Publica. 2011;27(10):2063–8.

Ibbotson T, Grimshaw J, Grant A. Evaluation of a programme of workshops for promoting the teaching of critical appraisal skills. Med Educ. 1998;32(5):486–91.

Singh J. Critical appraisal skills programme. J Pharmacol Pharmacother. 2013;4(1):76.

Taylor R, Reeves B, Ewings P, Binns S, Keast J, Mears R. A systematic review of the effectiveness of critical appraisal skills training for clinicians. Med Educ. 2000;34(2):120–5.

Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996;17(1):1–12.

Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273(5):408–12.

Hartling L, Ospina M, Liang Y, Dryden DM, Hooton N, Krebs Seida J, et al. Risk of bias versus quality assessment of randomised controlled trials: cross sectional study. BMJ. 2009;339:b4012.

Verhagen AP, de Vet HC, de Bie RA, Kessels AG, Boers M, Bouter LM, et al. The Delphi list: a criteria list for quality assessment of randomized clinical trials for conducting systematic reviews developed by Delphi consensus. J Clin Epidemiol. 1998;51(12):1235–41.

Chalmers TC, Smith H Jr, Blackburn B, Silverman B, Schroeder B, Reitman D, et al. A method for assessing the quality of a randomized control trial. Control Clin Trials. 1981;2(1):31–49.

Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–84.

West S, King V, Carey TS, Lohr KN, McKoy N, Sutton SF, et al. Systems to rate the strength of scientific evidence. Evid Rep Technol Assess (Summ). 2002;47:1–11.

Sibbald WJ. An alternative pathway for preclinical research in fluid management. Crit Care. 2000;4(Suppl 2):S8–15.

Perel P, Roberts I, Sena E, Wheble P, Briscoe C, Sandercock P, et al. Comparison of treatment effects between animal experiments and clinical trials: systematic review. BMJ. 2007;334(7586):197.

Hooijmans CR, Ritskes-Hoitinga M. Progress in using systematic reviews of animal studies to improve translational research. PLoS Med. 2013;10(7):e1001482.

Stroke Therapy Academic Industry Roundtable (STAIR). Recommendations for standards regarding preclinical neuroprotective and restorative drug development. Stroke. 1999;30(12):2752–8.

Fisher M, Feuerstein G, Howells DW, Hurn PD, Kent TA, Savitz SI, et al. Update of the stroke therapy academic industry roundtable preclinical recommendations. Stroke. 2009;40(6):2244–50.

Macleod MR, O'Collins T, Howells DW, Donnan GA. Pooling of animal experimental data reveals influence of study design and publication bias. Stroke. 2004;35(5):1203–8.

Hooijmans CR, Rovers MM, de Vries RB, Leenaars M, Ritskes-Hoitinga M, Langendam MW. SYRCLE's risk of bias tool for animal studies. BMC Med Res Methodol. 2014;14:43.

McCulloch P, Taylor I, Sasako M, Lovett B, Griffin D. Randomised trials in surgery: problems and possible solutions. BMJ. 2002;324(7351):1448–51.

Deeks JJ, Dinnes J, D'Amico R, Sowden AJ, Sakarovitch C, Song F, et al. Evaluating non-randomised intervention studies. Health Technol Assess. 2003;7(27):1–173.

Sterne JA, Hernan MA, Reeves BC, Savovic J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919.

Slim K, Nini E, Forestier D, Kwiatkowski F, Panis Y, Chipponi J. Methodological index for non-randomized studies (minors): development and validation of a new instrument. ANZ J Surg. 2003;73(9):712–6.

Moga C, Guo B, Schopflocher D, Harstall C. Development of a quality appraisal tool for case series studies using a modified Delphi technique. 2012. http://www.ihe.ca/documents/Case%20series%20studies%20using%20a%20modified%20Delphi%20technique.pdf (Accessed 15 January 2020).

Reisch JS, Tyson JE, Mize SG. Aid to the evaluation of therapeutic studies. Pediatrics. 1989;84(5):815–27.


Dreyer NA, Schneeweiss S, McNeil BJ, Berger ML, Walker AM, Ollendorf DA, et al. GRACE principles: recognizing high-quality observational studies of comparative effectiveness. Am J Manag Care. 2010;16(6):467–71.


Grimes DA, Schulz KF. An overview of clinical research: the lay of the land. Lancet. 2002;359(9300):57–61.

Grimes DA, Schulz KF. Cohort studies: marching towards outcomes. Lancet. 2002;359(9303):341–5.

Wells G, Shea B, O'Connell D, Peterson J, Welch V, Losos M, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp (Accessed 16 Jan 2020).

Stang A. Critical evaluation of the Newcastle-Ottawa scale for the assessment of the quality of nonrandomized studies in meta-analyses. Eur J Epidemiol. 2010;25(9):603–5.

Wu L, Li BH, Wang YY, Wang CY, Zi H, Weng H, et al. Periodontal disease and risk of benign prostate hyperplasia: a cross-sectional study. Mil Med Res. 2019;6(1):34.


Downes MJ, Brennan ML, Williams HC, Dean RS. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open. 2016;6(12):e011458.

Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Methodological guidance for systematic reviews of observational epidemiological studies reporting prevalence and cumulative incidence data. Int J Evid Based Healthc. 2015;13(3):147–53.

Crombie I. Pocket guide to critical appraisal. Oxford, UK: John Wiley & Sons, Ltd; 1996.

Gagnier JJ, Kienle G, Altman DG, Moher D, Sox H, Riley D, et al. The CARE guidelines: consensus-based clinical case report guideline development. J Clin Epidemiol. 2014;67(1):46–51.

Li BH, Yu ZJ, Wang CY, Zi H, Li XD, Wang XH, et al. A preliminary, multicenter, prospective and real world study on the hemostasis, coagulation, and safety of hemocoagulase bothrops atrox in patients undergoing transurethral bipolar plasmakinetic prostatectomy. Front Pharmacol. 2019;10:1426.

Strom BL, Schinnar R, Hennessy S. Comparative effectiveness research. Pharmacoepidemiology. Oxford, UK: John Wiley & Sons, Ltd; 2012. p. 561–79.

Whiting P, Rutjes AW, Dinnes J, Reitsma J, Bossuyt PM, Kleijnen J. Development and validation of methods for assessing the quality of diagnostic accuracy studies. Health Technol Assess. 2004;8(25):1–234.

Whiting P, Rutjes AW, Reitsma JB, Bossuyt PM, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol. 2003;3:25.

Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–36.

Schueler S, Schuetz GM, Dewey M. The revised QUADAS-2 tool. Ann Intern Med. 2012;156(4):323.

Hoch JS, Dewa CS. An introduction to economic evaluation: what's in a name? Can J Psychiatr. 2005;50(3):159–66.

Donaldson C, Vale L, Mugford M. Evidence based health economics: from effectiveness to efficiency in systematic review. UK: Oxford University Press; 2002.

Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ economic evaluation working party. BMJ. 1996;313(7052):275–83.

Drummond MF, Richardson WS, O'Brien BJ, Levine M, Heyland D. Users’ guides to the medical literature. XIII. How to use an article on economic analysis of clinical practice. A. Are the results of the study valid? Evidence-based medicine working group. JAMA. 1997;277(19):1552–7.

Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, et al. Consolidated health economic evaluation reporting standards (CHEERS) statement. Value Health. 2013;16(2):e1–5.

Wong SS, Wilczynski NL, Haynes RB, Hedges T. Developing optimal search strategies for detecting clinically relevant qualitative studies in MEDLINE. Stud Health Technol Inform. 2004;107(Pt 1):311–6.

Vardell E, Malloy M. Joanna briggs institute: an evidence-based practice database. Med Ref Serv Q. 2013;32(4):434–42.

Hannes K, Lockwood C. Pragmatism as the philosophical foundation for the Joanna Briggs meta-aggregative approach to qualitative evidence synthesis. J Adv Nurs. 2011;67(7):1632–42.

Spencer L, Ritchie J, Lewis J, Dillon L. Quality in qualitative evaluation: a framework for assessing research evidence. UK: Government Chief Social Researcher’s office; 2003.

Bouwmeester W, Zuithoff NP, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, et al. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Hayden JA, van der Windt DA, Cartwright JL, Cote P, Bombardier C. Assessing bias in studies of prognostic factors. Ann Intern Med. 2013;158(4):280–6.

Wolff RF, Moons KGM, Riley RD, Whiting PF, Westwood M, Collins GS, et al. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med. 2019;170(1):51–8.

Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312(7023):71–2.

Tonelli MR. Integrating evidence into clinical practice: an alternative to evidence-based approaches. J Eval Clin Pract. 2006;12(3):248–56.

Woolf SH. Evidence-based medicine and practice guidelines: an overview. Cancer Control. 2000;7(4):362–7.

Polit DF. Assessing measurement in health: beyond reliability and validity. Int J Nurs Stud. 2015;52(11):1746–53.

Polit DF, Beck CT. Essentials of nursing research: appraising evidence for nursing practice. 9th ed. Lippincott Williams & Wilkins; 2017.

Mokkink LB, de Vet HCW, Prinsen CAC, Patrick DL, Alonso J, Bouter LM, et al. COSMIN risk of bias checklist for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1171–9.

Mokkink LB, Prinsen CA, Bouter LM, Vet HC, Terwee CB. The consensus-based standards for the selection of health measurement instruments (COSMIN) and how to select an outcome measurement instrument. Braz J Phys Ther. 2016;20(2):105–13.

Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1147–57.

Swennen MH, van der Heijden GJ, Boeije HR, van Rheenen N, Verheul FJ, van der Graaf Y, et al. Doctors’ perceptions and use of evidence-based medicine: a systematic review and thematic synthesis of qualitative studies. Acad Med. 2013;88(9):1384–96.

Gallagher EJ. Systematic reviews: a logical methodological extension of evidence-based medicine. Acad Emerg Med. 1999;6(12):1255–60.

Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analyses of randomized controlled trials. N Engl J Med. 1987;316(8):450–5.

Oxman AD. Checklists for review articles. BMJ. 1994;309(6955):648–51.

Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44(11):1271–8.

Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Whiting P, Savovic J, Higgins JP, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Davis DA, Taylor-Vaisey A. Translating guidelines into practice. A systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. CMAJ. 1997;157(4):408–16.


Neely JG, Graboyes E, Paniello RC, Sequeira SM, Grindler DJ. Practical guide to understanding the need for clinical practice guidelines. Otolaryngol Head Neck Surg. 2013;149(1):1–7.

Browman GP, Levine MN, Mohide EA, Hayward RS, Pritchard KI, Gafni A, et al. The practice guidelines development cycle: a conceptual tool for practice guidelines development and implementation. J Clin Oncol. 1995;13(2):502–12.

Tracy SL. From bench-top to chair-side: how scientific evidence is incorporated into clinical practice. Dent Mater. 2013;30(1):1–15.

Chapa D, Hartung MK, Mayberry LJ, Pintz C. Using preappraised evidence sources to guide practice decisions. J Am Assoc Nurse Pract. 2013;25(5):234–43.

Eibling D, Fried M, Blitzer A, Postma G. Commentary on the role of expert opinion in developing evidence-based guidelines. Laryngoscope. 2013;124(2):355–7.

Chen YL, Yao L, Xiao XJ, Wang Q, Wang ZH, Liang FX, et al. Quality assessment of clinical guidelines in China: 1993–2010. Chin Med J. 2012;125(20):3660–4.

Hu J, Chen R, Wu S, Tang J, Leng G, Kunnamo I, et al. The quality of clinical practice guidelines in China: a systematic assessment. J Eval Clin Pract. 2013;19(5):961–7.

Henig O, Yahav D, Leibovici L, Paul M. Guidelines for the treatment of pneumonia and urinary tract infections: evaluation of methodological quality using the appraisal of guidelines, research and evaluation ii instrument. Clin Microbiol Infect. 2013;19(12):1106–14.

Vlayen J, Aertgeerts B, Hannes K, Sermeus W, Ramaekers D. A systematic review of appraisal tools for clinical practice guidelines: multiple similarities and one common deficit. Int J Qual Health Care. 2005;17(3):235–42.

AGREE Collaboration. Development and validation of an international appraisal instrument for assessing the quality of clinical practice guidelines: the AGREE project. Qual Saf Health Care. 2003;12(1):18–23.

Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, et al. AGREE II: advancing guideline development, reporting and evaluation in health care. CMAJ. 2010;182(18):E839–42.

Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, et al. The global rating scale complements the AGREE II in advancing the quality of practice guidelines. J Clin Epidemiol. 2012;65(5):526–34.

Guyatt GH, Oxman AD, Kunz R, Falck-Ytter Y, Vist GE, Liberati A, et al. Going from evidence to recommendations. BMJ. 2008;336(7652):1049–51.

Andrews J, Guyatt G, Oxman AD, Alderson P, Dahm P, Falck-Ytter Y, et al. GRADE guidelines: 14. Going from evidence to recommendations: the significance and presentation of recommendations. J Clin Epidemiol. 2013;66(7):719–25.

Tunguy-Desmarais GP. Evidence-based medicine should be based on science. S Afr Med J. 2013;103(10):700.

Muckart DJ. Evidence-based medicine - are we boiling the frog? S Afr Med J. 2013;103(7):447–8.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Moher D, Schulz KF, Altman D, Group C. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285(15):1987–91.

von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP, et al. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet. 2007;370(9596):1453–7.

Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int J Epidemiol. 2007;36(3):666–76.

Willis BH, Quigley M. Uptake of newer methodological developments and the deployment of meta-analysis in diagnostic test research: a systematic review. BMC Med Res Methodol. 2011;11:27.

Whiting PF, Rutjes AW, Westwood ME, Mallett S, Group Q-S. A systematic review classifies sources of bias and variation in diagnostic test accuracy studies. J Clin Epidemiol. 2013;66(10):1093–104.

Swanson JA, Schmitz D, Chung KC. How to practice evidence-based medicine. Plast Reconstr Surg. 2010;126(1):286–94.

Manchikanti L. Evidence-based medicine, systematic reviews, and guidelines in interventional pain management, part I: introduction and general considerations. Pain Physician. 2008;11(2):161–86.

Gold C, Erkkila J, Crawford MJ. Shifting effects in randomised controlled trials of complex interventions: a new kind of performance bias? Acta Psychiatr Scand. 2012;126(5):307–14.


Acknowledgements

The authors thank all the authors and technicians for their hard work in developing the methodological quality assessment tools.

Funding

This work was supported (in part) by the Entrusted Project of the National Health Commission of China (No. [2019]099), the National Key Research and Development Plan of China (2016YFC0106300), and the Natural Science Foundation of Hubei Province (2019FFB03902). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors declare that there are no conflicts of interest in this study.

Author information

Authors and affiliations

Center for Evidence-Based and Translational Medicine, Zhongnan Hospital, Wuhan University, 169 Donghu Road, Wuchang District, Wuhan, 430071, Hubei, China

Lin-Lu Ma, Yun-Yun Wang, Zhi-Hua Yang, Di Huang, Hong Weng & Xian-Tao Zeng

Department of Evidence-Based Medicine and Clinical Epidemiology, The Second Clinical College, Wuhan University, Wuhan, 430071, China

Yun-Yun Wang, Di Huang & Xian-Tao Zeng

Center for Evidence-Based and Translational Medicine, Wuhan University, Wuhan, 430071, China

Xian-Tao Zeng

Global Health Institute, Wuhan University, Wuhan, 430072, China


Contributions

XTZ is responsible for the design of the study and review of the manuscript; LLM, ZHY, YYW, and DH contributed to the data collection; LLM, YYW, and HW contributed to the preparation of the article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xian-Tao Zeng .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests.

The authors declare that they have no competing interests.

Supplementary information

Additional file 1: Table S1.

Major components of the tools for assessing intervention studies

Additional file 2: Table S2.

Major components of the tools for assessing observational studies and diagnostic study

Additional file 3: Table S3.

Major components of the tools for assessing other primary medical studies

Additional file 4: Table S4.

Major components of the tools for assessing secondary medical studies

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Ma, LL., Wang, YY., Yang, ZH. et al. Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better?. Military Med Res 7 , 7 (2020). https://doi.org/10.1186/s40779-020-00238-8


Received : 17 January 2020

Accepted : 18 February 2020

Published : 29 February 2020

DOI : https://doi.org/10.1186/s40779-020-00238-8


Keywords

  • Methodological quality
  • Quality assessment
  • Critical appraisal
  • Methodology checklist
  • Appraisal tool
  • Observational study
  • Interventional study
  • Outcome measurement instrument

Military Medical Research

ISSN: 2054-9369




  • Published: 31 January 2022

The fundamentals of critically appraising an article

  • Sneha Chotaliya 1  

BDJ Student volume  29 ,  pages 12–13 ( 2022 ) Cite this article



We are often surrounded by an abundance of research and articles, but the quality and validity can vary massively. Not everything will be of a good quality - or even valid. An important part of reading a paper is first assessing the paper. This is a key skill for all healthcare professionals as anything we read can impact or influence our practice. It is also important to stay up to date with the latest research and findings.




Author information

Authors and affiliations

Academic Foundation Dentist, London, UK


Corresponding author

Correspondence to Sneha Chotaliya .


About this article

Cite this article.

Chotaliya, S. The fundamentals of critically appraising an article. BDJ Student 29 , 12–13 (2022). https://doi.org/10.1038/s41406-021-0275-6


Published : 31 January 2022

Issue Date : 31 January 2022

DOI : https://doi.org/10.1038/s41406-021-0275-6



Evidence-Based Practice Guidelines: Critical Appraisal


EBP Evidence Pyramid


CEBM Levels of Evidence

  • CEBM Levels of Evidence Guide What are we to do when the irresistible force of the need to offer clinical advice meets with the immovable object of flawed evidence? All we can do is our best: give the advice, but alert the advisees to the flaws in the evidence on which it is based. The CEBM ‘Levels of Evidence 1’ document sets out one approach to systematising this process for different question types.
  • What is Critical Appraisal?
  • Critical Appraisal of the Evidence (Part I)

Easy Checklists for Critical Appraisal

  • CASP Case Control Checklist
  • CASP Clinical Prediction Rule Checklist
  • CASP Cohort Study Checklist
  • CASP Diagnostic Checklist
  • CASP Economic Evaluation Checklist
  • CASP Qualitative Study Checklist
  • CASP Randomized Controlled Trial (RCT) Checklist
  • CASP Systematic Review Checklist

CASP Tools & Checklists

  • CASP Checklists This set of eight critical appraisal tools are designed to be used when reading research, these include tools for Systematic Reviews, Randomised Controlled Trials, Cohort Studies, Case Control Studies, Economic Evaluations, Diagnostic Studies, Qualitative studies and Clinical Prediction Rule.
  • Last Updated: Mar 8, 2024 12:26 PM
  • URL: https://guides.norwich.edu/EBP

COMMENTS

  1. CASP Checklists

    Critical Appraisal Checklists. We offer a number of free downloadable checklists to help you more easily and accurately perform critical appraisal across a number of different study types. The CASP checklists are easy to understand but in case you need any further guidance on how they are structured, take a look at our guide on how to use our ...

  2. CASP Tools for Case Control Studies

    The CASP International Network [1] is a non-profit making organisation for people promoting skills in finding, critically appraising and acting on the results of research papers (evidence).CASPin provide many tools to help you systematically read evidence and this specific tool will help you make sense of any case control study and assess its validity in a quick and easy way.

  3. PDF 11 questions to help you make sense of case control study

    ©Critical Appraisal Skills Programme (CASP) Case Control Study Checklist 31.05.13 1 . 11 questions to help you make sense of case control study . How to use this appraisal tool Three broad issues need to be considered when appraising a case control study: • Are the results of the trial valid? (Section A) • What are the results? (Section B)

  4. CASP checklists

    CASP checklists. CASP (Critical Appraisal Skills Programme) checklists are a series of checklists involving prompt questions to help you evaluate research studies. They are often used in Healthcare and cover the following types of research methods: Systematic Reviews, Randomised Controlled Trials, Cohort Studies, Case Control Studies, Economic ...

  5. PDF for use in JBI Systematic Reviews Checklist for Case Control Studies

    Case Control Studies 4: Explanation of case control studies critical appraisal. How to cite: Moola S, Munn Z, Tufanaru C, Aromataris E, Sears K, Sfetcu R, Currie M, Qureshi R, Mattis P, Lisy K, Mu P-F. Chapter 7: Systematic reviews of etiology and risk. In: Aromataris E, Munn Z (Editors). Joanna Briggs Institute Reviewer's Manual.

  6. PDF CRITICAL APPRAISAL SKILLS PROGRAMME

    ©Critical Appraisal Skills Programme (CASP) Case Control Study Checklist, 14.10.10. Making sense of evidence about clinical effectiveness: 11 questions to help you make sense of a case control study. General comments: three broad issues need to be considered when appraising a case control study.

  7. Optimising the value of the critical appraisal skills programme (CASP

    It follows that quality appraisal is contingent on adequate reporting, and may only assess reporting, rather than study conduct. 17 Nevertheless, the CASP tool is the most commonly used checklist/criteria-based tool for quality appraisal in health and social care-related qualitative evidence synthesis. 16,23 Authors' reasons for using a ...

  8. Critical Appraisal Tools and Reporting Guidelines

    Tool: Critical Appraisal Skills Programme (CASP). Authors/Organization: CASP. Applicability/Study design: Systematic Reviews, Randomised Controlled Trials, Cohort Studies, Case Control Studies, Economic Evaluations, Diagnostic Studies, Qualitative Studies, and Clinical Prediction Rule. Example of application in lactation research: (Channell Doig et ...

  9. PDF Checklist for Case Control Studies

    JBI grants use of these tools for research purposes only. All other enquiries should be sent to [email protected]. Introduction: JBI is an international research organisation based in the Faculty of Health and Medical Sciences at the University of Adelaide, South Australia.

  10. Critical appraisal for medical and health sciences: 3. Checklists

    Some questions to ask when critically appraising a case-control study: Was the recruitment process appropriate? Is there any evidence of selection bias? Have all confounding factors been accounted for? How precise is the estimate of the effect? Were confidence intervals given? Do you believe the results? From Critical Appraisal Skills Programme ...
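    Two of the appraisal questions above concern precision: "How precise is the estimate of the effect?" and "Were confidence intervals given?". For a case-control study the effect estimate is typically an odds ratio. As a minimal sketch (the function name and the example counts below are illustrative, not taken from any CASP document), the standard log-odds-ratio method gives the odds ratio and its 95% confidence interval from a 2×2 table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 case-control table.

    a: exposed cases      b: unexposed cases
    c: exposed controls   d: unexposed controls
    Uses the Woolf (log-odds-ratio) method: SE of ln(OR) is
    sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: 20/80 exposed/unexposed cases, 10/90 controls.
print(odds_ratio_ci(20, 80, 10, 90))  # → OR 2.25, 95% CI roughly (0.99, 5.09)
```

    A wide interval like (0.99, 5.09), which crosses 1, is exactly what the checklist's precision questions are probing: the point estimate suggests an association, but the study may be too small to rule out no effect.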

  11. Critical Appraisal Skills Programme (CASP) Tools

    These tools teach users to critically appraise different types of evidence. The program consists of seven critical appraisal tools to assess: systematic reviews; randomised controlled trials (RCTs); qualitative research; economic evaluation studies; cohort studies; case-control studies; and diagnostic test studies.

  12. Critical Appraisal of Quantitative Research

    The CASP case-control study appraisal tool (Critical Appraisal Skills Programme 2017); the Joanna Briggs Institute's appraisal checklist for case-control studies (Joanna Briggs Institute 2017); the case-control study appraisal tool from the Center for Evidence-Based Management.

  13. PDF 11 questions to help you make sense of case control study

    ©Critical Appraisal Skills Programme (CASP) Case Control Study Checklist, 13.03.17. 11 questions to help you make sense of a case control study. How to use this appraisal tool: three broad issues need to be considered when appraising a case control study: Are the results of the study valid? (Section A) What are the results? (Section B)

  14. Tools for other primary medical studies

    The Critical Appraisal Skills Programme (CASP) is a part of the Oxford Centre for Triple Value Healthcare Ltd. (3 V) ... NIH quality assessment tool of case-control study (Table S2H), JBI critical appraisal checklist for case-control study (Table S2I), and the NOS for case-control study (Table S2J). Among them, the NOS for case-control study is ...

  15. Caspuk

    Download the CASP critical appraisal checklists for: Randomised Controlled Trials; Systematic Reviews; Cohort Studies; Case-Control Studies; Qualitative Studies; Economic Evaluations; and Diagnostic Studies. You can also find out about the background to CASP, the CASP approach, and Training the Trainer approaches.

  16. The fundamentals of critically appraising an article

    CASP has specific checklists to use for critically appraising randomised controlled trials, systematic reviews, qualitative studies, cohort studies, diagnostic studies and case control studies. 3 ...

  17. Evidence-Based Practice Guidelines: Critical Appraisal

    CASP Tools & Checklists. CASP Checklists: this set of eight critical appraisal tools is designed to be used when reading research; it includes tools for Systematic Reviews, Randomised Controlled Trials, Cohort Studies, Case Control Studies, Economic Evaluations, Diagnostic Studies, Qualitative Studies, and Clinical Prediction Rules.