Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, Boyle and colleagues conducted a systematic review to answer the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

What is a systematic review?

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic review vs. meta-analysis

Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.
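As a minimal illustration of what that statistical combination looks like (a sketch with made-up numbers, not the method of any particular review), a fixed-effect meta-analysis pools each study’s effect estimate, weighting it by the inverse of its variance:

```python
# A minimal sketch (illustrative data): fixed-effect inverse-variance
# meta-analysis pooling effect estimates from several studies.
import math

def fixed_effect_pool(effects, variances):
    weights = [1.0 / v for v in variances]        # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))            # standard error of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)  # estimate, 95% CI

# Three hypothetical studies reporting log risk ratios and their variances
print(fixed_effect_pool([-0.10, -0.25, 0.05], [0.04, 0.09, 0.06]))
```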

Systematic review vs. literature review

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Systematic review vs. scoping review

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

When to conduct a systematic review

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Pros and cons of systematic reviews

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinized by others.
  • They’re thorough: they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming.
  • They’re narrow in scope: they only answer the precise research question.

Step-by-step example of a systematic review

The seven steps for conducting a systematic review are explained below, using the example of Boyle and colleagues’ review of probiotics for eczema.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT.

  • Type of study design(s)

Boyle and colleagues’ research question contained all five PICOT components:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized control trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
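To make the template mechanical, the components can be slotted in programmatically. This is only a toy illustration of the PICO structure, not something the review process requires:

```python
# A toy sketch (illustrative only): filling the PICO template
# "What is the effectiveness of I versus C for O in P?"
pico = {
    "P": "patients with eczema",
    "I": "probiotics",
    "C": "no treatment, a placebo, or a non-probiotic treatment",
    "O": "reducing eczema symptoms and improving quality of life",
}
question = (
    f"What is the effectiveness of {pico['I']} versus {pico['C']} "
    f"for {pico['O']} in {pico['P']}?"
)
print(question)
```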

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant (see the sketch after this list).
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
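As a concrete sketch of the database step, the snippet below assembles a Boolean query string in the style of a PubMed search for the probiotics-and-eczema example. The terms and field tag are illustrative assumptions, not a validated search strategy:

```python
# A hypothetical PubMed-style Boolean query for the running example.
# Synonyms are OR-ed within each concept; concepts are AND-ed together.
population = '("eczema" OR "atopic dermatitis" OR "atopic eczema")'
intervention = '(probiotic* OR lactobacillus OR bifidobacterium)'
design = '"randomized controlled trial"[Publication Type]'
query = " AND ".join([population, intervention, design])
print(query)
```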

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator.

Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
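One common way to quantify that agreement is Cohen’s kappa, which corrects raw percent agreement for chance. The following is a minimal sketch with made-up screening decisions:

```python
# A minimal sketch (invented data): Cohen's kappa for two screeners'
# include/exclude decisions, where 1 = include and 0 = exclude.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Agreement expected by chance, from each rater's marginal frequencies
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Decisions for ten screened abstracts; prints 0.8 (strong agreement)
print(cohens_kappa([1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
                   [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]))
```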

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.

In the example, Boyle and colleagues retrieved the full texts of the remaining studies. Boyle and Tang read through the articles to decide whether any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .
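In practice these forms often end up as a spreadsheet with one row per study. As a rough sketch (the field names are assumptions for illustration, not taken from the registries above), each study can be captured as a structured record and written to CSV:

```python
# A rough sketch of a data extraction record; field names are assumptions.
from dataclasses import dataclass, asdict
import csv

@dataclass
class ExtractionRecord:
    study_id: str        # e.g., first author and year
    design: str          # e.g., "RCT"
    sample_size: int
    effect_estimate: float
    variance: float
    risk_of_bias: str    # e.g., "low", "some concerns", "high"

records = [ExtractionRecord("ExampleTrial2006", "RCT", 56, -0.12, 0.05, "low")]
with open("extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(records[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```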

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.
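When deciding whether subgroup analyses like these are warranted, reviewers commonly report heterogeneity statistics alongside the pooled estimate. The following is a minimal sketch (illustrative numbers) of Cochran’s Q and the I² statistic:

```python
# A minimal sketch (illustrative data): Cochran's Q and I² quantify how
# much study results vary beyond what sampling error alone would produce.
def heterogeneity(effects, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % of variability
    return q, i_squared

print(heterogeneity([-0.10, -0.25, 0.05], [0.04, 0.09, 0.06]))
```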

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about systematic reviews

What is a literature review?

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question. It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research, and are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

What is an annotated bibliography?

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Is a systematic review primary or secondary research?

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.


Example reviews


The following are examples of published systematic reviews from a range of disciplines, including health and medicine and the social sciences.

For more information about how to conduct and write reviews, please see the Guidelines section of this guide.

  • Vibration and bubbles: a systematic review of the effects of helicopter retrieval on injured divers. (2018).
  • Nicotine effects on exercise performance and physiological responses in nicotine‐naïve individuals: a systematic review. (2018).
  • Association of total white cell count with mortality and major adverse events in patients with peripheral arterial disease: A systematic review. (2014).
  • Do MOOCs contribute to student equity and social inclusion? A systematic review 2014–18. (2020).
  • Interventions in Foster Family Care: A Systematic Review. (2020).
  • Determinants of happiness among healthcare professionals between 2009 and 2019: a systematic review. (2020).
  • Systematic review of the outcomes and trade-offs of ten types of decarbonization policy instruments. (2021).
  • A systematic review on Asian's farmers' adaptation practices towards climate change. (2018).
  • Are concentrations of pollutants in sharks, rays and skates (Elasmobranchii) a cause for concern? A systematic review. (2020).


The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

  • Matthew J Page , senior research fellow 1 ,
  • Joanne E McKenzie , associate professor 1 ,
  • Patrick M Bossuyt , professor 2 ,
  • Isabelle Boutron , professor 3 ,
  • Tammy C Hoffmann , professor 4 ,
  • Cynthia D Mulrow , professor 5 ,
  • Larissa Shamseer , doctoral student 6 ,
  • Jennifer M Tetzlaff , research product specialist 7 ,
  • Elie A Akl , professor 8 ,
  • Sue E Brennan , senior research fellow 1 ,
  • Roger Chou , professor 9 ,
  • Julie Glanville , associate director 10 ,
  • Jeremy M Grimshaw , professor 11 ,
  • Asbjørn Hróbjartsson , professor 12 ,
  • Manoj M Lalu , associate scientist and assistant professor 13 ,
  • Tianjing Li , associate professor 14 ,
  • Elizabeth W Loder , professor 15 ,
  • Evan Mayo-Wilson , associate professor 16 ,
  • Steve McDonald , senior research fellow 1 ,
  • Luke A McGuinness , research associate 17 ,
  • Lesley A Stewart , professor and director 18 ,
  • James Thomas , professor 19 ,
  • Andrea C Tricco , scientist and associate professor 20 ,
  • Vivian A Welch , associate professor 21 ,
  • Penny Whiting , associate professor 17 ,
  • David Moher , director and professor 22
  • 1 School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
  • 2 Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands
  • 3 Université de Paris, Centre of Epidemiology and Statistics (CRESS), Inserm, F 75004 Paris, France
  • 4 Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
  • 5 University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Annals of Internal Medicine
  • 6 Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 7 Evidence Partners, Ottawa, Canada
  • 8 Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  • 9 Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
  • 10 York Health Economics Consortium (YHEC Ltd), University of York, York, UK
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada
  • 12 Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Open Patient data Exploratory Network (OPEN), Odense University Hospital, Odense, Denmark
  • 13 Department of Anesthesiology and Pain Medicine, The Ottawa Hospital, Ottawa, Canada; Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada; Regenerative Medicine Program, Ottawa Hospital Research Institute, Ottawa, Canada
  • 14 Department of Ophthalmology, School of Medicine, University of Colorado Denver, Denver, Colorado, United States; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
  • 15 Division of Headache, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA; Head of Research, The BMJ , London, UK
  • 16 Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, Indiana, USA
  • 17 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  • 18 Centre for Reviews and Dissemination, University of York, York, UK
  • 19 EPPI-Centre, UCL Social Research Institute, University College London, London, UK
  • 20 Li Ka Shing Knowledge Institute of St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Epidemiology Division of the Dalla Lana School of Public Health and the Institute of Health Management, Policy, and Evaluation, University of Toronto, Toronto, Canada; Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
  • 21 Methods Centre, Bruyère Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 22 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • Correspondence to: M J Page matthew.page{at}monash.edu
  • Accepted 4 January 2021

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur. Systematic reviews therefore generate various types of knowledge for different users of reviews (such as patients, healthcare providers, researchers, and policy makers). 1 2 To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). Up-to-date reporting guidance facilitates authors achieving this. 3

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (hereafter referred to as PRISMA 2009) 4 5 6 7 8 9 10 is a reporting guideline designed to address poor reporting of systematic reviews. 11 The PRISMA 2009 statement comprised a checklist of 27 items recommended for reporting in systematic reviews and an “explanation and elaboration” paper 12 13 14 15 16 providing additional reporting guidance for each item, along with exemplars of reporting. The recommendations have been widely endorsed and adopted, as evidenced by the statement’s co-publication in multiple journals, citation in over 60 000 reports (Scopus, August 2020), endorsement from almost 200 journals and systematic review organisations, and adoption in various disciplines. Evidence from observational studies suggests that use of the PRISMA 2009 statement is associated with more complete reporting of systematic reviews, 17 18 19 20 although more could be done to improve adherence to the guideline. 21

Many innovations in the conduct of systematic reviews have occurred since publication of the PRISMA 2009 statement. For example, technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence, 22 23 24 methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate, 25 26 27 and new methods have been developed to assess the risk of bias in results of included studies. 28 29 Evidence on sources of bias in systematic reviews has accrued, culminating in the development of new tools to appraise the conduct of systematic reviews. 30 31 Terminology used to describe particular review processes has also evolved, as in the shift from assessing “quality” to assessing “certainty” in the body of evidence. 32 In addition, the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols, 33 34 disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. Capturing these advances in the reporting of systematic reviews necessitated an update to the PRISMA 2009 statement.

Summary points

To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found

The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies

The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders

Development of PRISMA 2020

A complete description of the methods used to develop PRISMA 2020 is available elsewhere. 35 We identified PRISMA 2009 items that were often reported incompletely by examining the results of studies investigating the transparency of reporting of published reviews. 17 21 36 37 We identified possible modifications to the PRISMA 2009 statement by reviewing 60 documents providing reporting guidance for systematic reviews (including reporting guidelines, handbooks, tools, and meta-research studies). 38 These reviews of the literature were used to inform the content of a survey with suggested possible modifications to the 27 items in PRISMA 2009 and possible additional items. Respondents were asked whether they believed we should keep each PRISMA 2009 item as is, modify it, or remove it, and whether we should add each additional item. Systematic review methodologists and journal editors were invited to complete the online survey (110 of 220 invited responded). We discussed proposed content and wording of the PRISMA 2020 statement, as informed by the review and survey results, at a 21-member, two-day, in-person meeting in September 2018 in Edinburgh, Scotland. Throughout 2019 and 2020, we circulated an initial draft and five revisions of the checklist and explanation and elaboration paper to co-authors for feedback. In April 2020, we invited 22 systematic reviewers who had expressed interest in providing feedback on the PRISMA 2020 checklist to share their views (via an online survey) on the layout and terminology used in a preliminary version of the checklist. Feedback was received from 15 individuals and considered by the first author, and any revisions deemed necessary were incorporated before the final version was approved and endorsed by all co-authors.

The PRISMA 2020 statement

Scope of the guideline

The PRISMA 2020 statement has been designed primarily for systematic reviews of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. However, the checklist items are applicable to reports of systematic reviews evaluating other interventions (such as social or educational interventions), and many items are applicable to systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis). PRISMA 2020 is intended for use in systematic reviews that include synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or do not include synthesis (for example, because only one eligible study is identified). The PRISMA 2020 items are relevant for mixed-methods systematic reviews (which include quantitative and qualitative studies), but reporting guidelines addressing the presentation and synthesis of qualitative data should also be consulted. 39 40 PRISMA 2020 can be used for original systematic reviews, updated systematic reviews, or continually updated (“living”) systematic reviews. However, for updated and living systematic reviews, there may be some additional considerations that need to be addressed. Where there is relevant content from other reporting guidelines, we reference these guidelines within the items in the explanation and elaboration paper 41 (such as PRISMA-Search 42 in items 6 and 7, Synthesis without meta-analysis (SWiM) reporting guideline 27 in item 13d). Box 1 includes a glossary of terms used throughout the PRISMA 2020 statement.

Glossary of terms

Systematic review —A review that uses explicit, systematic methods to collate and synthesise findings of studies that address a clearly formulated question 43

Statistical synthesis —The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan 25 for a description of each method)

Meta-analysis of effect estimates —A statistical technique used to synthesise results when study effect estimates and their variances are available, yielding a quantitative summary of results 25

Outcome —An event or measurement collected for participants in a study (such as quality of life, mortality)

Result —The combination of a point estimate (such as a mean difference, risk ratio, or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome

Report —A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information

Record —The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study —An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses

PRISMA 2020 is not intended to guide systematic review conduct, for which comprehensive resources are available. 43 44 45 46 However, familiarity with PRISMA 2020 is useful when planning and conducting systematic reviews to ensure that all recommended information is captured. PRISMA 2020 should not be used to assess the conduct or methodological quality of systematic reviews; other tools exist for this purpose. 30 31 Furthermore, PRISMA 2020 is not intended to inform the reporting of systematic review protocols, for which a separate statement is available (PRISMA for Protocols (PRISMA-P) 2015 statement 47 48 ). Finally, extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses, 49 meta-analyses of individual participant data, 50 systematic reviews of harms, 51 systematic reviews of diagnostic test accuracy studies, 52 and scoping reviews 53 ; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.

How to use PRISMA 2020

The PRISMA 2020 statement (including the checklists, explanation and elaboration, and flow diagram) replaces the PRISMA 2009 statement, which should no longer be used. Box 2 summarises noteworthy changes from the PRISMA 2009 statement. The PRISMA 2020 checklist includes seven sections with 27 items, some of which include sub-items (table 1). A checklist for journal and conference abstracts for systematic reviews is included in PRISMA 2020. This abstract checklist is an update of the 2013 PRISMA for Abstracts statement, 54 reflecting new and modified content in PRISMA 2020 (table 2). A template PRISMA flow diagram is provided, which can be modified depending on whether the systematic review is original or updated (fig 1).

Noteworthy changes to the PRISMA 2009 statement

Inclusion of the abstract reporting checklist within PRISMA 2020 (see item #2 and table 2).

Movement of the ‘Protocol and registration’ item from the start of the Methods section of the checklist to a new Other section, with addition of a sub-item recommending authors describe amendments to information provided at registration or in the protocol (see item #24a-24c).

Modification of the ‘Search’ item to recommend authors present full search strategies for all databases, registers and websites searched, not just at least one database (see item #7).

Modification of the ‘Study selection’ item in the Methods section to emphasise the reporting of how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process (see item #8).

Addition of a sub-item to the ‘Data items’ item recommending authors report how outcomes were defined, which results were sought, and methods for selecting a subset of results from included studies (see item #10a).

Splitting of the ‘Synthesis of results’ item in the Methods section into six sub-items recommending authors describe: the processes used to decide which studies were eligible for each synthesis; any methods required to prepare the data for synthesis; any methods used to tabulate or visually display results of individual studies and syntheses; any methods used to synthesise results; any methods used to explore possible causes of heterogeneity among study results (such as subgroup analysis, meta-regression); and any sensitivity analyses used to assess robustness of the synthesised results (see item #13a-13f).

Addition of a sub-item to the ‘Study selection’ item in the Results section recommending authors cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (see item #16b).

Splitting of the ‘Synthesis of results’ item in the Results section into four sub-items recommending authors: briefly summarise the characteristics and risk of bias among studies contributing to the synthesis; present results of all statistical syntheses conducted; present results of any investigations of possible causes of heterogeneity among study results; and present results of any sensitivity analyses (see item #20a-20d).

Addition of new items recommending authors report methods for and results of an assessment of certainty (or confidence) in the body of evidence for an outcome (see items #15 and #22).

Addition of a new item recommending authors declare any competing interests (see item #26).

Addition of a new item recommending authors indicate whether data, analytic code and other materials used in the review are publicly available and if so, where they can be found (see item #27).

Table 1: PRISMA 2020 item checklist

Table 2: PRISMA 2020 for Abstracts checklist

Fig 1: PRISMA 2020 flow diagram template for systematic reviews. The new design is adapted from flow diagrams proposed by Boers, 55 Mayo-Wilson et al, 56 and Stovold et al. 57 The boxes in grey should only be completed if applicable; otherwise they should be removed from the flow diagram. Note that a “report” could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report or any other document providing relevant information.

We recommend authors refer to PRISMA 2020 early in the writing process, because prospective consideration of the items may help to ensure that all the items are addressed. To help keep track of which items have been reported, the PRISMA statement website ( http://www.prisma-statement.org/ ) includes fillable templates of the checklists to download and complete (also available in the data supplement on bmj.com). We have also created a web application that allows users to complete the checklist via a user-friendly interface 58 (available at https://prisma.shinyapps.io/checklist/ and adapted from the Transparency Checklist app 59 ). The completed checklist can be exported to Word or PDF. Editable templates of the flow diagram can also be downloaded from the PRISMA statement website.

We have prepared an updated explanation and elaboration paper, in which we explain why reporting of each item is recommended and present bullet points that detail the reporting recommendations (which we refer to as elements). 41 The bullet-point structure is new to PRISMA 2020 and has been adopted to facilitate implementation of the guidance. 60 61 An expanded checklist, which comprises an abridged version of the elements presented in the explanation and elaboration paper, with references and some examples removed, is available in the data supplement on bmj.com. Consulting the explanation and elaboration paper is recommended if further clarity or information is required.

Journals and publishers might impose word and section limits, and limits on the number of tables and figures allowed in the main report. In such cases, if the relevant information for some items already appears in a publicly accessible review protocol, referring to the protocol may suffice. Alternatively, placing detailed descriptions of the methods used or additional results (such as for less critical outcomes) in supplementary files is recommended. Ideally, supplementary files should be deposited to a general-purpose or institutional open-access repository that provides free and permanent access to the material (such as Open Science Framework, Dryad, figshare). A reference or link to the additional information should be included in the main report. Finally, although PRISMA 2020 provides a template for where information might be located, the suggested location should not be seen as prescriptive; the guiding principle is to ensure the information is reported.

Use of PRISMA 2020 has the potential to benefit many stakeholders. Complete reporting allows readers to assess the appropriateness of the methods, and therefore the trustworthiness of the findings. Presenting and summarising characteristics of studies contributing to a synthesis allows healthcare providers and policy makers to evaluate the applicability of the findings to their setting. Describing the certainty in the body of evidence for an outcome and the implications of findings should help policy makers, managers, and other decision makers formulate appropriate recommendations for practice or policy. Complete reporting of all PRISMA 2020 items also facilitates replication and review updates, as well as inclusion of systematic reviews in overviews (of systematic reviews) and guidelines, so teams can leverage work that is already done and decrease research waste. 36 62 63

We updated the PRISMA 2009 statement by adapting the EQUATOR Network’s guidance for developing health research reporting guidelines. 64 We evaluated the reporting completeness of published systematic reviews, 17 21 36 37 reviewed the items included in other documents providing guidance for systematic reviews, 38 surveyed systematic review methodologists and journal editors for their views on how to revise the original PRISMA statement, 35 discussed the findings at an in-person meeting, and prepared this document through an iterative process. Our recommendations are informed by the reviews and survey conducted before the in-person meeting, theoretical considerations about which items facilitate replication and help users assess the risk of bias and applicability of systematic reviews, and co-authors’ experience with authoring and using systematic reviews.

Various strategies to increase the use of reporting guidelines and improve reporting have been proposed. They include educators introducing reporting guidelines into graduate curricula to promote good reporting habits of early career scientists 65 ; journal editors and regulators endorsing use of reporting guidelines 18 ; peer reviewers evaluating adherence to reporting guidelines 61 66 ; journals requiring authors to indicate where in their manuscript they have adhered to each reporting item 67 ; and authors using online writing tools that prompt complete reporting at the writing stage. 60 Multi-pronged interventions, where more than one of these strategies are combined, may be more effective (such as completion of checklists coupled with editorial checks). 68 However, of 31 interventions proposed to increase adherence to reporting guidelines, the effects of only 11 have been evaluated, mostly in observational studies at high risk of bias due to confounding. 69 It is therefore unclear which strategies should be used. Future research might explore barriers and facilitators to the use of PRISMA 2020 by authors, editors, and peer reviewers, designing interventions that address the identified barriers, and evaluating those interventions using randomised trials. To inform possible revisions to the guideline, it would also be valuable to conduct think-aloud studies 70 to understand how systematic reviewers interpret the items, and reliability studies to identify items where there is varied interpretation of the items.

We encourage readers to submit evidence that informs any of the recommendations in PRISMA 2020 (via the PRISMA statement website: http://www.prisma-statement.org/ ). To enhance accessibility of PRISMA 2020, several translations of the guideline are under way (see available translations at the PRISMA statement website). We encourage journal editors and publishers to raise awareness of PRISMA 2020 (for example, by referring to it in journal “Instructions to authors”), endorsing its use, advising editors and peer reviewers to evaluate submitted systematic reviews against the PRISMA 2020 checklists, and making changes to journal policies to accommodate the new reporting recommendations. We recommend existing PRISMA extensions 47 49 50 51 52 53 71 72 be updated to reflect PRISMA 2020 and advise developers of new PRISMA extensions to use PRISMA 2020 as the foundation document.

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders. Ultimately, we hope that uptake of the guideline will lead to more transparent, complete, and accurate reporting of systematic reviews, thus facilitating evidence based decision making.

Acknowledgments

We dedicate this paper to the late Douglas G Altman and Alessandro Liberati, whose contributions were fundamental to the development and implementation of the original PRISMA statement.

We thank the following contributors who completed the survey to inform discussions at the development meeting: Xavier Armoiry, Edoardo Aromataris, Ana Patricia Ayala, Ethan M Balk, Virginia Barbour, Elaine Beller, Jesse A Berlin, Lisa Bero, Zhao-Xiang Bian, Jean Joel Bigna, Ferrán Catalá-López, Anna Chaimani, Mike Clarke, Tammy Clifford, Ioana A Cristea, Miranda Cumpston, Sofia Dias, Corinna Dressler, Ivan D Florez, Joel J Gagnier, Chantelle Garritty, Long Ge, Davina Ghersi, Sean Grant, Gordon Guyatt, Neal R Haddaway, Julian PT Higgins, Sally Hopewell, Brian Hutton, Jamie J Kirkham, Jos Kleijnen, Julia Koricheva, Joey SW Kwong, Toby J Lasserson, Julia H Littell, Yoon K Loke, Malcolm R Macleod, Chris G Maher, Ana Marušic, Dimitris Mavridis, Jessie McGowan, Matthew DF McInnes, Philippa Middleton, Karel G Moons, Zachary Munn, Jane Noyes, Barbara Nußbaumer-Streit, Donald L Patrick, Tatiana Pereira-Cenci, Ba’ Pham, Bob Phillips, Dawid Pieper, Michelle Pollock, Daniel S Quintana, Drummond Rennie, Melissa L Rethlefsen, Hannah R Rothstein, Maroeska M Rovers, Rebecca Ryan, Georgia Salanti, Ian J Saldanha, Margaret Sampson, Nancy Santesso, Rafael Sarkis-Onofre, Jelena Savović, Christopher H Schmid, Kenneth F Schulz, Guido Schwarzer, Beverley J Shea, Paul G Shekelle, Farhad Shokraneh, Mark Simmonds, Nicole Skoetz, Sharon E Straus, Anneliese Synnot, Emily E Tanner-Smith, Brett D Thombs, Hilary Thomson, Alexander Tsertsvadze, Peter Tugwell, Tari Turner, Lesley Uttley, Jeffrey C Valentine, Matt Vassar, Areti Angeliki Veroniki, Meera Viswanathan, Cole Wayant, Paul Whaley, and Kehu Yang. We thank the following contributors who provided feedback on a preliminary version of the PRISMA 2020 checklist: Jo Abbott, Fionn Büttner, Patricia Correia-Santos, Victoria Freeman, Emily A Hennessy, Rakibul Islam, Amalia (Emily) Karahalios, Kasper Krommes, Andreas Lundh, Dafne Port Nascimento, Davina Robson, Catherine Schenck-Yglesias, Mary M Scott, Sarah Tanveer and Pavel Zhelnov. We thank Abigail H Goben, Melissa L Rethlefsen, Tanja Rombey, Anna Scott, and Farhad Shokraneh for their helpful comments on the preprints of the PRISMA 2020 papers. We thank Edoardo Aromataris, Stephanie Chang, Toby Lasserson and David Schriger for their helpful peer review comments on the PRISMA 2020 papers.

Contributors: JEM and DM are joint senior authors. MJP, JEM, PMB, IB, TCH, CDM, LS, and DM conceived this paper and designed the literature review and survey conducted to inform the guideline content. MJP conducted the literature review, administered the survey and analysed the data for both. MJP prepared all materials for the development meeting. MJP and JEM presented proposals at the development meeting. All authors except for TCH, JMT, EAA, SEB, and LAM attended the development meeting. MJP and JEM took and consolidated notes from the development meeting. MJP and JEM led the drafting and editing of the article. JEM, PMB, IB, TCH, LS, JMT, EAA, SEB, RC, JG, AH, TL, EMW, SM, LAM, LAS, JT, ACT, PW, and DM drafted particular sections of the article. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: There was no direct funding for this research. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618) and was previously supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535) during the conduct of this research. JEM is supported by an Australian NHMRC Career Development Fellowship (1143429). TCH is supported by an Australian NHMRC Senior Research Fellowship (1154607). JMT is supported by Evidence Partners Inc. JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake. MML is supported by The Ottawa Hospital Anaesthesia Alternate Funds Association and a Faculty of Medicine Junior Research Chair. TL is supported by funding from the National Eye Institute (UG1EY020522), National Institutes of Health, United States. LAM is supported by a National Institute for Health Research Doctoral Research Fellowship (DRF-2018-11-ST2-048). ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. DM is supported in part by a University Research Chair, University of Ottawa. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/conflicts-of-interest/ and declare: EL is head of research for the BMJ ; MJP is an editorial board member for PLOS Medicine ; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology ; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews . None of these authors were involved in the peer review process or decision to publish. TCH has received personal fees from Elsevier outside the submitted work. EMW has received personal fees from the American Journal for Public Health , for which he is the editor for systematic reviews. VW is editor in chief of the Campbell Collaboration, which produces systematic reviews, and co-convenor of the Campbell and Cochrane equity methods group. DM is chair of the EQUATOR Network, IB is adjunct director of the French EQUATOR Centre and TCH is co-director of the Australasian EQUATOR Centre, which advocates for the use of reporting guidelines to improve the quality of reporting in research articles. JMT received salary from Evidence Partners, creator of DistillerSR software for systematic reviews; Evidence Partners was not involved in the design or outcomes of the statement, and the views expressed solely represent those of the author.

Provenance and peer review: Not commissioned; externally peer reviewed.

Patient and public involvement: Patients and the public were not involved in this methodological research. We plan to disseminate the research widely, including to community participants in evidence synthesis organisations.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ .




Systematic Reviews: Step 8: Write the Review

Created by health science librarians.


  • Step 1: Complete Pre-Review Tasks
  • Step 2: Develop a Protocol
  • Step 3: Conduct Literature Searches
  • Step 4: Manage Citations
  • Step 5: Screen Citations
  • Step 6: Assess Quality of Included Studies
  • Step 7: Extract Data from Included Studies

About Step 8: Write the Review


In Step 8, you will write an article or a paper about your systematic review.  It will likely have five sections: introduction, methods, results, discussion, and conclusion.  You will: 

  • Review the reporting standards you will use, such as PRISMA. 
  • Gather your completed data tables and PRISMA chart. 
  • Write the Introduction to the topic and your study, the Methods and Results of your research, and the Discussion of your results.
  • Write an Abstract describing your study and a Conclusion summarizing your paper. 
  • Cite the studies included in your systematic review and any other articles you may have used in your paper. 
  • If you wish to publish your work, choose a target journal for your article.

The PRISMA Checklist will help you report the details of your systematic review. Your paper will also include a PRISMA flow diagram that depicts your research process. 

The sections below show how each of these items applies to Step 8: Write the Review.

Reporting your review with PRISMA

To write your review, you will need the data from your PRISMA flow diagram. Review the PRISMA checklist to see which items you should report in your methods section.

Managing your review with Covidence

When you screen in Covidence, it will record the numbers you need for your PRISMA flow diagram from duplicate removal through inclusion of studies.  You may need to add additional information, such as the number of references from each database, citations you find through grey literature or other searching methods, or the number of studies found in your previous work if you are updating a systematic review.
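To make the arithmetic concrete, here is a minimal Python sketch of the bookkeeping behind the database arm of the flow diagram; all counts and database names are hypothetical, and tools such as Covidence track these numbers for you. Records found via other methods (grey literature, hand searching) are tallied in a separate arm of the diagram.

```python
# Hypothetical per-database counts for the "databases and registers" arm.
records_identified = {"PubMed": 512, "Embase": 430, "CINAHL": 123}

total_identified = sum(records_identified.values())              # 1065
duplicates_removed = 301                                         # from your deduplication tool
records_screened = total_identified - duplicates_removed         # 764
excluded_at_screening = 610
full_texts_assessed = records_screened - excluded_at_screening   # 154
full_text_exclusions = {"wrong population": 58, "wrong design": 41,
                        "no outcome of interest": 12}            # reasons reported in the diagram
studies_included = full_texts_assessed - sum(full_text_exclusions.values())  # 43

print(f"identified {total_identified}, screened {records_screened}, "
      f"assessed {full_texts_assessed} full texts, included {studies_included}")
```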

How a librarian can help with Step 8

A librarian can advise you on the process of organizing and writing up your systematic review, including: 

  • Applying the PRISMA reporting templates and the level of detail to include for each element
  • How to report a systematic review search strategy and your review methodology in the completed review
  • How to use prior published reviews to guide you in organizing your manuscript 

Reporting standards & guidelines

Be sure to reference reporting standards when writing your review. This helps ensure that you communicate essential components of your methods, results, and conclusions. There are a number of tools that can be used to ensure compliance with reporting guidelines. A few review-writing resources are listed below.

  • Cochrane Handbook - Chapter 15: Interpreting results and drawing conclusions
  • JBI Manual for Evidence Synthesis - Chapter 12.3 The systematic review
  • PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). The aim of the PRISMA Statement is to help authors improve the reporting of systematic reviews and meta-analyses.

Tools for writing your review

  • RevMan (Cochrane Training)
  • Methods Wizard (Systematic Review Accelerator) The Methods Wizard is part of the Systematic Review Accelerator created by Bond University and the Institute for Evidence-Based Healthcare.
  • UNC HSL Systematic Review Manuscript Template Systematic review manuscript template (.doc) adapted from the PRISMA 2020 checklist. This document provides authors with a template for writing about their systematic review. Each table contains a PRISMA checklist item that should be written about in that section, the matching PRISMA item number, and a box where authors can indicate if an item has been completed. Once text has been added, delete any remaining instructions and the PRISMA checklist tables from the end of each section.
  • The PRISMA 2020 statement: an updated guideline for reporting systematic reviews The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies.
  • PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews This document is intended to enhance the use, understanding and dissemination of the PRISMA 2020 Statement. Through examples and explanations, the meaning and rationale for each checklist item are presented.

The PRISMA checklist

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) is a 27-item checklist used to improve transparency in systematic reviews. These items cover all aspects of the manuscript, including title, abstract, introduction, methods, results, discussion, and funding. The PRISMA checklist can be downloaded as a PDF or Word file.

  • PRISMA 2020 Checklists Download the 2020 PRISMA Checklists in Word or PDF formats or download the expanded checklist (PDF).

The PRISMA flow diagram

The PRISMA Flow Diagram visually depicts the flow of studies through each phase of the review process. It can be downloaded as a Word file.

  • PRISMA 2020 Flow Diagrams The flow diagram depicts the flow of information through the different phases of a systematic review. It maps out the number of records identified, included and excluded, and the reasons for exclusions. Different templates are available depending on the type of review (new or updated) and sources used to identify studies.

Documenting grey literature and/or hand searches

If you have also searched additional sources, such as professional organization websites or cited and citing references, document your grey literature search using the version 1 template (PRISMA 2020 flow diagram for new systematic reviews which included searches of databases, registers and other sources) or the version 2 template (PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases, registers and other sources).

Complete the boxes documenting your database searches ("Identification of studies via databases and registers") according to the PRISMA flow diagram instructions. Complete the boxes documenting your grey literature and/or hand searches on the right side of the template ("Identification of studies via other methods") using the steps below.

Need help completing the PRISMA flow diagram?

There are different PRISMA flow diagram templates for new and updated reviews, as well as different templates for reviews with and without grey literature searches. Be sure you download the correct template to match your review methods, then follow the steps below for each portion of the diagram you have available.

Step-by-step explanation of the PRISMA flow diagram

Step 1: Preparation. Download the version 1 template (PRISMA 2020 flow diagram for new systematic reviews which included searches of databases and registers only) or the version 2 template (PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases and registers only).

Step-by-step explanation of the grey literature and/or hand searching portion of the PRISMA flow diagram

Step 1: Preparation. Download the version 1 template (PRISMA 2020 flow diagram for new systematic reviews which included searches of databases, registers and other sources) or the version 2 template (PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases, registers and other sources).

Step-by-step explanation of the review update portion of the PRISMA flow diagram

Step 1: Preparation. Download the version 2 template (PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases and registers only) or the version 2 template (PRISMA 2020 flow diagram for updated systematic reviews which included searches of databases, registers and other sources).

For more information about updating your systematic review, see the box Updating Your Review? on the Step 3: Conduct Literature Searches page of the guide.

Sections of a Scientific Manuscript

Scientific articles often follow the IMRaD format: Introduction, Methods, Results, and Discussion.  You will also need a title and an abstract to summarize your research.

You can read more about scientific writing through the library guides below.

  • Structure of Scholarly Articles & Peer Review: explains the standard parts of a medical research article, compares scholarly journals, professional trade journals, and magazines, and explains peer review and how to find peer-reviewed articles and journals
  • Writing in the Health Sciences (For Students and Instructors)
  • Citing & Writing Tools & Guides: includes links to guides for popular citation managers such as EndNote, Sciwheel, and Zotero; copyright basics; APA & AMA style guides; plagiarism and citing sources; and how to write scientific papers

Sections of a Systematic Review Manuscript

Systematic reviews follow the same structure as original research articles, but you will need to report on your search instead of on details like the participants or sampling. Sections of your manuscript are shown as bold headings in the PRISMA checklist.

Refer to the PRISMA checklist for more information.

Consider including a Plain Language Summary (PLS) when you publish your systematic review. Like an abstract, a PLS gives an overview of your study, but is specifically written and formatted to be easy for non-experts to understand. 

Tips for writing a PLS:

  • Use clear headings e.g. "why did we do this study?"; "what did we do?"; "what did we find?"
  • Use active voice e.g. "we searched for articles in 5 databases" instead of "5 databases were searched"
  • Consider need-to-know vs. nice-to-know: what is most important for readers to understand about your study? Be sure to provide the most important points without misrepresenting your study or misleading the reader. 
  • Keep it short: many journals recommend keeping your plain language summary to fewer than 250 words. 
  • Check journal guidelines: Your journal may have specific guidelines about the format of your plain language summary and when you can publish it. Look at journal guidelines before submitting your article. 

Learn more about Plain Language Summaries: 

  • Rosenberg, A., Baróniková, S., & Feighery, L. (2021). Open Pharma recommendations for plain language summaries of peer-reviewed medical journal publications. Current Medical Research and Opinion, 37(11), 2015–2016.  https://doi.org/10.1080/03007995.2021.1971185
  • Lobban, D., Gardner, J., & Matheis, R. (2021). Plain language summaries of publications of company-sponsored medical research: what key questions do we need to address? Current Medical Research and Opinion, 1–12. https://doi.org/10.1080/03007995.2021.1997221
  • Cochrane Community. (2022, March 21). Updated template and guidance for writing Plain Language Summaries in Cochrane Reviews now available. https://community.cochrane.org/news/updated-template-and-guidance-writing-plain-language-summaries-cochrane-reviews-now-available
  • You can also look at our Health Literacy LibGuide:  https://guides.lib.unc.edu/healthliteracy 

Writing the review: webinars

  • How to Approach Writing a Background Section
  • What Makes a Good Discussion Section
  • Writing Up Risk of Bias
  • Developing Your Implications for Research Section



University of Maryland Libraries

Systematic Review


Steps of a Systematic Review

  • Framing a Research Question
  • Developing a Search Strategy
  • Searching the Literature
  • Managing the Process
  • Meta-analysis
  • Publishing your Systematic Review

Forms and templates



  • PICO Template
  • Inclusion/Exclusion Criteria
  • Database Search Log
  • Review Matrix
  • Cochrane Tool for Assessing Risk of Bias in Included Studies

  • PRISMA Flow Diagram: record the numbers of retrieved references and included/excluded studies. You can use the Create Flow Diagram tool to automate the process.

  • PRISMA Checklist: a checklist of items to include when reporting a systematic review or meta-analysis

PRISMA 2020 and PRISMA-S: Common Questions on Tracking Records and the Flow Diagram

  • PROSPERO Template
  • Manuscript Template
  • Steps of SR (text)
  • Steps of SR (visual)
  • Steps of SR (PIECES)

Adapted from  A Guide to Conducting Systematic Reviews: Steps in a Systematic Review by Cornell University Library

Source: Cochrane Consumers and Communications  (infographics are free to use and licensed under Creative Commons )

Check the following visual resources titled "What Are Systematic Reviews?"

  • Video with closed captions available
  • Animated Storyboard


Cochrane Training

Chapter III: Reporting the review

Miranda Cumpston, Toby Lasserson, Ella Flemyng, Matthew J Page

Key Points:

  • Clear reporting of a systematic review allows readers to evaluate the rigour of the methods applied, and to interpret the findings appropriately. Transparency can facilitate attempts to verify or reproduce the results, and make the review more usable for health care decision makers.
  • The target audience for Cochrane Reviews is people making decisions about health care, including healthcare professionals, consumers and policy makers. Cochrane Reviews should be written so that they are easy to read and understand by someone with a basic understanding of the topic who is not necessarily an expert in the area.
  • Cochrane Protocols and Reviews should comply with the PRISMA 2020 and PRISMA for Protocols reporting guidelines.
  • Guidance on the composition of plain language summaries of Cochrane Reviews is also available to help review authors specify the key messages in terms that are accessible to consumers and non-expert readers.
  • Review authors should ensure that reporting of objectives, important outcomes, results, caveats and conclusions is consistent across the main text, the abstract, and any other summary versions of the review (e.g. plain language summary).

This chapter should be cited as: Cumpston M, Lasserson T, Flemyng E, Page MJ. Chapter III: Reporting the review. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

III.1 Introduction

The effort of undertaking a systematic review is wasted if review authors do not report clearly what they did and what they found ( Glasziou et al 2014 ). Clear reporting enables others to replicate the methods used in the review, which can facilitate attempts to verify or reproduce the results ( Page et al 2018 ). Transparency can also make the review more usable for healthcare decision makers. For example, clearly describing the interventions assigned in the included studies can help users determine how best to deliver effective interventions in practice ( Hoffmann et al 2017 ). Also, comprehensively describing the eligibility criteria applied, sources consulted, analyses conducted, and post-hoc decisions made, can reduce uncertainties in assessments of risk of bias in the review findings ( Whiting et al 2016 ). For these reasons, transparent reporting is an essential component of all systematic reviews.

Surveys of the transparency of published systematic reviews suggest that many elements of systematic reviews could be reported better. For example, Nguyen and colleagues evaluated a random sample of 300 systematic reviews of interventions indexed in bibliographic databases in November 2020 ( Nguyen et al 2022 ). They found that in at least 20% of the reviews there was no information about the years of coverage of the search, the methods used to collect data and appraise studies, or the funding source of the review. Less than half of the reviews provided information on a protocol or registration record for the review. However, Cochrane Reviews, which accounted for 3% of the sample, had more complete reporting than all other types of systematic reviews.

Possible reasons why more complete reporting of Cochrane Reviews has been observed include the use of software (RevMan, https://training.cochrane.org/online-learning/core-software-cochrane-reviews/revman ) and strategies in the editorial process that promote good reporting. RevMan includes many standard headings and subheadings which are designed to prompt Cochrane Review authors to document their methods and results clearly. 

Cochrane Reviews of interventions should adhere to the PRISMA 2020 (Preferred Reporting Items for Systematic reviews and Meta-Analyses) reporting guideline; see http://www.prisma-statement.org/ . PRISMA is an evidence-based, minimum set of items for reporting systematic reviews and meta-analyses to ensure the highest possible standard of reporting is met. Extensions to PRISMA and additional reporting guidelines for specific areas of methods are cited in the relevant sections below.

Cochrane’s Methodological Expectations of Cochrane Intervention Reviews (MECIR) detail standards for the conduct of Cochrane Reviews of interventions. They provide expectations for the general methodological approach to be followed from designing the review up to interpreting the findings at the end. There is a good reason to distinguish between conduct (MECIR) and reporting (PRISMA): good conduct does not necessarily lead to good reporting, good reporting cannot improve poor conduct, and poor reporting can obscure good or poor conduct of a review. The MECIR expectations of conduct are embedded in the relevant chapters of this Handbook and authors should adhere to MECIR throughout the development of their systematic review. MECIR conduct guidance for updates of Cochrane Reviews of interventions is presented in Chapter IV . For the latest version of all MECIR conduct guidance, readers should consult the MECIR web pages, available at https://methods.cochrane.org/mecir .

This chapter is built on reporting guidance from PRISMA 2020 ( Page et al 2021a , Page et al 2021b ) and is divided into sections for Cochrane Review protocols ( Section III.2) and new Cochrane Reviews ( Section III.3 ). Many of the standard headings recommended for use in Cochrane Reviews are referred to in this chapter, although the precise headings available in RevMan may be amended as new versions are released. New headings can be added and some standard headings can be deactivated; if the latter is done, review authors should ensure that all information expected (as outlined in PRISMA 2020) is still reported somewhere in the review.

III.2 Reporting of protocols of new Cochrane Reviews

Preparing a well-written review protocol is important for many reasons (see Chapter 1 ). The protocol is a public record of the question of interest and the intended methods before results of the studies are fully known. This helps readers to judge how the eligibility criteria of the review, stated outcomes and planned methods will address the intended question of interest. It also helps anyone who evaluates the completed review to judge how far it fulfilled its original objectives ( Lasserson et al 2016 ). Investing effort in the development of the review question and planning of methods also stimulates review authors to anticipate methodological challenges that may arise, and helps minimize potential for non-reporting biases by encouraging review authors to publish their review and report results for all pre-specified outcomes ( Shamseer et al 2015 ).

See the Introduction and Methods sections of PRISMA 2020 for the reporting items relevant to protocols for new Cochrane Reviews. All these items are also covered in PRISMA for Protocols, an extension to the PRISMA guidelines for the reporting of systematic review protocols ( Moher et al 2015 , Shamseer et al 2015 ). They include guidance for reporting of the:

  • Background;
  • Objectives;
  • Criteria for considering studies for inclusion in the review;
  • Search methods for identification of studies (e.g. a list of all sources that will be searched, a complete search strategy to be implemented for at least one database);
  • Data collection and analysis (e.g. types of information that will be sought from reports of included studies and methods for obtaining such information, how risk of bias in included studies will be assessed, and any intended statistical methods for combining results across studies); and
  • Other information (e.g. acknowledgements, contributions of authors, declarations of interest, and sources of support).

These sections correspond to the same sections in a completed review, and further details are outlined in Section III.3 .

The required reporting items have been incorporated into a template for protocols for Cochrane Reviews, which is available in Cochrane’s review production tool, RevMan (see the RevMan Knowledge Base ). If using the template, authors should carefully consider the methods that are appropriate for their specific review and adapt the template where required.

One key difference between a review protocol and a completed review is that the Methods section in a protocol should be written in the future tense. Because Cochrane Reviews are updated as new evidence accumulates, methods outlined in the protocol should generally be written as if a suitably large number of studies will be identified to allow the objectives to be met (even if this is thought to be unlikely at the time of writing).

PRISMA 2020 reflects the minimum expectations for good reporting of a review protocol. Further guidance on the level of planning required for each aspect of the review methods and the detailed information recommended for inclusion in the protocol is given in the relevant chapters of this Handbook.

III.3 Reporting of new Cochrane Reviews

The main text of a Cochrane Review should be succinct and readable. Although there is no formal word limit for Cochrane Reviews, review authors should consider 10,000 words a maximum for the main text of the review unless there is a special reason to write a longer review, such as when the question is unusually broad or complex.

People making decisions about health care are the target audience for Cochrane Reviews. This includes healthcare professionals, consumers and policy makers, and reviews should be accessible to these audiences. Cochrane Reviews should be written so that they are easy to read and understand by someone with a basic understanding of the topic who is not necessarily an expert in the area. Some explanation of terms and concepts is likely to be helpful, and perhaps even essential. However, too much explanation can detract from the readability of a review. Simplicity and clarity are also vital to readability. The readability of Cochrane Reviews should be comparable to that of a well-written article in a general medical journal.

Review authors should ensure that reporting of objectives, outcomes, results, caveats and conclusions is consistent across the main text, the tables and figures, the abstract, and any other summary versions of the review (e.g. ‘Summary of findings’ table and plain language summary). Although this sounds simple, it can be challenging in practice; authors should review their text carefully to ensure that readers of a summary version are likely to come away with the same overall understanding of the conclusions of the review as readers accessing the full text.

Plagiarism is not acceptable and all sources of information should be cited (for more information see the Cochrane Library editorial policy on plagiarism ). Also, the unattributed reproduction of text from other sources should be avoided. Quotes from other published or unpublished sources should be indicated and attributed clearly, and permission may be required to reproduce any published figures.

PRISMA 2020 provides the main reporting items for new Cochrane Reviews. A template for Cochrane Reviews of interventions is available that incorporates the relevant reporting guidance from PRISMA 2020. The template is available in RevMan to facilitate author adherence to the reporting guidance via the RevMan Knowledge Base . If using the template, authors should consider carefully the methods that are appropriate for their specific review and adapt the template where required. In the remainder of this section we summarize the reporting guidance relating to different sections of a Cochrane Review.

III.3.1 Abstract

All reviews should include an abstract of not more than 1000 words, although in the interests of brevity, authors should aim to include no more than 700 words, without sacrificing important content. Abstracts should be targeted primarily at healthcare decision makers (clinicians, consumers and policy makers) rather than just to researchers.

Terminology should be reasonably easy to understand for a general rather than a specialist healthcare audience. Abbreviations should be avoided, except where they are widely understood (e.g. HIV). Where essential, other abbreviations should be spelt out (with the abbreviations in brackets) on first use. Names of drugs and interventions that can be understood internationally should be used wherever possible. Trade or brand names should not be used and generic names are preferred.

Abstracts of Cochrane Reviews are made freely available on the internet and published in bibliographic databases that index the Cochrane Database of Systematic Reviews (e.g. MEDLINE, Embase). Some readers may be unable to access the full review, or the full text may not have been translated into their language, so abstracts may be the only source they have to understand the review results ( Beller et al 2013 ). It is important therefore that they can be read as stand-alone documents. The abstract should summarize the key methods, results and conclusions of the review. An abstract should not contain any information that is not in the main body of the review, and the overall messages should be consistent with the conclusions of the review.

Abstracts for Cochrane Reviews of interventions should follow the PRISMA 2020 for Abstracts checklist ( Page et al 2021b ). Each abstract should include:

  • Rationale (a concise summary of the rationale for and context of the review);
  • Objectives (of the review);
  • Search methods (including an indication of databases searched, and the date of the last search for which studies were fully incorporated);
  • Eligibility criteria (including a summary of eligibility criteria for study designs, participants, interventions and comparators);
  • Risk of bias (methods used to assess risk of bias);
  • Synthesis methods (methods used to synthesize results, especially any variations on standard approaches);
  • Included studies (total number of studies and participants and a brief summary of key characteristics);
  • Results of syntheses (including the number of studies and participants for each outcome, a clear statement of the direction and magnitude of the effect, the effect estimate and 95% confidence interval if meta-analysis was used, and the GRADE assessment of the certainty of the evidence. The results should contain the same outcomes as found in other summary formats such as the plain language summary and ‘Summary of findings’ table, including those for which no studies reported the outcome and those that are not statistically significant. This section should also provide a brief summary of the limitations of the evidence included in the review);
  • Authors’ conclusions (including implications both for practice and for research);
  • Funding (primary source of funding for the review); and
  • Registration (registration name and number and/or DOIs of previously published protocols and versions of the review, if applicable).

III.3.2 Plain language summary

A Cochrane Plain language summary is a stand-alone summary of the systematic review. Like the Abstract, the Plain language summary may be read alone, and its overall messages should be consistent with the conclusions in the full review. The Plain language summary should convey clearly the questions and key findings of the review, using language that can be understood by a wide range of non-expert readers. The summary should use words and sentence structures that are easy to understand, and should avoid technical terms and jargon where possible. Any technical terms used should be explained. The audience for Plain language summaries may include people with a health condition, carers, healthcare workers or policy makers. Readers may not have English as their first language. Cochrane Plain language summaries are frequently translated, and using plain language is also helpful for translators.

Writing in plain language is a skill that is different from writing for a scientific audience. Full guidance and a template are available as online supplementary material to this chapter. Authors are strongly encouraged to use this guidance to ensure good practice and consistency with other summaries in the Cochrane Library. It may also be helpful to seek assistance for this task, such as asking someone with experience in writing in plain language for a general audience for help, or seeking feedback on the draft summary from a consumer or someone with little knowledge of the topic area.

III.3.3 Background and Objectives

Well-formulated review questions occur in the context of an already-formed body of knowledge. The Background section should address this context, including a description of the condition or problem of interest. It should help clarify the rationale for the review, and explain why the questions being addressed are important. It should be concise (generally under 1000 words) and be understandable to the users of the intervention(s) under investigation.

It is important that the eligibility criteria and other aspects of the methods, such as the comparisons used in the synthesis, build on ideas that have been developed in the Background section. For example, if there are uncertainties to be explored in how variation in setting, different populations or type of intervention influence the intervention effect, then it would be important to acknowledge these as objectives of the review, and ensure the concepts and rationale are explained.

The following three standard subheadings in the Background section of a Cochrane Review are intended to facilitate a structured approach to the context and overall rationale for the review.

  • Description of the condition:  A brief description of the condition being addressed, who is affected, and its significance, is a useful way to begin the review. It may include information about the biology, diagnosis, prognosis, prevalence, incidence and burden of the condition, and may consider equity or variation in how different populations are affected.
  • Description of the intervention and how it might work:  A description of the experimental intervention(s) should place it in the context of any standard or alternative interventions, remembering that standard practice may vary widely according to context. The role of the comparator intervention(s) in standard practice should also be made clear. For drugs, basic information on clinical pharmacology should be presented where available, such as dose range, metabolism, selective effects, half-life, duration and any known interactions with other drugs. For more complex interventions, such as behavioural or service-level interventions, a description of the main components should be provided (see Chapter 17 ). This section should also provide theoretical reasoning as to why the intervention(s) under review may have an impact on potential recipients, for example, by relating a drug intervention to the biology of the condition. Authors may refer to a body of empirical evidence such as similar interventions having an impact on the target recipients or identical interventions having an impact on other populations. Authors may also refer to a body of literature that justifies the possibility of effectiveness. Authors may find it helpful to use a logic model ( Kneale et al 2015 ) or conceptual framework to illustrate the proposed mechanism of action of the intervention and its components. This will also provide review authors with a framework for the methods and analyses undertaken throughout the review to ensure that the review question is clearly and appropriately addressed. More guidance on considering the conceptual framework for a particular review question is presented in Chapter 2 and Chapter 17 .
  • Why it is important to do this review: Review authors should explain clearly why the questions being asked are important. Rather than justifying the review on the grounds that there are known eligible studies, it is more helpful to emphasize what aspects of, or uncertainties in, the accumulating evidence base now justify a systematic review. For example, it might be the case that studies have reached conflicting conclusions, that there is debate about the evidence to date, or that there are competing approaches to implementing the intervention.

Immediately following the Background section of the review, review authors should declare the review objectives. They should begin with a precise statement of the primary objective of the review, ideally in a single sentence. Where possible the style should be of the form “To assess the effects of [intervention or comparison] for [health problem] for/in [types of people, disease or problem and setting if specified] ”. This might be followed by a series of secondary objectives relating to different participant groups, different comparisons of interventions or different outcome measures. If relevant, any objectives relating to the evaluation of economic or qualitative evidence should be stated. It is not necessary to state specific hypotheses.

III.3.4 Methods

The Methods section in a completed review should be written in the past tense, and should describe what was done to obtain the results and conclusions of the current review.

Review authors are expected to cite their protocol to make it clear that there was one. Often a review is unable to implement all the methods outlined in the protocol. For example, planned investigations of heterogeneity (e.g. subgroup analyses) and small-study effects may not have been conducted because of an insufficient number of studies. Authors should describe and explain all amendments to the prespecified methods in the main Methods section.

The Methods section of a Cochrane Review includes five main subsections, within which are a series of standard headings to guide authors in reporting all the relevant information. See Sections III.3.4.1 to III.3.4.5 for a summary of content recommended for inclusion under each subheading.

III.3.4.1 Criteria for considering studies for this review

Review authors should declare all criteria used to decide which studies are included in the review. Doing so will help readers understand the scope of the review and recognize why particular studies they are aware of were not included. Eligible study designs should be described, with a focus on specific features of a study’s design rather than design labels (e.g. how groups were formed, whether the intervention was assigned to individuals or clusters of individuals) ( Reeves et al 2017 ). Review authors should describe eligibility criteria for participants, including any restrictions based on age, diagnostic criteria, location and setting. If relevant, it is useful to describe how studies including a subset of relevant participants were addressed (e.g. when children up to the age of 16 years only were eligible but a study included children up to the age of 18 years). Eligibility criteria for interventions and comparators should be stated also, including any criteria around delivery, dose, duration, intensity, co-interventions and characteristics of complex interventions. The rationale for all criteria should be clear, including the eligible study designs.

Typically, studies should not be excluded from a review solely because no outcomes of interest were reported, because failure to report an outcome does not mean it was not assessed ( Dwan et al 2017 ). However, on occasion it will be appropriate to include only studies that measured particular outcomes. For example, a review of a multi-component public health intervention promoting healthy lifestyle choices, focusing on reduction in smoking prevalence, might legitimately exclude studies that do not measure any smoking outcomes. Review authors should specify if measurement of a particular outcome was used as an eligibility criterion for the review, and justify why this was done.

Further guidance on planning eligibility criteria is presented in Chapter 3 .

III.3.4.2 Outcome measures

Review authors should specify the critical and important outcomes of interest to the review, and define acceptable ways of measuring them. The review’s important outcomes should normally reflect at least one potential benefit and at least one potential harm.

For each listed outcome or outcome domain, it should be clear which specific outcomes, measures or tools will be considered together and combined for the purposes of synthesis. For example, for the outcome of depression, a series of measurement tools for depression symptoms may be listed. It should be explicitly stated whether they will be synthesized together as a single outcome (depression), or presented as a series of separate syntheses for each tool. Any categories of time that will be used to group outcomes for synthesis should also be defined, e.g. short term (up to 1 month), medium term (> 1 month to 12 months), and long term (> 12 months). Additional guidance on grouping of outcomes for synthesis is included in Chapter 3 , and in the InSynQ (Intervention Synthesis Questions) reporting guideline ( https://InSynQ.info ).  
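As a small illustration, the grouping rule described above can be written as an explicit decision function; the thresholds in this Python sketch simply restate the example categories and are not a fixed standard:

```python
# Assign each reported outcome timepoint to a prespecified synthesis category,
# using the example thresholds from the text above.
def time_category(months: float) -> str:
    if months <= 1:
        return "short term (up to 1 month)"
    if months <= 12:
        return "medium term (> 1 month to 12 months)"
    return "long term (> 12 months)"

for timepoint in [0.5, 6, 24]:   # hypothetical follow-up times in months
    print(timepoint, "->", time_category(timepoint))
```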

III.3.4.3 Search methods for identification of studies

It is essential that users of systematic reviews are given an opportunity to evaluate the methods used to identify studies for inclusion. Such an evaluation is possible when review authors report their search methods comprehensively. This involves specifying all sources consulted, including databases, trials registers, websites, and a list of individuals or organizations contacted. If particular journals were handsearched, this should be noted. Any specific methods used to develop the search strategy, such as automated text analysis or peer review, should also be noted, including methods used to translate the search strategy for use in different databases. Specifying the dates of coverage of all databases searched and the date of the last search for which studies were fully incorporated can help users determine how up to date the review is. Review authors should also declare any limits placed on the search (e.g. by language, publication date or publication format).

To facilitate replication of a search, review authors should include in the supplementary material the exact search strategy (or strategies) used for each database, including any limits and filters used. Search strategies can be exported from bibliographic databases, and these should be copied and pasted instead of re-typing each line, which can introduce errors.

See  Chapter 4 for guidance on search methods. An extension to the PRISMA statement for reporting of literature searches is also available ( Rethlefsen et al 2021 ).

III.3.4.4 Data collection and analysis

Cochrane Reviews include several standard subheadings to enable a structured, detailed description of the methods used for data collection and analysis. Additional headings should be included where appropriate to describe additional methods implemented in the review, e.g. those specific to the analysis of qualitative or economic evidence.

Selection of studies: There should be a description of how the eligibility criteria were applied, from screening of search results through to the final selection of studies for inclusion in the review. The number of people involved at each stage of the process should be stated, such as two authors working independently, along with an indication of how any disagreements were resolved. Any automated processes, software tools or crowdsourcing used to support selection should be noted. See Chapter 4 for guidance on the study selection process.

Data collection and management:  Review authors should specify how data were collected for the included studies. This includes describing the number of people involved in data collection, whether they worked independently, how any disagreements were resolved, and whether standardized data collection forms were used (and if so, whether they were piloted in advance). Any software tools used in data collection should be cited, as well as any checklists such as TIDieR for the description of interventions ( Hoffmann et al 2017 ), TIDieR PHP for population health and policy interventions ( Campbell et al 2018 ), or TACIT for identifying conflicts of interest ( https://tacit.one/ ). If study authors or sponsors were contacted to obtain missing information or to clarify the information available, this should be stated.

RevMan allows authors to directly import some types of data (including study results and risk of bias assessments). To facilitate the import, it is recommended that Cochrane authors consider the required format of data import files to inform their data extraction forms. See documentation in the RevMan Knowledge Base .

A brief description of the data items (e.g. participant characteristics, intervention details) extracted from each report is recommended. If methods for transforming or processing data in preparation for analysis were necessary (e.g. converting standard errors to standard deviations, extracting numeric data from graphs), these methods should be described.
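For example, the conversion of standard errors to standard deviations mentioned above follows from SD = SE × √n. A minimal Python sketch, with hypothetical values:

```python
import math

def sd_from_se(se: float, n: int) -> float:
    """Convert a reported standard error of the mean to a standard deviation."""
    return se * math.sqrt(n)

def sd_from_ci(lower: float, upper: float, n: int, z: float = 1.96) -> float:
    """Approximate the SD from a 95% confidence interval for a mean
    (large-sample z approximation; t-based values are needed for small samples)."""
    se = (upper - lower) / (2 * z)
    return se * math.sqrt(n)

print(sd_from_se(0.8, 25))        # 4.0
print(sd_from_ci(2.4, 5.6, 36))   # about 4.9
```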

Additional information about the outcomes to be collected is helpful to include, including a description of how authors handled multiplicity, such as where a single study reports more than one similar outcome measure or measurement time point eligible for inclusion in the same review, requiring a method or decision rule to select between eligible results. See Chapter 3 for guidance on selecting outcomes, and Chapter 5 for guidance on data collection.
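A decision rule for multiplicity can be made explicit in the review methods. The sketch below assumes a hypothetical prespecified hierarchy of depression measures; the tool names and their order are illustrative only, not a recommended hierarchy:

```python
# Hypothetical prespecified order of preference among eligible measures.
PREFERRED_TOOLS = ["HAM-D", "BDI-II", "PHQ-9"]

def select_result(reported_tools: list) -> str:
    """Return the highest-ranked eligible measure a study reports."""
    for tool in PREFERRED_TOOLS:
        if tool in reported_tools:
            return tool
    raise ValueError("study reports no eligible measure")

print(select_result(["PHQ-9", "BDI-II"]))   # -> BDI-II
```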

Risk of bias assessment in included studies: There should be a description of the approach used to assess risk of bias in the included studies. This involves specifying the risk-of-bias tool(s) used, how many authors were involved in the assessment, how disagreements were resolved, and how the assessments were incorporated into the analysis or interpretation of the results. The preferred bias assessment tools for Cochrane review authors are RoB 2 for RCTs and ROBINS-I for non-randomized studies (described in Chapter 8 and Chapter 25 ). When using either of these tools, some specific information is needed in this section of the Methods. Authors should specify the outcome measures and timepoints assessed (often the same prespecified outcomes considered in the GRADE assessment and included in summary versions of the review; see Chapter 3, Section 3.2.4.2 ), and the effect of interest the author team assessed (either the effect of assignment to the intervention, or the effect of adhering to the intervention). Authors should also specify how overall judgements were reached, both across domains for an individual result and across multiple studies included in a synthesis. Cochrane has developed checklists for reporting risk of bias methods in protocols and completed reviews for authors using the RoB 2 tool ( https://methods.cochrane.org/risk-bias-2 ) and the ROBINS-I tool ( https://methods.cochrane.org/robins-i ). See Chapter 7 for further guidance on study risk-of-bias assessment. Authors who have used the original version of the RoB tool (from 2008 or 2011) should refer to guidance for reporting the risk of bias in version 5.2 of the Cochrane Handbook for Systematic Reviews of Interventions (available at https://training.cochrane.org/handbook/archive/v5.2 ).

Measures of the treatment effect:  The effect measures used by the review authors to describe results in any included studies or meta-analyses (or both) should be stated. Examples of effect measures include the odds ratio (OR), risk ratio (RR) and risk difference (RD) for dichotomous data; the mean difference (MD) and standardized mean difference (SMD) for continuous data; and hazard ratio for time-to-event data. Note that some non-randomized study designs require different effect estimates, and these should be specified if such designs are included in the review (e.g. interrupted time series commonly measure the change in level and change in slope). See Chapter 6 for more guidance on effect measures.
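For illustration, the Python sketch below computes each of these effect measures from hypothetical summary data (a 2×2 table for dichotomous outcomes; means, SDs and sample sizes for continuous outcomes):

```python
# Hypothetical dichotomous outcome: events/total in intervention and control arms.
ei, ni = 12, 100   # intervention
ec, nc = 20, 100   # control

risk_i, risk_c = ei / ni, ec / nc
rr = risk_i / risk_c                                   # risk ratio
rd = risk_i - risk_c                                   # risk difference
odds_i, odds_c = ei / (ni - ei), ec / (nc - ec)
or_ = odds_i / odds_c                                  # odds ratio

# Hypothetical continuous outcome: mean difference and standardized mean difference.
m1, sd1, n1 = 24.0, 8.0, 50    # intervention mean, SD, n
m2, sd2, n2 = 28.0, 10.0, 50   # control
md = m1 - m2
pooled_sd = (((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)) ** 0.5
smd = md / pooled_sd           # Cohen's d (Hedges' g adds a small-sample correction)

print(f"RR={rr:.2f} RD={rd:.2f} OR={or_:.2f} MD={md:.1f} SMD={smd:.2f}")
```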

Unit of analysis issues: If the review includes study designs that can give rise to a unit-of-analysis error (when the number of observations in an analysis does not match the number of units randomized), the approaches taken to address these issues should be described. Studies that can give rise to unit-of-analysis errors include crossover trials, cluster-randomized trials, studies where interventions are assigned to multiple parts of the body of the same participant, and studies with multiple intervention groups where more than two groups are included in the same meta-analysis. See Chapter 23 for guidance on handling unit-of-analysis issues.
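One standard way to handle clustering, described in Chapter 23, is to reduce each cluster-randomized trial to an "effective sample size" using a design effect. A minimal Python sketch, with a hypothetical intracluster correlation coefficient (ICC) and trial size:

```python
# Design-effect adjustment for a cluster-randomized trial: shrink the sample
# size so the analysis does not treat clustered individuals as independent.
avg_cluster_size = 20
icc = 0.05                                   # hypothetical intracluster correlation
design_effect = 1 + (avg_cluster_size - 1) * icc   # DEFF = 1.95

n_randomized, events = 400, 60
effective_n = n_randomized / design_effect   # about 205 instead of 400
effective_events = events / design_effect

print(f"DEFF={design_effect:.2f}, effective n={effective_n:.0f}, "
      f"effective events={effective_events:.0f}")
```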

Dealing with missing data:  Review authors may encounter various types of missing data in their review. For example, there may be missing information that has not been reported by the included studies, such as information about the methods of the included studies (e.g. when the method of randomization is not reported, which may be addressed in the risk of bias assessment); missing statistics (e.g. when standard deviations of mean scores are not reported, where missing statistics may be calculated from the available information or imputed); or non-reporting of outcomes (which may represent a risk of bias due to missing results). Missing data may also refer to cases where participants in the included primary studies have withdrawn or been lost to follow-up, or have missing measurements for some outcomes, which may be considered and addressed through risk of bias assessment. Any strategies used to deal with missing data should be reported, including any attempts to obtain the missing data. See Chapter 10 for guidance on dealing with missing data.

Reporting bias assessment: Any methods used to assess the risk of bias due to missing results should be described. Such methods may include consideration of the number of studies missing from a synthesis due to selective non-reporting of results, or investigations to assess small-study effects (e.g. funnel plots), which can arise from the suppression of small studies with ‘negative’ results (also called publication bias). If relevant, any tools or checklists used (such as ROB-ME, https://www.riskofbias.info/welcome/rob-me-tool ) should be cited. See Chapter 13 for a description of methods for assessing risk of bias due to missing results in a synthesis.

Synthesis methods:  Reviews may address multiple research questions (‘synthesis questions’). For example, a review may be interested in the effects of an intervention in children or adults, or may wish to investigate the effects of different types of exercise interventions. Each comparison to be made in the synthesis should be specified in enough detail to allow a reader to replicate decisions about which studies belong in each synthesis, and the rationale for the comparisons should be clear. Comparisons for synthesis can be defined using the same PICO characteristics that are used to define the eligibility criteria for including studies in the review. See Chapter 3 for guidance on defining the ‘PICO for each synthesis’. Further guidance is available in the InSynQ (Intervention Synthesis Questions) tool for planning and reporting synthesis questions ( https://InSynQ.info ).

Review authors should then describe the methods used for synthesizing results across studies in each comparison (e.g. meta-analysis, network meta-analysis or other methods). Where data have been combined in statistical software external to RevMan, authors should reference the software, commands and settings used to run the analysis. See Chapter 10 for guidance on undertaking meta-analysis, Chapter 11 for guidance on undertaking network meta-analysis, and Chapter 12 for a description of other synthesis methods. An extension to the PRISMA statement for reporting network meta-analyses is available for reviews using these methods ( Hutton et al 2015 ).

Where meta-analysis is planned, details should be specified of the meta-analysis model (e.g. fixed-effect or random-effects), the specific method used (e.g. Mantel-Haenszel, inverse variance, Peto), and a rationale presented for the options selected. Review authors should also describe their approach to identifying or quantifying statistical heterogeneity (e.g. visual inspection of results, a formal statistical test for heterogeneity, I², Tau², or prediction interval). See Chapter 10 for guidance on assessment of heterogeneity.
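For readers unfamiliar with these quantities, the Python sketch below shows a generic inverse-variance meta-analysis with Cochran's Q, I² and a DerSimonian-Laird Tau², followed by a random-effects pooled estimate. All study results are hypothetical, and in practice these calculations are performed by RevMan or statistical software rather than by hand:

```python
import math

# Hypothetical study results: log risk ratios and their standard errors.
effects = [-0.36, -0.12, -0.51, -0.05]
ses     = [0.18, 0.15, 0.25, 0.12]

w = [1 / se**2 for se in ses]                          # inverse-variance weights
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))   # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100                      # I²: % of variability beyond chance
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                          # DerSimonian-Laird Tau²

w_re = [1 / (se**2 + tau2) for se in ses]              # random-effects weights
pooled_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"fixed logRR={fixed:.3f}, Q={q:.2f}, I2={i2:.0f}%, tau2={tau2:.4f}")
print(f"random logRR={pooled_re:.3f} "
      f"(95% CI {pooled_re - 1.96*se_re:.3f} to {pooled_re + 1.96*se_re:.3f})")
```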

Where meta-analysis is not possible, any other synthesis methods used should be described explicitly, including the rationale for the methods selected. It is common for these methods to be insufficiently described in published reviews ( Campbell et al 2019 , Cumpston et al 2023 ), and general terms such as ‘narrative synthesis’ do not provide appropriate detail about the specific methods used. In addition to detailed guidance in Chapter 12 , a reporting guideline for Synthesis Without Meta-analysis (SWiM) has been developed and should be considered in addition to MECIR for reporting these methods ( Campbell et al 2020 ).

For whichever synthesis methods are used, the structure of tables and plots used to visually display results should also be specified, including a rationale for the options selected (see Section III.3.5.4).

Investigations of heterogeneity and subgroup analysis:  If subgroup analyses or meta-regression were performed, review authors should specify the potential effect modifiers explored, the rationale for each, whether they were identified before or after the results were known, whether they were based on between-study or within-study subgroups, and how they were compared (e.g. using a statistical test for interaction). See Chapter 10 for more information on investigating heterogeneity. If applicable, review authors should specify which equity-related characteristics were explored.

Sensitivity analysis: If any sensitivity analyses were performed to explore the robustness of meta-analysis results, review authors should specify the basis of each analysis (e.g. removal of studies at high risk of bias, imputing alternative estimates of missing standard deviations). See Chapter 10 for more information on sensitivity analyses.
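One common form of sensitivity analysis is to re-run the pooled estimate with each study removed in turn ("leave-one-out"), to check whether any single study drives the result. A minimal Python sketch using hypothetical study results and fixed-effect inverse-variance pooling:

```python
# Hypothetical study names, log effect estimates and standard errors.
studies = {"Alpha 2018": (-0.36, 0.18), "Beta 2019": (-0.12, 0.15),
           "Gamma 2020": (-0.51, 0.25), "Delta 2021": (-0.05, 0.12)}

def pooled(data):
    """Fixed-effect inverse-variance pooled estimate."""
    w = {k: 1 / se**2 for k, (_, se) in data.items()}
    return sum(w[k] * y for k, (y, _) in data.items()) / sum(w.values())

print(f"all studies: {pooled(studies):.3f}")
for left_out in studies:
    subset = {k: v for k, v in studies.items() if k != left_out}
    print(f"without {left_out}: {pooled(subset):.3f}")
```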

Certainty of the evidence assessment:  Review authors should describe methods for summarizing the findings of the review, and assessing the certainty of the body of evidence (e.g. using the GRADE approach). The domains to be assessed should be stated, including any thresholds used to downgrade the certainty of the evidence, such as risk of bias assessment, levels of unexplained heterogeneity, or key factors for assessing directness. Who conducted the GRADE assessment should be stated, including whether two authors assessed GRADE independently and how disagreements were resolved. Review authors should also indicate which populations, interventions, comparisons and outcomes are addressed in ‘Summary of findings’ tables, specifying up to seven prioritized critical or important outcomes to be included. Authors should note what they considered to be a minimally important difference for each outcome. Any specific language used to describe results in the context of the GRADE assessment should be explained, such as using the word “probably” in relation to moderate-certainty evidence, and “may” in relation to low-certainty evidence (see Chapter 15, Section 15.6.4 ). For more details on completing ‘Summary of findings’ tables and using the GRADE approach, see Chapter 14 .

III.3.4.5 Consumer Involvement

Cochrane follows the ACTIVE (Authors and Consumers Together Impacting on eVidencE) framework to help review authors involve consumers meaningfully in their systematic reviews ( Pollock et al 2017 ). Review authors should report on their methods for involving consumers in their review, including the authors’ general approach to involvement; the level of involvement and the roles of the consumers involved; the stage in the review process when involvement occurs; and any formal research methods or techniques used.

Other stakeholders may also be involved in systematic reviews, such as healthcare providers, policy makers and other decision makers. Where other stakeholders are involved, this should also be described.

If review authors did not involve consumers or other stakeholders, this should be stated.

III.3.5 Results

A narrative summary of the results of a Cochrane Review should be provided under the three standard subheadings in the Results section (see Sections III.3.5.1, III.3.5.2 and III.3.5.3 for a summary of content recommended for inclusion under each subheading). Details about the effects of interventions (including summary statistics and effect estimates for each included study and for each synthesis) can be presented in various tables and figures (see Section III.3.5.4 ).

III.3.5.1 Description of studies

The results section should start with a summary of the results of the search (for example, how many references were retrieved by the electronic searches, how many were evaluated after duplicates were removed, how many were considered as potentially eligible after screening, and how many were included). Review authors are expected to include a PRISMA-type flow diagram demonstrating the flow of studies throughout the selection process ( Page et al 2021b ). Such flow diagrams can be created within RevMan.

To help readers determine the completeness and applicability of the review findings in relation to the review question, as well as how studies are grouped for synthesis within the review, authors should describe the characteristics of the included studies. In the Results section, a brief narrative summary of the included studies should be presented. The summary should not describe each included study individually, but instead should summarize how the included studies vary in terms of design, number of participants, and important effect modifiers outlined in the protocol (e.g. populations and settings, interventions, comparators, outcomes or funding sources). An ‘Overview of synthesis and included studies’ (OSIS) table should be used to summarize key characteristics, and assist readers in matching studies to comparisons for synthesis (guidance on this is available in the RevMan Knowledge Base ). See Chapter 9 for further guidance on summarizing study characteristics.

More details about each included study should be presented in the ‘Characteristics of included studies’ supplementary material. These are organized in tables and should include (at a minimum) the following information about each included study:

  • basic study design or design features;
  • baseline demographics of the study sample (e.g. age, sex/gender, key equity characteristics);
  • sample size;
  • details of all interventions (including what was delivered, by whom, in which setting, and how often; for more guidance see the TIDieR (Hoffmann et al 2017) and TIDieR-PHP (Campbell et al 2018) reporting guidelines);
  • outcomes measured (with details on how and when they were measured);
  • funding source; and
  • declarations of interest among the primary researchers.

Studies that may appear to some readers to meet the eligibility criteria, but which were excluded, should be listed in the ‘Characteristics of excluded studies’ supplementary material, and an explicit reason for exclusion should be provided (one reason is usually sufficient, and all reasons should be consistent with the stated eligibility criteria). It is not necessary to include every study excluded at the full text screening stage in the table; rather, authors should use their judgement to identify those studies most likely to be considered eligible by readers, and hence most useful to include here. A succinct summary of the reasons why studies were excluded from the review should be provided in the Results section.

It is helpful to make readers aware of any completed studies that have been identified as potentially eligible but have not been incorporated into the review. This may occur when there is insufficient information to determine whether the study meets the eligibility criteria of the review, or when a top-up search is run immediately prior to publication and the review authors consider it unlikely that inclusion of the study would change the review conclusions substantially. A description of such studies can be provided in the ‘Characteristics of studies awaiting classification’ supplementary material.

Readers should also be made aware of any studies that meet the eligibility criteria for the review, but which are still in progress and hence have no results available. This serves several purposes. It will help readers assess the stability of the review findings, alert research funders about ongoing research activity, help inform research implications, and can serve as a useful basis for deciding when an update of the review may be needed. A description of such studies can be provided in the ‘Characteristics of ongoing studies’ supplementary material.

III.3.5.2 Risk of bias in included studies

To help readers determine the credibility of the results of included studies, review authors should provide an overview of their risk-of-bias assessments in this section of the Results. For example, this might include overall comments on key domains that influenced the overall risk of bias judgement (e.g. the extent to which blinding was implemented across all included trials), and an indication of whether important differences in overall risk of bias were observed across outcomes. It is not necessary to describe the individual domain assessments of each included study or each result here. If risk of bias assessments were very similar (or identical) for all outcomes in the review, a summary of the assessments across studies should be presented here. If risk of bias assessments are very different for different outcomes, this section should be very brief, and summaries of the assessments across studies should be provided within the ‘Synthesis of results’ section alongside the relevant results.

If RoB 2 or ROBINS-I has been used, result-level ‘risk of bias’ tables should be included to summarize the risk of bias judgements for each domain for each study included in the synthesis. For RoB 2, these tables can be generated in RevMan, and summaries of risk of bias assessments can also be added to forest plots presenting the results of meta-analysis. For ROBINS-I, risk of bias tables should be provided as additional supplementary material. More detailed assessments, including the consensus responses to each signalling question and comments to support each response, can be made available as an additional file in a publicly available data repository.
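
As a rough illustration of what a result-level table holds, the sketch below stores RoB 2 judgements as structured data: one overall judgement plus one judgement per domain for each study contributing to a synthesis. Study names and judgements are hypothetical.

```python
# Hypothetical result-level RoB 2 judgements for one synthesis.
ROB2_DOMAINS = [
    "randomization process",
    "deviations from intended interventions",
    "missing outcome data",
    "measurement of the outcome",
    "selection of the reported result",
]

rob2 = {
    "Smith 2019": {"overall": "some concerns",
                   "domains": dict(zip(ROB2_DOMAINS,
                                       ["low", "some concerns", "low", "low", "low"]))},
    "Jones 2021": {"overall": "high",
                   "domains": dict(zip(ROB2_DOMAINS,
                                       ["low", "high", "low", "some concerns", "low"]))},
}

for study, judgement in rob2.items():
    print(study, "->", judgement["overall"])
```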

Cochrane guidance specific to the presentation and reporting of risk of bias assessments using the RoB 2 tool is available at https://methods.cochrane.org/risk-bias-2 and for ROBINS-I at https://methods.cochrane.org/robins-i. Chapter 7, Chapter 8 and Chapter 25 present further guidance on risk of bias assessment.

III.3.5.3 Synthesis of Results

Review authors should summarize in text form the results for all pre-specified review outcomes, regardless of the statistical significance, magnitude or direction of the effects, or whether evidence was found for those outcomes. The text should present the results in a logical and systematic way. This can be done by organizing results by population or comparison (e.g. by first describing results for the comparison of drug versus placebo, then describing results for the comparison of drug A versus drug B).

If meta-analysis was possible, synthesized results should always be accompanied by a measure of statistical uncertainty, such as a 95% confidence interval. If other synthesis methods were used, authors should take care to specifically state the methods used. In particular, unless vote counting based on the direction of effect is used explicitly, authors should avoid the inadvertent use of vote counting in text (e.g. “the majority of studies found a positive effect”) (Cumpston et al 2023). It is also helpful to indicate the amount of information (numbers of studies and participants) contributing to each synthesis. If additional studies reported results that could not be included in synthesis (e.g. because results were incompletely reported or were in an incompatible format), these additional results should be reported in the review. If no data were available for particular review outcomes of interest, review authors should say so, so that all pre-specified outcomes are accounted for. Guidance on summarizing results from meta-analysis is provided in Chapter 10, from network meta-analysis in Chapter 11, and for methods other than meta-analysis in Chapter 12.
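
To ground these reporting requirements, here is a minimal sketch of an inverse-variance random-effects meta-analysis (DerSimonian-Laird), producing the pooled estimate, its 95% confidence interval and the accompanying heterogeneity statistics. The study data are illustrative, not taken from any review.

```python
import math

# Hypothetical study results: (log risk ratio, standard error).
studies = [(-0.35, 0.18), (-0.10, 0.25), (-0.42, 0.15), (0.05, 0.30)]
y = [est for est, _ in studies]
w = [1 / se**2 for _, se in studies]          # inverse-variance weights

# Fixed-effect pooled estimate, used to compute heterogeneity.
pooled_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
df = len(studies) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                 # between-study variance (Tau^2)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0          # inconsistency (I^2)

# Random-effects pooled estimate and 95% confidence interval.
w_re = [1 / (se**2 + tau2) for _, se in studies]
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Report on the risk ratio scale, with the uncertainty alongside.
print(f"RR {math.exp(pooled):.2f} (95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f}); "
      f"Tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%; k = {len(studies)} studies")
```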

It is important that the results of the review are presented in a manner that ensures the reader can interpret the findings accurately. The direction of effect (increase or decrease, benefit or harm) should always be clear to the reader, and the minimal important difference in the outcome (if known) should be specified. Review authors should consider presenting results in formats that are easy to interpret. For example, standardized mean differences are difficult to interpret because they are in units of standard deviation, but can be re-expressed in more accessible formats (see Chapter 15).
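
One such re-expression, described in Chapter 15, multiplies the standardized mean difference by a representative standard deviation to recover a mean difference on a familiar scale. The sketch below uses illustrative numbers; the choice of representative SD (e.g. from a large included study) is itself a judgement to report.

```python
# Hypothetical SMD and 95% CI from a meta-analysis, re-expressed as a
# mean difference on the original outcome scale.
smd, smd_lo, smd_hi = -0.45, -0.78, -0.12
sd_ref = 8.0   # representative SD of the outcome measure (assumed value)

md, md_lo, md_hi = (x * sd_ref for x in (smd, smd_lo, smd_hi))
print(f"MD {md:.1f} points (95% CI {md_lo:.1f} to {md_hi:.1f}) on the original scale")
```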

In addition to summarizing the effects of interventions, review authors should also summarize the results of any subgroup analyses (or meta-regression), sensitivity analyses, and assessments of the risk of bias due to missing results (if performed) that are relevant to each synthesis. A common issue in reporting the results of subgroup analyses that should be avoided is the misleading emphasis placed on the intervention effects within subgroups (e.g. noting that one group has a statistically significant effect) without reference to a test for between-subgroup difference (see Chapter 10 ).

A ‘Summary of findings’ table is a useful means of presenting findings for the most important comparisons and outcomes, whether or not evidence is available for them. In a published Cochrane Review, all ‘Summary of findings’ tables are included before the Background section. A ‘Summary of findings’ table typically:

  • includes results for one clearly defined population group;
  • indicates the intervention and the comparator;
  • includes seven or fewer patient-important outcomes;
  • describes the characteristics of the outcomes (e.g. scale, scores, follow-up);
  • indicates the number of participants and studies for each outcome;
  • presents at least one estimate of the typical risk or score for participants receiving the comparator intervention for each outcome;
  • summarizes the intervention effect (if appropriate); and
  • includes an assessment of the certainty of the body of evidence for each outcome.

The assessment of the certainty of the body of evidence should follow the GRADE approach, which includes considerations of risk of bias, indirectness, inconsistency, imprecision and publication bias (see Chapter 14). Where available, the GRADE assessment should always be presented alongside each result wherever it appears (for example, in the Results, Discussion or Abstract).

A common mistake to avoid is the confusion of ‘no evidence of an effect’ with ‘evidence of no effect’. When a confidence interval includes the possibility of no effect, it is wrong to claim that it shows that an intervention has no effect or is no different from the control intervention, unless the confidence interval is narrow enough to exclude a meaningful difference in either a positive or negative direction. Where confidence intervals are compatible with either a positive and negative, or positive and negligible effect, this is factored into an assessment of the imprecision of the result through GRADE. Authors can therefore report the size and direction of the central effect estimate as observed, alongside an assessment of its uncertainty.
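
The distinction can be made concrete by comparing the confidence interval with the minimally important difference (MID). The sketch below is a simplified illustration with hypothetical thresholds; in practice this comparison feeds into the GRADE imprecision judgement rather than replacing it.

```python
def interpret(ci_lo, ci_hi, mid):
    """Classify a 95% CI for a difference (0 = no effect) against the MID."""
    if ci_lo > mid or ci_hi < -mid:
        return "evidence of an important effect"
    if -mid < ci_lo and ci_hi < mid:
        return "evidence of little to no important effect"
    return "imprecise: compatible with both important and negligible effects"

# A wide CI spanning no effect is not 'evidence of no effect':
print(interpret(-0.2, 3.5, mid=2.0))   # -> imprecise
# A narrow CI lying within +/- MID does support 'little to no important effect':
print(interpret(-0.8, 0.9, mid=2.0))   # -> evidence of little to no important effect
```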

III.3.5.4 Presenting results of studies and syntheses in tables and figures

Simple summary data for each intervention group (such as means and standard deviations), as well as estimates of effect (such as mean differences), should be presented for each study, for each outcome of interest to the review, in the Analyses supplementary material. The Analyses supplementary material has a hierarchical structure, presenting results in forest plots or other table formats, grouped first by comparison, and then for each outcome assessed within the comparison. Authors can also record in each table the source of all results presented, in particular, whether results were obtained from published literature, by correspondence, from a trials register, or from another source (e.g. clinical study report). Presenting such information facilitates attempts by others to verify or reproduce the results (Page et al 2018).
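
A rough sketch of that hierarchy as a data structure, with hypothetical comparisons, outcomes and sources, might look like this:

```python
# Results grouped by comparison, then outcome, with the source of each
# study result recorded (all names and numbers are hypothetical).
analyses = {
    "Drug A versus placebo": {
        "Pain at 6 weeks (0-10 scale)": [
            {"study": "Smith 2019", "mean_diff": -1.2, "ci": (-2.0, -0.4),
             "source": "published report"},
            {"study": "Jones 2021", "mean_diff": -0.8, "ci": (-1.9, 0.3),
             "source": "correspondence with authors"},
        ],
    },
}

for comparison, outcomes in analyses.items():
    for outcome, results in outcomes.items():
        print(f"{comparison} | {outcome} | {len(results)} studies")
```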

In addition to the Analyses supplementary material, review authors should include the main forest plots and tables that help the review address its objectives and support its conclusions as Figures and Tables within the main body of the review.

Forest plots display effect estimates and confidence intervals for each individual study and the meta-analysis (Lewis and Clarke 2001). Forest plots created in RevMan typically illustrate the following (a simple text-based sketch of item 2 appears after the list):

1. the summary statistics (e.g. number of events and sample size of each group for dichotomous outcomes) for each study;

2. point estimates and confidence intervals for each study, both in numeric and graphic format;

3. a point estimate and confidence interval for the meta-analytic effect, both in numeric and graphic format;

4. the total number of participants in the experimental and control groups;

5. labels indicating the interventions being compared and the direction of effect;

6. percentage weights assigned to each study;

7. the risk of bias in each point estimate, including the overall judgement and judgements for each domain;

8. estimates of heterogeneity (e.g. Tau²) and inconsistency (I²);

9. a statistical test for the meta-analytic effect.
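
As noted above, here is a minimal text-based sketch of item 2: study point estimates and confidence intervals plotted against the line of no effect. All study data are hypothetical, and actual Cochrane forest plots are generated in RevMan rather than by hand.

```python
# Hypothetical studies: (label, effect estimate, standard error).
studies = [("Smith 2019", -0.35, 0.18),
           ("Jones 2021", -0.10, 0.25),
           ("Lee 2020",   -0.42, 0.15)]

axis_min, axis_max, width = -1.0, 1.0, 41

def col(x):
    """Map an effect value to a character column on the plot axis."""
    x = min(max(x, axis_min), axis_max)
    return round((x - axis_min) / (axis_max - axis_min) * (width - 1))

for label, est, se in studies:
    lo, hi = est - 1.96 * se, est + 1.96 * se
    row = [" "] * width
    for c in range(col(lo), col(hi) + 1):
        row[c] = "-"                      # confidence interval
    row[col(est)] = "*"                   # point estimate
    row[col(0.0)] = "|"                   # line of no effect
    print(f"{label:<12}{''.join(row)}  {est:+.2f} ({lo:+.2f} to {hi:+.2f})")
```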

For reviews using network meta-analysis, a range of figures and table formats may be appropriate to present both the network of evidence and the results of the analysis. These may include a network diagram, contribution matrix, forest plot or rankogram (see Chapter 11 for more details).

If meta-analysis was not possible or appropriate, or if the results of some studies could not be included in a meta-analysis, the results of each included study should still be presented in the review. Wherever possible, results should be presented in a consistent format (e.g. an estimate of effect such as a risk ratio or mean difference with a confidence interval, which may be calculable from the available data even if not presented in the primary study). Where meta-analysis is not used, review authors may find it useful to present the results of studies in a forest plot without calculating a meta-analytic effect.
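
For example, a risk ratio and its confidence interval can often be calculated from the events and totals a study reports, even when the study itself presents no effect estimate. A minimal sketch with illustrative counts:

```python
import math

# Hypothetical 2x2 summary data: events and totals in each group.
events_int, total_int = 12, 80    # intervention group
events_ctl, total_ctl = 24, 82    # comparator group

rr = (events_int / total_int) / (events_ctl / total_ctl)
# Standard error of log(RR) from the usual large-sample formula.
se_log_rr = math.sqrt(1/events_int - 1/total_int + 1/events_ctl - 1/total_ctl)
ci_lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR {rr:.2f} (95% CI {ci_lo:.2f} to {ci_hi:.2f})")
```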

Where appropriate, authors might consider alternative figures to present the results of included studies. These may include a harvest plot, effect direction plot or albatross plot (see Chapter 12 for more details).

Figures other than forest plots and funnel plots may be produced in software other than RevMan and included as Figures in a Cochrane Review.

Review authors should ensure that all statistical results presented in the main review text are consistent between the text and tables or figures, and across all sections of the review where results are reported (e.g. the Abstract, Plain language summary, ‘Summary of findings’ tables, Results and Analyses supplementary material).

If authors wish to make additional data available, such as completed data collection forms or full datasets and code used in statistical analysis, these may be provided as additional files through a publicly available repository (such as the Open Science Framework) and cited in the review.

Authors should avoid presenting tables or forest plots for comparisons or outcomes for which there are no data (i.e. no included studies reported that outcome or comparison). Instead, authors should note in the text of the review that no data are available for the comparisons. However, if the review has a ‘Summary of findings’ table, the main outcomes should be included in this irrespective of whether data are available from the included studies.

III.3.6 Discussion

A structured discussion can help readers consider the implications of the review findings. Standard Discussion subheadings in Cochrane Reviews provide the structure for this section.

Summary of main results:  It is useful to provide a concise description of results for the main outcomes of the review, but this should not simply repeat text provided elsewhere. If the review has a number of comparisons this section should focus on those that are most prominent in the review, and that address the main review objectives. Review authors should avoid repeating all the results of the synthesis, but be careful to ensure that all summary statements made in the Discussion are supported by and consistent with the results presented elsewhere in the review.

Limitations of the evidence included in the review:  This section should present an assessment of how well the evidence identified in the review addressed the review question. It should indicate whether the studies identified were sufficient to address all of the objectives of the review, and whether all relevant types of participants, interventions and outcomes have been investigated. Information presented under ‘Description of studies’ will be useful to draw on in writing this part of the discussion. This section should also summarize the considerations that led to downgrading or upgrading the certainty of the evidence in the GRADE assessment. This information can be based on the explanations for downgrading decisions provided alongside the ‘Summary of findings’ tables in the review.

Limitations of the review process: It is important for review authors to reflect on and report any decisions they made that might have introduced bias into the review findings. For example, rather than emphasizing the comprehensiveness of the search for studies, review authors should consider which aspects of the design or execution of the search could have led to studies being missed. This might occur because of the complexity and low specificity of the search, because the indexing of studies in the area is poor, or because searches beyond bibliographic databases did not occur. If attempts to obtain relevant data were not successful, this should be stated. Additional limitations to consider include contestable decisions relating to the inclusion or exclusion of studies, synthesis of study results, or grouping of studies for the purposes of subgroup analysis. For example, review authors may have decided to exclude particular studies from a synthesis because of uncertainty about the precise details of the interventions delivered, measurement instrument used, or where it has not been possible to retrieve subgroup level data. If data were imputed and alternative approaches to achieve this could have been undertaken, this might also be acknowledged. It may be helpful to consider tools that have been designed to assess the risk of bias in systematic reviews (such as the ROBIS tool (Whiting et al 2016)) when writing this section.

Agreements and disagreements with other studies or reviews: Review authors should also discuss the extent to which the findings of the current review agree or disagree with those of other reviews. Authors could briefly summarize the conclusions of previous reviews addressing the same question, and if the conclusions contrast with their own, discuss why this may have occurred (e.g. because of differences in eligibility criteria, search methods or synthesis approach).

Further guidance on issues for consideration in the Discussion section is presented in Chapter 14 and Chapter 15.

III.3.7 Conclusions

There are two standard sections in Cochrane Reviews devoted to the authors’ conclusions.

Implications for practice: In this section, review authors should provide a general interpretation of the evidence so that it can inform healthcare or policy decisions. The implications for practice should be as practical and unambiguous as possible, should be supported by the data presented in the review and should not be based on additional data that were not systematically compiled and evaluated as part of the review. Recommendations for how interventions should be implemented and used in practice should not be given in Cochrane Reviews, as they may be inappropriate depending on the different settings and individual circumstances of readers. Authors can, however, help readers by identifying factors that are likely to be relevant to their decision making, such as the relative value of the likely benefits and harms of the intervention, participants at different levels of risk, or resource issues. If the review considered equity, discuss the equity-related implications for practice and policy.

Implications for research:  This section of a Cochrane Review is often used by people making decisions about future research, and review authors should try to write something that will be useful for this purpose. Implications for how research might be done and reported (e.g. the need for randomized trials rather than other types of study, for better descriptions of interventions, or for the routine collection of patient-important outcomes) should be distinguished from what future research should be done (e.g. research in particular subgroups of people, or an as-yet-untested experimental intervention). In addition to important gaps in the completeness and applicability of the evidence noted in the Discussion, any factors that led to downgrading the evidence as part of a GRADE assessment may provide suggestions to be addressed by future research. This could include addressing avoidable sources of bias or conducting larger studies. This section should also draw on what is known about any ongoing studies identified from trials register searches, and any information about ongoing or recently completed studies can be used to guide recommendations on whether new studies should be initiated. If the review considered equity, discuss the equity-related implications for research. It is important that this section is as clear and explicit as possible. General statements that contain little or no specific information, such as “Future research should be better conducted” or “More research is needed” are of little use to people making decisions, and should be avoided.

III.3.8 Additional information

A Cochrane Review should include several pieces of additional, administrative information, many of which are standard in other journals. These include acknowledgements, contributions of authors, declarations of interest, sources of support, registration and protocol details, and availability of data, code and other materials.

Acknowledgements: Review authors should acknowledge the contributions of people not listed as authors of the review, including any contributions to searching, data collection, study appraisal or statistical analysis. Written permission is required from those listed in this section.

Contributions of authors:  The contributions of each author to the review should be described. It is helpful to specify which authors were involved in each of the following tasks: conception of the review; design of the review; co-ordination of the review; search and selection of studies for inclusion in the review; collection of data for the review; assessment of the risk of bias in the included studies; analysis of data; assessment of the certainty in the body of evidence; interpretation of data, and writing of the review. Refer to the Cochrane Library editorial policy on authorship for the criteria that must be met to be listed as an author.

Declarations of interest:  All authors should report any present or recent affiliations or other involvement in any organization or entity with an interest in the review’s topic that might lead to a real or perceived conflict of interest. The dates of the involvement should be reported. For reviews whose titles were registered prior to 14 October 2020, and for updates which were underway before that date, the relevant time frame for interests begins three years before the original registration of the review with Cochrane, before the beginning of an individual author’s first involvement with the review, or before the decision to commence work on a review update. For all other reviews and updates, the relevant time frame for interests begins three years before the submission of the initial draft article, or three years before the beginning of an individual author’s first involvement. If there are no known conflicts of interest, this should be stated explicitly, for example, by writing “None known”. Authors should make themselves aware of the restrictions in place on authorship of Cochrane Reviews where conflicts of interest arise. Refer to the Cochrane Library editorial policy on conflicts of interest for full details.

Sources of support:  Authors should acknowledge grants that supported the review, and other forms of support, such as support from their university or institution in the form of a salary. Sources of support are divided into ‘internal’ (provided by the institutions at which the review was produced) and ‘external’ (provided by other institutions or funding agencies). Each source, its country of origin and what it supported should be provided. Authors should make themselves aware of the restrictions in place on funding of Cochrane Reviews by commercial sources where conflicts of interest may arise. Refer to the Cochrane Library editorial policy on conflicts of interest for full details.

Registration and protocol: Authors should provide the DOIs of protocols or previous versions of the review. If the systematic review is registered, authors should cite the review’s registration record number.

Data, code and other materials: Cochrane requires, as a condition for publication, that the data supporting the results in systematic reviews published in the Cochrane Database of Systematic Reviews be made available for users, and that authors provide a data availability statement.

Analyses and data management are preferably conducted within Cochrane’s authoring tool, RevMan, for which computational methods are publicly available. Data entered into RevMan, such as study data, analysis data, and additional information including search results, citations of included and excluded studies, and risk of bias assessments are automatically made available for download from Cochrane Reviews published on the Cochrane Library. Scripts and artefacts used to generate analyses outside of RevMan which are presented in the review should be publicly archived and cited within the review’s data availability statement. External files, such as template data extraction forms or other data sets, can be added to a disciplinary or general repository and cited within the review. Refer to the Cochrane Library editorial policy on data sharing for full details.

III.4 Chapter information

Authors: Miranda Cumpston, Toby Lasserson, Ella Flemyng, Matthew J Page

Acknowledgements: We thank previous chapter author Jacqueline Chandler, on whose text this version is based. This chapter builds on an earlier version of the Handbook (Version 5, Chapter 4: Guide to the contents of a Cochrane protocol and review), edited by Julian Higgins and Sally Green. We thank them for their contributions to the earlier chapter. We thank Sue Brennan, Rachel Churchill, Robin Featherstone, Ruth Foxlee, Kayleigh Kew, Nuala Livingstone and Denise Mitchell for their feedback on this chapter.

Declarations of interest:  Toby Lasserson and Ella Flemyng are employees of Cochrane. Matthew Page co-led the development of the PRISMA 2020 statement.

III.5 References

Beller EM, Glasziou PP, Altman DG, Hopewell S, Bastian H, Chalmers I, Gøtzsche PC, Lasserson T, Tovey D, for the PRISMA for Abstracts Group. PRISMA for Abstracts: Reporting Systematic Reviews in Journal and Conference Abstracts. PLoS Medicine 2013; 10: e1001419.

Campbell M, Katikireddi SV, Hoffmann T, Armstrong R, Waters E, Craig P. TIDieR-PHP: a reporting guideline for population health and policy interventions. BMJ 2018; 361: k1079.

Campbell M, Katikireddi SV, Sowden A, Thomson H. Lack of transparency in reporting narrative synthesis of quantitative data: a methodological assessment of systematic reviews. Journal of Clinical Epidemiology 2019; 105: 1-9.

Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, Hartmann-Boyce J, Ryan R, Shepperd S, Thomas J, Welch V, Thomson H. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ 2020; 368: l6890.

Cumpston MS, Brennan SE, Ryan R, McKenzie JE. Synthesis methods other than meta-analysis were commonly used but seldom specified: survey of systematic reviews. Journal of Clinical Epidemiology 2023; 156: 42-52.

Dwan KM, Williamson PR, Kirkham JJ. Do systematic reviews still exclude studies with "no relevant outcome data"? BMJ 2017; 358: j3919.

Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, Michie S, Moher D, Wager E. Reducing waste from incomplete or unusable reports of biomedical research. Lancet 2014; 383: 267-276.

Hoffmann TC, Oxman AD, Ioannidis JP, Moher D, Lasserson TJ, Tovey DI, Stein K, Sutcliffe K, Ravaud P, Altman DG, Perera R, Glasziou P. Enhancing the usability of systematic reviews by improving the consideration and description of interventions. BMJ 2017; 358: j2998.

Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, Ioannidis JP, Straus S, Thorlund K, Jansen JP, Mulrow C, Catala-Lopez F, Gotzsche PC, Dickersin K, Boutron I, Altman DG, Moher D. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Annals of Internal Medicine 2015; 162: 777-784.

Kneale D, Thomas J, Harris K. Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews. PLoS One 2015; 10: e0142187.

Lasserson T, Churchill R, Chandler J, Tovey D, Higgins JPT. Standards for the reporting of protocols of new Cochrane Intervention Reviews. In: Higgins JPT, Lasserson T, Chandler J, Tovey D, Churchill R, editors. Methodological Expectations of Cochrane Intervention Reviews. London: Cochrane; 2016.

Lewis S, Clarke M. Forest plots: trying to see the wood and the trees. BMJ 2001; 322: 1479-1480.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews 2015; 4: 1.

Nguyen PY, Kanukula R, McKenzie JE, Alqaidoom Z, Brennan SE, Haddaway NR, Hamilton DG, Karunananthan S, McDonald S, Moher D, Nakagawa S, Nunan D, Tugwell P, Welch VA, Page MJ. Changing patterns in reporting and sharing of review data in systematic reviews with meta-analysis of the effects of interventions: cross sectional meta-research study. BMJ 2022; 379: e072428.

Page MJ, Altman DG, Shamseer L, McKenzie JE, Ahmadzai N, Wolfe D, Yazdi F, Catalá-López F, Tricco AC, Moher D. Reproducible research practices are underused in systematic reviews of biomedical interventions. Journal of Clinical Epidemiology 2018; 94: 8-18.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hrobjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, Moher D. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021a; 372: n71.

Page MJ, Moher D, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hrobjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, McKenzie JE. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ 2021b; 372: n160.

Pollock A, Campbell P, Struthers C, Synnot A, Nunn J, Hill S, Goodare H, Watts C, Morley R. Stakeholder involvement in systematic reviews: a protocol for a systematic review of methods, outcomes and effects. Research Involvement and Engagement 2017; 3: 9.

Reeves BC, Wells GA, Waddington H. Quasi-experimental study designs series-paper 5: a checklist for classifying studies evaluating the effects on health interventions-a taxonomy without labels. Journal of Clinical Epidemiology 2017; 89: 30-42.

Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, Koffel JB, Blunt H, Brigham T, Chang S, Clark J, Conway A, Couban R, de Kock S, Farrah K, Fehrmann P, Foster M, Fowler SA, Glanville J, Harris E, Hoffecker L, Isojarvi J, Kaunelis D, Ket H, Levay P, Lyon J, McGowan J, Murad MH, Nicholson J, Pannabecker V, Paynter R, Pinotti R, Ross-White A, Sampson M, Shields T, Stevens A, Sutton A, Weinfurter E, Wright K, Young S, PRISMA-S Group. PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Systematic Reviews 2021; 10: 39.

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA, PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ 2015; 349: g7647.

Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, Davies P, Kleijnen J, Churchill R. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. Journal of Clinical Epidemiology 2016; 69: 225-234.

For permission to re-use material from the Handbook (either academic or commercial), please see here for full details.

How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

PMID: 30089228. DOI: 10.1146/annurev-psych-010418-102803

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.

Keywords: evidence; guide; meta-analysis; meta-synthesis; narrative; systematic review; theory.


Introduction to Systematic Reviews


Tianjing Li, Ian J. Saldanha and Karen A. Robinson

A systematic review identifies and synthesizes all relevant studies that fit prespecified criteria to answer a research question. Systematic review methods can be used to answer many types of research questions. The type of question most relevant to trialists is the effects of treatments and is thus the focus of this chapter. We discuss the motivation for and importance of performing systematic reviews and their relevance to trialists. We introduce the key steps in completing a systematic review, including framing the question, searching for and selecting studies, collecting data, assessing risk of bias in included studies, conducting a qualitative synthesis and a quantitative synthesis (i.e., meta-analysis), grading the certainty of evidence, and writing the systematic review report. We also describe how to identify systematic reviews and how to assess their methodological rigor. We discuss the challenges and criticisms of systematic reviews, and how technology and innovations, combined with a closer partnership between trialists and systematic reviewers, can help identify effective and safe evidence-based practices more quickly.

Keywords: systematic review; meta-analysis; research synthesis; evidence-based; risk of bias.




Li, T., Saldanha, I.J., Robinson, K.A. (2022). Introduction to Systematic Reviews. In: Piantadosi, S., Meinert, C.L. (eds) Principles and Practice of Clinical Trials. Springer, Cham. https://doi.org/10.1007/978-3-319-52636-2_194



How to Write a Systematic Review Introduction


For anyone who has experience working on any sort of study, be it a review, a thesis, or just an academic paper, writing an introduction shouldn’t be a foreign concept. It generally follows the same rules, giving readers the context of the study and explaining what the review is all about: the topic it tackles, why the study was performed, and the goals of its findings.

That said, most systematic reviews are governed by two sets of guidelines that help improve the reporting of the research: the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement and the Cochrane Handbook. Both have specifications on how to write the report, including the introduction. Look at a systematic review example, and you’ll find that it uses one of these frameworks.

PRISMA Statement vs. Cochrane Guidelines


What Is The Right Length Of A Systematic Review Introduction?

There’s no hard and fast rule about the length of a systematic review introduction. However, it’s best to keep it concise. Limit it to just two to four paragraphs, not exceeding one full page. Don’t worry, you’ll have the rest of the paper to fill with data!

What To Include In A Systematic Review Introduction

Whether or not you’re following writing guidelines, here are some pieces of information that you should include in your systematic review introduction:

Background

Give background information about the review, including what’s already known about the topic and what you’re attempting to discover with your findings.

Definitions

This is optional, but if your review is dealing with important terms and concepts that require defining beforehand for better understanding on the readers’ part, add them to your introduction.

Rationale

Delve a little into why the study topic is important and why a systematic review must be done for it. This prompts a discussion of knowledge gaps, a lack of cohesion in existing studies, and the potential implications of the review.

Research Question

Introduce your topic, specifically the research question that’s driving the study. Be sure that it’s new, focused, specific, and answerable, and that it ties together with your conclusion later on.


Systematic Reviews

Reporting results


PRISMA provides a list of items to consider when reporting results. 

  • Study selection:   Give numbers of studies screened, assessed for eligibility, & included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.
  • Study characteristics:   For each study, present characteristics for which data were extracted (e.g., study size, PICOs, follow-up period) & provide the citations.
  • Risk of bias within studies:   Present data on risk of bias of each study &, if available, any outcome level assessment.
  • Results of individual studies:   For all outcomes considered (benefits or harms), present, for each study: (a) simple summary data for each intervention group  (b) effect estimates & confidence intervals, ideally with a forest plot. 
  • Synthesis of results:   Present results of each meta-analysis done, including confidence intervals & measures of consistency.
  • Risk of bias across studies:   Present results of any assessment of risk of bias across studies.
  • Additional analysis:   Give results of additional analyses, if done (e.g., sensitivity or subgroup analyses, meta-regression).

References:

  • Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009 Aug 18;151(4):264-9, W64. doi: 10.7326/0003-4819-151-4-200908180-00135. Epub 2009 Jul 20. PMID: 19622511.  https://pubmed.ncbi.nlm.nih.gov/19622511/
  • Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med. 2009 Aug 18;151(4):W65-94. doi: 10.7326/0003-4819-151-4-200908180-00136. Epub 2009 Jul 20. PMID: 19622512. https://pubmed.ncbi.nlm.nih.gov/19622512
  • Tricco AC, Lillie E, Zarin W,  et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018 Oct 2;169(7):467-473. doi: 10.7326/M18-0850. Epub 2018 Sep 4. PMID: 30178033.  https://www.acpjournals.org/doi/epdf/10.7326/M18-0850
PRISMA Diagram Generators

  • Flow Diagram Generator: an updated version of the original PRISMA flow generator. Includes a downloadable PDF version.
  • Flow Diagram (PRISMA): contains both PDF & Word versions. From PRISMA.

Other reporting templates

See the EQUATOR network for more guidelines for reporting health research.

We can help

As a collaborator on your research team, an informationist can write the methods section of your publication. With an informationist as a co-author, you can be confident that the methods section of your paper will meet the relevant PRISMA reporting standards and be replicable by other groups.


Systematic reviews


What to include

Introduction, discussion and conclusion


In general, the writing process for a systematic review is similar to the process of writing any other kind of review.

A systematic review should provide an answer to the research question; it is not a broad overview of current trends and gaps in research. The review should show the reader how the answer was found and present the results you have identified.

A systematic review must have a detailed methodology that describes the search process and the selection process. This is why careful documentation of the methodology is important. A reader of the review should be able to critically interpret the findings: to understand why sources were chosen, how they were assessed, and how conclusions were reached.

The structure of a systematic review differs from that of a narrative or traditional literature review, which can be organised to best support your argument. A systematic review should reflect the stages outlined in the protocol, and reporting guidelines should be followed that identify what to include in each section of the review. One such standard approach is PRISMA.

Although much time is invested in developing a search strategy and screening results, a systematic review is valued for its critical reflection on and interpretation of the findings. Focus on analysing, not summarising, and use a critical analysis tool to assess the studies.

Your systematic review needs to tell a story, and it needs to clearly articulate how it provides a meaningful and original advancement of the field.

The abstract provides an overview of the systematic review. It usually covers the following:

  • A brief background (what we know and often the gap that the review will fill)
  • The aim or hypothesis
  • Summary of methods
  • Summary of results
  • Summary of conclusion (and sometimes recommendations).

Note that these points represent the general ‘story line’ seen in most systematic reviews: What we know (and perhaps what the gap is); what we set out to do; what we did; what we found; what this means.

The introduction provides an overview of the systematic review and enough contextual information for the reader to make sense of the remainder of the report. It usually covers the following:

  • Background information to contextualise the review (what we already know about this area)
  • Definitions of key terms and concepts if needed
  • The rationale for the study (often in terms of a gap in knowledge that needs to be filled, a lack of agreement within the literature that needs to be resolved, or the potential implications of the findings)
  • The aims and/or objectives (optional)
  • The research question/s emanating from the rationale
  • Additional information (Optional)

Note, however, that these points are not always in this order. Some writers prefer to begin with the research questions, followed by the context, building to the rationale.

The methods section can be divided into two main parts.

The first section describes how the literature search was conducted. This section may contain any of the following information: 

  • The databases searched and whether any manual searches were completed 
  • How search terms were identified 
  • What terms were employed in the key word searches 
  • Whether particular sections of articles were examined during the search and collection stage, e.g. titles, abstracts, or tables of contents (note: the information in these sections may have informed the selection process)

The second  section discusses the criteria for including or excluding studies. This section may include any of the following information:

  • Your selection criteria
  • How you identified relevant studies for further analysis 
  • What articles you reviewed 
  • What particular areas you looked at in the selected articles, e.g. a relationship or association between two things (such as a genetic predisposition and a drug); the outcome measures of a health campaign, drug treatment, or clinical intervention; or the differing impact of a particular drug or treatment.

Details about the kind of systematic review undertaken, e.g. a thematic analysis, might also be mentioned in the methods section.

Broadly speaking, the results section needs to present everything you have done so far. This can include any of the following:

  • the databases used for the search
  • the number of hits
  • how articles were selected, by title, abstract, table of contents, or other procedures
  • an overview of the kinds of studies selected for the review, i.e. the types of methodologies or study designs used
  • where the trials were conducted
  • treatment duration
  • details about participants
  • similarities and differences in the way data were measured
  • similar or different approaches to the same treatment or condition
  • risk of bias across studies
  • the kinds of relationships or associations demonstrated by the studies
  • the frequency of positive or adverse effects of a particular treatment or drug
  • the number of studies that found a positive correlation between two phenomena or a causal relationship between two variables

Often, researchers will include tables in the results section or an appendix to provide an overview of the data found in the studies. Remember, tables in the results section need to be explained fully.

A primary function of your discussion and conclusion is to help readers understand the main findings and implications of the review.

The following elements are commonly found in the discussion and conclusion sections. Note that the points listed are neither mandatory nor in any prescribed order.   

Discussion:

  • Summary of main findings
  • Interpretation of main findings (don’t repeat results)
  • Strengths and weaknesses
  • Comparison with previous review findings or general literature
  • The degree to which the review answers the research question
  • Whether the hypothesis was confirmed
  • Limitations (e.g. biases, lack of methodological rigour or weak evidence in the articles)

Conclusion:

  • Summary of how it answers the research question (the ‘take home’ message)
  • Significance of the findings
  • Reminder of the limitations
  • Implications and recommendations for further research.

Separate or combined?

A key difference between a discussion and a conclusion relates to how specific or general the observations are. A discussion closely interprets results in the context of the review. A conclusion identifies the significance and the implications beyond the review. Some reviews present these as separately headed sections. Many reviews, however, present only one section using a combination of elements. This section may be headed either Discussion or Conclusion.

Source: RMIT University Library, Systematic reviews guide, https://rmit.libguides.com/systematicreviews (CC BY-NC).
Open access. Published: 01 April 2024.

The impact of housing prices on residents’ health: a systematic review

Ashmita Grewal 1, Kirk J. Hepburn 1, Scott A. Lear 1, Marina Adshade 2 & Kiffer G. Card 1

BMC Public Health, volume 24, Article number: 931 (2024)


Background

Rising housing prices are becoming a top public health priority and are an emerging concern for policy makers and community leaders. This report reviews and synthesizes evidence examining the association between changes in housing prices and health outcomes.

Methods

We conducted a systematic literature review by searching the SCOPUS and PubMed databases for keywords related to housing prices and health. Articles were screened by two reviewers for eligibility; inclusion was restricted to original research articles measuring changes in housing prices and health outcomes, published prior to the end of June 2022.

Results

Among 23 eligible studies, we found that changes in housing prices were heterogeneously associated with physical and mental health outcomes, with multiple mechanisms contributing to both positive and negative health outcomes. Income level and homeownership status were identified as key moderators, with lower-income individuals and renters experiencing negative health consequences from rising housing prices. This may have resulted from increased stress and financial strain among these groups. Meanwhile, the economic benefits of rising housing prices were seen to support health for higher-income individuals and homeowners, potentially due to increased wealth or the perception of wealth.

Conclusions

Based on the associations identified in this review, it appears that the potential health gains associated with rising housing prices are inequitably distributed. Housing policies should consider the health inequities borne by renters and low-income individuals. Further research should explore mechanisms and interventions to reduce uneven economic impacts on health.


Introduction

In contemporary society, the structures we live in, as well as our legal relationships to these structures, are intertwined with our fundamental senses of self and belonging [ 1 , 2 , 3 ]. For decades, homeownership has been recognized as a core measure of success [ 4 , 5 ]. Recognizing the importance of housing, studies have variously examined the effects of wide-ranging housing-related factors on health, including housing quality, overcrowding, neighbourhood deprivation, social cohesion, housing density, housing suitability or sufficiency, and neighbourhood socioeconomic status [ 6 , 7 ]. While these effects continue to be explored, it is generally agreed that housing is a fundamental determinant of health [ 7 ], which broadly exerts impacts on health through a variety of mechanisms.

Indeed, housing-related health effects arise from specific housing conditions, as well as from the legal conditions that define our relationships to these spaces and our emotional attachments to them. For example, living in and owning a home can create access to opportunities that further bolster health [8]. Similarly, housing-related factors, such as indebtedness, mortgage stress, and credit problems, can cause severe mental health problems, depression, and suicide ideation [9, 10]. With these factors in mind, people in most countries face numerous barriers to securing their right to a home [5, 11], and a wide array of policies have been proposed and implemented to address these barriers [12, 13, 14]. In addition to these factors, the location of a home, the quality of a building, and the neighbourhood context in which a home exists are also hugely influential to health [7, 15, 16].

In conceptualizing these varied mechanisms, it is important to consider both the direct and the indirect pathways through which the relationship between housing and health manifests. Direct effects predominantly emerge from psycho-physiological stress responses: elevated housing costs can induce chronic stress, leading to mental health conditions such as anxiety and depression, and to other health problems [17]. Indirectly, escalating housing prices exert economic pressures that limit individuals' capacity to allocate resources towards health-promoting activities and necessities. This economic strain can result in compromised nutrition, reduced access to healthcare services, and a diminished ability to manage chronic conditions, thereby exacerbating health disparities. Moreover, the financial burden can lead to other lifestyle changes that further impair physical and mental well-being, such as increased substance use or reduced physical activity.

Despite these effects being documented in previous studies, there are no systematic reviews on the impact of rising housing prices on health. The present review aims to examine the effect of housing prices on health by considering whether changes in housing market prices impact the health of residents living in an area. To accomplish this aim, we conducted a systematic review. This review is especially timely, since housing prices have risen at an alarming rate in the past five years.

Methods

Article search

The first step in our multi-stage systematic literature review was to manually identify relevant articles through a rudimentary search of SCOPUS and PubMed (Appendix B). We then created a list of keywords for our search, aimed at identifying articles that measured changes in housing prices and health impacts; Appendix A outlines how we identified these keywords and provides the complete list. After conducting the keyword search in PubMed and SCOPUS, duplicates were removed and the remaining articles were uploaded to Rayyan, an online tool that aids in systematic reviews [18]. To assess whether our search was comprehensive, AG confirmed that the articles identified in the rudimentary initial search were also captured by the keyword search.

For the purposes of this literature review, we define health using the language provided by the World Health Organization (1948): “health is a complete state of mental, physical, and social well-being, and not merely the absence of disease.” As such, no additional inclusion or exclusion criteria were applied to specific health conditions. We felt this was appropriate because this is the first literature review on this topic and because, after a review of the included articles, it was apparent that a wide variety of health outcomes have been considered. Furthermore, the biopsychosocial models of health that inform our view that housing prices have direct and indirect effects on health underscore that diverse and nuanced pathways across various mental and physical domains of health are likely important to consider.

Using Rayyan, AG and LW reviewed the titles of each manuscript to remove articles that were clearly not relevant to this review [18]. Applying the inclusion and exclusion criteria resulted in 21 directly relevant articles. AG and LW then searched the reference lists of these 21 articles to identify any articles missed by the database search; these were added to our final inclusion list, creating a total of 23 included articles.
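The authors' full keyword list appears in Appendix A, which is not reproduced here, so the terms below are hypothetical placeholders rather than the actual search strategy. As a minimal sketch, a two-concept search of this kind is typically built by crossing a block of housing-price terms with a block of health terms:

    # Hypothetical illustration only: composing a two-concept boolean query.
    # These terms are placeholders, not the Appendix A keyword list.
    housing_terms = ['"housing price*"', '"house price*"', '"home value*"', '"housing market"']
    health_terms = ['"health"', '"mental health"', '"well-being"', '"mortality"']

    query = "({}) AND ({})".format(" OR ".join(housing_terms), " OR ".join(health_terms))
    print(query)
    # ("housing price*" OR "house price*" OR ...) AND ("health" OR "mental health" OR ...)

The same query string would then be adapted to each database's syntax (PubMed, SCOPUS) before deduplication and screening.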

Data extraction

Data were extracted by AG and LW from each of the identified and included articles and AG re-reviewed the data extraction to verify accuracy. Extracted variables included: first author name, year of publication, years of data collection, sample size, location(s) of study, study design (e.g., case control, cohort, cross-sectional, serial cross-sectional study), analysis type (e.g., regression), outcome, explanatory factor, confounders/mediators/moderators, and a summary of primary findings (including effect size measures). This data extraction is provided as Table  1 .

Risk of bias assessment

During the data extraction process, we conducted an assessment based on the Joanna Briggs Institute Critical Appraisal Tools [42]. Each study was classified according to its study design and rated using the appropriate tool for that design. However, despite varying methodological quality, no studies were excluded based on the risk of bias assessment, as there were no clear sources of systematic bias with sufficient likelihood of challenging the conclusions of the source studies.

Narrative synthesis

During the data extraction and risk of bias assessment phases, AG and LW recorded general notes on each of the studies. These notes, along with the extracted information, were used to construct a narrative synthesis of the evidence. This process was guided by Popay et al.’s [ 43 ] Guidance for Narrative Synthesis in Systematic Reviews. A narrative approach was selected to allow for an examination of the potential complexity inherent in the synthesis of findings across contexts, time periods, and populations to provide a nuanced discussion of what roles housing and rental markets might play in shaping health, with attention to both outcomes and potential mechanisms. Findings within study classes were reviewed to determine potential mediation and moderation. These explorations informed the development of a list of key points used to organize the presentation of our results. We then integrated and contextualized these findings with those from other relevant (though excluded) studies identified through our review process and from the texts of the included articles.

Results

Included studies

Our keyword search returned 6,180 articles. Of these, 5,590 were removed based on review of the abstract and title, as they were not directly related to our review topic (i.e., they did not measure changes in housing prices and/or health outcomes). The remaining articles were reviewed in full text, and a final list of 26 articles was considered for inclusion. However, five articles could not be retrieved (even after emailing the original authors), leaving us with 21 articles. The reference lists and bibliographies of these 21 included articles were then screened, and two additional articles were identified, resulting in a final sample of 23 articles. Figure 1 shows the flow diagram for included studies, and these studies are listed in Appendix B.

Figure 1: PRISMA systematic review flow diagram
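The screening flow reported above can be traced arithmetically; the short sketch below uses only the counts stated in the preceding paragraph:

    # Tracing the PRISMA flow counts reported in the text.
    identified = 6180                 # records returned by the keyword search
    excluded_on_title_abstract = 5590
    full_text_reviewed = identified - excluded_on_title_abstract   # 590
    met_criteria = 26                 # considered for inclusion after full-text review
    not_retrievable = 5               # full texts unavailable, even after emailing authors
    from_search = met_criteria - not_retrievable                   # 21
    from_reference_lists = 2
    final_sample = from_search + from_reference_lists
    assert final_sample == 23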

Dates and locations of studies

A full description of the studies is included in Table 1. Studies were published from 2013 to 2022. Ten studies were from East Asia, eight from the United States, three from Europe, one from Australia, and one included nine countries (among them France, Japan, the Netherlands, Spain, Switzerland, Sweden, the United Kingdom, and the USA).

Study design

Of the included studies, ten had a longitudinal study design, and thirteen studies were serial cross-sectional studies. Studies examined the effect of housing prices on health over time by repeatedly surveying a specific geographical area or population. One study included both qualitative and quantitative data collection.

Outcome variable measurement

Most studies compared multiple outcomes. Seven studies focused on mental health as the outcome variable—utilizing various measures, including self-rated mental health, standardized scales for depression or anxiety, and receipt of pharmaceutical prescriptions [ 22 , 32 , 33 , 35 , 40 , 41 ]. Nine studies analyzed the impact of housing prices on physical health—utilizing various measures of physical health, including objective assessments of physical health (e.g., body mass), self-rated physical health assessments, reports of specific health conditions (e.g., COVID-19), reported health behaviours (e.g., alcohol use, smoking), and mortality ( [ 24 , 28 , 29 , 37 , 44 , 30 , 36 , 38 , 39 ]). Seven studies included both physical and mental health measures as their outcome variable [ 19 , 20 , 23 , 26 , 27 , 34 , 45 ].

Explanatory variable measurement

Housing prices were measured using many different types of data, including house price index, self-reported housing price (extracted from surveys), and average market price. Many studies used house price index as a measure of housing prices [ 19 , 21 , 22 , 30 , 38 , 39 , 41 ]. Zhang & Zhang [ 33 ] included self-reported housing price. Alternatively, many studies examined housing market prices using existing survey data [ 20 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 31 , 32 , 34 , 35 , 36 , 37 , 40 ].

Key findings

Studies included in our review reported a plurality of results when testing the relationships between housing prices and health. As shown in Tables 2 (physical health) and 3 (mental health), the included articles reported mixed findings across the outcomes explored. Given the heterogeneity of findings regarding the associations between housing prices and health outcomes, several authors examined potential moderators and mediators in an attempt to understand the mechanisms at play. These studies examined the role of wealth effects (by comparing effects on homeowners and renters), socioeconomic status (e.g., income level), and broader economic forces (e.g., area-level improvements). While keeping these pathways in mind, there are likely other explanations beyond those explored; however, these appear to be the dominant frameworks used to understand the effects in our included studies.

Wealth effects

The first major pathway has been described as a “wealth effect”, which produces different effects for homeowners and renters [19, 20, 23, 25, 27, 33, 35, 37, 45]. For example, Hamoudi & Dowd [37] report that homeowners living in areas with steep price increases perceive this as an increase in their overall wealth, resulting in positive health outcomes (not observed for renters). Similarly, Zhang & Zhang [33] show that increases in house prices have a positive effect on homeowners' subjective well-being. De & Segura-Escano [30] specifically note that price depreciation causes homeowners to experience feelings of a loss of wealth, leading to increases in alcohol consumption. Among studies that fail to show a wealth effect, Daysal et al. [29] show that rising prices in Denmark do not impact households, owing to the buffering effects of government supports. Conversely, when examining the effects among renters, Wang & Liang [31] argue that rising housing prices have a detrimental “strain” effect, which is also observed in several studies included in our review [25, 27, 38, 39, 45].

Income level

In addition to the wealth and strain effects illustrated through studies among homeowners and renters, many studies also examined the mediating effects of income [19, 20, 22, 28, 29, 30, 32, 33, 35, 38, 39, 40]. Several of these studies show that housing unaffordability constrains spending and that low-income individuals are particularly impacted [22, 24, 33, 38]. For example, Wong et al. [39] show that rising housing prices lead to reduced fruit consumption. However, results also show positive impacts for low-income homeowners, as exemplified by work showing that low-income homeowners are more sensitive to housing price gains [38, 40].

Broader economic forces

In considering both mechanisms described above, authors of included studies have also considered whether housing prices are merely an indicator of broader economic trends. The most common strategy for accounting for this has been to include other indicators that might capture area-level improvements. Indeed, most studies controlled for individual characteristics, such as age, gender, marital status, years of education, race/ethnicity, and employment status [20, 23, 24, 25, 26, 27, 29, 30, 32, 34, 35, 37, 39, 40, 41, 45], as well as a variety of economic factors, including individual income, country-level median income, and local area characteristics [19, 22, 23, 24, 25, 26, 27, 28, 30, 32, 36, 37, 38, 39, 40, 44, 45]. These factors are important to control for because rising housing prices can indicate a growing economy in which there are substantial improvements to neighbourhoods and communities [27, 29, 33, 38, 40]. As such, observed improvements in health could simply arise from broader economic benefits rather than being specifically attributable to housing prices [31, 33]. Generally speaking, however, studies showed independent effects of housing price or value even after controlling for local area-level improvements and wider economic conditions [19, 27, 38].

Strength of effects

Given the heterogeneity in the direction of effects, the lack of standardization in the reporting of effect sizes from study to study, differences in the measurement of exposure and outcome variables, and variation in the inclusion of mediators, moderators, and confounders, we did not conduct a meta-analysis to describe the effect size of housing prices on health. However, housing prices appear to exert an influence on health and wellbeing, with statistically significant effects across various health-related outcomes (see Table 1 for the range of effect measures). The effects generally appear to be smaller for specific health conditions and greater for more subjective and broader definitions of health (e.g., self-rated health). Of course, at a population level, even relatively small effect sizes may pose a considerable challenge. For example, Xu & Wang [36] report that a 10% increase in housing prices is associated with a 6.5% increase in the probability of reporting a chronic disease, a relatively small increase at the person level, but one that, when scaled, could easily pose a considerable burden to the health system. In summary, further careful measurement and methodological refinement are needed to quantify the effects of housing prices on various health conditions. For any given health condition, this will require multiple well-designed studies across place and time. Such replication is particularly important given the observed sensitivity of findings to the inclusion of confounders, moderators, and mediators.

Discussion

Primary findings

While examining whether changes in housing prices are associated with changes in health, we recognize that it is difficult to establish a directional and causal relationship between housing prices and health. This is particularly true given that health may increase opportunities for home ownership and economic success [3, 46, 47, 48, 49]. Nevertheless, given the wealth of literature highlighting housing affordability as a key determinant of health [7, 9, 10], it is reasonable to anticipate that rising housing prices would be associated with worse health outcomes among individuals who do not own housing. However, based on our analyses of the studies included in this review, the relationship between housing prices and health is complex and nuanced, with a significant degree of heterogeneity across outcomes and populations.

First, this review illuminates that changing housing prices impact different people differently, depending on, for example, income level, gender, and/or homeownership status [19, 22, 25]. The negative impact of housing prices on the health of renters and low-income individuals could be due to the existential angst of being excluded from home ownership, which is often considered an important indicator of social class and success [8]. However, it could also be due to the cost effect created by rising house prices, which raises low-income owners' and renters' cost of living [31]. Additionally, renting may be associated with lower neighbourhood tenure, especially when individuals are priced out of a neighbourhood [8]. As a result, they may experience deleterious health effects associated with loneliness, social isolation, lack of neighbourhood cohesion, and community disconnectedness [8]. Likewise, the positive effect observed among homeowners and high-income individuals may be explained by increases in psychological safety leading to changes in health behaviors. For example, knowing they have invested in a home that will support them or their heirs financially, people may be better able to focus on their well-being. Homeowners may also be able to directly leverage the value of their home to gain access to additional capital and investment opportunities, which could support increased financial wellbeing [8].

The effects of housing on health can be conceptualized as arising from two sources. Rising “cost effects” (the increased costs of houses and the costs passed on to tenants) appear to be inversely correlated with health, while “wealth effects” (the contributions of housing prices to personal wealth) contribute positively to health (for example, for homeowners and investors whose wealth increases due to the rising cost of housing). The balance of these effects differs depending on their unique impact on individuals, with lower-income people and renters more strongly impacted by cost effects, and higher-income people and homeowners more strongly impacted by wealth effects.

In considering these effects, we note that there is likely considerable geographic, temporal, and contextual variation in the health effects of rising housing prices. For example, rising housing prices may occur alongside neighbourhood improvements (or degradation) and economic booms (or recessions) [ 45 ], which themselves are associated with improvements (or deterioration) to health [ 8 ]. As such, the presence of these factors may obscure or interact with the gains to health. Similarly, variations across cultures and countries may change how individuals internalize the rising housing prices [ 50 ], causing them to experience greater or lesser distress in reaction to rising prices.

Limitations of included studies and directions for future research

Given these two primary mechanisms, this review highlights several opportunities for improving the literature. First, future studies should give more careful attention to how moderators and mediators are conceptualized. For example, “home-owners” are hardly a homogeneous class of individuals: some own their homes outright and others are paying mortgages that offer varying levels of security (e.g., fixed vs. variable mortgages, 5-year vs. 30-year mortgages) [8]. Second, a broader range of effect moderators should be explored. For example, few studies specifically examined the health effects of rising home prices on vulnerable populations, including young adults and first-time home buyers, who may be especially disadvantaged by rapidly growing housing prices [50]. Similarly, isolating effects as arising from economic, legal, environmental, and social pathways can help identify strategies for mitigating health harms; for example, it may be important to understand whether changes to neighbourhood environments drive health harms, as opposed to changes in personal financial status. Third, more within-person studies are needed to understand the potential mechanisms and situational factors that promote or mitigate the health effects of rising housing prices. Along with the use of appropriate, theoretically informed moderators, isolating within-person effects can help us better quantify the effects of interest to inform policies and prevention strategies. Fourth, longer follow-up times may allow for a better understanding of the time horizons of the effects explored. Indeed, it is possible that rising housing prices could have differential effects on the health of a population in the short versus the long term. This is particularly important given the interaction between housing prices (which may act as a price signal for investments) and other economic factors with strong potential to improve health [8]. Fifth, the studies used a variety of measures for housing prices and health outcomes, which varied in quality. For example, health outcomes were primarily measured using self-reported measures [19, 20, 23, 25, 26, 27, 30, 31, 33, 34, 35, 37, 38, 39, 40, 41], which may be highly sensitive to bias, given the likelihood that individuals might report worse health when they are unhappy with economic factors. Measurement of outcomes can be improved by leveraging administrative and other data sources. Sixth, it can be difficult to link area-level and individual-level factors, particularly in the context of limited cross-sectional studies or in longitudinal studies with only a few follow-up points. Likewise, many cohort-based studies have limited geographic coverage or insufficient temporal scope. As such, longer, larger, and wider studies are needed to fully ascertain the relationships under consideration.

Implications of findings

Although further research is required to overcome the limitations mentioned, existing evidence indicates that increases in housing prices may significantly influence health outcomes. Future studies should aim to exclude alternative explanations, examine the effects over longer periods, and establish consistent measurement methods to more accurately predict the impact of housing prices on health. The findings of those studies will aid policymakers in creating strategies and frameworks that respond to the health implications of rising housing prices. Such approaches could be facilitated through frameworks such as the WHO's Health in All Policies (HiAP) approach, which advocates for the inclusion of health and social impacts among the criteria used throughout decision-making processes [51]. Many studies in this review support this view and describe their work as having important implications for housing and health policy [19, 20, 21, 24, 26, 27, 28, 29, 31, 34, 35, 37, 38, 39]. For example, Yuan et al. [26] note the importance of directing government support and housing subsidies towards vulnerable groups, though these should be packaged with other policies [26]. Such supports can apparently buffer against the negative effects of rising housing prices by creating a safety net that reduces the psychosocial and cognitive effects associated with economic changes in one's personal circumstances. Arcaya et al. [21] also recommend that governments investigate establishing more mental health facilities in areas where housing price fluctuations impact people's mental health, but warn that economic development that allows for greater investment in health infrastructure can also lead to increases in housing prices.

Of course, other types of interventions may also be warranted, including broader financial interventions (e.g., direct loans [52]), interventions that promote community, neighborhood, and social cohesion among residents [53], or those that aim to change how people value home ownership [26]. With respect to this final option, communities should consider whether renting may in fact be a desirable outcome for some individuals, and therefore promote a culture in which individuals recognize the variety of investment opportunities available to them rather than being overly focused on a traditional model of investment [26, 32]. For example, Zhang & Zhang [33] write that homeowners should be provided financial and economic knowledge to better manage wealth gains; this could be taken a step further to include educating people on the dangers of the commodification of housing, to prevent an overreliance on housing wealth gains.

Limitations of our review

In addition to the limitations specific to studies included in our review, our review itself also has several limitations. First, while we trained two reviewers to conduct article screening, assessed inter-rater reliability as greater than 80%, and adjudicated conflicts with the help of a third reviewer, it is possible that some articles that could have been included were excluded, given the many different forms of outcome and explanatory variable measurement. Second, while we searched multiple databases, used comprehensive keywords for our search, and conducted manual searches of the reference lists, it is possible that there were studies we missed and failed to include. That said, it is unlikely that the exclusion of these articles would change our conclusion that the literature is currently mixed and that there is a need to flesh out the mechanisms and moderators that link housing prices to health. Third, we were not able to conduct a meta-analysis, and our numeric reports of the number of studies with each characteristic should not be treated as a meta-analysis. Rather, these findings and analyses should be interpreted as a descriptive analysis that highlights the significant heterogeneity of findings and critical inconsistencies in the mechanisms, mediators, and moderators that give rise to these associations. Fourth, we did not exclude any studies based on article quality because we did not find sufficient heterogeneity in the quality of the observational studies to merit exclusion according to variable inclusion, study design, or sampling method. In other words, we sought to avoid introducing bias by arbitrarily excluding articles, particularly given that the number of articles captured here was already relatively low (at least given the diversity of methods, measures, and populations captured). However, future reviews might consider narrower inclusion and exclusion criteria when a sufficient body of literature is available for a given outcome. For example, limiting analyses to only well-designed cohort studies might support a more careful selection of articles. Finally, we note that we included studies across a wide variety of health outcomes. While this was done to maximize inclusion (given the wide heterogeneity of measures used), we acknowledge that future research might be strengthened by studying specific pathways linking housing prices to specific health and social outcomes. Such detailed research is greatly needed not only to highlight the relevance of housing prices to health but also to identify strategies for mitigating the potential harms of rising housing prices.

Conclusions

Our review shows that there are complex relationships between housing prices and health, with studies arriving at mixed conclusions across a wide variety of health outcomes and populations. There is as yet insufficient evidence for a causal relationship, but it appears that, if such a relationship exists, it likely differs according to homeownership status, income level, and the broader economic and structural forces in play, including the level of economic support provided by governments to low-income individuals. Future research should explore these pathways, moderators, and confounders using long-term, geographically diverse cohort studies that account for a broad diversity of causal or alternative mechanisms. Such research will allow for a more nuanced understanding of health and health inequities related to rising housing prices.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary information files.

Saunders P. The meaning of ‘home’ in contemporary english culture. Hous Stud. 1989;4:177–92.


Shields M. Community belonging and self-perceived health. Health Rep. 2008;19:51–60.


Smith A. Exploring the interrelationship between the meanings of homeownership and identity management in a liquid society: a case study approach. [Doctoral dissertation, Keele University] Keele Business School. 2017.

Després C. The meaning of home: literature review and directions for future research and theoretical development. Locke Sci Publ Co Inc. 1991;8:96–115.

Goodman LS, Mayer C. Homeownership and the American Dream. J Econ Perspect. 2018;32:31–58.

Kawachi I. Social Capital and Community Effects on Population and Individual Health. Ann N Y Acad Sci. 1999;896:120–30.


Rolfe S, Garnham L, Godwin J, Anderson I, Seaman P, Donaldson C. Housing as a social determinant of health and wellbeing: developing an empirically-informed realist theoretical framework. BMC Public Health. 2020;20:1138.


Rohe WM, Van Zandt S, McCarthy G. Home Ownership and Access to Opportunity. Hous Stud. 2002;17:51–61.

Turunen E, Hiilamo H. Health effects of indebtedness: a systematic review. BMC Public Health. 2014;14:489.

Waldron R, Redmond D. “We’re just existing, not living!” Mortgage stress and the concealed costs of coping with crisis. Hous Stud. 2017;32:584–612.

Lemphers B. Barriers & Opportunities; A Meta-Analysis of Affordable Rental Housing In Canada. Dalhousie University; 2017.  https://cdn.dal.ca/content/dam/dalhousie/pdf/faculty/architecture-planning/school-of-planning/pdfs/studentwork/MPlanIndependentStudy/Lemphers-Barriers%20and%20Opportunity-A%20Meta-Analysis%20of%20Affordable%20Rental%20Housing%20in%20Canada-2017.pdf .

Canadian Mortgage and Housing Association. Demand or Supply Side Housing Assistance?. 2019. https://www.cmhc-schl.gc.ca/professionals/housing-markets-data-and-research/housing-research/research-reports/accelerate-supply/research-insight-updating-debate-between-demand-supply-side-housing-assistance .

Glaeser EL. Rethinking the Federal Bias Toward Homeownership. SSRN Electron J. 2011. https://doi.org/10.2139/ssrn.1914468 .

Pomeroy S. Optimizing Demand and Supply Side Housing Assistance Programs. 2016. Carleton University Centre for Urban Research and Education (CURE). https://www.focus-consult.com/wp-content/uploads/Policy-Brief_Optimizing-Supply-and-Demand-Measures-2016.pdf .

Ige-Elegbede J, Pilkington P, Orme J, Williams B, Prestwood E, Black D, et al. Designing healthier neighbourhoods: a systematic review of the impact of the neighbourhood design on health and wellbeing. Cities Health. 2022;6:1004–19.


Pérez E, Braën C, Boyer G, Mercille G, Rehany É, Deslauriers V, et al. Neighbourhood community life and health: A systematic review of reviews. Health Place. 2020;61: 102238.

Çalıyurt O. The Mental Health Consequences of the Global Housing Crisis. Alpha Psychiatry. 2022;23:264–5.

Rayyan. 2022. https://www.rayyan.ai

Yue D, Ponce NA. Booms and Busts in Housing Market and Health Outcomes for Older Americans. Innov Aging. 2021;5:igab012.

Chen N, Shen Y, Liang H, Guo R. Housing and Adult Health: Evidence from Chinese General Social Survey (CGSS). Int J Environ Res Public Health. 2021;18:916.


Arcaya MC, Nidam Y, Binet A, Gibson R, Gavin V. Rising home values and Covid-19 case rates in Massachusetts. Soc Sci Med. 2020;265: 113290.

Lee CY, Chen PH, Lin YK. An Exploratory Study of the Association between Housing Price Trends and Antidepressant Use in Taiwan: A 10-Year Population-Based Study. Int J Environ Res Public Health. 2021;18:4839.

Fichera E, Gathergood J. Do Wealth Shocks Affect Health? New Evidence from the Housing Boom. Health Econ. 2016;25:57–69.

Kim I. Spatial distribution of neighborhood-level housing prices and its association with all-cause mortality in Seoul, Korea (2013–2018): A spatial panel data analysis. SSM - Popul Health. 2021;16: 100963.

Hamoudi A, Dowd JB. Housing Wealth, Psychological Well-being, and Cognitive Functioning of Older Americans. J Gerontol Ser B. 2014;69:253–62.

Yuan W, Gong S, Han Y. How does rising housing price affect the health of middle-aged and elderly people? The complementary mediators of social status seeking and competitive saving motive. PLoS ONE. 2020;15: e0243982.

Atalay K, Edwards R, Liu BYJ. Effects of house prices on health: New evidence from Australia. Soc Sci Med. 2017;192:36–48.

Bao W, Tao R, Afzal A, Dördüncü H. Real Estate Prices, Inflation, and Health Outcomes: Evidence From Developed Economies. Front Public Health. 2022;10: 851388.

Daysal NM, Lovenheim M, Siersbæk N, Wasser DN. Home prices, fertility, and early-life health outcomes. J Public Econ. 2021;198:104366.

De PK, Segura-Escano R. Drinking during downturn: New evidence from the housing market fluctuations in the United States during the Great Recession. Econ Hum Biol. 2021;43: 101070.

Wang H-Q, Liang L-Q. How Do Housing Prices Affect Residents’ Health? New Evidence From China. Front Public Health. 2022;9: 816372.

Wei G, Zhu H, Han S, Chen J, Shi L. Impact of house price growth on mental health: Evidence from China. SSM - Popul Health. 2021;13: 100696.

Zhang C, Zhang F. Effects of housing wealth on subjective well-being in urban China. J Hous Built Environ. 2019;34:965–85.

Feng Y, Nie C. The effect of housing price on health: new evidence from China. Appl Econ Lett. 2022;29:1439–46.

Chun H. Do housing price changes affect mental health in South Korea? Ethiop J Health Dev. 2020;34:48–59.


Xu Y, Wang F. The health consequence of rising housing prices in China. J Econ Behav Organ. 2022;200:114–37.

Hamoudi A, Dowd JB. Physical health effects of the housing boom: quasi-experimental evidence from the health and retirement study. Am J Public Health. 2013;103:1039–45.

Sung J, Qiu Q. The Impact of Housing Prices on Health in the United States Before, During, and After the Great Recession. South Econ J. 2020;86:910–40.

Wong ES, Oddo VM, Jones-Smith JC. Are Housing Prices Associated with Food Consumption? Int J Environ Res Public Health. 2020;17:3882.

Ratcliffe A. Wealth Effects, Local Area Attributes, and Economic Prospects: On the Relationship between House Prices and Mental Wellbeing. Rev Income Wealth. 2015;61:75–92.

Joshi NK. Local house prices and mental health. Int J Health Econ Manag. 2016;16:89–102.

Joanna Briggs Institute. Critical Appraisal Tools. 2022. https://jbi.global/critical-appraisal-tools

Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, Britten N. Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. 2006. https://www.lancaster.ac.uk/media/lancaster-university/content-assets/documents/fhm/dhr/chir/NSsynthesisguidanceVersion1-April2006.pdf.

Arcaya MC, Nidam Y, Binet A, Gibson R, Gavin V. Rising home values and Covid-19 case rates in Massachusetts. Soc Sci Med. 2020;265:113290.

Wang H-Q, Liang L-Q. How Do Housing Prices Affect Residents’ Health? New Evidence From China. Front Public Health. 2022;9:1–11.

Shlay AB. Low-income Homeownership: American Dream or Delusion? Urban Stud. 2006;43:511–31.

Boehm TP, Schlottmann AM. The dynamics of race, income, and homeownership. J Urban Econ. 2004;55:113–30.

Fazel S, Khosla V, Doll H, Geddes J. The Prevalence of Mental Disorders among the Homeless in Western Countries: Systematic Review and Meta-Regression Analysis. PLOS Med. 2008;5: e225.

McConnell ED. Who has Housing Affordability Problems? Disparities in Housing Cost Burden by Race, Nativity, and Legal Status in Los Angeles. Race Soc Probl. 2013;5:173–90.

Ding Y, Chin L, Lu M, Deng P. Do high housing prices crowd out young professionals?—Micro-evidence from China. Econ Res-Ekon Istraživanja. 2022;36(2):1–19.


World Health Organization. 2023. https://www.who.int/about/accountability/governance/constitution .

Fernald LC, Hamad R, Karlan D, Ozer EJ, Zinman J. Small individual loans and mental health: a randomized controlled trial among South African adults. BMC Public Health. 2008;8:409.

Silverstein M, Cong Z, Li S. Intergenerational Transfers and Living Arrangements of Older People in Rural China: Consequences for Psychological Well-Being. J Gerontol Ser B. 2006;61:S256–66.


Acknowledgements

We would like to acknowledge Logan White for his support in conducting the literature review.

Funding

KGC is supported by a Michael Smith Health Research BC Scholar Award. This project was funded by grants from the Canadian Institutes of Health Research and the GenWell Project.

Author information

Authors and Affiliations

Faculty of Health Sciences, Simon Fraser University, Blusson Hall, 8888 University Dr., Burnaby, BC, V5A 1S6, Canada

Ashmita Grewal, Kirk J. Hepburn, Scott A. Lear & Kiffer G. Card

Vancouver School of Economics, University of British Columbia, Vancouver, BC, Canada

Marina Adshade


Contributions

KGC & AG conceptualized the study design. KGC, LW, and AG conducted the literature review, search, and data extraction. KGC & AG drafted the initial manuscript. All authors provided substantive intellectual and editorial revisions and approved the final manuscript.

Corresponding author

Correspondence to Ashmita Grewal .

Ethics declarations

Ethics approval and consent to participate

This study does not require ethics approval as it is a literature review.

Consent for publication

There are no figures or materials presented in this manuscript that require consent.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Grewal, A., Hepburn, K.J., Lear, S.A. et al. The impact of housing prices on residents’ health: a systematic review. BMC Public Health 24 , 931 (2024). https://doi.org/10.1186/s12889-024-18360-w


Received : 04 August 2023

Accepted : 14 March 2024

Published : 01 April 2024

DOI : https://doi.org/10.1186/s12889-024-18360-w


Keywords

  • Housing price
  • Economic stress
  • Social determinants of health




Tools for the Diagnosis of ADHD in Children and Adolescents: A Systematic Review


Bradley S. Peterson, Joey Trampush, Morah Brown, Margaret Maglione, Maria Bolshakova, Mary Rozelle, Jeremy Miles, Sheila Pakdaman, Sachi Yagyu, Aneesa Motala, Susanne Hempel; Tools for the Diagnosis of ADHD in Children and Adolescents: A Systematic Review. Pediatrics. April 2024; 153(4): e2024065854. doi:10.1542/peds.2024-065854


Context

Correct diagnosis is essential for the appropriate clinical management of attention-deficit/hyperactivity disorder (ADHD) in children and adolescents.

Objective

This systematic review provides an overview of the available diagnostic tools.

Data sources

We identified diagnostic accuracy studies in 12 databases published from 1980 through June 2023.

Study selection

Any evaluation of a tool for the diagnosis of ADHD, requiring a reference standard of a clinical diagnosis by a mental health specialist.

Data extraction

Data were abstracted and critically appraised by 1 reviewer and checked by a methodologist. Strength of evidence and applicability assessments followed Evidence-based Practice Center standards.

Results

In total, 231 studies met eligibility criteria. Studies evaluated parental ratings, teacher ratings, youth self-reports, clinician tools, neuropsychological tests, biospecimens, EEG, and neuroimaging. Multiple tools showed promising diagnostic performance, but estimates varied considerably across studies, with a generally low strength of evidence. Performance depended on whether ADHD youth were being differentiated from neurotypically developing children or from clinically referred children.

Limitations

Studies used different components of available tools and did not report sufficient data for meta-analytic models.

Conclusions

A valid and reliable diagnosis of ADHD requires the judgment of a clinician who is experienced in the evaluation of youth with and without ADHD, along with the aid of standardized rating scales and input from multiple informants across multiple settings, including parents, teachers, and youth themselves.

Attention-deficit/hyperactivity disorder (ADHD) is one of the most prevalent neurodevelopmental conditions in youth. Its prevalence has remained constant at ∼5.3% worldwide over the years when based on rigorous diagnostic procedures, and diagnostic criteria have likewise remained constant. 1 Clinical diagnoses, however, have increased steadily over time, 2 and currently ∼10% of US children receive an ADHD diagnosis. 3 Higher rates of clinical compared with research-based diagnoses partly reflect increasing clinician recognition of youth who have ADHD symptoms that are functionally impairing but do not fully meet formal diagnostic criteria. 4 The higher diagnostic rates over time in clinical samples also result from some youth receiving a diagnosis incorrectly. Some youth, for example, are misdiagnosed as having ADHD when they have symptoms of other disorders that overlap with ADHD symptoms, such as difficulty concentrating, which occurs in many other conditions. 5 Moreover, ADHD is more than twice as likely to be diagnosed in boys than in girls, 3 in lower-income families, 6 and in white compared with nonwhite youth, 7 differences that derive at least in part from diagnostic and cultural biases. 8 – 11

Improving clinical diagnostic accuracy is essential to ensure that youth who truly have ADHD benefit from receiving treatment without delay. Similarly, youth who do not have ADHD should not be diagnosed, since an incorrect diagnosis risks exposing them to unbeneficial treatments. 12 , 13 Clinician judgment alone, however, especially by nonspecialist clinicians, is poor for diagnosing ADHD 14 compared with expert, research-grade diagnoses made by mental health clinicians. 15 Accurately diagnosing ADHD is difficult because diagnoses are often made using subjective clinical impressions, and putative diagnostic tools have a confusing, diverse, and poorly described evidence base that is not widely accessible. The availability of valid diagnostic tools would especially help to reduce misdiagnoses arising from cultural biases and from symptom overlap between ADHD and other conditions. 12 , 16 – 19

This review summarizes evidence for the performance of diagnostic tools for ADHD in children and adolescents. We did not restrict the review to a set of known diagnostic tools but instead explored the full range of available tools, including machine-learning-assisted and virtual-reality-based tools. The review also aimed to assess how diagnostic performance varies by clinical setting and patient characteristics.

Methods

The review aims were developed in consultation with the Agency for Healthcare Research and Quality (AHRQ), the Patient-Centered Outcomes Research Institute, the topic nominator (the American Academy of Pediatrics), key informants, a technical expert panel (TEP), and public input. The TEP reviewed the protocol and advised on key outcomes. Subgroup analyses and key outcomes were prespecified. The review is registered in PROSPERO (CRD42022312656), and the protocol is available on the AHRQ Web site as part of a larger evidence report on ADHD. The systematic review followed the methods of the AHRQ Evidence-based Practice Center Program. 20

Selection Criteria

Population: age <18 years.

Interventions: any ADHD tool for the diagnosis of ADHD.

Comparators: diagnosis by a mental health specialist, such as a psychologist, psychiatrist, or other provider, who often used published scales or semistructured diagnostic interviews to ensure a reliable DSM-based diagnosis of ADHD.

Key outcomes: diagnostic accuracy (eg, sensitivity, specificity, area under the curve); a brief worked example of these measures follows this list.

Setting: any.

Study design: diagnostic accuracy studies.

Other: English language, published from 1980 to June 2023.
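Because the key outcomes are sensitivity, specificity, and AUC, a brief worked example may help orient readers; the counts below are invented for illustration and are not drawn from any included study:

    # Illustrative only: sensitivity and specificity from a 2x2 confusion matrix
    # comparing an index test against the reference-standard clinical diagnosis.
    tp, fn = 80, 20   # youth with ADHD (per reference standard): flagged vs missed
    tn, fp = 70, 30   # youth without ADHD: correctly cleared vs falsely flagged

    sensitivity = tp / (tp + fn)   # 0.80: the tool detects 80% of true cases
    specificity = tn / (tn + fp)   # 0.70: the tool clears 70% of non-cases
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

The AUC summarizes this trade-off across all possible score cutoffs: 0.5 corresponds to chance-level discrimination and 1.0 to perfect discrimination.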

Search Strategy

We searched PubMed, Embase, PsycINFO, ERIC, and ClinicalTrials.gov. We identified reviews for reference-mining through PubMed, the Cochrane Database of Systematic Reviews, Campbell Collaboration, What Works in Education, PROSPERO, ECRI Guidelines Trust, G-I-N, and ClinicalKey. The peer-reviewed search strategy is in the Supplemental Appendix. All citations were screened by trained literature reviewers supported by machine learning (Fig 1). Two independent reviewers assessed full-text studies for eligibility. The TEP reviewed the included studies to ensure all were captured. Publications reporting on the same participants were consolidated into 1 record.

Figure 1: Literature flow diagram.

Data Extraction

The data abstraction form included extensive guidance to aid reproducibility and standardization in recording study details, results, risk of bias, and applicability. One reviewer abstracted data, and a methodologist checked accuracy and completeness. Data are publicly available in the Systematic Review Data Repository.

Risk of Bias and Applicability

We assessed characteristics pertaining to patient selection, index test, reference standard, and flow and timing that may have introduced bias, and we evaluated the applicability of study results, such as whether the test, its conduct, or its interpretation differed from how the test is used in clinical practice. 21 , 22

Data Synthesis and Analysis

We differentiated parent, teacher, and youth self-report ratings; tools for clinicians; neuropsychological tests; biospecimens; EEG; and neuroimaging. We organized analyses according to prespecified outcome measures. A narrative overview summarized the range of diagnostic performance for key outcomes. Because a lack of reported detail in many individual studies hindered the use of meta-analytic models, we created summary figures to document the diagnostic performance reported in each study. We used meta-regressions across studies to assess the effects of age, comorbidities, racial and ethnic composition, and diagnostic setting (differentiating primary care, specialty care, school settings, mixed settings, and not reported) on diagnostic performance. One researcher with experience in the use of specified standardized criteria 23 initially assessed the overall strength of evidence (SoE) (see Supplemental Appendix) for each study, then discussed it with the study team to communicate our confidence in each finding.
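The review does not publish its meta-regression code, so the following is only a sketch of one conventional approach, using hypothetical study-level data and assuming the statsmodels package is available: regress a transformed performance estimate on study-level covariates, weighting each study by its size.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical study-level data: one AUC estimate per study.
    auc = np.array([0.83, 0.74, 0.91, 0.65, 0.84])
    y = np.log(auc / (1 - auc))               # logit transform keeps fitted AUCs in (0, 1)

    mean_age = [9.1, 11.4, 8.7, 13.2, 10.0]   # covariate: mean participant age (years)
    specialty = [1, 0, 1, 0, 1]               # covariate: specialty-care setting indicator
    X = sm.add_constant(np.column_stack([mean_age, specialty]))

    n = np.array([120, 85, 200, 60, 150])     # weights, e.g., study sample sizes
    fit = sm.WLS(y, X, weights=n).fit()
    print(fit.summary())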

We screened 23 139 citations and 7534 publications retrieved as full text against the eligibility criteria. In total, 231 studies reported in 290 publications met the eligibility criteria (see Fig 1 ).

Methodological quality of the studies varied. Selection bias was likely in two-thirds of studies; several were determined to be problematic in terms of reported study flow and timing of assessments (eg, not stating whether diagnosis was known before the results of the index test); and several lacked details on diagnosticians or diagnostic procedures ( Supplemental Fig 1 ). Applicability concerns limited the generalizability of findings ( Supplemental Fig 2 ), usually because youth with comorbidities were excluded. Many different tools were assessed within the broader categories (eg, within neuropsychological tests), and even when reporting on the same diagnostic tool, studies often used different components of the tool (eg, different subscales of rating scales), or they combined components in a variety of ways (eg, across different neuropsychological test parameters).

The evidence table ( Supplemental Table 10 , Supplemental Appendix ) shows each study’s finding. The following highlights key findings across studies.

Fifty-nine studies used parent ratings to diagnose ADHD ( Fig 2 ). The most frequently evaluated tool was the CBCL (Child Behavior Checklist), alone or in combination with other tools, often using different score cutoffs for diagnosis, and evaluating different subscales (most frequently the attention deficit/hyperactivity problems subscale). Sensitivities ranged from 38% (corresponding specificity = 96%) to 100% (specificity = 4% to 92%). 24 , 25  

Fig 2: Diagnostic performance of parent and teacher ratings. For a complete list of scales, see the Supplemental Appendix.

Area under the curve (AUC) for receiver operating characteristic curves ranged widely, from 0.55 to 0.95, but 3 CBCL studies reported AUCs of 0.83 to 0.84. 26 – 28 Few studies reported measurement of reliability. SoE was downgraded for study limitations (lack of detailed reporting), imprecision (large performance variability), and inconsistent findings ( Supplemental Table 1 ).

Twenty-three studies used teacher ratings to diagnose ADHD ( Fig 2 ). No 2 studies reported on rater agreement, internal consistency, or test-retest reliability for the same teacher rating scale. The highest sensitivity was 97% (specificity = 26%). 25 The Teacher Report Form, alone or in combination with Conners teacher rating scales, yielded sensitivities of 72% to 79% 29 and specificities of 64% to 76%. 30 , 32 Reported AUCs ranged from 0.65 to 0.84. 32 SoE was downgraded to low for imprecision (large performance variability) and inconsistency (results for specific tools not replicated); see Supplemental Table 2 .

Six studies used youth self-reports to diagnose ADHD. No 2 studies used the same instrument. Sensitivities ranged from 53% (specificity = 98%) to 86% (specificity = 70%). 35 AUCs ranged from 0.56 to 0.85. 36 We downgraded SoE in the inconsistency domain (only 1 study reported on a given tool and outcome); see Supplemental Table 3 .

Thirteen studies assessed diagnostic performance of ratings combined across informants, often using machine learning for variable selection. Only 1 study compared performance of combined data to performance from single informants, finding negligible improvement (AUC youth = 0.71; parent = 0.85; combined = 0.86). 37   Other studies reported on limited outcome measures and used ad hoc methods to combine information from multiple informants. The best AUC was reported by a machine learning supported study combining parent and teacher ratings (AUC = 0.98). 38  

Twenty-four studies assessed additional tools, such as interview guides, that can be used by clinicians to aid diagnosis of ADHD. Sensitivities varied, ranging from 67% (specificity = 65%) to 98% (specificity = 100%); specificities ranged from 36% (sensitivity = 89%) to 100% (sensitivity = 98%). 39 Some of the tools measured activity levels objectively using an actometer or commercially available activity tracker, either alone or as part of a diagnostic test battery. Reported performance was variable (sensitivity range 25% to 100%, 40 specificity range 66% to 100%, 40 AUC range 0.75 to 0.9996 41 ). SoE was downgraded for imprecision (large performance variability) and inconsistency (outcomes and results not replicated); see Supplemental Table 4 .

Seventy-four studies used measures from various neuropsychological tests, including continuous performance tests (CPTs). Four of these included 3- and 4-year-old children. 42 – 44 A large majority used a CPT, which assessed omission errors (reflecting inattention), commission errors (impulsivity), and reaction time SD (response time variability). Studies varied in their use of traditional visual CPTs (such as the Test of Variables of Attention), more novel, multifaceted “hybrid” CPT paradigms, and virtual reality CPTs built on environments designed to emulate real-world classroom distraction. Studies used idiosyncratic combinations of individual cognitive measures to achieve the best performance, though many reported on CPT attention and impulsivity measures.

Sensitivity for all neuropsychological tests ranged from 22% (specificity = 96%) to 100% (specificity = 100%) 45 ( Fig 3 ), though the latter study reported performance for unique composite measures without replication. Specificities ranged from 22% (sensitivity = 91%) 46 to 100% (sensitivity = 75% to 100%). 45 , 47 AUCs ranged from 0.59 to 0.93. 48 Sensitivity for all CPT studies ranged from 22% (specificity = 96%) to 100% (specificity = 75%). 49 Specificities for CPTs ranged from 22% (sensitivity = 91%) to 100% (sensitivity = 89%) 47 ( Fig 3 ). AUCs ranged from 0.59 to 0.93. 50 , 51 SoE was deemed low because of imprecision (large performance variability); see Supplemental Table 5.

Fig 3: Diagnostic performance of neuropsychological tests, CPTs, activity monitors, biospecimens, and EEG.

Seven studies assessed blood or urine biomarkers to diagnose ADHD. These measured erythropoietin or erythropoietin receptor levels, membrane potential ratio, microRNA levels, or urine metabolites. Sensitivities ranged from 56% (specificity = 95%) to 100% (specificity = 100%, for erythropoietin and erythropoietin receptor levels). 52 Specificities ranged from 25% (sensitivity = 79%) to 100% (sensitivity = 100%). 52 AUCs ranged from 0.68 to 1.00. 52 Little information was provided on the reliability of markers or their combinations. SoE was downgraded for inconsistent and imprecise studies ( Supplemental Table 6 ).

Forty-five studies used EEG markers to diagnose ADHD. EEG signals were obtained in a variety of patient states, including during neuropsychological test performance. Two-thirds used machine learning algorithms to select classification parameters. Several combined EEG with demographic variables or rating scales. Sensitivity ranged widely from 46% to 100% (corresponding specificities 74% and 71%). 53 , 54 One study that combined EEG with demographic data supported by machine learning reported perfect sensitivity and specificity. 54 Specificity was also variable, ranging from 38% (sensitivity = 95%) to 100% (sensitivity = 71% or 100%). 53 – 56 Reported AUCs ranged from 0.63 to 1.0. 57 , 58 SoE was downgraded for imprecision (large performance variability) and study limitations (diagnostic approaches poorly described); see Supplemental Table 7 .

Nineteen studies used neuroimaging for diagnosis. One public data set (ADHD-200) produced several analyses. All but 2 used MRI: some functional MRI (fMRI), some structural, and some in combination, with or without magnetic resonance spectroscopy (2 used near-infrared spectroscopy). Most employed machine learning to detect markers that optimized diagnostic classifications. Some combined imaging measures with demographic or other clinical data in the prediction model. Sensitivities ranged from 42% (specificity = 95%) to 99% (specificity = 100%) using resting state fMRI and a complex machine learning algorithm 56 to differentiate ADHD from neurotypical youth. Specificities ranged from 55% (sensitivity = 95%) to 100% 56 using resting state fMRI data. AUCs ranged from 0.58 to over 0.99. 57 SoE was downgraded for imprecision (large performance variability) and study limitations (diagnostic models were often not well described, and the number and type of predictor variables entering the model were unclear). Studies generally did not validate diagnostic algorithms or assess performance measures in an independent sample ( Supplemental Table 8 ).

Regression analyses indicated that setting was associated with both sensitivity ( P = .03) and accuracy ( P = .006) but not specificity ( P = .68) or AUC ( P = .28), with sensitivities lowest in primary care ( Fig 4 ). Sensitivity, specificity, and AUC were also lower when differentiating youth with ADHD from a clinical sample than from typically developing youth (sensitivity P = .04, specificity P < .001, AUC P < .001) ( Fig 4 ), suggesting that the clinical population is a source of heterogeneity in diagnostic performance. Findings should be interpreted with caution, however, as they were not obtained in meta-analytic models and, consequently, do not take into account study size or quality.

Fig 4: Diagnostic performance by setting and population.

Supplemental Figs 3–5 in the Supplemental Appendix document effects by age and gender. We did not detect statistically significant associations of age with sensitivity ( P = .54) or specificity ( P = .37), or associations of the proportion of girls with sensitivity ( P = .63), specificity ( P = .80), accuracy ( P = .34), or AUC ( P = .90).

We identified a large number of publications reporting on ADHD diagnostic tools. To our knowledge, no prior review of ADHD diagnostic tools has been as comprehensive in the range of tools, outcomes, participant ages, and publication years. Despite the large number of studies, we deemed the strength of evidence for the reported performance measures across all categories of diagnostic tools to be low because of large performance variability across studies and various limitations within and across studies.

We required that studies compare diagnoses made using the tool with diagnoses made by expert mental health clinicians. Studies most commonly reported sensitivity (true-positive rate) and specificity (true-negative rate) when a study-specific diagnostic threshold was applied to measures from the tool being assessed. Sensitivity and specificity depend critically on that study-specific threshold, and their values are inherently a trade-off, such that varying the threshold to increase either sensitivity or specificity reduces the other. Interpreting diagnostic performance in terms of sensitivity and specificity, and comparing those performance measures across studies, is therefore challenging. Consequently, researchers more recently often report performance in terms of the receiver operating characteristic (ROC) curve, a plot of sensitivity against 1 − specificity across the entire range of possible diagnostic thresholds. The area under this ROC curve (AUC) provides an overall, single index of performance that ranges from 0.5 (indicating that the tool provides no information above chance for classification) to 1.0 (indicating a perfect test that can correctly classify all participants as having ADHD and all non-ADHD participants as not having it). AUC values of 0.90 to 1.00 are commonly classified as excellent performance; 0.80 to 0.90 as good; 0.70 to 0.80 as fair; 0.60 to 0.70 as poor; and 0.50 to 0.60 as failed performance.
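As a concrete illustration of these concepts (a sketch with made-up scores and labels, not data from the review), the following Python snippet traces the sensitivity/specificity trade-off across thresholds and computes the AUC with scikit-learn:

```python
# Sketch: ROC curve and AUC from hypothetical diagnostic scores.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])  # 1 = reference diagnosis of ADHD
scores = np.array([0.9, 0.8, 0.7, 0.5, 0.6, 0.4, 0.3, 0.2, 0.85, 0.55])

fpr, tpr, thresholds = roc_curve(y_true, scores)
for t, se, fp in zip(thresholds, tpr, fpr):
    # Each threshold trades sensitivity (tpr) against specificity (1 - fpr).
    print(f"threshold {t:.2f}: sensitivity {se:.2f}, specificity {1 - fp:.2f}")

# AUC: a single, threshold-free index of classification performance.
print("AUC =", round(roc_auc_score(y_true, scores), 2))
```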

Most research is available on parental ratings. Overall, AUCs for parent rating scales ranged widely from “poor” 58 to “excellent.” 59 Analyses restricted to the CBCL, the most commonly evaluated scale, yielded more consistent “good” AUCs for differentiating youth with ADHD from others in clinical samples, but the number of studies contributing data was small. Internal consistency for rating scale items was generally high across most rating scales. Test-retest reliability was good, though only 2 studies reported it. One study reported moderate rater agreement between mothers and fathers for inattention, hyperactivity, and impulsivity symptoms. Few studies included youth under 7 years of age.

AUCs for teacher rating scales ranged from “failed” 33 to “good.” 34 Internal consistency for scale items was generally high. Teacher ratings demonstrated very low rater agreement with corresponding parent scales, suggesting either a problem with the instruments or large variability in symptom presentation across environmental contexts (home or school).

Though data were limited, self-reports from youth seemed to perform less well than corresponding parent and teacher reports, with AUCs ranging from “failed” for CBCL or ASEBA when distinguishing ADHD from other patients 33   to “good” for the SWAN in distinguishing ADHD from neurotypical controls. 36 , 37  

Studies evaluating neuropsychological tests yielded AUCs ranging from “poor” 60 , 61   to “excellent.” 50   Many used idiosyncratic combinations of cognitive measures, which complicates interpretation of the results across studies. Nevertheless, extracting specific, comparable measures of inattention and impulsivity from CPTs yielded diagnostic performance ranging from “poor” to “excellent” in differentiating ADHD youth from neurotypical controls and “fair” in differentiating ADHD youth from other patients. 42 , 60 , 62   No studies provided an independent replication of diagnosis using the same measure.

Blood biomarkers yielded AUCs ranging from “poor” (serum miRNAs) 63 to “excellent” (erythropoietin and erythropoietin receptor levels) 52 in differentiating ADHD from neurotypical youth. None have been independently replicated, and test-retest reliability was not reported. Most EEG studies used machine learning for diagnostic classification. AUCs ranged from “poor” 64 to “excellent” when differentiating ADHD youth from neurotypical controls. 65 Diagnostic performance was not prospectively replicated in any independent samples.

Most neuroimaging studies relied on machine learning to develop diagnostic algorithms. AUCs ranged from “poor” 66   to “excellent” for distinguishing ADHD youth from neurotypically developing controls. 57   Most studies used pre-existing data sets or repositories to retrospectively discriminate youths with ADHD from neurotypical controls, not from other clinical populations and not prospectively, and none assessed test-retest reliability or the independent reproducibility of findings. Reporting of final mathematical models or algorithms for diagnosis was limited. Activity monitors have the advantage of providing inexpensive, objective, easily obtained, and quantified measures that can potentially be widely disseminated and scaled.

Studies of combined approaches, such as integrating diagnostic tools with clinician impressions, were limited. One study reported increased sensitivity and specificity when an initial clinician diagnosis was combined with EEG indicators (the reference standard was a consensus diagnosis from a panel of ADHD experts). 67 These findings were not independently replicated, however, and no test-retest reliability was reported.

Many studies aimed to distinguish ADHD youth from neurotypical controls, which is a distinction of limited clinical relevance. In clinically referred youth, most parents, teachers, and clinicians are reasonably confident that something is wrong, even if they are unsure whether the cause of their concern is ADHD. To be informed by a tool that the child is not typically developing is not particularly helpful. Moreover, we cannot know whether diagnostic performance for tools that discriminate ADHD youth only from neurotypical controls is determined by the presence of ADHD or by the presence of any other characteristics that accompany clinical “caseness,” such as the presence of comorbid illnesses or symptoms shared or easily confused with those of other conditions, or the effects of chronic stress or current or past treatment. The clinically more relevant and difficult question is, therefore, how well the tool distinguishes youth with ADHD from those who have other emotional and behavioral problems. Consistent with these conceptual considerations that argue for assessing diagnostic performance in differentiating youth with ADHD from those with other clinical conditions, we found significant evidence that, across all studies, sensitivity, specificity, and AUC were all lower when differentiating youth with ADHD from a clinical sample than when differentiating them from neurotypical youth. These findings also suggest that the comparison population was a significant source of heterogeneity in diagnostic performance.

Despite the large number of studies on diagnostic tools, a valid and reliable diagnosis of ADHD ultimately still requires the judgement of a clinician who is experienced in the evaluation of youth with and without ADHD, along with the aid of standardized rating scales and input from multiple informants across multiple settings, including parents, teachers, and youth themselves. Diagnostic tools perform best when the clinical question is whether a youth has ADHD or is healthy and typically developing, rather than when the clinical question is whether a youth has ADHD or another mental health or behavioral problem. Diagnostic tools yield more false-positive and false-negative diagnoses of ADHD when differentiating youth with ADHD from youth with another mental health problem than when differentiating them from neurotypically developing youth.

Scores for rating scales tended to correlate poorly across raters, and ADHD symptoms in the same child varied across settings, indicating that no single informant in a single setting is a gold standard for diagnosis. Therefore, diagnosis using rating scales will likely benefit from a more complete representation of symptom expression across multiple informants (parents, school personnel, clinicians, and youth) and across more than 1 setting (home, school, and clinic) to inform clinical judgement when making a diagnosis, consistent with current guidelines. 68 – 70 Unfortunately, methods for combining scores across raters and settings that improve diagnosis compared with scores from single raters have not been developed or prospectively replicated.

Despite the widespread use of neuropsychological testing to “diagnose” youth with ADHD, often at considerable expense, indirect comparisons of AUCs suggest that performance of neuropsychological test measures in diagnosing ADHD is comparable to the diagnostic performance of ADHD rating scales from a single informant. Moreover, the diagnostic accuracy of parent rating scales is typically better than neuropsychological test measures in head-to-head comparisons. 44 , 71   Furthermore, the overall SoE for estimates of diagnostic performance with neuropsychological testing is low. Use of neuropsychological test measures of executive functioning, such as the CPT, may help inform a clinical diagnosis, but they are not definitive either in ruling in or ruling out a diagnosis of ADHD. The sole use of CPTs and other neuropsychological tests to diagnose ADHD, therefore, cannot be recommended. We note that this conclusion regarding diagnostic value is not relevant to any other clinical utility that testing may have.

No independent replication studies have been conducted to validate EEG, neuroimaging, or biospecimen tools for diagnosing ADHD, and no clinical effectiveness studies have used these tools to diagnose ADHD in the real world. Thus, these tools do not seem remotely close to being ready for clinical application to aid diagnosis, despite US Food and Drug Administration approval of 1 EEG measure as a purported diagnostic aid. 67 , 72

All studies of diagnostic tools should report data in more detail (ie, clearly report false-positive and -negative rates, the diagnostic thresholds used, and any data manipulation undertaken to achieve the result) to support meta-analytic methods. Studies should include ROC analyses to support comparisons of test performance across studies that are independent of the diagnostic threshold applied to measures from the tool. They should also include assessment of test-retest reliability to help discern whether variability in measures and test performance is a function of setting or of measurement variability over time. Future studies should address the influence of co-occurring disorders on diagnostic performance and how well the tools distinguish youth with ADHD from youth with other emotional and behavioral problems, not simply from healthy controls. More studies should compare the diagnostic accuracy of different test modalities, head-to-head. Independent, prospective replication of performance measures of diagnostic tools in real-world settings is essential before US Food and Drug Administration approval and before recommendations for widespread clinical use.

Research is needed to identify consensus algorithms that combine rating scale data from multiple informants to improve the clinical diagnosis of ADHD, which at present is often unguided, ad hoc, and suboptimal. Diagnostic studies using EEG, neuroimaging, and neuropsychological tests should report precise operational definitions and measurements of the variable(s) used for diagnosis, any diagnostic algorithm employed, the selected statistical cut-offs, and the number of false-positives and false-negatives the diagnostic tool yields to support future efforts at synthetic analyses.

Objective, quantitative neuropsychological test measures of executive functioning correlate only weakly with the clinical symptoms that define ADHD. 73   Thus, many youth with ADHD have normal executive functioning profiles on neuropsychological testing, and many who have impaired executive functioning on testing do not have ADHD. 74   Future research is needed to understand how test measures of executive functioning and the real-world functional problems that define ADHD map on to one another and how that mapping can be improved.

One of the most important potential uses of systematic reviews and meta-analyses in improving the clinical diagnosis of ADHD and treatment planning would be identification of effect modifiers for the performance of diagnostic tools: determining, for example, whether tools perform better in patients who are younger or older, in ethnic minorities, or those experiencing material hardship, or who have a comorbid illness or specific ADHD presentation. Future studies of ADHD should more systematically address the modifier effects of these patient characteristics. They should make available in public repositories the raw, individual-level data and the algorithms or computer code that will aid future efforts at replication, synthesis, and new discovery for diagnostic tools across data sets and studies.

Finally, no studies meeting our inclusion criteria assessed the consequences of being labeled as either having or not having ADHD, the potential adverse consequences of being incorrectly diagnosed, or the diagnosis of ADHD specifically in preschool-aged children. This work is urgently needed.

We thank Cynthia Ramirez, Erin Tokutomi, Jennifer Rivera, Coleman Schaefer, Jerusalem Belay, Anne Onyekwuluje, and Mario Gastelum for help with data acquisition. We thank Kymika Okechukwu, Lauren Pilcher, Joanna King, and Robyn Wheatley from the American Academy of Pediatrics (AAP), Jennie Dalton and Paula Eguino Medina from PCORI, Christine Chang and Kim Wittenberg from AHRQ, and Mary Butler from the Minnesota Evidence-based Practice Center. We thank Glendy Burnett, Eugenia Chan, MD, MPH, Matthew J. Gormley, PhD, Laurence Greenhill, MD, Joseph Hagan, Jr, MD, Cecil Reynolds, PhD, Le'Ann Solmonson, PhD, LPC-S, CSC, and Peter Ziemkowski, MD, FAAFP who served as key informants. We thank Angelika Claussen, PhD, Alysa Doyle, PhD, Tiffany Farchione, MD, Matthew J. Gormley, PhD, Laurence Greenhill, MD, Jeffrey M. Halperin, PhD, Marisa Perez-Martin, MS, LMFT, Russell Schachar, MD, Le'Ann Solmonson, PhD, LPC-S, CSC, and James Swanson, PhD who served as a technical expert panel. Finally, we thank Joel Nigg, PhD, and Peter S. Jensen, MD for their peer review of the data.

Drs Peterson and Hempel conceptualized and designed the study, collected data, conducted the analyses, drafted the initial manuscript, and critically reviewed and revised the manuscript; Dr Trampush conducted the critical appraisal; Ms Brown, Ms Maglione, Drs Bolshakova and Padkaman, and Ms Rozelle screened citations and abstracted the data; Dr Miles conducted the analyses; Ms Yagyu designed and executed the search strategy; Ms Motala served as data manager; and all authors provided critical input for the manuscript, approved the final manuscript as submitted, and agree to be accountable for all aspects of the work.

This review has been registered at PROSPERO (identifier CRD42022312656).

COMPANION PAPER: A companion to this article can be found online at www.pediatrics.org/cgi/doi/10.1542/peds.2024-065787 .

Data Sharing: Data are available in SRDRPlus.

FUNDING: The work is based on research conducted by the Southern California Evidence-based Practice Center under contract to the Agency for Healthcare Research and Quality (AHRQ), Rockville, MD (Contract 75Q80120D00009). The Patient-Centered Outcomes Research Institute (PCORI) funded the research (PCORI Publication No. 2023-SR-03). The findings and conclusions in this manuscript are those of the authors, who are responsible for its contents; the findings and conclusions do not necessarily represent the views of AHRQ or PCORI, its Board of Governors, or Methodology Committee. Therefore, no statement in this report should be construed as an official position of PCORI, AHRQ or of the US Department of Health and Human Services.

CONFLICT OF INTEREST DISCLOSURES: The authors have indicated they have no conflicts of interest to disclose.

Abbreviations: ADHD, attention-deficit/hyperactivity disorder; AUC, area under the curve; CBCL, Child Behavior Checklist; CPT, continuous performance test; fMRI, functional magnetic resonance imaging; ROC, receiver operating characteristics; SoE, strength of evidence; TEP, technical expert panel.

Five tips for developing useful literature summary tables for writing review articles

  • http://orcid.org/0000-0003-0157-5319 Ahtisham Younas 1 , 2 ,
  • http://orcid.org/0000-0002-7839-8130 Parveen Ali 3 , 4
  • 1 Memorial University of Newfoundland , St John's , Newfoundland , Canada
  • 2 Swat College of Nursing , Pakistan
  • 3 School of Nursing and Midwifery , University of Sheffield , Sheffield , South Yorkshire , UK
  • 4 Sheffield University Interpersonal Violence Research Group , Sheffield University , Sheffield , UK
  • Correspondence to Ahtisham Younas, Memorial University of Newfoundland, St John's, NL A1C 5C4, Canada; ay6133{at}mun.ca

https://doi.org/10.1136/ebnurs-2021-103417


Introduction

Literature reviews offer a critical synthesis of empirical and theoretical literature to assess the strength of evidence, develop guidelines for practice and policymaking, and identify areas for future research. 1 A review is often essential, and usually the first task, in any research endeavour, particularly in masters or doctoral level education. For effective data extraction and rigorous synthesis in reviews, the use of literature summary tables is of utmost importance. A literature summary table provides a synopsis of an included article: it succinctly presents the article's purpose, methods, findings and other information pertinent to the review. The aim of developing these literature summary tables is to provide the reader with the information at one glance. Since there are multiple types of reviews (eg, systematic, integrative, scoping, critical and mixed methods) with distinct purposes and techniques, 2 there could be various approaches for developing literature summary tables, making it a complex task, especially for novice researchers or reviewers. Here, we offer five tips for authors of review articles, relevant to all types of reviews, for creating useful and relevant literature summary tables. We also provide examples from our published reviews to illustrate how useful literature summary tables can be developed and what sort of information should be provided.

Tip 1: provide detailed information about frameworks and methods

Figure 1: Tabular literature summaries from a scoping review. Source: Rasheed et al. 3

The provision of information about conceptual and theoretical frameworks and methods is useful for several reasons. First, in quantitative reviews (reviews synthesising the results of quantitative studies) and mixed reviews (reviews synthesising the results of both qualitative and quantitative studies to address a mixed review question), it allows the readers to assess the congruence of the core findings and methods with the adapted framework and tested assumptions. In qualitative reviews (reviews synthesising results of qualitative studies), this information is beneficial for readers to recognise the underlying philosophical and paradigmatic stance of the authors of the included articles. For example, imagine the authors of an article, included in a review, used phenomenological inquiry for their research. In that case, the review authors and the readers of the review need to know what kind of philosophical stance (transcendental or hermeneutic) guided the inquiry. Review authors should, therefore, include the philosophical stance in their literature summary for the particular article. Second, information about frameworks and methods enables review authors and readers to judge the quality of the research, which allows for discerning the strengths and limitations of the article. For example, suppose the authors of an included article intended to develop a new scale and test its psychometric properties, and to achieve this aim they used a convenience sample of 150 participants and performed exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) on the same sample. Such an approach would indicate a flawed methodology because EFA and CFA should not be conducted on the same sample. The review authors must include this information in their summary table; omitting it could lead to the inclusion of a flawed article in the review, thereby jeopardising the review’s rigour.

Tip 2: include strengths and limitations for each article

Critical appraisal of individual articles included in a review is crucial for increasing the rigour of the review. Despite using various templates for critical appraisal, authors often do not provide detailed information about each reviewed article’s strengths and limitations. Merely noting the quality score based on standardised critical appraisal templates is not adequate, because readers should be able to identify the reasons for assigning a weak or moderate rating. Many recent critical appraisal checklists (eg, the Mixed Methods Appraisal Tool) discourage review authors from assigning a quality score and instead recommend noting the main strengths and limitations of included studies. It is also vital to provide the methodological and conceptual limitations and strengths of the articles included in the review, because not all review articles include empirical research papers; rather, some reviews synthesise the theoretical aspects of articles. Providing information about conceptual limitations is also important for readers to judge the quality of the foundations of the research. For example, if you included a mixed-methods study in the review, reporting the methodological and conceptual limitations concerning ‘integration’ is critical for evaluating the study’s strength. Suppose the authors only collected qualitative and quantitative data and did not state the intent and timing of integration. In that case, the study is weak: integration occurred only at the level of data collection and may not have occurred at the analysis, interpretation and reporting levels.

Tip 3: write conceptual contribution of each reviewed article

While reading and evaluating review papers, we have observed that many review authors only provide core results of the article included in a review and do not explain the conceptual contribution offered by the included article. We refer to conceptual contribution as a description of how the article’s key results contribute towards the development of potential codes, themes or subthemes, or emerging patterns that are reported as the review findings. For example, the authors of a review article noted that one of the research articles included in their review demonstrated the usefulness of case studies and reflective logs as strategies for fostering compassion in nursing students. The conceptual contribution of this research article could be that experiential learning is one way to teach compassion to nursing students, as supported by case studies and reflective logs. This conceptual contribution of the article should be mentioned in the literature summary table. Delineating each reviewed article’s conceptual contribution is particularly beneficial in qualitative reviews, mixed-methods reviews, and critical reviews that often focus on developing models and describing or explaining various phenomena. Figure 2 offers an example of a literature summary table. 4

Figure 2: Tabular literature summaries from a critical review. Source: Younas and Maddigan. 4

Tip 4: compose potential themes from each article during summary writing

While developing literature summary tables, many authors use themes or subthemes reported in the given articles as the key results of their own review. Such an approach prevents the review authors from understanding the article’s conceptual contribution, developing rigorous synthesis and drawing reasonable interpretations of results from an individual article. Ultimately, it affects the generation of novel review findings. For example, one of the articles about women’s healthcare-seeking behaviours in developing countries reported a theme ‘social-cultural determinants of health as precursors of delays’. Instead of using this theme as one of the review findings, the reviewers should read and interpret beyond the description given in an article, comparing and contrasting themes and findings from one article with those from another to find similarities and differences, and to understand and explain the bigger picture for their readers. Therefore, while developing literature summary tables, think twice before using predeveloped themes. Including your own themes in the summary tables (see figure 1 ) demonstrates to the readers that a robust method of data extraction and synthesis has been followed.

Tip 5: create your personalised template for literature summaries

Templates are often available for data extraction and the development of literature summary tables. The available templates may be in the form of a table, chart or a structured framework that extracts essential information about every article. The commonly extracted information includes authors, purpose, methods, key results and quality scores. While extracting all relevant information is important, such templates should be tailored to meet the needs of the individual review. For example, for a review about the effectiveness of healthcare interventions, a literature summary table must include information about the intervention, its type, content, timing, duration, setting, effectiveness, negative consequences, and receivers’ and implementers’ experiences of its usage. Similarly, literature summary tables for articles included in a meta-synthesis must include information about the participants’ characteristics, research context and conceptual contribution of each reviewed article, so as to help the reader make an informed decision about the usefulness, or lack of usefulness, of the individual article in the review and the whole review.

In conclusion, narrative or systematic reviews are almost always conducted as a part of any educational project (thesis or dissertation) or academic or clinical research. Literature reviews are the foundation of research on a given topic. Robust and high-quality reviews play an instrumental role in guiding research, practice and policymaking. However, the quality of reviews is also contingent on rigorous data extraction and synthesis, which require developing literature summaries. We have outlined five tips that could enhance the quality of the data extraction and synthesis process by developing useful literature summaries.

Twitter @Ahtisham04, @parveenazamali

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient consent for publication Not required.

Provenance and peer review Not commissioned; externally peer reviewed.



Introduction to systematic review and meta-analysis

1 Department of Anesthesiology and Pain Medicine, Inje University Seoul Paik Hospital, Seoul, Korea

2 Department of Anesthesiology and Pain Medicine, Chung-Ang University College of Medicine, Seoul, Korea

Systematic reviews and meta-analyses present results by combining and analyzing data from different studies conducted on similar research topics. In recent years, systematic reviews and meta-analyses have been actively performed in various fields, including anesthesiology. These research methods are powerful tools that can overcome the difficulties in performing large-scale randomized controlled trials. However, including biased studies, or improperly assessing the quality of evidence, could yield misleading results in systematic reviews and meta-analyses. Therefore, various guidelines have been suggested for conducting systematic reviews and meta-analyses to help standardize them and improve their quality. Nonetheless, accepting the conclusions of many such studies without understanding meta-analysis can be dangerous. This article therefore provides an accessible introduction for clinicians on performing and understanding meta-analyses.

Introduction

A systematic review collects all possible studies related to a given topic and design, and reviews and analyzes their results [ 1 ]. During the systematic review process, the quality of studies is evaluated, and a statistical meta-analysis of the study results is conducted on the basis of their quality. A meta-analysis is a valid, objective, and scientific method of analyzing and combining different results. Usually, in order to obtain more reliable results, a meta-analysis is mainly conducted on randomized controlled trials (RCTs), which have a high level of evidence [ 2 ] ( Fig. 1 ). Since 1999, various papers have presented guidelines for reporting meta-analyses of RCTs. Following the Quality of Reporting of Meta-analyses (QUOROM) statement [ 3 ], and the appearance of registers such as the Cochrane Library’s Methodology Register, a large number of systematic literature reviews have been registered. In 2009, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [ 4 ] was published, and it greatly helped standardize and improve the quality of systematic reviews and meta-analyses [ 5 ].

Fig. 1: Levels of evidence.

In anesthesiology, the importance of systematic reviews and meta-analyses has been highlighted, and they provide diagnostic and therapeutic value to various areas, including not only perioperative management but also intensive care and outpatient anesthesia [6–13]. Systematic reviews and meta-analyses include various topics, such as comparing various treatments of postoperative nausea and vomiting [ 14 , 15 ], comparing general anesthesia and regional anesthesia [ 16 – 18 ], comparing airway maintenance devices [ 8 , 19 ], comparing various methods of postoperative pain control (e.g., patient-controlled analgesia pumps, nerve block, or analgesics) [ 20 – 23 ], comparing the precision of various monitoring instruments [ 7 ], and meta-analysis of dose-response in various drugs [ 12 ].

Thus, literature reviews and meta-analyses are being conducted in diverse medical fields, and their importance is highlighted to help researchers extract accurate, good-quality data from the flood of data being produced. However, a lack of understanding about systematic reviews and meta-analyses can lead to incorrect outcomes being derived from the review and analysis processes. If readers indiscriminately accept the results of the many meta-analyses that are published, incorrect data may be obtained. Therefore, in this review, we aim to describe the contents and methods used in systematic reviews and meta-analyses in a way that is easy to understand for future authors and readers of systematic reviews and meta-analyses.

Study Planning

It is easy to confuse systematic reviews and meta-analyses. A systematic review is an objective, reproducible method to find answers to a certain research question, by collecting all available studies related to that question and reviewing and analyzing their results. A meta-analysis differs from a systematic review in that it uses statistical methods on estimates from two or more different studies to form a pooled estimate [ 1 ]. Following a systematic review, if it is not possible to form a pooled estimate, it can be published as is without progressing to a meta-analysis; however, if it is possible to form a pooled estimate from the extracted data, a meta-analysis can be attempted. Systematic reviews and meta-analyses usually proceed according to the flowchart presented in Fig. 2 . We explain each of the stages below.

Fig. 2: Flowchart illustrating a systematic review.

Formulating research questions

A systematic review attempts to gather all available empirical research by using clearly defined, systematic methods to obtain answers to a specific question. A meta-analysis is the statistical process of analyzing and combining results from several similar studies. Here, the definition of the word “similar” is not made clear, but when selecting a topic for the meta-analysis, it is essential to ensure that the different studies present data that can be combined. If the studies contain data on the same topic that can be combined, a meta-analysis can even be performed using data from only two studies. However, study selection via a systematic review is a precondition for performing a meta-analysis, and it is important to clearly define the Population, Intervention, Comparison, Outcomes (PICO) parameters that are central to evidence-based research. In addition, the research topic should be selected on the basis of logical evidence, and it is important to select a topic that is familiar to readers but for which the evidence has not yet been clearly confirmed [ 24 ].

Protocols and registration

In systematic reviews, prior registration of a detailed research plan is very important. In order to make the research process transparent, primary/secondary outcomes and methods are set in advance, and in the event of changes to the method, other researchers and readers are informed when, how, and why. Many studies are registered with an organization like PROSPERO ( http://www.crd.york.ac.uk/PROSPERO/ ), and the registration number is recorded when reporting the study, in order to share the protocol at the time of planning.

Defining inclusion and exclusion criteria

Information is included on the study design, patient characteristics, publication status (published or unpublished), language used, and research period. If there is a discrepancy between the number of patients included in the study and the number of patients included in the analysis, this needs to be clearly explained while describing the patient characteristics, to avoid confusing the reader.

Literature search and study selection

In order to secure a proper basis for evidence-based research, it is essential to perform a broad search that includes as many studies as possible that meet the inclusion and exclusion criteria. Typically, the three bibliographic databases Medline, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) are used. In domestic studies, the Korean databases KoreaMed, KMBASE, and RISS4U may be included. Effort is required to identify not only published studies but also abstracts, ongoing studies, and studies awaiting publication. Among the studies retrieved in the search, the researchers remove duplicate studies, select studies that meet the inclusion/exclusion criteria based on the abstracts, and then make the final selection of studies based on their full text. In order to maintain transparency and objectivity throughout this process, study selection is conducted independently by at least two investigators. When opinions are inconsistent, the disagreement is resolved via debate or by a third reviewer. The methods for this process also need to be planned in advance. It is essential to ensure the reproducibility of the literature selection process [ 25 ].

Quality of evidence

However well planned the systematic review or meta-analysis is, if the quality of evidence in the studies is low, the quality of the meta-analysis decreases and incorrect results can be obtained [ 26 ]. Even when using randomized studies with a high quality of evidence, evaluating the quality of evidence precisely helps determine the strength of recommendations in the meta-analysis. One method of evaluating the quality of evidence in non-randomized studies is the Newcastle-Ottawa Scale, provided by the Ottawa Hospital Research Institute 1) . However, we are mostly focusing on meta-analyses that use randomized studies.

If the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) system ( http://www.gradeworkinggroup.org/ ) is used, the quality of evidence is evaluated on the basis of the study limitations, inaccuracies, incompleteness of outcome data, indirectness of evidence, and risk of publication bias, and this is used to determine the strength of recommendations [ 27 ]. As shown in Table 1 , the study limitations are evaluated using the “risk of bias” method proposed by Cochrane 2) . This method classifies bias in randomized studies as “low,” “high,” or “unclear” on the basis of the presence or absence of six processes (random sequence generation, allocation concealment, blinding participants or investigators, incomplete outcome data, selective reporting, and other biases) [ 28 ].

Table 1: The Cochrane Collaboration’s Tool for Assessing the Risk of Bias [ 28 ]

Data extraction

Two different investigators extract data based on the objectives and form of the study; thereafter, the extracted data are reviewed. Since the size and format of each variable are different, the size and format of the outcomes are also different, and slight changes may be required when combining the data [ 29 ]. If there are differences in the size and format of the outcome variables that cause difficulties combining the data, such as the use of different evaluation instruments or different evaluation timepoints, the analysis may be limited to a systematic review. The investigators resolve differences of opinion by debate, and if they fail to reach a consensus, a third reviewer is consulted.

Data Analysis

The aim of a meta-analysis is to derive a conclusion with greater power and accuracy than could be achieved in the individual studies. Therefore, before analysis, it is crucial to evaluate the direction of effect, size of effect, homogeneity of effects among studies, and strength of evidence [ 30 ]. Thereafter, the data are reviewed qualitatively and quantitatively. If it is determined that the different research outcomes cannot be combined, all the results and characteristics of the individual studies are displayed in a table or in descriptive form; this is referred to as a qualitative review. A meta-analysis is a quantitative review, in which the clinical effectiveness is evaluated by calculating the weighted pooled estimate for the interventions in at least two separate studies.

The pooled estimate is the outcome of the meta-analysis, and is typically explained using a forest plot ( Figs. 3 and 4 ). The black squares in the forest plot are the odds ratios (ORs) and 95% confidence intervals in each study. The area of each square represents the weight given to that study in the meta-analysis. The black diamond represents the OR and 95% confidence interval calculated across all the included studies. The bold vertical line represents a lack of therapeutic effect (OR = 1); if the confidence interval includes OR = 1, it means no significant difference was found between the treatment and control groups.
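To make the forest plot layout concrete, here is a minimal matplotlib sketch; the odds ratios and confidence intervals below are invented for illustration:

```python
# Sketch: a minimal forest plot. ORs and CIs are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

labels = ["Study A", "Study B", "Study C", "Pooled"]
odds_ratio = np.array([0.80, 1.30, 0.95, 1.02])
ci_low     = np.array([0.55, 0.90, 0.70, 0.85])
ci_high    = np.array([1.15, 1.85, 1.30, 1.25])
y = np.arange(len(labels))[::-1]                      # top-to-bottom order

fig, ax = plt.subplots()
ax.errorbar(odds_ratio, y,
            xerr=[odds_ratio - ci_low, ci_high - odds_ratio],
            fmt="s", color="black")                   # squares with CI whiskers
ax.axvline(1.0, linestyle="--")                       # OR = 1: no treatment effect
ax.set_xscale("log")                                  # symmetric around OR = 1
ax.set_yticks(y)
ax.set_yticklabels(labels)
ax.set_xlabel("Odds ratio (log scale)")
plt.show()
```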

Fig. 3: Forest plot analyzed by two different models using the same data. (A) Fixed-effect model. (B) Random-effect model. The figure depicts individual trials as filled squares, with area reflecting relative sample size, and the solid line as the 95% confidence interval of the difference. The diamond shape indicates the pooled estimate and uncertainty for the combined effect. The vertical line indicates no treatment effect (OR = 1); if a confidence interval includes 1, the result shows no evidence of a difference between the treatment and control groups.

Fig. 4: Forest plot representing homogeneous data.

Dichotomous variables and continuous variables

In data analysis, outcome variables can be considered broadly in terms of dichotomous variables and continuous variables. When combining data from continuous variables, the mean difference (MD) and standardized mean difference (SMD) are used ( Table 2 ).

Table 2: Summary of Meta-analysis Methods Available in RevMan [ 28 ]

The MD is the absolute difference in mean values between the groups, and the SMD is the mean difference between groups divided by the standard deviation. When results are presented in the same units, the MD can be used, but when results are presented in different units, the SMD should be used. When the MD is used, the combined units must be shown. A value of “0” for the MD or SMD indicates that the effects of the new treatment method and the existing treatment method are the same. A value lower than “0” means the new treatment method is less effective than the existing method, and a value greater than “0” means the new treatment is more effective than the existing method.
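In standard notation (the text above describes these quantities in words only), with treatment and control group means $\bar{x}_T$ and $\bar{x}_C$ and pooled standard deviation $s_p$:

```latex
\mathrm{MD} = \bar{x}_T - \bar{x}_C,
\qquad
\mathrm{SMD} = \frac{\bar{x}_T - \bar{x}_C}{s_p}
```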

When combining data for dichotomous variables, the OR, risk ratio (RR), or risk difference (RD) can be used. The RR and RD can be used for RCTs, quasi-experimental studies, or cohort studies, and the OR can be used for other case-control studies or cross-sectional studies. However, because the OR is difficult to interpret, using the RR and RD, if possible, is recommended. If the outcome variable is a dichotomous variable, it can be presented as the number needed to treat (NNT), which is the minimum number of patients who need to be treated in the intervention group, compared to the control group, for a given event to occur in at least one patient. Based on Table 3 , in an RCT, if x is the probability of the event occurring in the control group and y is the probability of the event occurring in the intervention group, then x = c/(c + d), y = a/(a + b), and the absolute risk reduction (ARR) = x − y. NNT can be obtained as the reciprocal, 1/ARR.
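The NNT arithmetic above fits in a few lines of Python; the 2 × 2 counts below are hypothetical:

```python
# Sketch: ARR and NNT from a 2x2 table, following the notation of Table 3
# (a, b: events / non-events with intervention; c, d: with control).
def number_needed_to_treat(a: int, b: int, c: int, d: int) -> float:
    x = c / (c + d)        # event probability in the control group
    y = a / (a + b)        # event probability in the intervention group
    arr = x - y            # absolute risk reduction
    return 1 / arr         # NNT is the reciprocal of the ARR

# Hypothetical counts: 10/100 events with treatment vs 20/100 with control.
print(number_needed_to_treat(a=10, b=90, c=20, d=80))   # -> 10.0
```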

Table 3: Calculation of the Number Needed to Treat from a Dichotomous Table

Fixed-effect models and random-effect models

In order to analyze effect size, two types of models can be used: a fixed-effect model or a random-effect model. A fixed-effect model assumes that the effect of treatment is the same, and that variation between results in different studies is due to random error. Thus, a fixed-effect model can be used when the studies are considered to have the same design and methodology, or when the variability in results within a study is small, and the variance is thought to be due to random error. Three common methods are used for weighted estimation in a fixed-effect model: 1) inverse variance-weighted estimation 3) , 2) Mantel-Haenszel estimation 4) , and 3) Peto estimation 5) .

A random-effect model assumes heterogeneity between the studies being combined, and these models are used when the studies are assumed to be different, even if a heterogeneity test does not show a significant result. Unlike a fixed-effect model, a random-effect model assumes that the size of the effect of treatment differs among studies. Thus, differences in variation among studies are thought to be due not only to random error but also to between-study variability in results. Therefore, weight does not decrease greatly for studies with a small number of patients. Among methods for weighted estimation in a random-effect model, the DerSimonian and Laird method 6) , the simplest method, is mostly used for dichotomous variables, while inverse variance-weighted estimation is used for continuous variables, as with fixed-effect models. These four methods are all used in Review Manager software (The Cochrane Collaboration, UK), and are described in a study by Deeks et al. [ 31 ] ( Table 2 ). However, when the number of studies included in the analysis is less than 10, the Hartung-Knapp-Sidik-Jonkman method 7) can better reduce the risk of type 1 error than the DerSimonian and Laird method [ 32 ].
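As a textbook-style sketch of the two weighting schemes (not RevMan's implementation), the following Python code pools hypothetical log odds ratios with inverse-variance fixed-effect weights and with DerSimonian and Laird random-effect weights:

```python
# Sketch: fixed-effect vs DerSimonian-Laird random-effect pooling.
# Effect sizes (log ORs) and within-study variances are hypothetical.
import numpy as np

yi = np.array([0.10, 0.35, -0.05, 0.25])     # study effect sizes (log OR)
vi = np.array([0.04, 0.09, 0.02, 0.06])      # within-study variances

# Fixed effect: weight = inverse of within-study variance.
w = 1 / vi
pooled_fixed = np.sum(w * yi) / np.sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = np.sum(w * (yi - pooled_fixed) ** 2)     # Cochrane's Q
df = len(yi) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random effect: tau^2 is added to every study's variance, so small
# studies lose less weight relative to large ones.
w_re = 1 / (vi + tau2)
pooled_random = np.sum(w_re * yi) / np.sum(w_re)

print(f"fixed = {pooled_fixed:.3f}, random = {pooled_random:.3f}")
```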

Fig. 3 shows the results of analyzing outcome data using a fixed-effect model (A) and a random-effect model (B). As shown in Fig. 3 , while the results from large studies are weighted more heavily in the fixed-effect model, studies are given relatively similar weights irrespective of study size in the random-effect model. Although identical data were being analyzed, as shown in Fig. 3 , the significant result in the fixed-effect model was no longer significant in the random-effect model. One representative example of the small study effect in a random-effect model is the meta-analysis by Li et al. [ 33 ]. In a large-scale study, intravenous injection of magnesium was unrelated to acute myocardial infarction, but in the random-effect model, which included numerous small studies, the small study effect resulted in an association being found between intravenous injection of magnesium and myocardial infarction. This small study effect can be controlled for by using a sensitivity analysis, which is performed to examine the contribution of each of the included studies to the final meta-analysis result. In particular, when heterogeneity is suspected in the study methods or results, by changing certain data or analytical methods, this method makes it possible to verify whether the changes affect the robustness of the results, and to examine the causes of such effects [ 34 ].

Heterogeneity

A homogeneity test examines whether the variation in effect sizes calculated from several studies is greater than would be expected from sampling error alone; in other words, it tests whether the effect sizes calculated from several studies are the same. Three approaches can be used: 1) the forest plot, 2) Cochran's Q test (chi-squared), and 3) the Higgins I 2 statistic. In the forest plot, as shown in Fig. 4 , greater overlap between the confidence intervals indicates greater homogeneity. For the Q statistic, when the P value of the chi-squared test, calculated from the forest plot in Fig. 4 , is less than 0.1, the studies are considered statistically heterogeneous and a random-effect model can be used. Finally, the I 2 statistic can be used [ 35 ].

I 2 , calculated as I 2 = 100% × (Q − df)/Q, where Q is Cochran's heterogeneity statistic and df is the degrees of freedom (the number of studies minus one), takes a value between 0% and 100%. A value less than 25% is considered to indicate strong homogeneity, a value of about 50% is moderate, and a value greater than 75% indicates strong heterogeneity.
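Under those definitions, Q and I 2 can be computed directly; the sketch below reuses the hypothetical effects and variances from the earlier examples.

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the Higgins I^2 statistic (as a percentage)."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    # I^2 is truncated at zero when Q falls below its degrees of freedom.
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([-0.30, -0.10, -0.25], [0.01, 0.04, 0.02])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```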

Even when the data cannot be shown to be homogeneous, a fixed-effect model can be used, ignoring the heterogeneity, or all the study results can be presented individually, without combining them. However, in many cases, a random-effect model is applied, as described above, and a subgroup analysis or meta-regression analysis is performed to explain the heterogeneity. In a subgroup analysis, the data are divided into subgroups that are expected to be homogeneous, and these subgroups are analyzed separately. This needs to be planned in the predetermined protocol before starting the meta-analysis. A meta-regression analysis is similar to a normal regression analysis, except that the heterogeneity between studies is modeled. This involves regressing the study-level effect estimates on covariates measured at the study level, and so it is usually not attempted when the number of studies is less than 10. Both univariate and multivariate regression analyses can be considered.
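As a rough illustration, a simplified fixed-effect meta-regression can be run as weighted least squares with inverse-variance weights; the effect sizes, variances, and the mean-age covariate below are all hypothetical. A full random-effects meta-regression would additionally estimate the between-study variance.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study data: effect size, variance, and a covariate.
effects = np.array([-0.30, -0.10, -0.25, -0.05, -0.20])
variances = np.array([0.01, 0.04, 0.02, 0.05, 0.03])
mean_age = np.array([45.0, 62.0, 50.0, 68.0, 55.0])  # study-level covariate

# Regress effect sizes on the covariate, weighting each study
# by the inverse of its variance.
X = sm.add_constant(mean_age)
model = sm.WLS(effects, X, weights=1.0 / variances).fit()
print(model.params)   # intercept and slope for the covariate
print(model.pvalues)  # whether the covariate helps explain heterogeneity
```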

Publication bias

Publication bias is the most common type of reporting bias in meta-analyses. It refers to the distortion of meta-analysis outcomes caused by the higher likelihood that statistically significant studies are published than non-significant ones. To test for the presence of publication bias, first, a funnel plot can be used ( Fig. 5 ). Studies are plotted on a scatter plot with the effect size on the x-axis and precision or total sample size on the y-axis. If the points form an inverted funnel shape, with a broad base that narrows towards the top of the plot, this indicates the absence of publication bias ( Fig. 5A ) [ 29 , 36 ]. On the other hand, if the plot is asymmetric, with no points on one side of the graph, publication bias can be suspected ( Fig. 5B ). Second, to test for publication bias statistically, Begg and Mazumdar's rank correlation test8) [ 37 ] or Egger's test9) [ 29 ] can be used. If publication bias is detected, the trim-and-fill method10) can be used to correct for it [ 38 ]. Fig. 6 displays results that showed publication bias on Egger's test and were then corrected using the trim-and-fill method in Comprehensive Meta-Analysis software (Biostat, USA).
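Following the description in footnote 9, Egger's test can be sketched as a simple regression: the standard normal deviate of each effect (the effect divided by its standard error) is regressed on precision (1/standard error), and an intercept that differs significantly from zero suggests funnel-plot asymmetry. The data below are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def eggers_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry."""
    effects = np.asarray(effects)
    se = np.asarray(std_errors)
    snd = effects / se        # standard normal deviates
    precision = 1.0 / se
    fit = sm.OLS(snd, sm.add_constant(precision)).fit()
    # fit.params[0] is the intercept; a non-zero intercept indicates asymmetry.
    return fit.params[0], fit.pvalues[0]

intercept, p = eggers_test([-0.5, -0.4, -0.3, -0.2, -0.1],
                           [0.30, 0.25, 0.20, 0.15, 0.10])
print(f"Egger intercept = {intercept:.2f}, P = {p:.3f}")
```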

Fig. 5. Funnel plot showing the effect size on the x-axis and the sample size on the y-axis as a scatter plot. (A) Funnel plot without publication bias: the individual points spread broadly at the bottom and narrow towards the top. (B) Funnel plot with publication bias: the individual points are located asymmetrically.
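A funnel plot of this kind can be drawn with a few lines of matplotlib; the effect sizes and standard errors below are hypothetical. This variant plots the standard error on an inverted y-axis, so large, precise studies sit at the top.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical per-study effect sizes and standard errors.
effects = np.array([-0.45, -0.35, -0.30, -0.28, -0.25, -0.22, -0.20])
std_errors = np.array([0.30, 0.25, 0.18, 0.15, 0.12, 0.08, 0.05])

plt.scatter(effects, std_errors)
# Dashed line at the inverse variance-weighted pooled estimate.
plt.axvline(np.average(effects, weights=1 / std_errors**2), linestyle="--")
plt.gca().invert_yaxis()        # precise (low-SE) studies at the top
plt.xlabel("Effect size (log risk ratio)")
plt.ylabel("Standard error")
plt.title("Funnel plot (hypothetical data)")
plt.show()
```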

Fig. 6. Funnel plot adjusted using the trim-and-fill method. White circles: comparisons included in the original analysis. Black circles: comparisons imputed by the trim-and-fill method. White diamond: pooled observed log risk ratio. Black diamond: pooled log risk ratio after imputation.

Result Presentation

When reporting the results of a systematic review or meta-analysis, the analytical content and methods should be described in detail. First, a flowchart is presented showing the literature search and selection process according to the inclusion/exclusion criteria. Second, a table is presented with the characteristics of the included studies. A table should also be included with information on the quality of evidence, such as the GRADE assessment ( Table 4 ). Third, the results of the data analysis are shown in a forest plot and a funnel plot. Fourth, if the outcomes are dichotomous, the NNT values can be reported, as described above.

The GRADE Evidence Quality for Each Outcome

N: number of studies, ROB: risk of bias, PON: postoperative nausea, POV: postoperative vomiting, PONV: postoperative nausea and vomiting, CI: confidence interval, RR: risk ratio, AR: absolute risk.

When Review Manager software (The Cochrane Collaboration, UK) is used for the analysis, two types of P values are given. The first is the P value from the z-test, which tests the null hypothesis that the intervention has no effect. The second is the P value from the chi-squared test, which tests the null hypothesis of a lack of heterogeneity. The statistical result for the intervention effect, which is generally considered the most important result in a meta-analysis, is the z-test P value.
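Both P values can be reproduced from the pooled statistics. The sketch below assumes the pooled estimate, its standard error, Q, and the degrees of freedom are already available (for example, from the helpers above); the inputs are hypothetical.

```python
from scipy import stats

def meta_p_values(pooled, se, q, df):
    """Z-test P value for the pooled effect and chi-squared P value for Q."""
    z = pooled / se
    p_effect = 2 * stats.norm.sf(abs(z))    # H0: the intervention has no effect
    p_heterogeneity = stats.chi2.sf(q, df)  # H0: the studies are homogeneous
    return p_effect, p_heterogeneity

p_eff, p_het = meta_p_values(pooled=-0.25, se=0.08, q=2.1, df=2)
print(f"effect P = {p_eff:.4f}, heterogeneity P = {p_het:.4f}")
```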

A common mistake when reporting results is to state, when the z-test P value is greater than 0.05, that there was "no statistical significance" or "no difference." When evaluating statistical significance in a meta-analysis, a P value lower than 0.05 can be interpreted as "a significant difference in the effects of the two treatment methods." However, a non-significant P value can occur whether or not there is a difference between the two treatment methods. In such a situation, it is better to state that "there was no strong evidence for an effect," and to present the P value and confidence interval. Another common mistake is to assume that a smaller P value indicates a larger effect. In meta-analyses of large-scale studies, the P value is strongly affected by the number of studies and patients included, rather than by the magnitude of the effect; therefore, care should be taken when interpreting the results of a meta-analysis.

When performing a systematic review or meta-analysis, if the quality of the included studies is not properly evaluated or if the proper methodology is not strictly applied, the results can be biased and the conclusions incorrect. However, when systematic reviews and meta-analyses are properly implemented, they can yield powerful results that would usually only be achievable with large-scale RCTs, which are difficult to perform as individual studies. As our understanding of evidence-based medicine increases and its importance is better appreciated, the number of systematic reviews and meta-analyses will keep increasing. However, indiscriminate acceptance of the results of all these meta-analyses can be dangerous; we therefore recommend that their results be read critically, on the basis of an accurate understanding of the methodology.

1) http://www.ohri.ca .

2) http://methods.cochrane.org/bias/assessing-risk-bias-included-studies .

3) The inverse variance-weighted estimation method is useful if the number of studies is small with large sample sizes.

4) The Mantel-Haenszel estimation method is useful if the number of studies is large with small sample sizes.

5) The Peto estimation method is useful if the event rate is low or one of the two groups shows zero incidence.

6) The DerSimonian and Laird method is the simplest and most popular statistical method used in Review Manager and Comprehensive Meta-Analysis software.

7) An alternative random-effect model meta-analysis method that has more adequate error rates than the common DerSimonian and Laird method, especially when the number of studies is small. However, even with the Hartung-Knapp-Sidik-Jonkman method, extra caution is needed when there are fewer than five studies of very unequal sizes.

8) The Begg and Mazumdar rank correlation test uses the correlation between the ranks of effect sizes and the ranks of their variances [ 37 ].

9) The degree of funnel plot asymmetry as measured by the intercept from the regression of standard normal deviates against precision [ 29 ].

10) If there are more small studies on one side of the funnel plot, suppression of studies on the other side is expected. Trimming yields the adjusted effect size, and the variance of the effects is corrected by adding the original studies back into the analysis together with a mirror image of each study.
