
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

  • Matthew J Page , senior research fellow 1 ,
  • Joanne E McKenzie , associate professor 1 ,
  • Patrick M Bossuyt , professor 2 ,
  • Isabelle Boutron , professor 3 ,
  • Tammy C Hoffmann , professor 4 ,
  • Cynthia D Mulrow , professor 5 ,
  • Larissa Shamseer , doctoral student 6 ,
  • Jennifer M Tetzlaff , research product specialist 7 ,
  • Elie A Akl , professor 8 ,
  • Sue E Brennan , senior research fellow 1 ,
  • Roger Chou , professor 9 ,
  • Julie Glanville , associate director 10 ,
  • Jeremy M Grimshaw , professor 11 ,
  • Asbjørn Hróbjartsson , professor 12 ,
  • Manoj M Lalu , associate scientist and assistant professor 13 ,
  • Tianjing Li , associate professor 14 ,
  • Elizabeth W Loder , professor 15 ,
  • Evan Mayo-Wilson , associate professor 16 ,
  • Steve McDonald , senior research fellow 1 ,
  • Luke A McGuinness , research associate 17 ,
  • Lesley A Stewart , professor and director 18 ,
  • James Thomas , professor 19 ,
  • Andrea C Tricco , scientist and associate professor 20 ,
  • Vivian A Welch , associate professor 21 ,
  • Penny Whiting , associate professor 17 ,
  • David Moher , director and professor 22
  • 1 School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
  • 2 Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands
  • 3 Université de Paris, Centre of Epidemiology and Statistics (CRESS), Inserm, F 75004 Paris, France
  • 4 Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
  • 5 University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Annals of Internal Medicine
  • 6 Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 7 Evidence Partners, Ottawa, Canada
  • 8 Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  • 9 Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
  • 10 York Health Economics Consortium (YHEC Ltd), University of York, York, UK
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada
  • 12 Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Open Patient data Exploratory Network (OPEN), Odense University Hospital, Odense, Denmark
  • 13 Department of Anesthesiology and Pain Medicine, The Ottawa Hospital, Ottawa, Canada; Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada; Regenerative Medicine Program, Ottawa Hospital Research Institute, Ottawa, Canada
  • 14 Department of Ophthalmology, School of Medicine, University of Colorado Denver, Denver, Colorado, United States; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
  • 15 Division of Headache, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA; Head of Research, The BMJ , London, UK
  • 16 Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, Indiana, USA
  • 17 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  • 18 Centre for Reviews and Dissemination, University of York, York, UK
  • 19 EPPI-Centre, UCL Social Research Institute, University College London, London, UK
  • 20 Li Ka Shing Knowledge Institute of St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Epidemiology Division of the Dalla Lana School of Public Health and the Institute of Health Management, Policy, and Evaluation, University of Toronto, Toronto, Canada; Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
  • 21 Methods Centre, Bruyère Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 22 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • Correspondence to: M J Page matthew.page@monash.edu
  • Accepted 4 January 2021

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur. Systematic reviews therefore generate various types of knowledge for different users of reviews (such as patients, healthcare providers, researchers, and policy makers). 1 2 To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). Up-to-date reporting guidance facilitates authors achieving this. 3

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (hereafter referred to as PRISMA 2009) 4 5 6 7 8 9 10 is a reporting guideline designed to address poor reporting of systematic reviews. 11 The PRISMA 2009 statement comprised a checklist of 27 items recommended for reporting in systematic reviews and an “explanation and elaboration” paper 12 13 14 15 16 providing additional reporting guidance for each item, along with exemplars of reporting. The recommendations have been widely endorsed and adopted, as evidenced by the statement’s co-publication in multiple journals, citation in over 60 000 reports (Scopus, August 2020), endorsement from almost 200 journals and systematic review organisations, and adoption in various disciplines. Evidence from observational studies suggests that use of the PRISMA 2009 statement is associated with more complete reporting of systematic reviews, 17 18 19 20 although more could be done to improve adherence to the guideline. 21

Many innovations in the conduct of systematic reviews have occurred since publication of the PRISMA 2009 statement. For example, technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence, 22 23 24 methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate, 25 26 27 and new methods have been developed to assess the risk of bias in results of included studies. 28 29 Evidence on sources of bias in systematic reviews has accrued, culminating in the development of new tools to appraise the conduct of systematic reviews. 30 31 Terminology used to describe particular review processes has also evolved, as in the shift from assessing “quality” to assessing “certainty” in the body of evidence. 32 In addition, the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols, 33 34 disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. Capturing these advances in the reporting of systematic reviews necessitated an update to the PRISMA 2009 statement.

Summary points

To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found

The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies

The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders

Development of PRISMA 2020

A complete description of the methods used to develop PRISMA 2020 is available elsewhere. 35 We identified PRISMA 2009 items that were often reported incompletely by examining the results of studies investigating the transparency of reporting of published reviews. 17 21 36 37 We identified possible modifications to the PRISMA 2009 statement by reviewing 60 documents providing reporting guidance for systematic reviews (including reporting guidelines, handbooks, tools, and meta-research studies). 38 These reviews of the literature were used to inform the content of a survey with suggested possible modifications to the 27 items in PRISMA 2009 and possible additional items. Respondents were asked whether they believed we should keep each PRISMA 2009 item as is, modify it, or remove it, and whether we should add each additional item. Systematic review methodologists and journal editors were invited to complete the online survey (110 of 220 invited responded). We discussed proposed content and wording of the PRISMA 2020 statement, as informed by the review and survey results, at a 21-member, two-day, in-person meeting in September 2018 in Edinburgh, Scotland. Throughout 2019 and 2020, we circulated an initial draft and five revisions of the checklist and explanation and elaboration paper to co-authors for feedback. In April 2020, we invited 22 systematic reviewers who had expressed interest in providing feedback on the PRISMA 2020 checklist to share their views (via an online survey) on the layout and terminology used in a preliminary version of the checklist. Feedback was received from 15 individuals and considered by the first author, and any revisions deemed necessary were incorporated before the final version was approved and endorsed by all co-authors.

The PRISMA 2020 statement

Scope of the guideline

The PRISMA 2020 statement has been designed primarily for systematic reviews of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. However, the checklist items are applicable to reports of systematic reviews evaluating other interventions (such as social or educational interventions), and many items are applicable to systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis). PRISMA 2020 is intended for use in systematic reviews that include synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or do not include synthesis (for example, because only one eligible study is identified). The PRISMA 2020 items are relevant for mixed-methods systematic reviews (which include quantitative and qualitative studies), but reporting guidelines addressing the presentation and synthesis of qualitative data should also be consulted. 39 40 PRISMA 2020 can be used for original systematic reviews, updated systematic reviews, or continually updated (“living”) systematic reviews. However, for updated and living systematic reviews, there may be some additional considerations that need to be addressed. Where there is relevant content from other reporting guidelines, we reference these guidelines within the items in the explanation and elaboration paper 41 (such as PRISMA-Search 42 in items 6 and 7, Synthesis without meta-analysis (SWiM) reporting guideline 27 in item 13d). Box 1 includes a glossary of terms used throughout the PRISMA 2020 statement.

Glossary of terms

Systematic review —A review that uses explicit, systematic methods to collate and synthesise findings of studies that address a clearly formulated question 43

Statistical synthesis —The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan 25 for a description of each method)

Meta-analysis of effect estimates —A statistical technique used to synthesise results when study effect estimates and their variances are available, yielding a quantitative summary of results 25

Outcome —An event or measurement collected for participants in a study (such as quality of life, mortality)

Result —The combination of a point estimate (such as a mean difference, risk ratio, or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome

Report —A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information

Record —The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study —An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses
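For readers unfamiliar with the synthesis methods defined above, two of them can be sketched in a few lines of code. The following is a minimal illustration with hypothetical effect estimates, not a substitute for a dedicated meta-analysis package: a fixed-effect (inverse-variance) meta-analysis of effect estimates, and vote counting based on direction of effect.

```python
import math

def fixed_effect_meta(estimates, variances):
    """Fixed-effect (inverse-variance) meta-analysis of effect estimates.

    Each study supplies a point estimate (e.g. a mean difference or a
    log risk ratio) and its variance; studies are weighted by 1/variance.
    Returns the pooled estimate and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

def vote_count(directions):
    """Vote counting based on direction of effect: the proportion of
    studies whose effect favours the intervention (True) over those that
    do not (False). This discards information on magnitude and precision,
    which is why it is generally a method of last resort."""
    return sum(directions) / len(directions)

# Three hypothetical studies reporting mean differences and their variances
pooled, ci = fixed_effect_meta([0.30, 0.10, 0.25], [0.04, 0.09, 0.0225])
print(f"Pooled estimate {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
print(f"Proportion of studies favouring intervention: {vote_count([True, True, False]):.2f}")
```

A random-effects model, which adds a between-study variance component to each weight, is often more appropriate when heterogeneity is expected; see McKenzie and Brennan 25 for guidance on when each synthesis method applies.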

PRISMA 2020 is not intended to guide systematic review conduct, for which comprehensive resources are available. 43 44 45 46 However, familiarity with PRISMA 2020 is useful when planning and conducting systematic reviews to ensure that all recommended information is captured. PRISMA 2020 should not be used to assess the conduct or methodological quality of systematic reviews; other tools exist for this purpose. 30 31 Furthermore, PRISMA 2020 is not intended to inform the reporting of systematic review protocols, for which a separate statement is available (PRISMA for Protocols (PRISMA-P) 2015 statement 47 48 ). Finally, extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses, 49 meta-analyses of individual participant data, 50 systematic reviews of harms, 51 systematic reviews of diagnostic test accuracy studies, 52 and scoping reviews 53 ; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.

How to use PRISMA 2020

The PRISMA 2020 statement (including the checklists, explanation and elaboration, and flow diagram) replaces the PRISMA 2009 statement, which should no longer be used. Box 2 summarises noteworthy changes from the PRISMA 2009 statement. The PRISMA 2020 checklist includes seven sections with 27 items, some of which include sub-items (table 1). A checklist for journal and conference abstracts for systematic reviews is included in PRISMA 2020. This abstract checklist is an update of the 2013 PRISMA for Abstracts statement, 54 reflecting new and modified content in PRISMA 2020 (table 2). A template PRISMA flow diagram is provided, which can be modified depending on whether the systematic review is original or updated (fig 1).

Noteworthy changes to the PRISMA 2009 statement

Inclusion of the abstract reporting checklist within PRISMA 2020 (see item #2 and table 2).

Movement of the ‘Protocol and registration’ item from the start of the Methods section of the checklist to a new Other section, with addition of a sub-item recommending authors describe amendments to information provided at registration or in the protocol (see item #24a-24c).

Modification of the ‘Search’ item to recommend authors present full search strategies for all databases, registers and websites searched, not just at least one database (see item #7).

Modification of the ‘Study selection’ item in the Methods section to emphasise the reporting of how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process (see item #8).

Addition of a sub-item to the ‘Data items’ item recommending authors report how outcomes were defined, which results were sought, and methods for selecting a subset of results from included studies (see item #10a).

Splitting of the ‘Synthesis of results’ item in the Methods section into six sub-items recommending authors describe: the processes used to decide which studies were eligible for each synthesis; any methods required to prepare the data for synthesis; any methods used to tabulate or visually display results of individual studies and syntheses; any methods used to synthesise results; any methods used to explore possible causes of heterogeneity among study results (such as subgroup analysis, meta-regression); and any sensitivity analyses used to assess robustness of the synthesised results (see item #13a-13f).

Addition of a sub-item to the ‘Study selection’ item in the Results section recommending authors cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (see item #16b).

Splitting of the ‘Synthesis of results’ item in the Results section into four sub-items recommending authors: briefly summarise the characteristics and risk of bias among studies contributing to the synthesis; present results of all statistical syntheses conducted; present results of any investigations of possible causes of heterogeneity among study results; and present results of any sensitivity analyses (see item #20a-20d).

Addition of new items recommending authors report methods for and results of an assessment of certainty (or confidence) in the body of evidence for an outcome (see items #15 and #22).

Addition of a new item recommending authors declare any competing interests (see item #26).

Addition of a new item recommending authors indicate whether data, analytic code and other materials used in the review are publicly available and if so, where they can be found (see item #27).

Table 1. PRISMA 2020 item checklist

Table 2. PRISMA 2020 for Abstracts checklist*

Fig 1

PRISMA 2020 flow diagram template for systematic reviews. The new design is adapted from flow diagrams proposed by Boers, 55 Mayo-Wilson et al. 56 and Stovold et al. 57 The boxes in grey should only be completed if applicable; otherwise they should be removed from the flow diagram. Note that a “report” could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report or any other document providing relevant information.

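The arithmetic behind the flow diagram is simple but easy to get wrong when numbers are transcribed by hand. As a hypothetical sketch (the function name and example counts are invented for illustration), the box-by-box numbers for the databases-and-registers arm of the diagram can be derived from a handful of inputs and checked for internal consistency:

```python
def flow_counts(identified, duplicates_removed, records_excluded,
                reports_not_retrieved, reports_excluded_by_reason):
    """Derive the box-by-box counts for the databases-and-registers arm
    of a PRISMA 2020 flow diagram and check they are internally
    consistent. `reports_excluded_by_reason` maps each exclusion reason
    recorded at full-text assessment to a count."""
    screened = identified - duplicates_removed
    sought = screened - records_excluded        # reports sought for retrieval
    assessed = sought - reports_not_retrieved   # reports assessed for eligibility
    included = assessed - sum(reports_excluded_by_reason.values())
    if min(screened, sought, assessed, included) < 0:
        raise ValueError("counts are inconsistent: a box would be negative")
    return {
        "records identified": identified,
        "duplicate records removed": duplicates_removed,
        "records screened": screened,
        "records excluded": records_excluded,
        "reports sought for retrieval": sought,
        "reports not retrieved": reports_not_retrieved,
        "reports assessed for eligibility": assessed,
        "reports excluded, by reason": dict(reports_excluded_by_reason),
        "reports of included studies": included,
    }

# Hypothetical review: 1250 records identified, 320 duplicates removed,
# 845 records excluded on screening, 12 reports not retrieved, and 48
# reports excluded at full-text assessment
boxes = flow_counts(1250, 320, 845, 12,
                    {"ineligible population": 30, "ineligible study design": 18})
print(boxes["reports of included studies"])  # 25
```

One nuance the sketch glosses over: because a single study may have multiple reports, the actual diagram reports the number of included studies separately from the number of included reports.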

We recommend authors refer to PRISMA 2020 early in the writing process, because prospective consideration of the items may help to ensure that all the items are addressed. To help keep track of which items have been reported, the PRISMA statement website ( http://www.prisma-statement.org/ ) includes fillable templates of the checklists to download and complete (also available in the data supplement on bmj.com). We have also created a web application that allows users to complete the checklist via a user-friendly interface 58 (available at https://prisma.shinyapps.io/checklist/ and adapted from the Transparency Checklist app 59 ). The completed checklist can be exported to Word or PDF. Editable templates of the flow diagram can also be downloaded from the PRISMA statement website.
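The fillable templates work well for a single review; teams managing several reviews might prefer to track completion programmatically. A hypothetical sketch (the class and example locations are invented for illustration; item numbering follows the 27-item checklist in table 1):

```python
class ChecklistTracker:
    """Track which PRISMA 2020 checklist items have been reported, and where.

    Items are identified by strings so that sub-items (such as "13a" to
    "13f") can be tracked alongside top-level items.
    """
    def __init__(self, items):
        self._location = dict.fromkeys(items)  # item -> where it is reported

    def report(self, item, location):
        if item not in self._location:
            raise KeyError(f"unknown checklist item: {item!r}")
        self._location[item] = location        # e.g. "pp 4-5" or "appendix 1"

    def missing(self):
        return [item for item, loc in self._location.items() if loc is None]

# Top-level items only, for brevity; a full tracker would enumerate sub-items too
tracker = ChecklistTracker([str(n) for n in range(1, 28)])
tracker.report("7", "appendix 1")  # full search strategies for all sources
tracker.report("27", "data availability statement (repository link)")
print(len(tracker.missing()))      # 25 items still to address
```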

We have prepared an updated explanation and elaboration paper, in which we explain why reporting of each item is recommended and present bullet points that detail the reporting recommendations (which we refer to as elements). 41 The bullet-point structure is new to PRISMA 2020 and has been adopted to facilitate implementation of the guidance. 60 61 An expanded checklist, which comprises an abridged version of the elements presented in the explanation and elaboration paper, with references and some examples removed, is available in the data supplement on bmj.com. Consulting the explanation and elaboration paper is recommended if further clarity or information is required.

Journals and publishers might impose word and section limits, and limits on the number of tables and figures allowed in the main report. In such cases, if the relevant information for some items already appears in a publicly accessible review protocol, referring to the protocol may suffice. Alternatively, placing detailed descriptions of the methods used or additional results (such as for less critical outcomes) in supplementary files is recommended. Ideally, supplementary files should be deposited to a general-purpose or institutional open-access repository that provides free and permanent access to the material (such as Open Science Framework, Dryad, figshare). A reference or link to the additional information should be included in the main report. Finally, although PRISMA 2020 provides a template for where information might be located, the suggested location should not be seen as prescriptive; the guiding principle is to ensure the information is reported.

Use of PRISMA 2020 has the potential to benefit many stakeholders. Complete reporting allows readers to assess the appropriateness of the methods, and therefore the trustworthiness of the findings. Presenting and summarising characteristics of studies contributing to a synthesis allows healthcare providers and policy makers to evaluate the applicability of the findings to their setting. Describing the certainty in the body of evidence for an outcome and the implications of findings should help policy makers, managers, and other decision makers formulate appropriate recommendations for practice or policy. Complete reporting of all PRISMA 2020 items also facilitates replication and review updates, as well as inclusion of systematic reviews in overviews (of systematic reviews) and guidelines, so teams can leverage work that is already done and decrease research waste. 36 62 63

We updated the PRISMA 2009 statement by adapting the EQUATOR Network’s guidance for developing health research reporting guidelines. 64 We evaluated the reporting completeness of published systematic reviews, 17 21 36 37 reviewed the items included in other documents providing guidance for systematic reviews, 38 surveyed systematic review methodologists and journal editors for their views on how to revise the original PRISMA statement, 35 discussed the findings at an in-person meeting, and prepared this document through an iterative process. Our recommendations are informed by the reviews and survey conducted before the in-person meeting, theoretical considerations about which items facilitate replication and help users assess the risk of bias and applicability of systematic reviews, and co-authors’ experience with authoring and using systematic reviews.

Various strategies to increase the use of reporting guidelines and improve reporting have been proposed. They include educators introducing reporting guidelines into graduate curricula to promote good reporting habits of early career scientists 65 ; journal editors and regulators endorsing use of reporting guidelines 18 ; peer reviewers evaluating adherence to reporting guidelines 61 66 ; journals requiring authors to indicate where in their manuscript they have adhered to each reporting item 67 ; and authors using online writing tools that prompt complete reporting at the writing stage. 60 Multi-pronged interventions, in which more than one of these strategies is combined, may be more effective (such as completion of checklists coupled with editorial checks). 68 However, of 31 interventions proposed to increase adherence to reporting guidelines, the effects of only 11 have been evaluated, mostly in observational studies at high risk of bias due to confounding. 69 It is therefore unclear which strategies should be used. Future research might explore barriers and facilitators to the use of PRISMA 2020 by authors, editors, and peer reviewers; design interventions that address the identified barriers; and evaluate those interventions using randomised trials. To inform possible revisions to the guideline, it would also be valuable to conduct think-aloud studies 70 to understand how systematic reviewers interpret the items, and reliability studies to identify items for which interpretation varies across users.

We encourage readers to submit evidence that informs any of the recommendations in PRISMA 2020 (via the PRISMA statement website: http://www.prisma-statement.org/ ). To enhance accessibility of PRISMA 2020, several translations of the guideline are under way (see available translations at the PRISMA statement website). We encourage journal editors and publishers to raise awareness of PRISMA 2020 (for example, by referring to it in journal “Instructions to authors”), endorsing its use, advising editors and peer reviewers to evaluate submitted systematic reviews against the PRISMA 2020 checklists, and making changes to journal policies to accommodate the new reporting recommendations. We recommend existing PRISMA extensions 47 49 50 51 52 53 71 72 be updated to reflect PRISMA 2020 and advise developers of new PRISMA extensions to use PRISMA 2020 as the foundation document.

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders. Ultimately, we hope that uptake of the guideline will lead to more transparent, complete, and accurate reporting of systematic reviews, thus facilitating evidence based decision making.

Acknowledgments

We dedicate this paper to the late Douglas G Altman and Alessandro Liberati, whose contributions were fundamental to the development and implementation of the original PRISMA statement.

We thank the following contributors who completed the survey to inform discussions at the development meeting: Xavier Armoiry, Edoardo Aromataris, Ana Patricia Ayala, Ethan M Balk, Virginia Barbour, Elaine Beller, Jesse A Berlin, Lisa Bero, Zhao-Xiang Bian, Jean Joel Bigna, Ferrán Catalá-López, Anna Chaimani, Mike Clarke, Tammy Clifford, Ioana A Cristea, Miranda Cumpston, Sofia Dias, Corinna Dressler, Ivan D Florez, Joel J Gagnier, Chantelle Garritty, Long Ge, Davina Ghersi, Sean Grant, Gordon Guyatt, Neal R Haddaway, Julian PT Higgins, Sally Hopewell, Brian Hutton, Jamie J Kirkham, Jos Kleijnen, Julia Koricheva, Joey SW Kwong, Toby J Lasserson, Julia H Littell, Yoon K Loke, Malcolm R Macleod, Chris G Maher, Ana Marušic, Dimitris Mavridis, Jessie McGowan, Matthew DF McInnes, Philippa Middleton, Karel G Moons, Zachary Munn, Jane Noyes, Barbara Nußbaumer-Streit, Donald L Patrick, Tatiana Pereira-Cenci, Ba’ Pham, Bob Phillips, Dawid Pieper, Michelle Pollock, Daniel S Quintana, Drummond Rennie, Melissa L Rethlefsen, Hannah R Rothstein, Maroeska M Rovers, Rebecca Ryan, Georgia Salanti, Ian J Saldanha, Margaret Sampson, Nancy Santesso, Rafael Sarkis-Onofre, Jelena Savović, Christopher H Schmid, Kenneth F Schulz, Guido Schwarzer, Beverley J Shea, Paul G Shekelle, Farhad Shokraneh, Mark Simmonds, Nicole Skoetz, Sharon E Straus, Anneliese Synnot, Emily E Tanner-Smith, Brett D Thombs, Hilary Thomson, Alexander Tsertsvadze, Peter Tugwell, Tari Turner, Lesley Uttley, Jeffrey C Valentine, Matt Vassar, Areti Angeliki Veroniki, Meera Viswanathan, Cole Wayant, Paul Whaley, and Kehu Yang. We thank the following contributors who provided feedback on a preliminary version of the PRISMA 2020 checklist: Jo Abbott, Fionn Büttner, Patricia Correia-Santos, Victoria Freeman, Emily A Hennessy, Rakibul Islam, Amalia (Emily) Karahalios, Kasper Krommes, Andreas Lundh, Dafne Port Nascimento, Davina Robson, Catherine Schenck-Yglesias, Mary M Scott, Sarah Tanveer and Pavel Zhelnov. 
We thank Abigail H Goben, Melissa L Rethlefsen, Tanja Rombey, Anna Scott, and Farhad Shokraneh for their helpful comments on the preprints of the PRISMA 2020 papers. We thank Edoardo Aromataris, Stephanie Chang, Toby Lasserson and David Schriger for their helpful peer review comments on the PRISMA 2020 papers.

Contributors: JEM and DM are joint senior authors. MJP, JEM, PMB, IB, TCH, CDM, LS, and DM conceived this paper and designed the literature review and survey conducted to inform the guideline content. MJP conducted the literature review, administered the survey and analysed the data for both. MJP prepared all materials for the development meeting. MJP and JEM presented proposals at the development meeting. All authors except for TCH, JMT, EAA, SEB, and LAM attended the development meeting. MJP and JEM took and consolidated notes from the development meeting. MJP and JEM led the drafting and editing of the article. JEM, PMB, IB, TCH, LS, JMT, EAA, SEB, RC, JG, AH, TL, EMW, SM, LAM, LAS, JT, ACT, PW, and DM drafted particular sections of the article. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: There was no direct funding for this research. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618) and was previously supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535) during the conduct of this research. JEM is supported by an Australian NHMRC Career Development Fellowship (1143429). TCH is supported by an Australian NHMRC Senior Research Fellowship (1154607). JMT is supported by Evidence Partners Inc. JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake. MML is supported by The Ottawa Hospital Anaesthesia Alternate Funds Association and a Faculty of Medicine Junior Research Chair. TL is supported by funding from the National Eye Institute (UG1EY020522), National Institutes of Health, United States. LAM is supported by a National Institute for Health Research Doctoral Research Fellowship (DRF-2018-11-ST2-048). ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. DM is supported in part by a University Research Chair, University of Ottawa. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/conflicts-of-interest/ and declare: EL is head of research for the BMJ; MJP is an editorial board member for PLOS Medicine; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews. None of these authors were involved in the peer review process or decision to publish. TCH has received personal fees from Elsevier outside the submitted work. EMW has received personal fees from the American Journal for Public Health, for which he is the editor for systematic reviews. VW is editor in chief of the Campbell Collaboration, which produces systematic reviews, and co-convenor of the Campbell and Cochrane equity methods group. DM is chair of the EQUATOR Network, IB is adjunct director of the French EQUATOR Centre and TCH is co-director of the Australasian EQUATOR Centre, which advocates for the use of reporting guidelines to improve the quality of reporting in research articles. JMT received salary from Evidence Partners, creator of DistillerSR software for systematic reviews; Evidence Partners was not involved in the design or outcomes of the statement, and the views expressed solely represent those of the author.

Provenance and peer review: Not commissioned; externally peer reviewed.

Patient and public involvement: Patients and the public were not involved in this methodological research. We plan to disseminate the research widely, including to community participants in evidence synthesis organisations.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ .


REVIEW article

Emotions in self-regulated learning: a critical literature review and meta-analysis.

Juan Zheng*

  • 1 Department of Education and Human Services, Lehigh University, Bethlehem, PA, United States
  • 2 Department of Educational and Counselling Psychology, McGill University, Montreal, QC, Canada
  • 3 Department of Community and Population Health, Lehigh University, Bethlehem, PA, United States

Emotion has been recognized as an important component of self-regulated learning (SRL) over the past decade. Researchers study emotions as traits or states, whereas SRL is understood to function at two levels: Person and Task × Person. However, limited research exists on the complex relationships between emotions and SRL at these two levels, and theoretical inquiries and empirical evidence about the role of emotions in SRL remain fragmented. This review illustrates the role of both trait and state emotions in SRL at the Person and Task × Person levels. Moreover, we conducted a meta-analysis synthesizing 23 empirical studies published between 2009 and 2020 to seek evidence about the role of emotions in SRL. Based on the review and the meta-analysis, we propose an integrated theoretical framework of emotions in SRL, along with several directions for future research, including collecting multimodal, multichannel data to capture emotions and SRL. This paper lays a foundation for developing a comprehensive understanding of the role of emotions in SRL and raises important questions for future investigation.

1. Introduction

Students experience a variety of emotions, which can be either beneficial or detrimental to their learning processes and performance. Positive emotions have a considerable impact on students’ academic achievement and can ultimately lead to success in the academic domain ( Pekrun et al., 2009 ). In contrast, negative emotions may impede students’ academic processes. For example, negative emotions (e.g., anger, anxiety, and boredom) have been found to be negatively associated with students’ motivation, learning strategies, and cognitive resources ( Pekrun et al., 2002 ). Given the impressive growth of research on emotions in education, the notion of emotions has been incorporated into various education theories, especially the theoretical frameworks of self-regulated learning (SRL).

Self-regulated learning (SRL) refers to thoughts, feelings, and behaviors that learners plan and adjust to attain learning goals ( Zimmerman, 2000 ). SRL theories account for the cognitive, metacognitive, motivational, and emotional processes and strategies that characterize learners’ efforts to build sophisticated mental models during learning ( Pintrich, 2000 ; Winne and Perry, 2000 ). Although theorists emphasize different aspects of SRL, the majority of them include emotions as one component of SRL ( Boekaerts, 1996 ; Efklides, 2011 ). Emotions are generally considered contributing factors that enhance or undermine the use of superficial or deep learning strategies in SRL. A growing number of empirical studies provide general support for the significance of emotions in SRL by examining the effects of emotions on SRL strategies (e.g., Pekrun et al., 2010 ). However, both theoretical inquiries and empirical evidence about the role of emotions in SRL remain fragmented, and the field still needs a comprehensive framework that explains the complex connections between emotions and SRL. This paper addresses two research questions: (1) What theories can be found to explain the complex relationships between academic emotions and SRL? and (2) What empirical evidence exists to support the relationships between academic emotions and SRL? Our goal is to synthesize the current theoretical frameworks and empirical evidence in order to propose a model that underpins the link between academic emotions and SRL in individualized learning environments.

2. Academic emotions: What are they?

Academic emotion is an important dimension of self-regulated learning that researchers should consider when focusing on within-individual factors influencing learning ( Ben-Eliyahu, 2019 ). Academic emotions are no longer seen as mere disruptions that learners should avoid or suppress ( Shuman and Scherer, 2014 ). They can be beneficial or harmful, pleasant or unpleasant, and activating or deactivating, depending on the specific emotion and situation.

2.1. Taxonomy of emotions

Researchers generally agree to categorize emotions according to the focus of objects (stimulus of emotions), valence (positive or negative), and degree of activation (activating or deactivating). Based on the focus of objects, emotions in the learning context can be distinguished as achievement emotions, epistemic emotions, topic emotions, and social emotions ( Pekrun and Linnenbrink-Garcia, 2012 ). This review focuses on individual emotions, i.e., achievement and epistemic emotions that occur in individualized learning environments rather than social emotions that arise in group learning environments.

Achievement emotions are emotions that pertain to achievement activities or outcomes that are typically judged by competency-based standards, including anxiety, enjoyment, hope, pride, relief, anger, shame, hopelessness, and boredom ( Pekrun, 2006 ). Epistemic emotions are triggered by knowledge and knowledge-generating qualities in cognitive tasks and activities ( Trevors et al., 2016 ; Pekrun et al., 2017 ). For instance, when personal knowledge conflicts with external knowledge, namely cognitive incongruity, emotions may be activated by the epistemic nature of the task ( Muis et al., 2015a ). This kind of cognitive incongruity may cause surprise, curiosity, enjoyment, confusion, anxiety, frustration, or boredom. There are overlaps between achievement and epistemic emotions ( Pekrun and Stephens, 2012 ). For example, a student’s enjoyment can be an achievement emotion if it focuses on personal success or an epistemic emotion if it stems from a cognitive incongruity in knowledge. Achievement emotions and epistemic emotions are pervasive in different learning situations and have significant influences on learning ( Sinatra et al., 2015 ). To better understand how these types of emotions can be evaluated in SRL, Rosenberg (1998) suggested that we must also consider the levels and organization of emotions.

2.2. Trait emotions and state emotions

According to Rosenberg’s (1998) seminal work, emotions can be distinguished as traits and states. Trait emotions reflect a relatively general and stable way of responding to the world. In contrast, state emotions are characterized as episodic, experiential, and contextual and can be influenced by situational cues ( Goetz et al., 2015 ). These differences can also be applied to the educational context, where one can differentiate trait-like academic emotions from state-like emotions ( Pekrun et al., 2002 ). Trait-like emotions are typical course-related emotional experiences pertaining to a specific course, an exam, or a class. In contrast, state-like emotions are momentary emotional experiences within a single episode of academic life ( Ahmed et al., 2013 ). The differences between trait and state emotions can be traced back to the factors influencing emotions. Trait emotions are derived from memory and are influenced by students’ subjective beliefs and semantic knowledge ( Robinson and Clore, 2002 ). For example, students who do not have abundant knowledge of a specific situation may report more emotions than those with sufficient or similar knowledge. On the other hand, memory plays a less significant role in state emotions ( Bieg et al., 2013 ), where the intra-individual variance of state emotions is influenced more by the students’ interactions between the learning content and environment in a single learning episode. Consequently, these distinctions between trait-like and state-like emotions are essential in understanding the inconsistent self-report emotion measurements that often occur when people are asked to self-report feelings they generally experienced in a course versus those they are currently experiencing ( Robinson and Clore, 2002 ). Furthermore, these distinctions can also help us to better understand the role of emotions in SRL.

3. SRL: Two levels of development

Self-regulated learning (SRL) researchers evaluate self-regulated learners based on their theoretical orientations. Winne (1997) first distinguished between an aptitude and an event as properties of SRL. An aptitude is a person’s relatively enduring attribute aggregated across multiple learning activities. For example, a student who reports a habit of memorizing everything can be expected to favor memorization strategies; however, this does not mean the student will use a memorization strategy in every SRL event. An event is a transient and continuous learning state that has a clear starting point and endpoint. Completing a task and finishing an exam are both examples of event-like SRL. Greene and Azevedo (2009) further identified 35 event-like SRL processes at the micro level, e.g., re-reading, reviewing notes, and hypothesizing. Moreover, Efklides (2011) articulated the difference between Person level SRL, represented by personal characteristics, and Task × Person level SRL, guided by the monitoring features of task processing. In sum, there exist two levels of SRL: Person level SRL (or aptitude-like SRL) and Task × Person level SRL (or event-like SRL). The underlying premise for this claim is that different learning contexts, including the nature of tasks and the structure of subjects, can influence how learners regulate their learning process ( Poitras and Lajoie, 2013 ). The claim calls for attention to the acknowledgment of the two levels of SRL while reviewing the theoretical and empirical evidence regarding the role of emotions in SRL.

4. What do SRL models say about emotions in SRL?

The answers to the question of the role of emotions in SRL have changed over time because of changing conceptualizations of SRL and the development of SRL models. However, emotions are consistently viewed as an important dimension of SRL ( Lajoie, 2008 ), and SRL models have paved the way for understanding their role. To address our first research question regarding emotion and SRL theories, we reviewed five SRL models recognized by Panadero (2017) , who argued that all have a consolidated theoretical and empirical foundation; all five are seminal theories that are well-recognized in the literature. Järvelä and Hadwin’s (2013) socially shared regulated learning model was not included, as individualized learning is the focus of this paper. Table 1 presents our review of these five SRL models, focusing on what emotions are generated and how emotions affect SRL.


Table 1. The role of emotions in self-regulated learning (SRL) models.

Based on the social cognitive paradigm, Zimmerman (1990) acknowledged the existence of emotions and their role in SRL. He described self-satisfaction as a combination of emotions ranging from elation to depression. However, he did not specify which emotions were included under the umbrella of self-satisfaction feelings. Pintrich (2000) extended Zimmerman’s (1990) model and discussed emotions in the context of test anxiety. In Pintrich’s (2000) model, task or contextual features were proposed as factors that might activate test anxiety, and emotion regulation strategies were used to manage test anxiety. This model recognized both the generation and effect of emotions. However, the model only identified test anxiety as an emotion, failing to address other types of emotions that might affect learning. Boekaerts (1996, 2011) gradually shifted her theory from cognition and motivation to emotion and emotion regulation ( Panadero, 2017 ). In her dual-processing model, emotions were proposed as a result of the dual processing of appraisals of the learning situation ( Boekaerts, 2011 ). If the learning situation was appraised as congruent with personal goals, positive emotions toward the task would be triggered. In contrast, negative emotions would be triggered if the learning situation was appraised as threatening well-being because of task difficulty or insufficient support. The dual-processing model highlighted the importance of emotions in SRL but did not specify the types of emotions or their outcomes in SRL. Winne and Hadwin (1998) emphasized how conditions, operations, products, evaluations, and standards (COPES) could influence the four phases of SRL tasks (i.e., task definition, goals and planning, studying tactics, adaptations). Emotion was not explicitly referred to in this model ( Panadero, 2017 ); the discussion of motivational factors could be read as an allusion to emotions, in that learning feelings may influence the relationship between cognitive conditions and actual operations. In contrast, the metacognitive and affective model of self-regulated learning (MASRL) provided insight into the interactions of metacognition, motivation, and affect in SRL. This model puts more emphasis on affect in SRL and refers explicitly to the two levels of SRL. As mentioned before, MASRL presented a Person level of SRL functioning, as well as a Task × Person level of SRL events in task processing ( Efklides, 2011 ). We will discuss further how this model describes the relationship between emotions and SRL at the two levels.

At the Person level, decisions about which SRL strategies to choose are made based on stable personal characteristics and habitual representations of situational demands ( Efklides et al., 2018 ). Emotion here is a relatively stable characteristic of the individual, namely trait emotions. Efklides (2011) describes three extreme scenarios pertaining to how emotions may interact with SRL at the Person level. In the first, positive scenario, learners predict success with appropriate SRL strategies and positive emotions. In the second, negative scenario, learners predict failure with inappropriate SRL strategies and negative emotions. In the third scenario, learners underestimate or overestimate personal competency; consequently, their emotional reactions and effort expenditure do not match learning outcomes. More specifically, if a student underestimates their mathematics skills, for example, they would feel anxious and spend more effort learning math, resulting in successful learning outcomes. By contrast, a student who overestimates their competency would experience positive emotions, exert insufficient effort, and have unsuccessful outcomes. These estimated efforts and emotions, stored at a general level, provide cues for subsequent specific tasks ( Efklides, 2006 ).

In a specific task, SRL happens in the form of dynamic events at the Task × Person level. According to this model, task features (e.g., complexity) are objective and independent of a specific learning context but intersect with the person’s attributes and must be considered jointly. The MASRL model proposed three phases of SRL that align with Zimmerman’s (2000) proposition of SRL phases (i.e., forethought, performance, and self-reflection). The forethought phase may involve two types of cognitive processes. The first type is an automatic and unconscious cognitive process, which can be generated when dealing with familiar, fluent, and effortless tasks ( Efklides, 2011 ). When processes are automatic, emotions are neutral or moderately positive, without conscious control processes and increased physiological activity ( Carver and Scheier, 1998 ). The second type of cognitive process is analytic and can be triggered by the task’s structure, novelty, and complexity ( Alter et al., 2007 ). Negative emotions may appear with increased arousal ( Efklides, 2011 ). On the other hand, emotions such as surprise and curiosity may be generated depending on the uncertainty and cognitive interruption that occurs during this phase ( Bar-Anan et al., 2009 ). At the performance phase, negative or positive emotions may also change according to the fluency of processing and the rate of progress ( Ainley et al., 2005 ). When tasks are completed and outcomes are produced at the self-reflection phase, positive or negative emotions accompanying the outcomes of the task are triggered or enhanced.

5. Academic emotions and SRL: What does the empirical evidence tell us?

To address the second research question (i.e., empirical evidence regarding the role of emotions in SRL), we conducted a comprehensive literature search of the PsycINFO, ERIC, and Web of Science databases. The search syntax was (“self-regulated learning” OR “self-regulation” OR “metacognition”) AND (“emotion” OR “affective” OR “anxiety” OR “positive emotions” OR “negative emotions”). The search retrieved 205 articles. We then applied five inclusion criteria to screen articles: (1) the study was published in English; (2) the study measured specific self-regulated learning strategies or processes; (3) the study measured discrete emotions; (4) the study was conducted in a specific learning setting, including an exam, a task, a course, or a specific training program; and (5) the study reported the correlation between specific SRL strategies/processes and discrete emotions. Only 23 studies met all five criteria and were included.
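The screening step above can be sketched as a simple filter. This is an illustration only: the record field names below are hypothetical, and the authors' screening was presumably performed by hand rather than in code.

```python
# Five inclusion criteria from the review, encoded as hypothetical
# boolean fields on candidate records.
CRITERIA = (
    "published_in_english",             # (1) published in English
    "measures_srl_strategies",          # (2) measures specific SRL strategies/processes
    "measures_discrete_emotions",       # (3) measures discrete emotions
    "specific_learning_setting",        # (4) exam, task, course, or training program
    "reports_emotion_srl_correlation",  # (5) reports emotion-SRL correlations
)

def screen(records):
    """Keep only records that satisfy all five inclusion criteria."""
    return [rec for rec in records if all(rec.get(c, False) for c in CRITERIA)]
```

Applied to the 205 retrieved records, a filter of this kind would retain the subset meeting every criterion, here the 23 included studies.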

As can be seen in Appendix A , these 23 empirical studies examined the relationship between emotions and SRL strategies. By analyzing the summary of the 23 included studies ( Appendix A ), we find that anxiety, enjoyment, frustration, and boredom are the most frequently examined academic emotions, and metacognitive strategies are the most frequently examined SRL component. We then coded these five variables to synthesize the correlations between the four academic emotions and metacognitive strategies. Specifically, 14 studies focused on anxiety, 11 on enjoyment, 7 on frustration, and 10 on boredom. These counts satisfy the rule of a minimum of five independent studies for reliable estimation in small-sample meta-analysis ( Fisher and Tipton, 2015 ).

We adopted a random-effects model ( Hedges and Vevea, 1998 ) since the studies in our review differed in methodological characteristics. Among the positive emotions, we found that enjoyment was positively related to metacognitive strategies ( r = 0.42) (see Table 2 ). As displayed in the forest plot of enjoyment in Figure 1, we found positive relationships between enjoyment and metacognitive strategies in all studies. In addition to enjoyment, pride also positively predicted cognitive and metacognitive strategies ( Ahmed et al., 2013 ).
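As an illustration of this pooling step (a sketch, not the authors' analysis code), correlations can be combined under a random-effects model by Fisher z-transforming each r, estimating the between-study variance with the common DerSimonian-Laird estimator (the paper does not state which estimator was used), and back-transforming the weighted mean. The input correlations and sample sizes in any call are hypothetical.

```python
import math

def random_effects_pooled_r(rs, ns):
    """Pool correlation coefficients with a DerSimonian-Laird
    random-effects model on Fisher z-transformed values."""
    # Fisher z transform; the sampling variance of z is 1/(n - 3)
    zs = [math.atanh(r) for r in rs]
    vs = [1.0 / (n - 3) for n in ns]
    w = [1.0 / v for v in vs]
    # Fixed-effect estimate and Cochran's Q statistic
    z_fe = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    q = sum(wi * (zi - z_fe) ** 2 for wi, zi in zip(w, zs))
    df = len(rs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, truncated at 0
    # Random-effects weights include tau^2, then pool and back-transform
    w_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    return math.tanh(z_re)
```

When studies report similar correlations, tau^2 shrinks toward zero and the pooled r approaches their common value; heterogeneous studies inflate tau^2, which flattens the weights toward equality.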


Table 2. Meta-analysis results.


Figure 1. Forest plots of anxiety, enjoyment, frustration, and boredom.

In terms of negative emotions, anxiety and frustration are generally negatively related to metacognitive strategies ( r = −0.075 and r = −0.12, respectively). However, mixed findings have also been identified across studies. For instance, Muis et al. (2015a) and Peng et al. (2014) found positive relationships between anxiety and metacognitive strategies. Frustration was found to be both positively and negatively related to metacognitive strategies ( Artino and Stephens, 2009 ; Artino and Jones, 2012 ; Marchand and Gutierrez, 2012 ; Cho and Heron, 2015 ). Surprise, curiosity, and confusion are the epistemic emotions that produce the most inconsistency in valence categorization: learners sometimes experience these three emotions as pleasant and sometimes as unpleasant ( Noordewier and Breugelmans, 2013 ). In our meta-analysis, we found boredom to be negatively related to metacognitive strategies in most of the studies ( r = −0.31). Curiosity and confusion either positively or negatively predict SRL depending on the depth of strategy use ( Muis et al., 2015b ). In other words, surprise, curiosity, and confusion can have different effects on shallow processing strategies, deep processing strategies, cognitive strategies, and metacognitive strategies.

The empirical studies provide evidence about the relationship between trait emotions and SRL strategies. Specifically, the majority of the empirical studies focused on examining how academic emotions affect SRL strategies at the Person level (see Appendix A ). This emphasis is partly because researchers initially conceptualized SRL as a relatively stable individual inclination, which led to trait-like measures of SRL strategies that have dominated the literature ( Boekaerts and Cascallar, 2006 ). The methodological and ethical issues regarding collecting online data also restrict the exploration of emotions and SRL as states and events ( Schutz and Davis, 2000 ).

To sum up, previous SRL models and empirical studies are not sufficient to reveal the underlying mechanisms of emotions in SRL. One reason is that existing SRL models put unequal emphasis on emotions and SRL. It is worth mentioning that the MASRL model took a crucial step toward a better understanding of emotions in SRL: it emphasizes both the static and dynamic characteristics of emotions at the two levels of SRL. Nevertheless, the MASRL model provides no clues about how emotions are generated or how the complex interplay of emotions and SRL influences learning outcomes. In terms of empirical evidence, previous studies only addressed the relationships between emotions, SRL strategies, and learning outcomes. Many questions are still unanswered, for instance: (a) What academic emotions are generated in the SRL process? (b) What are the effects of different emotions on SRL? (c) How do emotions change across different stages of SRL? and (d) What are the relationships between emotion and SRL at the Task × Person level? An integrated framework is needed to better illustrate the role of emotions in SRL. To substantially advance this field of research, we contend that this framework should address the generation and effects of emotions in SRL and should explain the reciprocal relationships between emotions and SRL. Furthermore, the two levels of SRL (i.e., the Person and Task × Person levels) should be considered to demonstrate how trait and state emotions unfold in different SRL phases, i.e., forethought, performance, and self-reflection.

6. Toward an integrated framework for understanding emotions in SRL

In this study, we propose an integrative framework of emotions in SRL (ESRL) ( Figure 2 ). The ESRL framework was developed from previous conceptualizations of emotions and SRL and retains the important contributions of previous SRL models. As shown in Figure 2 , the framework focuses on the generation and effects of emotions in SRL at two levels (i.e., the Person and Task × Person levels). At the center of the ESRL framework are two propositions: at the Person level, SRL is an aptitude influenced by trait emotions; at the Task × Person level, SRL is also an event in a specific task, during which dynamic state emotions unfold across phases.


Figure 2. The role of emotions in self-regulated learning (ESRL).

6.1. Antecedents of academic emotions

Individual characteristics, environmental factors, control appraisals, and value appraisals are the antecedents of academic emotions at both the Person and Task × Person levels. According to the Control Value theory (CVT), individual antecedents include intraindividual differences such as gender and achievement goals ( Pekrun and Perry, 2014 ). Environmental antecedents (e.g., autonomy support and feedback) are factors that characterize general learning environments. Either trait or state emotions can be triggered depending on the specificity of these antecedents. When control appraisals are conceptualized as the general perception of a learning situation, such as attending online courses ( You and Kang, 2014 ) or a math course ( Villavicencio and Bernardo, 2013 ), control appraisals can predict how students generally feel (trait emotions) in these similar situations. In contrast, when control appraisals are conceptualized as the perception of a specific learning task, for example, solving a math problem ( Muis et al., 2015b ), control can influence students’ emotions during the problem-solving process (i.e., state emotions). From the perspective of CVT, generalized control-value beliefs can be linked to trait emotions. They can also influence momentary appraisals and state emotions ( Pekrun and Perry, 2014 ).

Furthermore, the majority of the literature in educational psychology has focused on the effect of task features (i.e., task novelty, complexity, and structure) on the occurrence of emotions, especially epistemic emotions ( D’Mello and Graesser, 2012 ; Muis et al., 2018 ). These task features are objective and independent of a specific learning context but intersect with the person’s attributes and must be considered jointly ( Efklides, 2011 ). Foster and Keane (2015) focused on how new, novel, or unique information may trigger surprise if the individual perceives the information as unexpected. D’Mello et al. (2014) proposed that complexity is a crucial antecedent to confusion during learning. Silvia (2010) argued that the complexity of the task would also predict either curiosity or confusion after the surprise toward novelty. In addition to curiosity and confusion, boredom and anxiety are consequences of task novelty, complexity, and structure. For example, a generally highly competent student may still feel anxious when solving a difficult math question (i.e., task complexity). A student who usually feels bored in a face-to-face math class may be curious about a novel math question that is presented in an innovative way (i.e., task novelty and structure). All these emotions arise from appraisals of uncertainty stemming from task novelty, complexity, or structure ( Ellsworth and Scherer, 2003 ); it is the cognitive disequilibrium underlying uncertainty that plays a critical role in triggering dynamic epistemic emotions ( D’Mello and Graesser, 2012 ). Compared with the dynamic appraisal process of state emotions and ever-changing task attributes, however, trait emotions remain relatively stable as they interact with SRL.

6.2. Interaction between emotions and SRL at the Person level

Trait emotions have reciprocal relationships with SRL strategies. Trait emotions are a decontextualized and stable way of reporting feelings (Goetz et al., 2016), while SRL strategies include all the components of SRL, namely cognitive, metacognitive, emotional, and motivational strategies (Warr and Downing, 2000; Ferla et al., 2009). In the interaction between emotions and SRL, trait emotions form an emotional loop or cycle that monitors the strategies or efforts exerted in SRL (Efklides et al., 2018; Ben-Eliyahu, 2019). On the one hand, trait emotions may shape how students prioritize SRL strategies, and results from empirical studies support this proposition. Positive emotions (e.g., enjoyment, pride) are positively related to students’ use of cognitive and metacognitive strategies (Pekrun et al., 2002; Artino and Stephens, 2009; Ahmed et al., 2013; Villavicencio and Bernardo, 2013, 2016; Mega et al., 2014; Chatzistamatiou et al., 2015; Chim and Leung, 2016). Villavicencio and Bernardo (2013, 2016) examined the relationships among academic emotions, self-regulation, and achievement in a math course and found that both enjoyment and pride were positively correlated with self-regulation. Among negative emotions, boredom, frustration, and anxiety were generally negatively associated with SRL strategies (Pekrun et al., 2002; Kim et al., 2014; Mega et al., 2014; Peng et al., 2014; Gonzalez et al., 2017). On the other hand, SRL strategies also influence students’ trait emotions. For example, Ben-Eliyahu and Linnenbrink-Garcia (2013) examined how self-regulated emotion strategies influence students’ emotions in academic courses and found that these strategies were differentially employed based on course preference, which in turn influenced students’ emotions in the course.
Furthermore, students who rely heavily on ineffective strategies show prolonged frustration, boredom, and confusion (D’Mello and Graesser, 2012; Sabourin and Lester, 2014; Azevedo et al., 2017). The cyclical effects between emotions and SRL strategies generate long-term effects on learning outcomes, including persistence (Drake et al., 2014), procrastination (Rakes and Dunn, 2010), and academic achievement (Peng et al., 2014; Gonzalez et al., 2017).

6.3. State emotions function at the Task × Person level

Achievement emotions and epistemic emotions are the dominant emotions triggered in a specific task (Pekrun and Stephens, 2012), where they dynamically influence the three phases of the SRL cycle (Efklides, 2011). Research supports the existence of dynamic emotional changes throughout SRL processes. Anticipatory feelings arise from the beginning of a learning activity (i.e., forethought), even though these feelings may be more salient in the self-reflection phase of SRL (Usher and Schunk, 2018). Within the SRL cycle, individuals experience emotions in proportion to the challenges they face (Usher and Schunk, 2018). The structure, complexity, and novelty of a task reflect these challenges, which act as catalysts of academic emotions (Muis et al., 2018). In other words, task analysis in the forethought phase predicts the initial emotions students may feel: it is reasonable to anticipate confusion when facing unfamiliar structures, anxiety when facing complexity, and curiosity when facing novelty. Beyond the forethought phase, task fluency in the performance phase contributes to discrete emotions. In two studies by Winkielman and Cacioppo (2001), participants showed more positive affect when processing was more fluent. Fulmer and Tulis (2013) measured students’ fluency and emotions multiple times during a reading task; a latent growth curve showed that positive emotions declined as reading fluency decreased. Finally, self-evaluation in the self-reflection phase can also drive changes in emotions (Efklides et al., 2018). Learners judge their learning situations by comparing them with performance standards established by themselves and others (Usher and Schunk, 2018), so students may experience different emotions even when their performances are similar.
A low-performing student may experience more happiness and even pride in the self-reflection phase if they consider themselves to have outperformed their own expectations. As discussed above, emotions are dynamic and change throughout the three SRL phases, which can increase the effort individuals put toward a task or create an obstacle to further progress. Feelings of happiness and pride may lead to renewed effort, while anxiety and frustration may lead to task avoidance or withdrawal (Usher and Schunk, 2018). Consequently, the short-term outcomes of the SRL event, including achievement and learning gains, will be influenced.

6.4. Interaction between the two levels of SRL

Self-regulated learning (SRL) is a life-long learning process in which students need to plan for each session, each semester, and each training period (Efklides et al., 2018). The short-term learning outcome of one session determines whether students will persist with learning or abandon their attempt in the next session. Repeated engagement or disengagement with similar tasks provides consistent information about self-efficacy in a task domain and updates the domain-specific self-concept (Efklides, 2011). Indeed, Metallidou and Efklides (2001) found that self-ratings of confidence and personal estimates of mathematical performance predicted competence at the Person level. It is possible that short-term perseverance or withdrawal of effort will transfer into long-term outcomes that remain relatively stable over time. From the Task × Person level to the Person level, the short-term outcomes of a specific SRL event gradually influence long-term SRL outcomes. Conversely, long-term learning outcomes are transformed into more stable individual characteristics, such as prior knowledge, motivation, and self-efficacy, which affect how students appraise a specific task at the Task × Person level. Efklides and Tsiora (2002) conducted a longitudinal study examining the mechanism linking SRL at the Task × Person and Person levels and found that self-concept at the Person level influenced SRL at the Task × Person level. Therefore, from the perspective of life-long learning, we assume a long-term interaction between the two levels of SRL, even though empirical evidence to support this argument is sparse.

7. Future directions that build upon the integrative framework

Future research should progress beyond studying relations between emotions and SRL solely at the Person level. As proposed in our framework, dynamic relationships between emotions and SRL exist at the Task × Person level. Therefore, it is crucial for future research to examine how emotions unfold in different phases of SRL using advanced methodologies. For example, the high sampling rates of physiological and behavioral measures make it possible to capture the components of SRL with high granularity. Further research examining emotions and SRL at the Task × Person level could provide insights into the dynamics of SRL, which would in turn inform instruction and the scaffolding of SRL.

Another fruitful area of research lies in longitudinal studies examining the long-term interplay between the Task × Person and Person levels of SRL. For example, if students are trained in proper task analysis skills at the beginning of a specific task, will this training influence their general SRL strategies? Do students transfer the strategies across different contexts and gradually develop them into trait-like personal inclinations? If so, what factors promote or inhibit this influence? Do trait emotions influence state emotions? Do achievements and motivations associated with a specific task accumulate to influence students’ general persistence and procrastination in learning? Answering these questions can provide educators and researchers with tools for designing SRL training programs that affect learners in the long run.

The third area in need of investigation is how the structure, difficulty, and novelty of a task relate to control-value appraisals and collectively influence academic emotions and SRL. Task difficulty is arguably the best-explored of these attributes; task structure and novelty need further exploration. For example, an exam that starts with easy questions may trigger different emotions and SRL patterns than an exam that starts with difficult questions. A better understanding of the influence of task structure and novelty would thus contribute to designing tasks that are beneficial for SRL.

Additionally, empirical studies are needed to verify the antecedents of emotions across the three SRL phases. As discussed in our ESRL model, task analysis in the forethought phase, task fluency in the performance phase, and self-evaluation in the self-reflection phase trigger the occurrence of, and changes in, academic emotions. More studies are needed to empirically explore these possibilities. More importantly, researchers can delve into the real-time modeling and visualization of the factors influencing emotions so that corresponding strategies or interventions can be incorporated to optimize the whole learning process. For example, it would be interesting to provide instructors with a dashboard that displays how students’ task fluency and academic emotions evolve over time. In doing so, teachers could provide students with effective emotional or instructional support in real time.

It is also important to highlight the necessity of using multimodal, multichannel data to study SRL in future research. Researchers currently use four types of methodological approaches to study SRL: (a) self-report measures (i.e., self-report questionnaires, structured diaries, think-aloud/emote-aloud protocols, and interviews); (b) behavioral measures (i.e., facial expressions, body posture, and eye-tracking); (c) physiological measures; and (d) computer trace log files. Each of these four types of measures has its strengths and weaknesses. For example, self-report is well suited to examining SRL at the Person level, but many self-report measures are static and cannot capture the dynamic changes of SRL at the Task × Person level. In contrast, computer log files are powerful for tracking SRL at the Task × Person level, but researchers must overcome the challenge of making reliable inferences from trace data.

In response to the strengths and drawbacks of current methodologies, we argue that researchers need to use multimodal, multichannel data when examining the relationship between emotions and SRL at both levels. Self-report measures can serve as trait measures at the Person level, whereas physiological measures and trace data can contribute situational measures at the Task × Person level. Furthermore, when adopting multichannel data to examine the relationships proposed in the ESRL model, researchers must attend to the challenges of data analysis and interpretation. Alignment is the major challenge in analyzing multiple data streams, especially when start times differ or sampling rates vary across devices and methods. For example, physiological sensors need to be attached before data collection begins, so physiological time stamps start earlier than computer log files if both are used to measure SRL events. In a similar vein, eye-trackers typically capture data continuously at 60–120 Hz, whereas electrodermal activity (EDA) is often sampled at 8–20 Hz, which means the streams must be transformed to a common time base before the different data channels can be analyzed together. In terms of data interpretation, it is problematic when researchers cannot make consistent inferences from specific indices of a method. For example, fixation duration has been interpreted as both cognitive engagement and emotional arousal; such mixed interpretations can produce misleading findings in the literature.
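The two alignment steps described above, correcting a clock offset and resampling to a common rate, can be sketched in a few lines of pandas. This is a minimal illustration, not a pipeline from any cited study: the stream names, the 120 Hz eye-tracking rate, the 8 Hz EDA rate, the 2-second sensor head start, and the shared-clock assumption are all hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated eye-tracking stream: pupil diameter at 120 Hz for ~10 s.
eye_index = pd.timedelta_range(start="0s", periods=1200,
                               freq=pd.Timedelta(seconds=1 / 120))
eye = pd.Series(rng.normal(3.0, 0.1, size=1200), index=eye_index, name="pupil_mm")

# Simulated EDA stream: 8 Hz, but its clock started 2 s earlier
# (the sensor was attached before the task began).
eda_index = pd.timedelta_range(start="0s", periods=96,
                               freq=pd.Timedelta(seconds=1 / 8))
eda = pd.Series(rng.normal(5.0, 0.5, size=96), index=eda_index, name="eda_uS")

# Step 1: correct the clock offset so both streams share time zero.
eda.index = eda.index - pd.Timedelta(seconds=2)

# Step 2: drop samples recorded before the task actually began.
eda = eda[eda.index >= pd.Timedelta(0)]

# Step 3: resample both streams to a common 4 Hz grid (250 ms bins),
# averaging the samples that fall into each bin, then join the channels.
bin_width = pd.Timedelta(milliseconds=250)
aligned = pd.concat(
    [eye.resample(bin_width).mean(), eda.resample(bin_width).mean()],
    axis=1,
).dropna()

print(aligned.shape)  # one row per shared 250 ms bin, one column per channel
```

Binning to the coarser common rate (mean within each bin) is only one of several reasonable choices; interpolating the slower stream up to the faster rate is an alternative when within-bin dynamics matter.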

8. Conclusion

Remarkable progress has been made in the theoretical development of SRL toward a holistic understanding of the learning or problem-solving process that underscores academic emotions. Although empirical studies remain limited, the contemporary literature clearly suggests complex relationships between academic emotions and SRL. However, this field of study is still scattered and fragmented, given the many ambiguities and arguments about the nature of the two constructs (i.e., academic emotions and SRL). In this study, we contended that emotions can be studied as either traits or states and that SRL functions at both the Person and Task × Person levels. By reviewing predominant SRL models and analyzing relevant empirical studies, we proposed an integrative framework to explain the role of trait and state emotions in SRL. Specifically, the framework illustrates what the antecedents of emotions are and how they influence academic emotions and, consequently, SRL and learning outcomes. Moreover, the framework explains how trait emotions influence SRL strategies at the Person level and how state emotions unfold in different phases of SRL at the Task × Person level. We discussed future research directions that build upon our framework, which we expect will advance this field considerably. We acknowledge that there is still a long way to go to pinpoint the complex interplay between emotions and SRL. The proposed framework lays a solid foundation for developing a comprehensive understanding of the role of emotions in SRL and for asking important questions for future investigation.

Author contributions

JZ: conceptualization, investigation, and writing–original draft. SLa: writing–reviewing and editing, supervision, and funding acquisition. SLi: writing–reviewing and editing. All authors contributed to the article and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Ahmed, W., van der Werf, G., Kuyper, H., and Minnaert, A. (2013). Emotions, self-regulated learning, and achievement in mathematics: a growth curve analysis. J. Educ. Psychol. 105, 150–161. doi: 10.1037/a0030160

Ainley, M., Corrigan, M., and Richardson, N. (2005). Students, tasks and emotions: identifying the contribution of emotions to students’ reading of popular culture and popular science texts. Learn. Instruc. 15, 433–447. doi: 10.1016/j.learninstruc.2005.07.011

Alter, A. L., Oppenheimer, D. M., Epley, N., and Eyre, R. N. (2007). Overcoming intuition: metacognitive difficulty activates analytic reasoning. J. Exp. Psychol. Gen. 136, 569–576. doi: 10.1037/0096-3445.136.4.569

Artino, A. R. (2009). Think, feel, act: motivational and emotional influences on military students’ online academic success. J. Comput. High. Educ. 21, 146–166. doi: 10.1007/s12528-009-9020-9

Artino, A. R., and Jones, K. D. (2012). Exploring the complex relations between achievement emotions and self-regulated learning behaviors in online learning. Internet High. Educ. 15, 170–175. doi: 10.1016/j.iheduc.2012.01.006

Artino, A. R., and Stephens, J. M. (2009). Beyond grades in online learning: adaptive profiles of academic self-regulation among naval academy undergraduates. J. Adv. Acad. 20, 568–601. doi: 10.1177/1932202X0902000402

Azevedo, R., Mudrick, N., Taub, M., and Wortha, F. (2017). Coupling between metacognition and emotions during STEM learning with advanced learning technologies: a critical analysis, implications for future research, and design of Learning Systems. Teach. Coll. Record 119, 114–120.

Bar-Anan, Y., Wilson, T. D., and Gilbert, D. T. (2009). The feeling of uncertainty intensifies affective reactions. Emotion 9, 123–127. doi: 10.1037/a0014607

Ben-Eliyahu, A. (2019). Academic emotional learning: a critical component of self-regulated learning in the emotional learning cycle. Educ. Psychol. 54, 84–105. doi: 10.1080/00461520.2019.1582345

Ben-Eliyahu, A., and Linnenbrink-Garcia, L. (2013). Extending self-regulated learning to include self-regulated emotion strategies. Motiv. Emot. 37, 558–573. doi: 10.1007/s11031-012-9332-3

Bieg, M., Goetz, T., and Hubbard, K. (2013). Can I master it and does it matter? An intraindividual analysis on control-value antecedents of trait and state academic emotions. Learn. Individ. Differ. 28, 102–108. doi: 10.1016/j.lindif.2013.09.006

Boekaerts, M. (1996). Self-regulated learning at the junction of cognition and motivation. Eur. Psychol. 1, 100–112. doi: 10.1027/1016-9040.1.2.100

Boekaerts, M. (2011). “Emotions, emotion regulation, and self-regulation of learning,” in Handbook of self-regulation of learning and performance , eds D. H. Schunk and B. Zimmerman (New York, NY: Routledge), 408–425.

Boekaerts, M., and Cascallar, E. (2006). How far have we moved toward the integration of theory and practice in self-regulation? Educ. Psychol. Rev. 18, 199–210. doi: 10.1007/s10648-006-9013-4

Burić, I., and Sorić, I. (2012). The role of test hope and hopelessness in self-regulated learning: relations between volitional strategies, cognitive appraisals and academic achievement. Learn. Individ. Differ. 22, 523–529. doi: 10.1016/j.lindif.2012.03.011

Carver, C. S., and Scheier, M. F. (1998). On the self-regulation of behavior. Cambridge: Cambridge University Press. doi: 10.1017/CBO9781139174794

Chatzistamatiou, M., Dermitzaki, I., Efklides, A., and Leondari, A. (2015). Motivational and affective determinants of self-regulatory strategy use in elementary school mathematics. Educ. Psychol. 35, 835–850. doi: 10.1080/01443410.2013.822960

Chim, W. M., and Leung, M. T. (2016). “The path analytic models of 2 X 2 classroom goal structures, achievement goals, achievement emotions and self-regulated learning of Hong Kong undergraduates in their English study,” in Applied psychology , eds J. M. Montague and L. M. Tan (Singapore: World Scientific Publ Co Pte Ltd). doi: 10.1142/9789814723398_0006

Cho, M. H., and Heron, M. L. (2015). Self-regulated learning: the role of motivation, emotion, and use of learning strategies in students’ learning experiences in a self-paced online mathematics course. Distance Educ. 36, 80–99. doi: 10.1080/01587919.2015.1019963

D’Mello, S., and Graesser, A. (2012). Dynamics of affective states during complex learning. Learn. Instruc. 22, 145–157. doi: 10.1016/j.learninstruc.2011.10.001

D’Mello, S., Lehman, B., Pekrun, R., and Graesser, A. (2014). Confusion can be beneficial for learning. Learn. Instr. 29, 153–170. doi: 10.1016/j.learninstruc.2012.05.003

Drake, K., Belsky, J., and Fearon, R. (2014). From early attachment to engagement with learning in school: the role of self-regulation and persistence. Dev. Psychol. 50, 1350–1361. doi: 10.1037/a0032779

Efklides, A. (2006). Metacognition and affect: what can metacognitive experiences tell us about the learning process? Educ. Res. Rev. 1, 3–14. doi: 10.1016/j.edurev.2005.11.001

Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: the MASRL model. Educ. Psychol. 46, 6–25. doi: 10.1080/00461520.2011.538645

Efklides, A., and Tsiora, A. (2002). Metacognitive experiences, self-concept, and self-regulation. Psychologia 45, 222–236. doi: 10.2117/psysoc.2002.222

Efklides, A., Schwartz, B. L., and Brown, V. (2018). “Motivation and affect in self-regulated learning: does metacognition play a role,” in Handbook of self-regulation of learning and performance , eds D. H. Schunk and J. A. Greene (Milton Park: Routledge), 64–82. doi: 10.4324/9781315697048-5

Ellsworth, P. C., and Scherer, K. R. (2003). Appraisal processes in emotion. Oxford: Oxford University Press, 572–595.

Ferla, J., Valcke, M., and Schuyten, G. (2009). Student models of learning and their impact on study strategies. Stud. High. Educ. 34, 185–202. doi: 10.1080/03075070802528288

Fisher, Z., and Tipton, E. (2015). robumeta: Robust variance meta-regression (R package version 1.6).

Foster, M. I., and Keane, M. T. (2015). Why some surprises are more surprising than others: surprise as a metacognitive sense of explanatory difficulty. Cogn. Psychol . 81, 74–116. doi: 10.1016/j.cogpsych.2015.08.004

Fulmer, S. M., and Tulis, M. (2013). Changes in interest and affect during a difficult reading task: relationships with perceived difficulty and reading fluency. Learn. Instruc. 27, 11–20. doi: 10.1016/j.learninstruc.2013.02.001

Goetz, T., Becker, E. S., Bieg, M., Keller, M. M., Frenzel, A. C., and Hall, N. C. (2015). The glass half empty: how emotional exhaustion affects the state-trait discrepancy in self-reports of teaching emotions. PLoS One 10:e0137441. doi: 10.1371/journal.pone.0137441

Goetz, T., Sticca, F., Pekrun, R., Murayama, K., and Elliot, A. J. (2016). Intraindividual relations between achievement goals and discrete achievement emotions: an experience sampling approach. Learn. Instruc. 41, 115–125. doi: 10.1016/j.learninstruc.2015.10.007

Gonzalez, A., Fernandez, M. V. C., and Paoloni, P. V. (2017). Hope and anxiety in physics class: exploring their motivational antecedents and influence on metacognition and performance. J. Res. Sci. Teach. 54, 558–585. doi: 10.1002/tea.21377

Greene, J. A., and Azevedo, R. (2009). A macro-level analysis of SRL processes and their relations to the acquisition of a sophisticated mental model of a complex system. Contemp. Educ. Psychol. 34, 18–29. doi: 10.1016/j.cedpsych.2008.05.006

Hedges, L. V., and Vevea, J. L. (1998). Fixed-and random-effects models in meta-analysis. Psychol. Methods 3, 486–504. doi: 10.1037/1082-989X.3.4.486

Järvelä, S., and Hadwin, A. F. (2013). New frontiers: regulating learning in CSCL. Educ. Psychol. 48, 25–39. doi: 10.1080/00461520.2012.748006

Kesici, S., Baloglu, M., and Deniz, M. E. (2011). Self-regulated learning strategies in relation with statistics anxiety. Learn. Individ. Differ. 21, 472–477. doi: 10.1016/j.lindif.2011.02.006

Kim, C., Park, S. W., and Cozart, J. (2014). Affective and motivational factors of learning in online mathematics courses. Br. J. Educ. Technol. 45, 171–185. doi: 10.1111/j.1467-8535.2012.01382.x

Lajoie, S. P. (2008). Metacognition, self regulation, and self-regulated learning: a rose by any other name? Educ. Psychol. Rev. 20, 469–475. doi: 10.1007/s10648-008-9088-1

Lajoie, S. P., Zheng, J., and Li, S. (2018). Examining the role of self-regulation and emotion in clinical reasoning: implications for developing expertise. Med. Teach. 40, 842–844. doi: 10.1080/0142159X.2018.1484084

Marchand, G. C., and Gutierrez, A. P. (2012). The role of emotion in the learning process: comparisons between online and face-to-face learning settings. Internet High. Educ. 15, 150–160. doi: 10.1016/j.iheduc.2011.10.001

Mega, C., Ronconi, L., and De Beni, R. (2014). What makes a good student? How emotions, self-regulated learning, and motivation contribute to academic achievement. J. Educ. Psychol. 106, 121–131. doi: 10.1037/a0033546

Metallidou, P., and Efklides, A. (2001). “The effects of general success-related beliefs and specific metacognitive experiences on causal attributions,” in Trends and prospects in motivation research , eds A. Efklides, J. Kuhl, and R. M. Sorrentino (Dordrecht: Kluwer), 325–347. doi: 10.1007/0-306-47676-2_17

Muis, K. R., Chevrier, M., and Singh, C. A. (2018). The role of epistemic emotions in personal epistemology and self-regulated learning. Educ. Psychol. 53, 165–184. doi: 10.1080/00461520.2017.1421465

Muis, K. R., Pekrun, R., Sinatra, G. M., Azevedo, R., Trevors, G., Meier, E., et al. (2015a). The curious case of climate change: testing a theoretical model of epistemic beliefs, epistemic emotions, and complex learning. Learn. Instruc. 39, 168–183. doi: 10.1016/j.learninstruc.2015.06.003

Muis, K. R., Psaradellis, C., Lajoie, S. P., Di Leo, I., and Chevrier, M. (2015b). The role of epistemic emotions in mathematics problem solving. Contemp. Educ. Psychol. 42, 172–185. doi: 10.1016/j.cedpsych.2015.06.003

Noordewier, M. K., and Breugelmans, S. M. (2013). On the valence of surprise. Cogn. Emot. 27, 1326–1334. doi: 10.1080/02699931.2013.777660

Panadero, E. (2017). A review of self-regulated learning: six models and four directions for research. Front. Psychol. 8:422. doi: 10.3389/fpsyg.2017.00422

Pekrun, R. (2006). The control-value theory of achievement emotions: assumptions, corollaries, and implications for educational research and practice. Educ. Psychol. Rev. 18, 315–341. doi: 10.1007/s10648-006-9029-9

Pekrun, R., and Linnenbrink-Garcia, L. (2012). “Academic emotions and student engagement,” in Handbook of research on student engagement . eds S. Christenson, A. Reschly, and C. Wylie (Boston, MA: Springer), 259–282.

Pekrun, R., and Perry, R. P. (2014). “Control-Value theory of achievement emotions,” in International handbook of emotions in education , eds R. Pekrun and L. Linnerbrink-Garcia (Milton Park: Taylor and Francis), 120–141.

Pekrun, R., and Stephens, E. J. (2012). “Academic emotions,” in Individual differences and cultural and contextual factors. APA educational psychology handbook , Vol. 2, eds K. Harris, S. Graham, T. Urdan, S. Graham, J. Royer, and M. Zeidner (Washington DC: American Psychological Association), 3–31.

Pekrun, R., Elliot, A. J., and Maier, M. A. (2009). Achievement goals and achievement emotions: testing a model of their joint relations with academic performance. J. Educ. Psychol. 101, 115–135. doi: 10.1037/a0013383

Pekrun, R., Goetz, T., Daniels, L. M., Stupnisky, R. H., and Perry, R. P. (2010). Boredom in achievement settings: exploring control–value antecedents and performance outcomes of a neglected emotion. J. Educ. Psychol. 102, 531–549. doi: 10.1037/a0019243

Pekrun, R., Goetz, T., Titz, W., and Perry, R. P. (2002). Academic emotions in students’ self-regulated learning and achievement: a program of qualitative and quantitative research. Educ. Psychol. 37, 91–105. doi: 10.1207/S15326985EP3702_4

Pekrun, R., Vogl, E., Muis, K. R., and Sinatra, G. M. (2017). Measuring emotions during epistemic activities: the epistemically-related emotion scales. Cogn. Emot. 31, 1268–1276. doi: 10.1080/02699931.2016.1204989

Peng, Y., Hong, E., and Mason, E. (2014). Motivational and cognitive test-taking strategies and their influence on test performance in mathematics. Educ. Res. Evaluat. 20, 366–385. doi: 10.1080/13803611.2014.966115

Pintrich, P. R. (2000). “The role of goal orientation in self-regulated learning,” in Handbook of self-regulation , eds P. R. Pintrich and M. Zeidner (San Diego, CA: Academic Press), 451–502. doi: 10.1016/B978-012109890-2/50043-3

Poitras, E. G., and Lajoie, S. P. (2013). A domain-specific account of self-regulated learning: the cognitive and metacognitive activities involved in learning through historical inquiry. Metacogn. Learn. 8, 213–234. doi: 10.1007/s11409-013-9104-9

Rakes, G. C., and Dunn, K. E. (2010). The impact of online graduate students’ motivation and self-regulation on academic procrastination. J. Interact. Online Learn. 9, 78–93.

Rienties, B., Tempelaar, D., Nguyen, Q., and Littlejohn, A. (2019). Unpacking the intertemporal impact of self-regulation in a blended mathematics environment. Comput. Hum. Behav. 100, 345–357. doi: 10.1016/j.chb.2019.07.007

Robinson, M. D., and Clore, G. L. (2002). Belief and feeling: evidence for an accessibility model of emotional self-report. Psychol. Bull. 128, 934–960. doi: 10.1037/0033-2909.128.6.934

Rosenberg, E. L. (1998). Levels of analysis and the organization of affect. Rev. Gen. Psychol. 2, 247–270. doi: 10.1037/1089-2680.2.3.247

Sabourin, J. L., and Lester, J. C. (2014). Affect and engagement in game-based learning environments. IEEE Trans. Affect. Comput. 5, 45–56. doi: 10.1109/T-AFFC.2013.27

Schutz, P. A., and Davis, H. A. (2000). Emotions and self-regulation during test taking. Educ. Psychol. 35, 243–256. doi: 10.1207/S15326985EP3504_03

Shuman, V., and Scherer, K. R. (2014). “Concepts and structures of emotions,” in International handbook of emotions in education , eds R. Pekrun and L. Linnenbrink-Garcia (New York, NY: Taylor and Francis), 13–36.

Silvia, P. J. (2010). Confusion and interest: the role of knowledge emotions in aesthetic experience. Psychol. Aesthet. Creat. Arts 4, 75–80. doi: 10.1037/a0017081

Sinatra, G. M., Heddy, B. C., and Lombardi, D. (2015). The challenges of defining and measuring student engagement in science. Educ. Psychol. 50, 1–13. doi: 10.1080/00461520.2014.1002924

Taub, M., Azevedo, R., Rajendran, R., Cloude, E. B., Biswas, G., and Price, M. J. (2021). How are students’ emotions related to the accuracy of cognitive and metacognitive processes during learning with an intelligent tutoring system? Learn. Instr. 72:101200. doi: 10.1016/j.learninstruc.2019.04.001


Appendix A. Studies of the relationships between emotions and self-regulated learning (SRL) strategies.

Keywords : emotions, self-regulated learning, meta-analysis, review, framework

Citation: Zheng J, Lajoie S and Li S (2023) Emotions in self-regulated learning: A critical literature review and meta-analysis. Front. Psychol. 14:1137010. doi: 10.3389/fpsyg.2023.1137010

Received: 03 January 2023; Accepted: 20 February 2023; Published: 09 March 2023.

Copyright © 2023 Zheng, Lajoie and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Juan Zheng, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Systematic Reviews and Meta-Analyses: Critical Appraisal


All included studies must undergo critical appraisal to evaluate their risk of bias, that is, their internal and external validity.

This step often occurs simultaneously with the Data Extraction phase. It is a vital stage of the systematic review process because it upholds the cornerstone goal of reducing bias.


Critical Appraisal 

Critical appraisal is also referred to as quality assessment, risk of bias assessment, and similar variations. Sometimes the critical appraisal phase is confused with the assessment of certainty of evidence; although related, these are independent stages of the systematic review process.

According to the Centre for Evidence-Based Medicine (CEBM):

"Critical appraisal is the process of carefully and systematically assessing the outcome of scientific research (evidence) to judge its trustworthiness, value and relevance in a particular context. Critical appraisal looks at the way a study is conducted and examines factors such as internal validity, generalizability and relevance."

Systematic reviews require a formal, systematic, uniform appraisal of the quality, or risk of bias, of all relevant studies. In a critical appraisal, you are examining the methods, not the results.

Process Details

Use risk of bias tools for this stage; these tools are often formatted as checklists. You can find more about risk of bias tools below. If a refresher of some common biases, definitions, and examples is helpful, check out the Catalogue of Bias from the University of Oxford and CEBM.

Just like the other stages of a systematic review, two reviewers should assess risk of bias in each reference. Your team should calculate and report interrater reliability, deciding ahead of time how to resolve conflicts. Oftentimes the critical appraisal occurs at the same time as data extraction.
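As an illustration of how interrater reliability might be computed, here is a minimal Python sketch of Cohen's kappa over hypothetical risk-of-bias judgments from two independent reviewers (the studies, labels, and ratings are invented for the example; real reviews often use dedicated statistics software):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Raw proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical judgments ("low", "some", "high") for six studies
reviewer_1 = ["low", "low", "high", "some", "low", "high"]
reviewer_2 = ["low", "some", "high", "some", "low", "low"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))
```

Kappa corrects raw percentage agreement for the agreement expected by chance alone, which is why it is preferred over simple percent agreement when reporting this step.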

In addition to the formal risk of bias assessment, your team should also consider meta-biases like publication bias, selective reporting, etc. Search for errata and retractions related to included research, and consider other limitations of and concerns about the included studies and how this may impact the reliability of your review.

Note: Subjectivity of Critical Appraisal 

The critical appraisal is inherently subjective, from the selection of the RoB tool(s) to the final assessment of each study. Therefore, it is important to consider how tools compare, and how this process may impact the results of your review. Check out these studies evaluating risk of bias tools:

Page MJ, McKenzie JE, Higgins JPT. Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review. BMJ Open 2018;8:e019703. doi:10.1136/bmjopen-2017-019703

Losilla, J.-M., Oliveras, I., Marin-Garcia, J. A., & Vives, J. (2018).  Three risk of bias tools lead to opposite conclusions in observational research synthesis.   Journal of Clinical Epidemiology ,  101 , 61–72.  https://doi.org/10.1016/j.jclinepi.2018.05.021

Margulis, A. V., Pladevall, M., Riera-Guardia, N., Varas-Lorenzo, C., Hazell, L., Berkman, N., Viswanathan, M., & Perez-Gutthann, S. (2014).  Quality assessment of observational studies in a drug-safety systematic review, comparison of two tools: The Newcastle-Ottawa Scale and the RTI item bank .  Clinical Epidemiology , 359.  https://doi.org/10.2147/CLEP.S66677

Select Risk of Bias Tool(s)

When you think of a critical appraisal in a systematic review and/or meta-analysis, think of assessing the risk of bias of included studies. The potential biases to consider will vary by study design. Therefore, risk of bias tool(s) should be selected based on the designs of included studies. If you include more than one study design, you'll include more than one risk of bias tool. Whenever possible, select tools developed for a discipline relevant to your topic.

Risk of bias tools  are simply checklists used to consider bias specific to a study design, and sometimes discipline. 

  • Cochrane Risk of Bias Tool  | randomized trials, health
  • Collaboration for Environmental Evidence (CEE) Critical Appraisal Tool, Prototype  | environmental management focused
  • Crowe Critical Appraisal Tool  | mixed methods
  • Meta-QAT  | public health focused
  • Meta-QAT Grey Literature Companion | grey literature
  • Mixed Method Appraisal Tool (MMAT) | mixed method ( more detail )
  • Newcastle-Ottawa Scale | non-randomized studies
  • RTI Item Bank  | observational studies
  • SYRCLE's Risk of Bias Tool | animal studies
  • Quality Checklist for Blogs | blogs
  • Quality Checklist for Podcasts | podcasts

Risk of Bias Toolsets

Risk of bias toolsets are series of tools developed by the same group or organization, where each tool addresses a specific study design. The organization is usually discipline specific. Note that many also include a systematic review and/or meta-analysis quality assessment tool; these will not be useful during this stage, as existing reviews will not be folded into your synthesis.

Critical Appraisal Skills Programme (CASP) Checklists include tools for:

  • Randomized Controlled Trials 
  • Qualitative Studies
  • Cohort Study
  • Diagnostic Study
  • Case Control Study
  • Economic Evaluation
  • Clinical Prediction Rule 

National Institutes of Health (NIH) Study Quality Assessment Tools include tools for:

  • Controlled intervention studies
  • Observational cohort and cross-sectional studies
  • Case-control studies
  • Before-after (pre-post) studies without control
  • Case series studies

Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) includes tools for:

  • Case-control
  • Cross-sectional
  • Conference abstracts

Joanna Briggs Institute (JBI) Manual for Evidence Synthesis includes the following tools found in respective relevant chapters:

  • Qualitative research (appendix 2.1)
  • Randomized controlled trials (appendix 3.1)
  • Quasi-experimental studies (non-randomized experimental studies; appendix 3.3)
  • Text and opinion (appendix 4.1)  with explanation (Appendix 4.2)
  • Prevalence studies (appendix 5.1)
  • Cohort studies (appendix 7.1)
  • Case-control studies (appendix 7.2)
  • Case series (appendix 7.3)
  • Case reports (appendix 7.4)
  • Cross sectional studies (appendix 7.5)
  • Diagnostic test accuracy (appendix 9.1)

Latitudes Network 

  • Systematic Reviews ( ROBIS )
  • Randomized Controlled Trials ( RoB 2 ) 
  • Cohort studies - interventions ( ROBINS-I )
  • Cohort studies - exposure ( ROBINS-E ) 
  • Diagnostic accuracy studies ( QUADAS-2 ; QUADAS-C ) 
  • Prognostic accuracy studies ( QUAPAS ) 
  • Prediction models ( PROBAST ) 
  • Reliability studies ( COSMIN )

Risk of Bias Tool Repositories

Risk of bias tool repositories  are curated lists of existing tools - kind of like what we've presented above. Although we update this guide with new tools as we find them, these repositories may contain additional resources:

  • Quality Assessment and Risk of Bias Tool Repository , from Duke University's Medical Center Library & Archives
  • Interactive Tableau Dataset of 68 Risk of Bias Tools , from the National Toxicology Program

Presenting Critical Appraisal Results

Risk of bias within each reference should be presented in a table like the one seen below. Studies are presented along the y-axis and biases considered (what is addressed by the tool) along the x-axis, such that each row belongs to a study , and each column belongs to a bias (or domain/category of biases).

Example - Graphic representation of risk of bias within each study

It is also best practice to present the bias across the included set of literature  (seen below). Each bias or bias category  is represented as a row and each row is associated with a bar showing the  percentage of the total included literature  that was rated as low risk, some risk, high risk, or unable to determine the risk. 

Example - Graphic representation of risk of bias across each study

The figures above can be created using the robvis package (part of the metaverse suite for evidence synthesis in R), though you can also create your own graphics without this software.
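The "across studies" summary figure boils down to tallying, for each bias domain, the percentage of included studies judged at each risk level. A minimal sketch, with entirely hypothetical studies, domains, and judgments (a real review would use the domains of its chosen tool):

```python
# Hypothetical domain-level judgments for five included studies
ratings = {
    "Study A": {"Randomization": "low",  "Missing data": "some", "Outcome measurement": "low"},
    "Study B": {"Randomization": "high", "Missing data": "low",  "Outcome measurement": "low"},
    "Study C": {"Randomization": "low",  "Missing data": "low",  "Outcome measurement": "some"},
    "Study D": {"Randomization": "some", "Missing data": "high", "Outcome measurement": "low"},
    "Study E": {"Randomization": "low",  "Missing data": "low",  "Outcome measurement": "high"},
}

def summarize(ratings):
    """Percentage of studies at each risk level, per bias domain."""
    domains = {}
    for judgments in ratings.values():
        for domain, level in judgments.items():
            domains.setdefault(domain, []).append(level)
    return {
        d: {lvl: 100 * levels.count(lvl) / len(levels) for lvl in ("low", "some", "high")}
        for d, levels in domains.items()
    }

for domain, pct in summarize(ratings).items():
    print(domain, pct)
```

Each dictionary of percentages corresponds to one horizontal bar in the summary figure.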

Methodological Guidance

  • Health Sciences
  • Animal, Food Sciences
  • Social Sciences
  • Environmental Sciences

Cochrane Handbook  -  Part 2: Core Methods

Chapter 7 : Considering bias and conflicts of interest among the included studies

  • 7.2 Empirical evidence of bias
  • 7.3 General procedures for risk-of-bias assessment
  • 7.4 Presentation of assessment of risk of bias
  • 7.5 Summary assessments of risk of bias 
  • 7.6 Incorporating assessment of risk of bias into analyses 
  • 7.7 Considering risk of bias due to missing results
  • 7.8 Considering source of funding and conflict of interest of authors of included studies 

Chapter 8: Assessing risk of bias in randomized trials

  • 8.2 Overview of RoB 2
  • 8.3 Bias arising from the randomization process
  • 8.4 Bias due to deviations from intended interventions
  • 8.5 Bias due to missing outcome data 
  • 8.6 Bias in measurement of the outcome
  • 8.7 Bias in selection of the reported result
  • 8.8 Differences from the previous version of the tool

Chapter 25:  Risk of bias in non-randomized studies

SYREAF Resources

Step 3: identifying eligible papers.

Conducting systematic reviews of intervention questions II: Relevance screening, data extraction, assessing risk of bias , presenting the results and interpreting the findings.  Sargeant JM, O’Connor AM. Zoonoses Public Health. 2014 Jun;61 Suppl 1:39-51. doi: 10.1111/zph.12124. PMID: 24905995

Campbell -  MECCIR

C51. Assessing risk of bias / study quality ( protocol & review / final manuscript )

C52. Assessing risk of bias / study quality in duplicate  ( protocol & review / final manuscript )

C53. Supporting judgements of risk of bias / study quality ( review / final manuscript )

C54. Providing sources of information for risk of bias / study quality assessments ( review / final manuscript )

C55. Differentiating between performance bias and detection bias  ( protocol & review / final manuscript )

C56. If applicable, assessing risk of bias due to lack of blinding for different outcomes ( review / final manuscript )

C57. If applicable, assessing completeness of data for different outcomes ( review / final manuscript )

C58. If applicable, summarizing risk of bias when using the Cochrane Risk of Bias tool ( review / final manuscript )

C59. Addressing risk of bias / study quality in the synthesis  ( review / final manuscript )

C60. Incorporating assessments of risk of bias  ( review / final manuscript )

CEE  -  Guidelines and Standards for Evidence synthesis in Environmental Management

Section 7. Critical appraisal of study validity

CEE Standards for conduct and reporting

7.1.2   Internal validity

7.1.3  External validity 

Reporting in Protocol and Final Manuscript

  • Final Manuscript

In the Protocol |  PRISMA-P

Risk of bias individual studies (item 14).

...planned approach to assessing risk of bias should include the constructs being assessed and a definition for each, reviewer judgment options (high, low, unclear), the number of assessors ...training, piloting, previous risk of bias assessment experience...method(s) of assessment (independent or in duplicate)...

Protocol for reporting results

" ...summarise risk of bias assessments across studies or outcomes ..."

Protocol for reporting  impact on synthesis

"...describe how risk of bias assessments will be incorporated into data synthesis (that is, subgroup or sensitivity analyses) and their potential influence on findings of the review (Item 15c) in the protocol..."

In the Final Manuscript |  PRISMA

For the critical appraisal stage, PRISMA requires specific items to be addressed in both the methods and results section.

Study Risk of Bias Assessment (Item 11; report in methods )

Essential items.

  • Specify the tool(s) (and version) used to assess risk of bias in the included studies.
  • Specify the methodological domains/components/items of the risk of bias tool(s) used.
  • Report whether an overall risk of bias judgment that summarised across domains/components/items was made, and if so, what rules were used to reach an overall judgment.
  • If any adaptations to an existing tool to assess risk of bias in studies were made (such as omitting or modifying items), specify the adaptations.
  • If a new risk of bias tool was developed for use in the review, describe the content of the tool and make it publicly accessible.
  • Report how many reviewers assessed risk of bias in each study, whether multiple reviewers worked independently (such as assessments performed by one reviewer and checked by another), and any processes used to resolve disagreements between assessors.
  • Report any processes used to obtain or confirm relevant information from study investigators.
  • If an automation tool was used to assess risk of bias in studies, report how the automation tool was used (such as machine learning models to extract sentences from articles relevant to risk of bias), how the tool was trained, and details on the tool's performance and internal validation.

Risk of Bias in Studies (Item 18; report in results )

  • Present tables or figures indicating for each study the risk of bias in each domain /component/item assessed and overall study-level risk of bias.
  • Present justification for each risk of bias judgment—for example, in the form of relevant quotations from reports of included studies.

Additional Items

If assessments of risk of bias were done for specific outcomes or results in each study, consider displaying risk of bias judgments on a forest plot, next to the study results, so that the limitations of studies contributing to a particular meta-analysis are evident (see Sterne et al for an example forest plot).


We host a workshop each fall on critical appraisal, check out our latest recording !

  • Last Updated: Mar 28, 2024 2:54 PM
  • URL: https://guides.lib.vt.edu/SRMA

Systematic reviews vs meta-analysis: what’s the difference?

Posted on 24th July 2023 by Verónica Tanco Tellechea


You may hear the terms 'systematic review' and 'meta-analysis' used interchangeably. Although they are related, they are distinctly different. Learn more in this blog for beginners.

What is a systematic review?

According to Cochrane (1), a systematic review attempts to identify, appraise and synthesize all the empirical evidence to answer a specific research question. Thus, a systematic review is where you might find the most relevant, adequate, and current information regarding a specific topic. In the levels of evidence pyramid , systematic reviews are only surpassed by meta-analyses. 

To conduct a systematic review, you will need, among other things: 

  • A specific research question, usually in the form of a PICO question.
  • Pre-specified eligibility criteria, to decide which articles will be included or discarded from the review. 
  • To follow a systematic method that will minimize bias.

You can find protocols that will guide you from both Cochrane and the Equator Network , among other places, and if you are a beginner to the topic then have a read of an overview about systematic reviews.

What is a meta-analysis?

A meta-analysis is a quantitative, epidemiological study design used to systematically assess the results of previous research (2). Usually, they are based on randomized controlled trials, though not always. This means that a meta-analysis is a mathematical tool that allows researchers to combine outcomes from multiple studies.

When can a meta-analysis be implemented?

A meta-analysis can, in principle, always be attempted, but it yields the most reliable results when the studies included in the systematic review are of good quality, have similar designs, and use similar outcome measures.

Why are meta-analyses important?

Outcomes from a meta-analysis may provide more precise information regarding the estimated effect of what is being studied, because it merges outcomes from multiple studies. In a meta-analysis, data from various trials are combined to generate an average result (1), which is portrayed in a forest plot diagram. Moreover, meta-analyses often also include a funnel plot diagram to help visually detect publication bias.
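The "average result" shown in a forest plot is typically an inverse-variance weighted mean of the study effects. A minimal fixed-effect sketch in Python, with hypothetical log odds ratios and standard errors (real analyses would usually use a dedicated meta-analysis package and often a random-effects model):

```python
import math

# Hypothetical per-study effect estimates (log odds ratios) and standard errors
studies = [(-0.40, 0.25), (-0.20, 0.30), (-0.55, 0.35), (-0.30, 0.20)]

# Each study is weighted by the reciprocal of its variance
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled estimate
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

Note that the pooled standard error is smaller than that of any single study, which is exactly the gain in precision from merging outcomes described above.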

Conclusions

A systematic review is an article that synthesizes available evidence on a certain topic utilizing a specific research question, pre-specified eligibility criteria for including articles, and a systematic method for its production, whereas a meta-analysis is a quantitative, epidemiological study design used to assess the results of articles included in a systematic review.

Remember: All meta-analyses involve a systematic review, but not all systematic reviews involve a meta-analysis.

If you would like some further reading on this topic, we suggest the following:

The systematic review – a S4BE blog article

Meta-analysis: what, why, and how – a S4BE blog article

The difference between a systematic review and a meta-analysis – a blog article via Covidence

Systematic review vs meta-analysis: what's the difference? A 5-minute video from Research Masterminds.

  • About Cochrane reviews [Internet]. Cochranelibrary.com. [cited 2023 Apr 30]. Available from: https://www.cochranelibrary.com/about/about-cochrane-reviews
  • Haidich AB. Meta-analysis in medical research. Hippokratia. 2010;14(Suppl 1):29–37.





Literature Review, Systematic Review and Meta-analysis

Literature reviews can be a good way to narrow down theoretical interests; refine a research question; understand contemporary debates; and orientate a particular research project. It is very common for PhD theses to contain some element of reviewing the literature around a particular topic. It’s typical to have an entire chapter devoted to reporting the result of this task, identifying gaps in the literature and framing the collection of additional data.

Systematic review is a type of literature review that uses systematic methods to collect secondary data, critically appraise research studies, and synthesise findings. Systematic reviews are designed to provide a comprehensive, exhaustive summary of current theories and/or evidence and published research (Siddaway, Wood & Hedges, 2019) and may be qualitative or quantitative. Relevant studies and literature are identified through a research question, summarised and synthesised into a discrete set of findings or a description of the state-of-the-art. This might result in a 'literature review' chapter in a doctoral thesis, but can also be the basis of an entire research project.

Meta-analysis is a specialised type of systematic review which is quantitative and rigorous, often comparing data and results across multiple similar studies. This is a common approach in medical research where several papers might report the results of trials of a particular treatment, for instance. The meta-analysis then uses statistical techniques to synthesise these into one summary estimate. This can achieve high statistical power, but care must be taken not to introduce bias in the selection and filtering of evidence.

Whichever type of review is employed, the process is similarly linear. The first step is to frame a question which can guide the review. This is used to identify relevant literature, often through searching subject-specific scientific databases. From these results the most relevant will be identified. Filtering is important here as there will be time constraints that prevent the researcher considering every possible piece of evidence or theoretical viewpoint. Once a concrete evidence base has been identified, the researcher extracts relevant data before reporting the synthesized results in an extended piece of writing.
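The identify-and-filter steps described above can be sketched as a toy screening pipeline. Every record, DOI, and inclusion criterion below is hypothetical, and real reviews track screening decisions (and reasons for exclusion) far more carefully:

```python
# Hypothetical search records, as might be exported from two databases
records = [
    {"title": "Emotions and SRL", "doi": "10.1000/a1", "year": 2018, "design": "RCT"},
    {"title": "Emotions and SRL", "doi": "10.1000/a1", "year": 2018, "design": "RCT"},  # duplicate hit
    {"title": "Anxiety in learning", "doi": "10.1000/b2", "year": 2002, "design": "survey"},
    {"title": "SRL interventions", "doi": "10.1000/c3", "year": 2021, "design": "RCT"},
]

def deduplicate(records):
    """Keep the first record seen for each DOI (search overlap is common)."""
    seen, unique = set(), []
    for r in records:
        if r["doi"] not in seen:
            seen.add(r["doi"])
            unique.append(r)
    return unique

def eligible(record):
    """Toy inclusion criteria: RCTs published from 2010 onwards."""
    return record["design"] == "RCT" and record["year"] >= 2010

included = [r for r in deduplicate(records) if eligible(r)]
print([r["doi"] for r in included])
```

The pre-specified eligibility function makes the filtering reproducible, which is the point of doing the review systematically rather than ad hoc.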

Literature Review: GO-GN Insights

Sarah Lambert used a systematic review of literature with both qualitative and quantitative phases to investigate the question “How can open education programs be reconceptualised as acts of social justice to improve the access, participation and success of those who are traditionally excluded from higher education knowledge and skills?”

“My PhD research used systematic review, qualitative synthesis, case study and discourse analysis techniques, each was underpinned and made coherent by a consistent critical inquiry methodology and an overarching research question. “Systematic reviews are becoming increasingly popular as a way to collect evidence of what works across multiple contexts and can be said to address some of the weaknesses of case study designs which provide detail about a particular context – but which is often not replicable in other socio-cultural contexts (such as other countries or states). Publication of systematic reviews that are done according to well defined methods are quite likely to be published in high-ranking journals – my PhD supervisors were keen on this from the outset and I was encouraged along this path. “Previously I had explored social realist authors and a social realist approach to systematic reviews (Pawson on realist reviews) but they did not sufficiently embrace social relations, issues of power, inclusion/exclusion. My supervisors had pushed me to explain what kind of realist review I intended to undertake, and I found out there was a branch of critical realism which was briefly of interest. By getting deeply into theory and trying out ways of combining theory I also feel that I have developed a deeper understanding of conceptual working and the different ways theories can be used at all stages of research and even how to come up with novel conceptual frameworks.”

Useful references for Systematic Review & Meta-Analysis: Finfgeld-Connett (2014); Lambert (2020); Siddaway, Wood & Hedges (2019)

Research Toolkit for Librarians Copyright © by Kathy Essmiller; Jamie Holmes; and Marla Lobley is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.


Emotions in self-regulated learning: A critical literature review and meta-analysis

Affiliations

  • 1 Department of Education and Human Services, Lehigh University, Bethlehem, PA, United States.
  • 2 Department of Educational and Counselling Psychology, McGill University, Montreal, QC, Canada.
  • 3 Department of Community and Population Health, Lehigh University, Bethlehem, PA, United States.
  • PMID: 36968756
  • PMCID: PMC10033610
  • DOI: 10.3389/fpsyg.2023.1137010

Emotion has been recognized as an important component in the framework of self-regulated learning (SRL) over the past decade. Researchers explore emotions and SRL at two levels. Emotions are studied as traits or states, whereas SRL is deemed functioning at two levels: Person and Task × Person. However, limited research exists on the complex relationships between emotions and SRL at the two levels. Theoretical inquiries and empirical evidence about the role of emotions in SRL remain somewhat fragmented. This review aims to illustrate the role of both trait and state emotions in SRL at Person and Task × Person levels. Moreover, we conducted a meta-analysis to synthesize 23 empirical studies that were published between 2009 and 2020 to seek evidence about the role of emotions in SRL. An integrated theoretical framework of emotions in SRL is proposed based on the review and the meta-analysis. We propose several research directions that deserve future investigation, including collecting multimodal multichannel data to capture emotions and SRL. This paper lays a solid foundation for developing a comprehensive understanding of the role of emotions in SRL and asking important questions for future investigation.

Keywords: emotions; framework; meta-analysis; review; self-regulated learning.

Copyright © 2023 Zheng, Lajoie and Li.


Dtsch Arztebl Int. 2009 Jul; 106(27)

Systematic Literature Reviews and Meta-Analyses

Meike Ressing

1 Institut für Medizinische Biometrie, Epidemiologie und Informatik, Universitätsmedizin der Johannes Gutenberg-Universität Mainz

Maria Blettner

Stefanie J. Klug

Because of the rising number of scientific publications, it is important to have a means of jointly summarizing and assessing different studies on a single topic. Systematic literature reviews, meta-analyses of published data, and meta-analyses of individual data (pooled reanalyses) are now being published with increasing frequency. We here describe the essential features of these methods and discuss their strengths and weaknesses.

This article is based on a selective literature search. The different types of review and meta-analysis are described, the methods used in each are outlined so that they can be evaluated, and a checklist is given for the assessment of reviews and meta-analyses of scientific articles.

Systematic literature reviews provide an overview of the state of research on a given topic and enable an assessment of the quality of individual studies. They also allow the results of different studies to be evaluated together when these are inconsistent. Meta-analyses additionally allow calculation of pooled estimates of an effect. The different types of review and meta-analysis are discussed with examples from the literature on one particular topic.

Conclusions

Systematic literature reviews and meta-analyses enable the research findings and treatment effects obtained in different individual studies to be summed up and evaluated.

Every year, there is a great increase in the number of scientific publications. For example, the literature database PubMed registered 361 000 new publications in 1987, with 448 000 in 1997 and 766 000 in 2007 (research in Medline, last updated in January 2009). These figures make it clear how increasingly difficult it is for physicians in private practice, clinicians and scientists to obtain comprehensive current information on any given medical topic. This is why it is necessary to summarize and critically analyze individual studies on the same theme.

Summaries of individual studies are mostly prepared when the results of individual studies are unclear or inconsistent. They are also used to study relationships for which the individual studies do not have adequate statistical power, as the number of cases is too low ( 1 ).

The Cochrane Collaboration undertakes systematic processing and summary of the primary literature for many therapeutic topics, particularly randomized clinical studies ( www.cochrane.org ). They have published a handbook for the performance of systematic reviews and meta-analyses of randomized clinical studies ( 2 ). Cook et al. have published methodological guidelines for this process ( 3 ). Instructions of this sort help to lay down standards for the summary of individual studies. Guidelines have also been drawn up for the publication of meta-analyses on randomized clinical studies ( 4 ) and on observational studies ( 5 ).

Publications on individual studies may be summarized in various forms ( 1 , 6 – 10 ):

  • Narrative reviews
  • Systematic review articles
  • Meta-analyses of published data
  • Pooled reanalyses (meta-analyses with individual data).

These terms are often used inconsistently in the literature. The aim of the present article is to describe and distinguish these forms and to allow the reader to perform a critical analysis of the results of individual studies and the quality of the systematic review or meta-analysis.

The various types of systematic reviews and meta-analyses of scientific articles will be defined and the procedure will be explained. A selective literature search was performed for this purpose.

A "review" is the qualitative summary of the results of individual studies ( 1 ). A distinction is made between narrative reviews and systematic reviews ( table 1 ). Narrative reviews (A) mostly provide a broad overview of a specific topic ( 1 , 11 ). They are therefore a good way of rapidly obtaining current information on research on a given topic. However, the articles to be included are selected subjectively and unsystematically ( 1 , 11 ). For some time, the Deutsches Ärzteblatt has been using the term "selective literature review" for this type of review. Narrative reviews will not be further discussed in this article.

In contrast, systematic review articles (B) aim to consider, as far as possible, all published studies on a specific theme, after applying previously defined inclusion and exclusion criteria ( 11 ). The aim is to extract the relevant information from the publications systematically. It is important to analyze the methodological quality of the included publications and to investigate the reasons for any differences between the results of the different studies. The results of each study are presented and analyzed according to defined criteria, such as study design and mode of recruitment.

The same applies to the meta-analysis of published data (C). In addition, the results are quantitatively summarized using statistical methods and pooled effect estimates ( glossary ) are calculated ( 1 ).

Glossary

  • Aggregated data: The summary of individual data.
  • Bias: Distortion of study results from systematic errors.
  • Confidence interval: The range within which the true value lies with a specified probability, usually 95%.
  • Confounder: A factor which is linked to both the studied disease and the studied exposure. It can therefore either enhance or weaken the true association between the disease and the target parameter.
  • Effect estimate: An estimate, such as the odds ratio or relative risk, of the extent to which a specific exposure changes the frequency of a disease.
  • Exposure: Contact with a specific risk factor.
  • Forest plot: A graphical representation of the individual studies and of the pooled estimate. The effect estimate of each individual study is generally plotted on the horizontal or vertical axis, with its confidence interval. The larger the plotted area of an individual study's effect estimate, the greater the weight of that study, reflecting study size and other factors. The pooled effect estimate is usually represented as a diamond.
  • Funnel plot: A plot of study size against the effect estimates of the individual studies. In practice, the variance or the standard error of the effect estimate is plotted rather than the study size itself. Smaller studies have larger variances and standard errors, so the effect estimates of large studies scatter less around the pooled effect estimate than those of small studies, giving the shape of a funnel. Publication bias can be visualized with funnel plots.
  • Heterogeneity: Statistical heterogeneity describes differences between the studies with respect to their effect estimates. These may be caused by methodological differences between the studies, such as differences in study population, study size, or methods of measurement.
  • Individual data: Data (e.g. age, gender, diagnosis) recorded at the level of the individual participant.
  • Odds ratio: In medicine and epidemiology, the odds is the ratio of the probability of being exposed to the probability of not being exposed. The quotient of the odds of the cases and the odds of the controls gives the odds ratio. For rare diseases, the odds ratio approximates the relative risk.
  • Original data: See individual data.
  • Publication bias: Studies which failed to find any influence of exposure on the target disease ("negative studies") are published more rarely than studies which showed a positive or statistically significant association. Publication bias can be visualized with funnel plots.
  • Risk factor: A factor that modifies the probability of developing a specific disease, for example an external environmental effect or an individual predisposition.
  • Relative risk: The probability that an exposed individual falls ill divided by the probability that a non-exposed individual falls ill. The relative risk is calculated on the basis of incident diseases.
  • Sensitivity analysis: Examines whether excluding individual studies from the analysis influences the pooled estimate, thereby testing the stability of the pooled effect estimate.
  • Subgroup analysis: Separate analysis of defined groups in the study population, such as a homogeneous ethnic group.
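The relationship between the odds ratio and the relative risk can be made concrete with a short Python sketch. The 2×2 table counts below are invented for illustration; they show that for a rare disease the odds ratio closely approximates the relative risk.

```python
# Hypothetical 2x2 table (counts invented for illustration):
#                diseased   healthy
# exposed            30       970
# non-exposed        10       990

a, b = 30, 970   # exposed: cases, non-cases
c, d = 10, 990   # non-exposed: cases, non-cases

# Odds ratio: (odds of disease among the exposed) / (odds among the non-exposed)
odds_ratio = (a / b) / (c / d)

# Relative risk: risk among the exposed / risk among the non-exposed
relative_risk = (a / (a + b)) / (c / (c + d))

print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.2f}")
# With a disease frequency of only ~1-3%, OR (~3.06) approximates RR (3.00).
```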

A pooled reanalysis (D) is a quantitative compilation of original data ( glossary ) from individual studies for combined analysis ( 1 ). The authors of each study included in the analysis then provide individual data ( glossary ). These are then compiled in a combined database and analyzed according to standard criteria fixed in advance. This form of pooled reanalysis is also referred to as "meta-analysis of individual data".

In a prospectively planned meta-analysis (E), the summary of the individual studies and the combined analysis is included in the planning of the individual studies. For this reason, the individual studies are performed in a standard manner. Prospectively planned meta-analyses will not be further discussed in this article.

It is essential for all forms of summary—except the narrative review—that they should include a prospectively prepared study protocol, with descriptions of the questions to be answered, the hypotheses, the inclusion and exclusion criteria, the selection of studies, and, where applicable, the combination of the data and the recoding of the individual data (only for pooled reanalysis).

Types of study summaries

The procedure for the summary of the studies will now be presented (modified from [7, 10, 12, 13]). This is intended to enable the reader to assess whether a given summary fulfils specific criteria ( Box ).

Checklist for the analysis of a systematic summary

  • Was there an a priori study protocol?
  • Was there an a priori hypothesis?
  • Was there a detailed description of the literature search used?
  • Were prospectively specified inclusion and exclusion criteria clearly described and applied?
  • Was the possible heterogeneity between the studies considered?
  • Was there a clear description of the statistical methods used?
  • Were the limitations of the summary discussed?

1. Was the question to be answered specified in advance?

The question to be answered in the review or meta-analysis and the hypotheses must be clearly defined and laid down in writing prospectively in a study protocol.

2. Were the inclusion and exclusion criteria specified in advance?

On the basis of the inclusion and exclusion criteria, it is decided whether the studies found in the literature search (see point 3) are included in the review/meta-analysis.

3. Were precautions taken to find all studies performed with reference to the specific question to be answered?

An extensive literature search for studies on the topic must be performed, if at all possible in several literature databases. To avoid bias, all relevant articles should be considered, whatever their language. Moreover, the reference lists of the articles found should be searched, and unpublished studies should be sought in conference proceedings and with search engines on the internet.

4. Was the relevant information extracted from the published articles or were the original data combined?

For a systematic review article (B) and for a meta-analysis of published data (C), relevant information should be extracted from the publications.

For a pooled reanalysis (D), authors of all identified studies must be contacted and requested to provide individual data. The individual data must then be coded according to standard specifications, compiled in a combined database and analyzed.

5. Was a descriptive analysis of the data performed?

In all forms of summary, the most important characteristics of the individual studies are usually presented in overview tables. Table 2 shows an example of such a table, taken from a meta-analysis of published data (C) ( 14 ). This helps to make clear the differences between the studies with respect to the data examined.

NK, not known; FISH, fluorescent in situ hybridization; *1 squamous cell carcinoma only; *2 ever use: ≥2 years' use; *3 relative risks for injectable contraceptives adjusted for oral contraceptive use; *4 Costa Rica, Colombia, Mexico, Panama; *5 Australia, Chile, Colombia, Israel, Kenya, Mexico, Nigeria, Philippines, Thailand; *6 adenocarcinoma of the cervix only; *7 Brazil, Colombia, Morocco, Paraguay, Peru, Philippines, Spain, Thailand.

(Shortened from: Smith J, Green J, Berrington de Gonzalez A, et al.: Cervical cancer and use of hormonal contraceptives: a systematic review. Lancet 2003; 361: 1159–67. With the kind permission of Elsevier)

6. Were the calculations of the effect estimates of the individual studies and of the pooled effect estimate presented?

How were the effect estimates of the individual studies calculated?—Systematic review articles (B) usually contain tables with the effect estimates of the individual studies. In a meta-analysis of published data (C), the effect estimates of individual studies (for example, odds ratio or relative risk, see Glossary ) are either directly extracted from the publications or recalculated in a standard manner from the data in each publication ( figure 1 ). Depending on the nature of the factors and target parameters (binary, categorical or continuous variables), a logistic or a linear regression model is used to calculate the effect estimates of the individual studies in the meta-analyses of published data (C) and pooled reanalyses (D).
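The standard recalculation of an effect estimate from published counts can be sketched in a few lines of Python. The cell counts below are invented; the 95% confidence interval is obtained with Woolf's formula for the standard error of the log odds ratio, one common choice for this recalculation.

```python
import math

# Hypothetical 2x2 counts extracted from one publication (invented):
# exposed cases, exposed controls, unexposed cases, unexposed controls
a, b, c, d = 45, 155, 30, 170

log_or = math.log((a * d) / (b * c))

# Woolf's formula: SE of log(OR) from the four cell counts
se = math.sqrt(1/a + 1/b + 1/c + 1/d)

lo = math.exp(log_or - 1.96 * se)
hi = math.exp(log_or + 1.96 * se)
print(f"OR = {math.exp(log_or):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```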

Figure 1. The results of the individual studies and the pooled estimate on the association between oral contraceptives and cervical carcinoma, presented as forest plots, as an example of a meta-analysis of published data ( 14 ); N.A. = not available; CI = confidence interval; * never use means <2 years' use.

(Shortened from: Smith J, Green J, Berrington de Gonzalez A, et al.: Cervical cancer and use of hormonal contraceptives: a systematic review. Lancet 2003; 361: 1159–67. With the kind permission of Elsevier)

How was the pooled effect estimate calculated?— The effect estimates of the individual studies are combined by statistical procedures to give a common pooled effect estimate ( 9 ) ( figure 1 ). In meta-analyses with published data (C), two methods are mostly used to calculate a pooled effect estimate: either the fixed effect model or the random effect model (15, 16). They differ with respect to assumptions about the heterogeneity of the estimate between individual studies (see point 7). The method used should be given in the publication and justified. The effect estimates of the individual studies and the pooled effect estimates can be graphically presented in the form of so-called forest plots ( Glossary ; Figure 1 ; [14]).
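In its simplest form, the fixed effect model is an inverse-variance weighted average of the log effect estimates of the individual studies. The following Python sketch uses invented study results purely for illustration.

```python
import math

# Hypothetical per-study odds ratios and standard errors of log(OR) (invented)
studies = [(1.4, 0.30), (1.1, 0.20), (1.8, 0.25), (1.3, 0.15)]

# Fixed effect model: inverse-variance weights w_i = 1 / SE_i^2 on the log scale
num = den = 0.0
for or_i, se_i in studies:
    w = 1.0 / se_i**2
    num += w * math.log(or_i)
    den += w

pooled_log_or = num / den
pooled_se = math.sqrt(1.0 / den)   # SE of the pooled log(OR)
print(f"pooled OR = {math.exp(pooled_log_or):.2f}, "
      f"95% CI {math.exp(pooled_log_or - 1.96*pooled_se):.2f} "
      f"to {math.exp(pooled_log_or + 1.96*pooled_se):.2f}")
```

Note how the smallest standard error (the largest study) dominates the weighting, which is exactly what the relative areas in a forest plot convey.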

In pooled reanalyses (D), the pooled effect estimates are mostly calculated by logistic or linear regression. However, the statistical analysis must adequately allow for the origin of the data sets from different studies. The results of the pooled reanalyses can be presented like the results of a single combined study ( table 3 ).

Trend test: χ² = 66.2; p < 0.0001

RR, relative risk, adjusted for age, study or study center, age at first sexual intercourse, number of sex partners, number of full-term pregnancies, smoking, and screening status; * information taken from the publication; CI, confidence interval; N.A., not available; s., significant at the level α = 5%; n.s., not significant at the level α = 5%.

(Shortened and modified from: International Collaboration of Epidemiological Studies of Cervical Cancer: Cervical cancer and hormonal contraceptives: collaborative reanalysis of individual data for 16,573 women with cervical cancer and 35,509 women without cervical cancer from 24 epidemiological studies. Lancet 2007; 370: 1609–21. With the kind permission of Elsevier)
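Besides the regression models mentioned above, a classical way to pool 2×2 tables while allowing for the origin of the data from different studies is the Mantel-Haenszel method, which treats each study as one stratum. A minimal Python sketch with invented counts:

```python
# Each tuple is one study's 2x2 table (counts invented for illustration):
# (exposed cases, exposed controls, unexposed cases, unexposed controls)
tables = [(20, 80, 10, 90), (35, 165, 25, 175), (15, 45, 10, 50)]

num = den = 0.0
for a, b, c, d in tables:
    n = a + b + c + d
    num += a * d / n   # Mantel-Haenszel numerator term per stratum
    den += b * c / n   # Mantel-Haenszel denominator term per stratum

or_mh = num / den      # pooled odds ratio, stratified by study
print(f"Mantel-Haenszel pooled OR = {or_mh:.2f}")
```

Pooling stratum by stratum keeps the comparison within each study, so differences in baseline risk between studies do not distort the pooled estimate.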

7. Were problems considered in the interpretation of pooled estimates?

Was the heterogeneity between the estimates considered?—There may be marked differences between the estimates in the individual studies. This statistical heterogeneity ( glossary ) between the studies may be caused by differences in study design, study populations (age, gender, ethnic group), methods of recruitment, diagnosis, or methods of measurement ( 17 , 18 ). The methodological heterogeneity between the studies can be visualized in an overview table in which the most important characteristics of the individual studies are presented ( table 2 ). The heterogeneity can also be formally investigated with statistical tests. If there is statistical heterogeneity between the studies, the random effect model, rather than the fixed effect model, should be used to calculate the pooled estimate ( 7 , 15 , 16 ). There is, however, no clear definition of when the statistical heterogeneity between the studies is so large that the pooled effect estimate should not be calculated ( 1 , 19 ).

In addition, the heterogeneity between the studies should be examined by subgroup analysis ( glossary ). For example, this might involve the combined analysis of only those studies with the same characteristics in the study population, such as homogeneous age groups, the same ethnic groups, or the same histological findings. Moreover, studies with the same characteristics, such as study quality or study size, may be considered separately in subgroup analyses. This may indicate whether the effect of the corresponding risk factors ( glossary ) differs between subgroups.
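Formal statistical tests of heterogeneity commonly rest on Cochran's Q statistic, from which the I² statistic and the DerSimonian-Laird estimate of the between-study variance τ² (used in the random effect model) are derived. A Python sketch with invented study results:

```python
import math

# Hypothetical log odds ratios and their standard errors (invented)
log_ors = [0.10, 0.80, -0.20, 0.50]
ses = [0.20, 0.25, 0.30, 0.15]

w = [1/s**2 for s in ses]                      # inverse-variance weights
pooled = sum(wi*yi for wi, yi in zip(w, log_ors)) / sum(w)

# Cochran's Q: weighted squared deviations from the fixed-effect pooled estimate
q = sum(wi * (yi - pooled)**2 for wi, yi in zip(w, log_ors))
df = len(log_ors) - 1

# I^2: percentage of total variability due to between-study heterogeneity
i2 = max(0.0, (q - df) / q) * 100

# DerSimonian-Laird estimate of the between-study variance tau^2
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

print(f"Q = {q:.2f} (df = {df}), I^2 = {i2:.1f}%, tau^2 = {tau2:.4f}")
```

With these invented values, Q clearly exceeds its degrees of freedom, so a random effect model would be preferred over the fixed effect model.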

Were sensitivity analyses performed?— Like subgroup analyses, sensitivity analyses ( glossary ) serve to test the stability of the pooled estimate. It is, for example, possible that the pooled effect estimate is mainly determined by one large study. If this study is excluded from the analysis, the pooled effect estimate may change. This must be borne in mind in the discussion and interpretation of the results.
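A leave-one-out sensitivity analysis of the kind described above can be sketched in a few lines of Python; the study results are invented, and a simple fixed-effect pooling is assumed.

```python
import math

# Hypothetical log odds ratios and standard errors (invented for illustration)
log_ors = [0.34, 0.10, 0.59, 0.26]
ses = [0.30, 0.20, 0.25, 0.15]

def pooled_or(ys, ss):
    """Fixed-effect (inverse-variance) pooled odds ratio."""
    w = [1/s**2 for s in ss]
    return math.exp(sum(wi*yi for wi, yi in zip(w, ys)) / sum(w))

print(f"all studies: OR = {pooled_or(log_ors, ses):.2f}")
# Re-pool after excluding each study in turn to test stability
for i in range(len(log_ors)):
    ys = log_ors[:i] + log_ors[i+1:]
    ss = ses[:i] + ses[i+1:]
    print(f"without study {i+1}: OR = {pooled_or(ys, ss):.2f}")
```

If dropping one study shifts the pooled estimate markedly, that study dominates the result, and this must be discussed when interpreting the summary.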

Was a possible publication bias considered?— A publication bias ( glossary ) can be visualized with a so-called funnel plot ( glossary ) ( 7 , 20 – 22 ). Figure 2 shows an example with simulated data. In the upper funnel plot ( Figure 2a ), there is a roughly funnel shaped distribution of the effect estimates of the individual studies around the pooled effect estimates (middle broken line). There is no publication bias here. In the lower funnel plot ( Figure 2b ), the small studies are missing, which in this example show no increased risk. For this reason, there is probably a publication bias, because these studies had not been published.

Figure 2. Visualization of publication bias with funnel plots of simulated data. a) No publication bias; b) Publication bias. SE = standard error; OR = odds ratio

8. How were the results interpreted?

In the interpretation of the results, possible limitations should be discussed and considered. For example, the reliability of the results may be limited by the inadequate quality of the individual studies, by the selection of the study population, or by the use of aggregated data ( glossary ).

The methods section above describes the individual steps and the relevant points which must be considered in the systematic summary of scientific articles ( Box ). This checklist can also be used to assess the quality of systematic review articles or meta-analyses.

Publications on the association between the administration of oral contraceptives and the development of cervical carcinoma were used as examples of the performance of a systematic literature review (B), a meta-analysis of published data (C), and a pooled reanalysis (D). This association has been scientifically studied for a long period.

In 1996, La Vecchia et al. published a systematic review article (B) on this topic, including six studies ( 23 ). Their overview table contained a variety of information on the individual studies. No pooled effect estimate was calculated.

In 2003, Smith et al. ( 14 ) presented a meta-analysis of published data (C) of 28 studies on the same topic. The included studies were first summarized in a descriptive overview, as is common in systematic review articles ( table 2 ). This table shows that the study methods were heterogeneous ( glossary ); for example, HPV was detected in different ways ( table 2 ). The heterogeneity was also formally investigated with statistical tests, and various subgroup analyses were performed. In contrast to the systematic review article (B) of La Vecchia et al., pooled effect estimates were calculated from the published data ( figure 1 ). The effect estimates of the individual studies and the pooled effect estimates with their confidence intervals ( glossary ) were presented as a forest plot ( figure 1 ).

In 2007, a pooled reanalysis (D) was published for 24 studies on the same topic for which the original data were available ( 24 ). In contrast to the meta-analysis of published data, the pooled effect estimates were calculated from the original data and only the combined results were presented ( table 3 ). This kind of analysis is only possible in a pooled reanalysis, as the original data with precise information on all parameters for each participant are then available. Nevertheless, here too it is necessary to consider that the individual data ( glossary ) are derived from different studies.

Systematic review articles (B) can provide a comprehensive overview of the current state of research ( 1 ). They are also necessary for the development of S2 and S3 guidelines for formal evidence-based research ( 25 ). Meta-analyses of published data (C) are performed to calculate additional pooled effect estimates from the individual studies ( 1 ). Like systematic review articles, they are feasible whether the authors of the original articles are prepared to cooperate or not.

The calculated pooled effect estimates may be of limited validity for various reasons. Firstly, it has not been clearly defined how much heterogeneity between the studies is acceptable before a pooled effect estimate can no longer be meaningfully calculated (1, 19). If the individual studies are too heterogeneous, a pooled effect estimate should not be calculated. Secondly, the pooled effect estimate is mostly calculated from aggregated data. Subgroup analyses and the consideration of potential confounders ( glossary ) are therefore often impossible, or only possible to a limited extent ( 1 , 19 ). Thirdly, publication bias is also a problem for the meta-analysis of published data.

In a pooled reanalysis (D), potential confounders and risk factors can be more easily considered ( 7 ), as published data are usually only available in aggregated form. With the individual data, the outcome parameters, risk factors, and confounders used in the analysis can be categorized in a standard manner and properly incorporated in the analysis. Individual data can be removed in accordance with the prospective specifications in the study protocol, without it being necessary to exclude the whole study. The disadvantages of pooled reanalysis are that it demands a great deal of time and money and that it depends on the willingness of the authors of the individual studies to cooperate. If not all authors send their individual data, the results may be biased.

The level of evidence of the type of summary increases from the systematic review to the meta-analysis of published data to the pooled reanalysis. It is important that all three forms of summary should be performed with high quality.

Key messages

  • The various forms of summary can be categorized as systematic review articles, meta-analyses of published data, and pooled reanalyses.
  • Systematic review articles can provide a rapid overview of the status of research on a specific topic.
  • Meta-analyses of published data and pooled reanalyses additionally permit the calculation of pooled effect estimates.
  • Pooled reanalyses allow a detailed evaluation on the basis of individual data.
  • Like any original study, all these types of summary must have an a priori study protocol, laying down in detail the research questions, the hypothesis, the literature search, the inclusion and exclusion criteria, and the analysis strategies.

Acknowledgments

Translated from the original German by Rodney A. Yeates, M.A., Ph.D.

Conflict of interest statement

The authors declare that there is no conflict of interest in the sense of the guidelines of the International Committee of Medical Journal Editors.
