Dahlgren Memorial Library

The Graduate Health & Life Sciences Research Library at Georgetown University Medical Center

Systematic reviews

  • Should I do a systematic review?
  • Writing the Protocol
  • Building a Systematic Search
  • Where to Search
  • Managing Project Data
  • How can a DML librarian help?

How do I write a protocol?

The protocol serves as a roadmap for your review and specifies the objectives, methods, and outcomes of primary interest of the systematic review. Having a protocol promotes transparency and can be helpful for project management. Some journals require you to submit your protocol along with your manuscript. 

A good way to familiarize yourself with research protocols is to take a look at those registered on PROSPERO. PROSPERO's registration form includes 22 mandatory fields and 18 optional fields which will help you to explain every aspect of your research plan. 

  • PROSPERO - International prospective register of systematic reviews

A protocol ideally includes the following:

  • Databases to be searched and additional sources (particularly for grey literature)
  • Keywords to be used in the search strategy
  • Limits applied to the search
  • Screening process
  • Data to be extracted
  • Summary of data to be reported

Once you have written your protocol, it is advisable to register it. Registering your protocol is a good way to announce that you are working on a review, so that others do not start working on it.

The University of Warwick's protocol template is available below and is a great tool for planning your protocol. 

  • Last Updated: Jul 22, 2024 3:29 PM
  • URL: https://guides.dml.georgetown.edu/systematicreviews

The Responsible Use of Electronic Resources policy governs the use of resources provided on these guides. © Dahlgren Memorial Library, Georgetown University Medical Center. Unless otherwise stated, these guides may be used for educational or academic purposes as long as proper attribution is given. Please seek permission for any modifications, adaptations, or for commercial purposes. Email [email protected] to request permission. Proper attribution includes: Written by or adapted from, Dahlgren Memorial Library, URL.

Systematic Reviews and Meta-Analysis

  • Getting Started
  • Guides and Standards
  • Review Protocols
  • Databases and Sources
  • Randomized Controlled Trials
  • Controlled Clinical Trials
  • Observational Designs
  • Tests of Diagnostic Accuracy
  • Software and Tools
  • Where do I get all those articles?
  • Collaborations
  • EPI 233/528
  • Countway Mediated Search
  • Risk of Bias (RoB)

Living Systematic Review

Carole Mitnick, Molly Franke, Celia Fung, Andrew Lindeborg. Clinical Outcomes of Individuals with COVID-19 and Tuberculosis Disease: a Living Systematic Review. PROSPERO 2020 CRD42020187349

Systematic Review and Meta-Analysis

Brindle ME, Roberts DJ, Daodu O, Haynes AB, Cauley C, Dixon E, La Flamme C, Bain P, Berry W. Deriving literature-based benchmarks for surgical complications in high-income countries: a protocol for a systematic review and meta-analysis. BMJ Open. 2017 May 9. PMID: 28487456

We require a completed protocol before we will carry out final searches on any knowledge synthesis project.

We encourage you to use this template, which is based on the PRISMA-P checklist (Moher D, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1. PMID: 25554246).

  • Countway Protocol Template

Why a Protocol

From the Cochrane Handbook:

“The protocol sets out the context in which the review is being conducted. It presents an opportunity to develop ideas that are foundational for the review.”

“Preparing a systematic review is complex and involves many judgements. To minimize the potential for bias in the review process, these judgements should be made as far as possible in ways that do not depend on the findings of the studies included in the review.”

“Publication of a protocol for a review that is written without knowledge of the available studies reduces the impact of review authors’ biases, promotes transparency of methods and processes, reduces the potential for duplication, allows peer review of the planned methods before they have been completed, and offers an opportunity for the review team to plan resources and logistics for undertaking the review itself.”

Lasserson TJ, Thomas J, Higgins JPT. Chapter 1: Starting a review. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook.

A protocol is your plan for carrying out your knowledge synthesis. It presents the rationale for carrying out the project and clearly states the aims of the work. The protocol describes the process for selecting research for inclusion, including the provision of explicit criteria for assessing reports for inclusion and for analyzing the included reports. Hence, it is an internal document that helps team members work together more smoothly. But it also is a hedge against bias by clearly stating the rules of the game before any work has begun. A protocol makes it more difficult to alter selection patterns based on perceived results. Beyond acting as a roadmap for your research, protocols, when registered or published in some way, allow others to see your research plan, establishing priority and reducing the risk of duplicate research.

Protocol Reporting Guidelines

  • PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses): PRISMA-P was published in 2015 to facilitate the development and reporting of systematic review protocols.
  • MECIR (Methodological Expectations of Cochrane Intervention Reviews): Standards for the conduct of new Cochrane Intervention Reviews, and for the planning and conduct of updates.

Protocol Registries

  • PROSPERO International prospective register of systematic reviews
  • OSF (Open Science Framework) OSF is a free, open platform to support your research and enable collaboration.
  • Cochrane If planning a Cochrane Review, you must publish your protocol with them after your proposal has been accepted.

Additional Resources

  • Writing a review protocol - good practice and common errors: A two-part webinar from Cochrane Training providing up-to-date guidance for review authors who want to learn more about developing their own protocol.
  • Last Updated: Sep 19, 2024 10:43 AM
  • URL: https://guides.library.harvard.edu/meta-analysis
         




Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney . Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

The running example throughout this guide is a systematic review of probiotics for eczema by Boyle and colleagues. They answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is not a type of review but a statistical technique: it combines the results of two or more studies, usually to estimate an effect size.

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The seven steps for conducting a systematic review are explained below, with a running example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the probiotics example, the PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, the type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
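As a rough illustration of how synonyms and Boolean operators combine, a search string can be assembled programmatically. The terms and the `or_block` helper below are hypothetical, not a validated search strategy:

```python
# Sketch: build a Boolean search string from synonym lists.
# The terms are illustrative placeholders, not a validated strategy.
population = ["eczema", "atopic dermatitis"]
intervention = ["probiotic*", "lactobacillus"]

def or_block(terms):
    """Join synonyms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# OR within a concept, AND between concepts.
query = " AND ".join(or_block(group) for group in [population, intervention])
print(query)
# (eczema OR atopic dermatitis) AND (probiotic* OR lactobacillus)
```

Each synonym group is ORed internally and the groups are ANDed together, mirroring how a PubMed-style strategy is typically structured.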

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as EndNote or Zotero.

In the probiotics example, the search covered:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
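One common way to quantify inter-rater reliability between two screeners is Cohen's kappa, which adjusts raw agreement for agreement expected by chance. This is a minimal sketch, and the include/exclude decisions shown are made up:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' decisions on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal proportions.
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for five records.
screener_a = ["include", "exclude", "exclude", "include", "exclude"]
screener_b = ["include", "exclude", "include", "include", "exclude"]
print(round(cohens_kappa(screener_a, screener_b), 2))  # → 0.62
```

A kappa near 1 indicates strong agreement; low values suggest the team should revisit and clarify the selection criteria before continuing.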

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.
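The bookkeeping behind a flow diagram is simple subtraction at each stage. A sketch with made-up counts:

```python
# Hypothetical screening counts for a PRISMA-style flow diagram.
records_identified = 1200        # from all databases and other sources
duplicates_removed = 200
excluded_title_abstract = 850    # phase 1: titles and abstracts
excluded_full_text = 120         # phase 2: full-text assessment

records_screened = records_identified - duplicates_removed
full_texts_assessed = records_screened - excluded_title_abstract
studies_included = full_texts_assessed - excluded_full_text

print(records_screened, full_texts_assessed, studies_included)
# 1000 150 30
```

Tracking these counts as you go makes the final diagram (and the reasons recorded for each exclusion) straightforward to report.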

Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

They also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
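As a sketch of the quantitative approach, a fixed-effect inverse-variance meta-analysis weights each study's effect by the inverse of its variance, so more precise studies count for more. The effect sizes and standard errors below are invented for illustration:

```python
import math

# Hypothetical effect sizes (e.g. log odds ratios) and standard
# errors from three studies; not real data.
effects = [0.30, 0.10, 0.25]
std_errors = [0.15, 0.20, 0.10]

# Fixed-effect inverse-variance weighting.
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(round(pooled, 3), round(ci_low, 3), round(ci_high, 3))
```

Real meta-analyses typically also assess between-study heterogeneity and may use a random-effects model instead; dedicated packages handle those details.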

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.


A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.


A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.


Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved September 22, 2024, from https://www.scribbr.com/methodology/systematic-review/


How to write a systematic review or meta-analysis protocol


Julien Al Shakarchi, How to write a systematic review or meta-analysis protocol, Journal of Surgical Protocols and Research Methodologies, Volume 2022, Issue 3, July 2022, snac015, https://doi.org/10.1093/jsprm/snac015


A protocol is an important document that specifies the research plan for a systematic review or meta-analysis. In this paper, we have explained a simple and clear approach to writing a research study protocol for a systematic review or meta-analysis.

A study protocol is an essential part of any research project. It sets out in detail the research methodology to be used for the systematic review or meta-analysis, and it helps the research team stay focused on the question the study is meant to answer. PROSPERO, from the Centre for Reviews and Dissemination at the University of York, is an international prospective register of systematic reviews; authors should consider registering their research to reduce the potential for duplication of work. In this paper, we will explain how to write a research protocol by describing what needs to be included.

Introduction

This section sets out the need for the planned research and the context of the current evidence. It should be supported by an extensive background to the topic, with appropriate references to the literature, followed by a brief description of the condition and the target population. A clear explanation of the rationale and objective of the project is also expected, to justify the need for the study.

Methods and analysis

A detailed search strategy must be described in the protocol. It should set out which databases will be searched, the specific keywords to be used, and the publication timeframe. The inclusion/exclusion criteria should be described for the types of studies, participants, and interventions. The population, intervention, comparator and outcome (PICO) framework is a useful tool to consider for this section.

The methodology of the data extraction should be detailed in this section, including how many reviewers will be involved and how any disagreement will be resolved. The methodology to be used for quality and bias assessment of included studies should also be described here. Data analysis, including the statistical methodology, needs to be established clearly in this section of the protocol. Finally, details of any planned subgroup analyses should also be included.

Ethics and dissemination

Any competing interests of the researchers should be stated in this section. Authorship of any resulting publication should be determined by clear and fair criteria, described in this section of the protocol; doing so will help resolve any issues arising at the publication stage.

Funding statement

It is important to explain who the sponsors and funders of the study are, and to clarify the involvement and potential influence of each party. The protocol should explicitly outline the roles and responsibilities of any funder(s) in study design, data analysis and interpretation, manuscript writing, and dissemination of results.

A protocol is an important document that specifies the research plan for a systematic review or meta-analysis. It should be written in detail and researchers should aim to publish their study protocols. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement provides a useful checklist on what should be included in a systematic review [ 1 ]. In this paper, we have explained a simple and clear approach to writing a research study protocol for a systematic review or meta-analysis.

Conflict of interest statement

None declared.

1. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71.



Systematic Reviews: Create a Protocol

  • Getting Started
  • Develop a Research Question
  • Create a Protocol
  • Search for Literature
  • Conduct Screening
  • Appraise & Synthesize
  • Report Results
  • Types of Reviews Chart Page
  • Data Management Page

Step 2: Create a Protocol


A systematic review protocol states your rationale, hypothesis, and planned methodology. Members of the team then use the protocol as a guide for conducting the research. It is recommended that you register your protocol before conducting your review. Registering your protocol will improve transparency as well as alerting other researchers of your intentions so efforts are not duplicated.

Start Writing Your Protocol with this Template


UK Libraries has developed a protocol development tool that is freely available to UK faculty, staff, and students. You will find a template with detailed guidance and examples, as well as links back to various sections of this guide. Please note that DMPTool itself does not register your protocol: you will need to download the PDF and register it separately. You can read more about registering further down on this page.

  • DMPTool -- Data Management Planning Tool
  • Starting a Review: A Protocol This template is intended to help UK students, faculty, and researchers begin a systematic review, scoping review, or other evidence synthesis. We have adapted the PRISMA Protocol Extension into this active form. Guidance is provided throughout the template.
  • Developing a Protocol Using DMPTool Use this PDF to develop a protocol.


Systematic Review

Alexander Flannery, Chad Venn. Healthcare Exposures Preceding Hospitalization with Sepsis: A Systematic Review. PROSPERO 2020 CRD42020216759 https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020216759


Scoping Review

OSF Registration and Publication

Robinson, L. E., Skaggs, P., Scutchfield, F. D., & Bell, S. (2021, December 16). A Scoping Review Examining Providers’ Stigmas and the Effects on Patients with Opioid Use Disorder: A Protocol. https://doi.org/10.17605/OSF.IO/UMJAF

What's the Difference between Registering & Publishing

A systematic review protocol can either be registered or published. Only a few journals publish systematic review protocols. Registering your protocol in a publicly accessible database is encouraged so that other authors do not complete a review on your topic. It is a best practice to search for publicly registered reviews on your topic before starting the review process. Registering your protocol helps to avoid unintended duplication of reviews and increases transparency.

BMJ Open Publication

Lawrence, K. A., Pachner, T. M., Long, M. M., Henderson, S., Schuman, D. L., & Plassman, B. L. (2020). Risk and protective factors of dementia among adults with post-traumatic stress disorder: a systematic review protocol.  BMJ Open, 10 (6), e035517. https://doi.org/10.1136/bmjopen-2019-035517


  • Campbell Collaboration "The Campbell Collaboration promotes positive social and economic change through the production and use of systematic reviews and other evidence synthesis for evidence-based policy and practice." Disciplines: Business and Management, Crime and Justice, Disability, Education, International Development, Knowledge Translation and Implementation, Methods, Nutrition, and Social Welfare
  • Cochrane "Our mission is to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence. Our work is internationally recognized as the benchmark for high-quality information about the effectiveness of health care." Disciplines: Healthcare
  • PROSPERO An international database of prospectively registered systematic reviews in health and social care. Key features from the review protocol are recorded and maintained as a permanent record. (Does not accept scoping reviews.) Disciplines: Health and Social Care, Welfare, Public Health, Education, Crime, Justice, and International Development
  • Open Science Framework An open source web application that connects and supports the research workflow. Researchers use the OSF to collaborate, document, archive, share, and register research projects, materials, and data. OSF can be used to pre-register a systematic or scoping review protocol and to share documents such as a Zotero library, search strategies, and data extraction forms. Disciplines: Multidisciplinary

Where to Publish a Protocol

  • BMJ Open BMJ Open "will consider publishing without peer review protocols that have formal ethical approval and funding from a recognised, open access advocating research-funding body". Otherwise, protocols are peer reviewed.
  • Systematic Reviews, a BioMed Central journal This open access title publishes protocols of systematic reviews broadly related to health sciences.
  • Cochrane Handbook for Systematic Reviews Chapter 2: Guide to the contents of a Cochrane protocol and review
  • EQUATOR Network The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network is an international initiative that seeks to improve the reliability and value of published health research literature by promoting transparent and accurate reporting and wider use of robust reporting guidelines.
  • Finding What Works in Health Care: Standards for Systematic Reviews Recommended standards for developing the systematic review protocol.
  • JBI Manual for Evidence Synthesis 1.3 The review protocol
  • MECIR (Methodological Expectations for Cochrane Intervention Reviews) Manual Guidelines on reporting protocols for Cochrane Intervention reviews
  • Protocol Guidance from PRISMA Resources from the team that developed PRISMA

Book Cover

Searching the Grey Literature: a Handbook for Searching Reports, Working Papers, and Other Unpublished Research

Book Cover

Umbrella Reviews: Evidence Synthesis with Overviews of Reviews and Meta-Epidemiologic Studies

Book Cover

Cochrane Handbook for Systematic Reviews of Interventions

Book Cover

Comprehensive Systematic Review for Advanced Practice Nursing, Second Edition

  • << Previous: Develop a Research Question
  • Next: Search for Literature >>
  • Last Updated: Sep 4, 2024 9:07 AM
  • URL: https://libguides.uky.edu/systematicreview


Cochrane Interactive Learning

Cochrane Interactive Learning, Module 2: Writing the Review Protocol

About This Module

Part of the Cochrane Interactive Learning course on Conducting an Intervention Review, this module explains why a review protocol is a crucial step in planning and delivering a systematic review. This module teaches you about the components of a protocol, and how to define eligibility criteria using the PICO format.

Duration: 45-60 minutes

What You Can Expect to Learn (Learning Outcomes)

This module will teach you to:

  • Recognize the importance of Cochrane Protocols
  • Identify the eligibility criteria for studies to be included in a Cochrane Review
  • Identify the information that should be included in the background of a Cochrane Review
  • Recognize the key components of a well-written objective
  • Recognize the structure of a protocol

Authors, contributors, and how to cite this module

Module 2 has been written and compiled by Dario Sambunjak, Miranda Cumpston, and Chris Watts, Cochrane Central Executive Team.

A full list of acknowledgements, including our expert advisors from across Cochrane, is available at the end of each module page.

This module should be cited as: Sambunjak D, Cumpston M, Watts C. Module 2: Writing the review protocol. In: Cochrane Interactive Learning: Conducting an intervention review. Cochrane, 2017. Available from https://training.cochrane.org/interactivelearning/module-2-writing-review-protocol.

Update and feedback

The module was last updated in September 2022.

We're pleased to hear your thoughts. If you have any questions, comments, or feedback about the content of this module, please contact us.


Developing a Protocol for Systematic and Scoping Reviews

Protocol Templates

  • Making Your Protocol Available
  • Additional Resources
  • Related Guides
  • Getting Help

Click the tabs below to learn more about template resources for systematic and scoping reviews.

  • Systematic Review Templates
  • Scoping Review Templates

Systematic Review Protocol Templates

The following resources offer templates for authors to develop a systematic review protocol.

  • PRISMA-P for Systematic Review Protocols Developed in 2015, the PRISMA-P (Preferred Reporting Items for Systematic review and Meta-Analysis Protocols) checklist provides guidance on what should be included in an SR protocol. Like other PRISMA models, this should be viewed as the bare minimum of what to include.
  • Campbell Collaboration The Campbell Collaboration is an organization through which systematic reviews can be conducted. Campbell follows the Cochrane Handbook guidelines for systematic reviews as well as its own policies and guidelines for the protocol and organization of the review (The Campbell Collaboration, 2020). Authors who wish to publish with Campbell must register and be approved prior to conducting the evidence synthesis. Review Campbell's website for more information.
  • Cochrane Handbook Cochrane Reviews offers distinct descriptions and requirements for what is to be included in a protocol when conducting a Cochrane review. This information is available in the Methodological Expectations of Cochrane Intervention Reviews (MECIR). Keep in mind, in order to conduct a Cochrane review, there are further measures authors must take in addition to the procedures for conducting a systematic review (Cumpston & Chandler, 2021). Review the Cochrane website carefully prior to beginning your review process.

Scoping Review Protocol Templates

The following resources offer templates for authors conducting a scoping review.

  • PRISMA-ScR While the Preferred Reporting Items for Systematic Reviews and Meta-Analyses group provides extensive guidance for authors completing systematic reviews, they have also developed a template and information for authors writing scoping reviews (Tricco et al., 2018). This checklist should be treated as a minimal requirement for authors to follow.
  • Joanna Briggs Institute (JBI) This link downloads a Microsoft Word document detailing the specific template for completing a scoping review through the Joanna Briggs Institute. The JBI Manual provides information on each section of a scoping review as well as how to distinguish a scoping review from other forms of evidence synthesis (Peters et al., 2020).
  • Scoping Reviews: JBI Manual Chapter of the JBI Manual covering what authors need to know regarding scoping reviews.
  • << Previous: Home
  • Next: Making Your Protocol Available >>
  • Last Updated: Sep 4, 2024 10:48 AM
  • URL: https://guides.library.duq.edu/protocols

  • What is a Systematic Review?
  • Types of Reviews
  • Manuals and Reporting Guidelines
  • Our Service
  • 1. Assemble Your Team
  • 2. Develop a Research Question
  • 3. Write and Register a Protocol
  • 4. Search the Evidence
  • 5. Screen Results
  • 6. Assess for Quality and Bias
  • 7. Extract the Data
  • 8. Write the Review
  • Additional Resources
  • Finding Full-Text Articles

The Whats and Whys of Protocols

Systematic reviews and scoping reviews should have a protocol, which helps to plan and outline the study methodology. The protocol should include:

  • the rationale for the review
  • key questions broken into PICO (or other structured research question) components
  • inclusion/exclusion criteria
  • literature searches for published/unpublished literature
  • data abstraction/data management
  • assessment of methodological quality/risk of bias of individual studies (not required for scoping reviews)
  • data synthesis
  • grading the evidence for each key question

Why should you complete a protocol?

  • A protocol is your planning document and roadmap for the project. It allows you to complete a systematic review efficiently and accurately, ensures greater understanding among team members, and makes writing the manuscript far easier.
  • Many journals now require submitted systematic reviews to have registered protocols.
  • The PRISMA Reporting Standard lists information about the systematic review protocol as an "essential element" (PRISMA 2020, Item 24).
  • The Cochrane Handbook, the Institute of Medicine Standards, and others all list completing a protocol as one of the important steps to a successful systematic review.
  • Best practices in systematic reviews: the importance of protocols & registration
  • Planning a systematic review? Think protocols

Writing a Protocol

Protocol templates:

  • PRISMA for systematic review protocols (PRISMA-P) Checklist and explanation of what should be included in a systematic review protocol.
  • The PROSPERO systematic review protocol template
  • OSF Scoping Review Protocol Template and Guidance Document "The Guidance document is intended to be used in tandem with the Scoping Review Protocol Template. The Guidance document includes tips, examples, and details about each section of the protocol. The Template includes headings and subheadings to use to structure the protocol (e.g., which order to present the information, what level of detail, etc.)."
  • JBI scoping review protocol template

Resources to help authors prepare a protocol for a systematic or scoping review:

  • Institute of Medicine – Standards for Systematic Reviews - Section 2.6
  • The Cochrane Handbook - Section ii.1.4
  • JBI Manual for Evidence Synthesis - Section 1.3 (Systematic reviews) & 11.2 (Scoping reviews)

Where to Register a Protocol

After you write the protocol, you should register it with a review registry. There are numerous review registries available, such as PROSPERO or OSF. Registration is free and open to anyone undertaking systematic reviews. Some journals also publish systematic review protocols.

  • PROSPERO A registry for systematic review protocols
  • How to register with PROSPERO

OSF can be used to pre-register a systematic or scoping review protocol and to share documents such as a citation management library, search strategies, and data extraction forms. Unlike other registries, evidence synthesis author teams do not submit their protocols for review by an editorial board before they are accepted and pre-registered on OSF. Instead, create your own pre-registration.

  • How to create an OSF registration
  • OSF Registrations Form

Scoping reviews may not be registered with PROSPERO. Currently, they can be registered with the Open Science Framework or Figshare.

Publishing a Protocol

  • BioMed Central Protocols BioMed Central will consider protocols of any type of research for publication, following the standard peer review.
  • BMJ Open BMJ Open "will consider publishing without peer review protocols that have formal ethical approval and funding from a recognized, open access advocating research-funding body". Otherwise, protocols are peer reviewed.
  • JBI Evidence Synthesis Like systematic reviews, scoping review protocols can be published in some journals.
  • Systematic Reviews, a BioMed Central journal This open access title publishes protocols of systematic reviews broadly related to health sciences.
  • << Previous: 2. Develop a Research Question
  • Next: 4. Search the Evidence >>
  • Last Updated: Jun 18, 2024 9:41 AM
  • URL: https://guides.mclibrary.duke.edu/sysreview
Systematic Reviews: Creating a Protocol

  • What Type of Review is Right for You?
  • What is in a Systematic Review
  • Finding and Appraising Systematic Reviews
  • Formulating Your Research Question
  • Inclusion and Exclusion Criteria
  • Creating a Protocol
  • Results and PRISMA Flow Diagram
  • Searching the Published Literature
  • Searching the Gray Literature
  • Methodology and Documentation
  • Managing the Process
  • Scoping Reviews

What is a Protocol?

A protocol is the roadmap for your systematic review. You will develop your protocol at the very beginning of the process, before you begin your searches. Your protocol may change as you go through your review, but it is important to create a thorough protocol to help guide your research process. You can still edit a registered protocol after it has been submitted; simply submit an amendment.

What is in a Protocol?

According to the PRISMA standards, a protocol should include:

  • Definitions
  • Inclusion and Exclusion criteria
  • Information Sources (Inclusion or exclusion of grey literature, the search strategy, and justification for inclusion or exclusion)
  • Search Strategy
  • Study selection process including how you will resolve disagreements
  • Description of Data Management
  • Data Collection Process
  • Risk of bias analysis
  • Data Synthesis

Learn More:

  • Evidence Synthesis Protocol Template This document is based on the PRISMA Statement (evidence-based minimum set of items for reporting in systematic reviews and meta-analyses) extensions for systematic review protocols and scoping reviews, and materials developed by The Campbell Collaboration (as referenced below).
  • Preparation Checklist for Structured Literature Reviews Writing a literature review for a research paper or as part of your thesis? Even if you're not performing a full evidence synthesis, completing the items on this checklist and keeping them as a record of your planned work (like a study protocol) ensures reproducibility and transparency and reduces bias.
  • PRISMA-P (Preferred Reporting Items for Systematic review and Meta-Analysis Protocols) 2015 checklist: recommended items to address in a systematic review protocol
  • Operationalized PRISMA-P Checklist

More on Methods

A good methods section is narrative.

It will describe:

  • the resources searched, including the interface
  • the date the search ended
  • the search overall, including which concepts were included
  • any limits applied
  • any additional search strategies

You should also include at least one copied-and-pasted search.

The exact search string will be included in the appendix.

Registering your protocol

It is recommended that you register your systematic review protocol prior to conducting your review. Registration improves transparency and reproducibility and helps ensure that other research teams do not duplicate efforts.

A protocol documents the key points of your systematic review. A protocol should include a conceptual discussion of the problem and include the following:

  • Rationale, background
  • Definitions of your subject/topics
  • The potential contribution of the review to clinical decision making
  • Whether there is enough relevant literature to merit a systematic review/meta-analysis
  • Inclusion/exclusion criteria
  • PICOS of interest (Population, Intervention, Comparison, Outcomes, Study types to be reviewed)
  • Sources you will use to search the literature (& search syntax if possible)
  • Screening methods
  • Data extraction methods
  • Methods to assess for bias
  • Contact details

If you are working with the Cochrane or Campbell Collaborations, you will publish your protocol with those organizations. If you are working independently, consider registration with:

  • Campbell Collaboration "The Campbell Collaboration promotes positive social and economic change through the production and use of systematic reviews and other evidence synthesis for evidence-based policy and practice." Disciplines: Business and Management, Crime and Justice, Disability, Education, International Development, Knowledge Translation and Implementation, Methods, Nutrition, and Social Welfare
  • Cochrane "Our mission is to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence. Our work is internationally recognized as the benchmark for high-quality information about the effectiveness of health care." Disciplines: Healthcare
  • Collaboration for Environmental Evidence "An open community of stakeholders working towards a sustainable global environment and the conservation of biodiversity. CEE seeks to promote and deliver evidence syntheses on issues of greatest concern to environmental policy and practice as a public service." Disciplines: Environmental issues
  • Open Science Framework An open source web application that connects and supports the research workflow. Researchers use the OSF to collaborate, document, archive, share, and register research projects, materials, and data. OSF can be used to pre-register a systematic review protocol and to share documents such as a Zotero library, search strategies, and data extraction forms. Disciplines: Multidisciplinary
  • PROSPERO An international database of prospectively registered systematic reviews in health and social care. Key features from the review protocol are recorded and maintained as a permanent record. (Does not accept scoping reviews) Disciplines: Health and Social Care, Welfare, Public Health, Education, Crime, Justice, and International Development
  • << Previous: Inclusion and Exclusion Criteria
  • Next: Results and PRISMA Flow Diagram >>
  • Last Updated: Sep 6, 2024 1:05 PM
  • URL: https://guides.lib.lsu.edu/Systematic_Reviews


Guidance to best tools and practices for systematic reviews

Kat Kolaski

1 Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC USA

Lynne Romeiser Logan

2 Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY USA

John P. A. Ioannidis

3 Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA USA

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13643-023-02255-9.

Part 1. The state of evidence synthesis

Evidence syntheses are commonly regarded as the foundation of evidence-based medicine (EBM). They are widely accredited for providing reliable evidence and, as such, they have significantly influenced medical research and clinical practice. Despite their uptake throughout health care and ubiquity in contemporary medical literature, some important aspects of evidence syntheses are generally overlooked or not well recognized. Evidence syntheses are mostly retrospective exercises, they often depend on weak or irreparably flawed data, and they may use tools that have acknowledged or yet unrecognized limitations. They are complicated and time-consuming undertakings prone to bias and errors. Production of a good evidence synthesis requires careful preparation and high levels of organization in order to limit potential pitfalls [ 1 ]. Many authors do not recognize the complexity of such an endeavor and the many methodological challenges they may encounter. Failure to do so is likely to result in research and resource waste.

Given their potential impact on people’s lives, it is crucial for evidence syntheses to correctly report on the current knowledge base. In order to be perceived as trustworthy, reliable demonstration of the accuracy of evidence syntheses is equally imperative [ 2 ]. Concerns about the trustworthiness of evidence syntheses are not recent developments. From the early years when EBM first began to gain traction until recent times when thousands of systematic reviews are published monthly [ 3 ], the rigor of evidence syntheses has always varied. Many systematic reviews and meta-analyses had obvious deficiencies because original methods and processes had gaps, lacked precision, and/or were not widely known. The situation has improved with empirical research concerning which methods to use and standardization of appraisal tools. However, given the geometric increase in the number of evidence syntheses being published, a relatively larger pool of unreliable evidence syntheses is being published today.

Publication of methodological studies that critically appraise the methods used in evidence syntheses is increasing at a fast pace. This reflects the availability of tools specifically developed for this purpose [ 4 – 6 ]. Yet many clinical specialties report that alarming numbers of evidence syntheses fail on these assessments. The syntheses identified report on a broad range of common conditions including, but not limited to, cancer, [ 7 ] chronic obstructive pulmonary disease, [ 8 ] osteoporosis, [ 9 ] stroke, [ 10 ] cerebral palsy, [ 11 ] chronic low back pain, [ 12 ] refractive error, [ 13 ] major depression, [ 14 ] pain, [ 15 ] and obesity [ 16 , 17 ]. The situation is even more concerning with regard to evidence syntheses included in clinical practice guidelines (CPGs) [ 18 – 20 ]. Astonishingly, in a sample of CPGs published in 2017–18, more than half did not apply even basic systematic methods in the evidence syntheses used to inform their recommendations [ 21 ].

These reports, while not widely acknowledged, suggest there are pervasive problems not limited to evidence syntheses that evaluate specific kinds of interventions or include primary research of a particular study design (eg, randomized versus non-randomized) [ 22 ]. Similar concerns about the reliability of evidence syntheses have been expressed by proponents of EBM in highly circulated medical journals [ 23 – 26 ]. These publications have also raised awareness about redundancy, inadequate input of statistical expertise, and deficient reporting. These issues plague primary research as well; however, there is heightened concern for the impact of these deficiencies given the critical role of evidence syntheses in policy and clinical decision-making.

Methods and guidance to produce a reliable evidence synthesis

Several international consortiums of EBM experts and national health care organizations currently provide detailed guidance (Table 1). They draw criteria from the reporting and methodological standards of currently recommended appraisal tools, and regularly review and update their methods to reflect new information and changing needs. In addition, they endorse the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the overall quality of a body of evidence [ 27 ]. These groups typically certify or commission systematic reviews that are published in exclusive databases (eg, Cochrane, JBI) or are used to develop government or agency sponsored guidelines or health technology assessments (eg, National Institute for Health and Care Excellence [NICE], Scottish Intercollegiate Guidelines Network [SIGN], Agency for Healthcare Research and Quality [AHRQ]). They offer developers of evidence syntheses various levels of methodological advice, technical and administrative support, and editorial assistance. Use of specific protocols and checklists are required for development teams within these groups, but their online methodological resources are accessible to any potential author.

Table 1 Guidance for development of evidence syntheses

 Cochrane (formerly Cochrane Collaboration)
 JBI (formerly Joanna Briggs Institute)
 National Institute for Health and Care Excellence (NICE)—United Kingdom
 Scottish Intercollegiate Guidelines Network (SIGN) —Scotland
 Agency for Healthcare Research and Quality (AHRQ)—United States

Notably, Cochrane is the largest single producer of evidence syntheses in biomedical research; however, these only account for 15% of the total [ 28 ]. The World Health Organization requires Cochrane standards be used to develop evidence syntheses that inform their CPGs [ 29 ]. Authors investigating questions of intervention effectiveness in syntheses developed for Cochrane follow the Methodological Expectations of Cochrane Intervention Reviews [ 30 ] and undergo multi-tiered peer review [ 31 , 32 ]. Several empirical evaluations have shown that Cochrane systematic reviews are of higher methodological quality compared with non-Cochrane reviews [ 4 , 7 , 9 , 11 , 14 , 32 – 35 ]. However, some of these assessments have biases: they may be conducted by Cochrane-affiliated authors, and they sometimes use scales and tools developed and used in the Cochrane environment and by its partners. In addition, evidence syntheses published in the Cochrane database are not subject to space or word restrictions, while non-Cochrane syntheses are often limited. As a result, information that may be relevant to the critical appraisal of non-Cochrane reviews is often removed or is relegated to online-only supplements that may not be readily or fully accessible [ 28 ].

Influences on the state of evidence synthesis

Many authors are familiar with the evidence syntheses produced by the leading EBM organizations but can be intimidated by the time and effort necessary to apply their standards. Instead of following their guidance, authors may employ methods that are discouraged or outdated [ 28 ]. Suboptimal methods described in the literature may then be taken up by others. For example, the Newcastle–Ottawa Scale (NOS) is a commonly used tool for appraising non-randomized studies [ 36 ]. Many authors justify their selection of this tool with reference to a publication that describes the unreliability of the NOS and recommends against its use [ 37 ]. Obviously, the authors who cite this report for that purpose have not read it. Authors and peer reviewers have a responsibility to use reliable and accurate methods and not copycat previous citations or substandard work [ 38 , 39 ]. Similar cautions may extend to automation tools. Development of these tools has concentrated on evidence searching [ 40 ] and study selection, given how demanding it is for humans to maintain truly up-to-date evidence [ 2 , 41 ]. Cochrane has deployed machine learning to identify randomized controlled trials (RCTs) and studies related to COVID-19 [ 2 , 42 ], but such tools are not yet commonly used [ 43 ]. The routine integration of automation tools in the development of future evidence syntheses should not displace the interpretive part of the process.

Editorials about unreliable or misleading systematic reviews highlight several of the intertwining factors that may contribute to continued publication of unreliable evidence syntheses: shortcomings and inconsistencies of the peer review process, lack of endorsement of current standards on the part of journal editors, the incentive structure of academia, industry influences, publication bias, and the lure of “predatory” journals [ 44 – 48 ]. At this juncture, clarification of the extent to which each of these factors contribute remains speculative, but their impact is likely to be synergistic.

Over time, the generalized acceptance of the conclusions of systematic reviews as incontrovertible has affected trends in the dissemination and uptake of evidence. Reporting of the results of evidence syntheses and recommendations of CPGs has shifted beyond medical journals to press releases and news headlines and, more recently, to the realm of social media and influencers. The lay public and policy makers may depend on these outlets for interpreting evidence syntheses and CPGs. Unfortunately, communication to the general public often reflects intentional or non-intentional misrepresentation or “spin” of the research findings [ 49 – 52 ]. News and social media outlets also tend to reduce conclusions on a body of evidence and recommendations for treatment to binary choices (eg, “do it” versus “don’t do it”) that may be assigned an actionable symbol (eg, red/green traffic lights, smiley/frowning face emoji).

Strategies for improvement

Many authors and peer reviewers are volunteer health care professionals or trainees who lack formal training in evidence synthesis [ 46 , 53 ]. Informing them about research methodology could increase the likelihood they will apply rigorous methods [ 25 , 33 , 45 ]. We tackle this challenge, from both a theoretical and a practical perspective, by offering guidance applicable to any specialty. It is based on recent methodological research that is extensively referenced to promote self-study. However, the information presented is not intended to be a substitute for committed training in evidence synthesis methodology; instead, we hope to inspire our target audience to seek such training. We also hope to inform a broader audience of clinicians and guideline developers influenced by evidence syntheses. Notably, these communities often include the same members who serve in different capacities.

In the following sections, we highlight methodological concepts and practices that may be unfamiliar, problematic, confusing, or controversial. In Part 2, we consider various types of evidence syntheses and the types of research evidence summarized by them. In Part 3, we examine some widely used (and misused) tools for the critical appraisal of systematic reviews and reporting guidelines for evidence syntheses. In Part 4, we discuss how to meet methodological conduct standards applicable to key components of systematic reviews. In Part 5, we describe the merits and caveats of rating the overall certainty of a body of evidence. Finally, in Part 6, we summarize suggested terminology, methods, and tools for development and evaluation of evidence syntheses that reflect current best practices.

Part 2. Types of syntheses and research evidence

A good foundation for the development of evidence syntheses requires an appreciation of their various methodologies and the ability to correctly identify the types of research potentially available for inclusion in the synthesis.

Types of evidence syntheses

Systematic reviews have historically focused on the benefits and harms of interventions; over time, various types of systematic reviews have emerged to address the diverse information needs of clinicians, patients, and policy makers [ 54 ]. Systematic reviews with traditional components have become defined by the different topics they assess (Table 2.1 ). In addition, other distinctive types of evidence syntheses have evolved, including overviews or umbrella reviews, scoping reviews, rapid reviews, and living reviews. The popularity of these has been increasing in recent years [ 55 – 58 ]. A summary of the development, methods, available guidance, and indications for these unique types of evidence syntheses is available in Additional File 2 A.

Table 2.1 Types of traditional systematic reviews

Review type | Topic assessed | Elements of research question (mnemonic)

Intervention [ , ] | Benefits and harms of interventions used in healthcare. | Population, Intervention, Comparator, Outcome (PICO)

Diagnostic test accuracy [ ] | How well a diagnostic test performs in diagnosing and detecting a particular disease. | Population, Index test(s), and Target condition (PIT)

Qualitative (Cochrane) [ ] | Questions are designed to improve understanding of intervention complexity, contextual variations, implementation, and stakeholder preferences and experiences. | Setting, Perspective, Intervention or Phenomenon of Interest, Comparison, Evaluation (SPICE); Sample, Phenomenon of Interest, Design, Evaluation, Research type (SPIDER); Perspective, Setting, Phenomena of interest/Problem, Environment, Comparison (optional), Time/timing, Findings (PerSPEcTiF)

Qualitative (JBI) [ ] | Questions inform meaningfulness and appropriateness of care and the impact of illness through documentation of stakeholder experiences, preferences, and priorities. | Population, the Phenomena of Interest, and the Context (PICo)

Prognostic [ ] | Probable course or future outcome(s) of people with a health problem. | Population, Intervention (model), Comparator, Outcomes, Timing, Setting (PICOTS)

Etiology and risk [ ] | The relationship (association) between certain factors (eg, genetic, environmental) and the development of a disease or condition or other health outcome. | Population or groups at risk, Exposure(s), associated Outcome(s) (disease, symptom, or health condition of interest), and the context/location or time period and length of time when relevant (PEO)

Measurement properties [ , ] | What is the most suitable instrument to measure a construct of interest in a specific study population? | Population, Instrument, Construct, Outcomes (PICO)

Prevalence and incidence [ ] | The frequency, distribution and determinants of specific factors, health states or conditions in a defined population: eg, how common is a particular disease or condition in a specific group of individuals? | Factor, disease, symptom or health Condition of interest; the epidemiological indicator used to measure its frequency (prevalence, incidence); the Population or groups at risk; as well as the Context/location and time period where relevant (CoCoPop)

Both Cochrane [ 30 , 59 ] and JBI [ 60 ] provide methodologies for many types of evidence syntheses; they describe these with different terminology, but there is obvious overlap (Table 2.2 ). The majority of evidence syntheses published by Cochrane (96%) and JBI (62%) are categorized as intervention reviews. This reflects the earlier development and dissemination of their intervention review methodologies; these remain well-established [ 30 , 59 , 61 ] as both organizations continue to focus on topics related to treatment efficacy and harms. In contrast, intervention reviews represent only about half of the total published in the general medical literature, and several non-intervention review types contribute to a significant proportion of the other half.

Table 2.2 Evidence syntheses published by Cochrane and JBI

Cochrane (a): review type, n, % | JBI (b): review type, n, %
Intervention, 8572, 96.3 | Effectiveness, 435, 61.5
Diagnostic, 176, 1.9 | Diagnostic Test Accuracy, 9, 1.3
Overview, 64, 0.7 | Umbrella, 4, 0.6
Methodology, 41, 0.45 | Mixed Methods, 2, 0.3
Qualitative, 17, 0.19 | Qualitative, 159, 22.5
Prognostic, 11, 0.12 | Prevalence and Incidence, 6, 0.8
Rapid, 11, 0.12 | Etiology and Risk, 7, 1.0
Prototype (c), 8, 0.08 | Measurement Properties, 3, 0.4
 | Economic, 6, 0.6
 | Text and Opinion, 1, 0.14
 | Scoping, 43, 6.0
 | Comprehensive (d), 32, 4.5
Total = 8900 | Total = 707

a Data from https://www.cochranelibrary.com/cdsr/reviews . Accessed 17 Sep 2022

b Data obtained via personal email communication on 18 Sep 2022 with Emilie Francis, editorial assistant, JBI Evidence Synthesis

c Includes the following categories: prevalence, scoping, mixed methods, and realist reviews

d This methodology is not supported in the current version of the JBI Manual for Evidence Synthesis

Types of research evidence

There is consensus on the importance of using multiple study designs in evidence syntheses; at the same time, there is a lack of agreement on methods to identify included study designs. Authors of evidence syntheses may use various taxonomies and associated algorithms to guide selection and/or classification of study designs. These tools differentiate categories of research and apply labels to individual study designs (eg, RCT, cross-sectional). A familiar example is the Design Tree endorsed by the Centre for Evidence-Based Medicine [ 70 ]. Such tools may not be helpful to authors of evidence syntheses for multiple reasons.

Suboptimal levels of agreement and accuracy even among trained methodologists reflect challenges with the application of such tools [ 71 , 72 ]. Problematic distinctions or decision points (eg, experimental or observational, controlled or uncontrolled, prospective or retrospective) and design labels (eg, cohort, case control, uncontrolled trial) have been reported [ 71 ]. The variable application of ambiguous study design labels to non-randomized studies is common, making them especially prone to misclassification [ 73 ]. In addition, study labels do not denote the unique design features that make different types of non-randomized studies susceptible to different biases, including those related to how the data are obtained (eg, clinical trials, disease registries, wearable devices). Given this limitation, it is important to be aware that design labels preclude the accurate assignment of non-randomized studies to a “level of evidence” in traditional hierarchies [ 74 ].

These concerns suggest that available tools and nomenclature used to distinguish types of research evidence may not uniformly apply to biomedical research and non-health fields that utilize evidence syntheses (eg, education, economics) [ 75 , 76 ]. Moreover, primary research reports often do not describe study design or do so incompletely or inaccurately; thus, indexing in PubMed and other databases does not address the potential for misclassification [ 77 ]. Yet proper identification of research evidence has implications for several key components of evidence syntheses. For example, search strategies limited by index terms using design labels or study selection based on labels applied by the authors of primary studies may cause inconsistent or unjustified study inclusions and/or exclusions [ 77 ]. In addition, because risk of bias (RoB) tools consider attributes specific to certain types of studies and study design features, results of these assessments may be invalidated if an inappropriate tool is used. Appropriate classification of studies is also relevant for the selection of a suitable method of synthesis and interpretation of those results.

An alternative to these tools and nomenclature involves application of a few fundamental distinctions that encompass a wide range of research designs and contexts. While these distinctions are not novel, we integrate them into a practical scheme (see Fig. 1) designed to guide authors of evidence syntheses in the basic identification of research evidence. The initial distinction is between primary and secondary studies. Primary studies are then further distinguished by: 1) the type of data reported (qualitative or quantitative); and 2) two defining design features (group or single-case and randomized or non-randomized). The different types of studies and study designs represented in the scheme are described in detail in Additional File 2 B. It is important to conceptualize their methods as complementary as opposed to contrasting or hierarchical [ 78 ]; each offers advantages and disadvantages that determine their appropriateness for answering different kinds of research questions in an evidence synthesis.

Fig. 1 Distinguishing types of research evidence

Application of these basic distinctions may avoid some of the potential difficulties associated with study design labels and taxonomies. Nevertheless, debatable methodological issues are raised when certain types of research identified in this scheme are included in an evidence synthesis. We briefly highlight those associated with inclusion of non-randomized studies, case reports and series, and a combination of primary and secondary studies.
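As an illustration only (no such code appears in the source, and all names are ours), the scheme's successive distinctions can be expressed as a small decision procedure:

```python
# Hypothetical sketch of the Fig. 1 scheme: a study is first classified as
# primary or secondary; primary studies are then distinguished by type of
# data (qualitative vs. quantitative) and by two design features
# (group vs. single-case, randomized vs. non-randomized).

def classify_study(is_secondary: bool, data_type: str,
                   unit: str, randomized: bool) -> str:
    """Return a coarse label for a study using the scheme's distinctions."""
    if is_secondary:
        return "secondary study (eg, systematic review)"
    if data_type == "qualitative":
        return "primary qualitative study"
    # Quantitative primary studies: two further design distinctions
    design = "randomized" if randomized else "non-randomized"
    return f"primary quantitative {unit} study, {design}"

# A randomized controlled trial falls out as:
print(classify_study(False, "quantitative", "group", True))
# → primary quantitative group study, randomized
```

The point of the sketch is that a handful of explicit distinctions, rather than ambiguous design labels, determines the classification.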

Non-randomized studies

When investigating an intervention’s effectiveness, it is important for authors to recognize the uncertainty of observed effects reported by studies with high RoB. Results of statistical analyses that include such studies need to be interpreted with caution in order to avoid misleading conclusions [ 74 ]. Review authors may consider excluding randomized studies with high RoB from meta-analyses. Non-randomized studies of interventions (NRSI) are affected by a greater potential range of biases and thus vary more than RCTs in their ability to estimate a causal effect [ 79 ]. If data from NRSI are synthesized in meta-analyses, it is helpful to separately report their summary estimates [ 6 , 74 ].

Nonetheless, certain design features of NRSI (eg, which parts of the study were prospectively designed) may help to distinguish stronger from weaker ones. Cochrane recommends that authors of a review including NRSI focus on relevant study design features when determining eligibility criteria instead of relying on non-informative study design labels [ 79 , 80 ]. This process is facilitated by a study design feature checklist; guidance on using the checklist is included with developers’ description of the tool [ 73 , 74 ]. Authors collect information about these design features during data extraction and then consider it when making final study selection decisions and when performing RoB assessments of the included NRSI.

Case reports and case series

Correctly identified case reports and case series can contribute evidence not well captured by other designs [ 81 ]; in addition, some topics may be limited to a body of evidence that consists primarily of uncontrolled clinical observations. Murad and colleagues offer a framework for how to include case reports and series in an evidence synthesis [ 82 ]. Distinguishing between cohort studies and case series in these syntheses is important, especially for those that rely on evidence from NRSI. Additional data obtained from studies misclassified as case series can potentially increase the confidence in effect estimates. Mathes and Pieper provide authors of evidence syntheses with specific guidance on distinguishing between cohort studies and case series, but emphasize the increased workload involved [ 77 ].

Primary and secondary studies

Synthesis of combined evidence from primary and secondary studies may provide a broad perspective on the entirety of available literature on a topic. This is, in fact, the recommended strategy for scoping reviews that may include a variety of sources of evidence (eg, CPGs, popular media). However, except for scoping reviews, the synthesis of data from primary and secondary studies is discouraged unless there are strong reasons to justify doing so.

Combining primary and secondary sources of evidence is challenging for authors of other types of evidence syntheses for several reasons [ 83 ]. Assessments of RoB for primary and secondary studies are derived from conceptually different tools, which complicates any overall RoB assessment of a combination of these study types. In addition, authors who include primary and secondary studies must devise non-standardized methods for synthesis. Note this contrasts with well-established methods available for updating existing evidence syntheses with additional data from new primary studies [ 84 – 86 ]. However, a new review that synthesizes data from primary and secondary studies raises questions of validity and may unintentionally support a biased conclusion because no existing methodological guidance is currently available [ 87 ].

Recommendations

We suggest that journal editors require authors to identify which type of evidence synthesis they are submitting and reference the specific methodology used for its development. This will clarify the research question and methods for peer reviewers and potentially simplify the editorial process. Editors should announce this practice and include it in the instructions to authors. To decrease bias and apply correct methods, authors must also accurately identify the types of research evidence included in their syntheses.

Part 3. Conduct and reporting

The need to develop criteria to assess the rigor of systematic reviews was recognized soon after the EBM movement began to gain international traction [ 88 , 89 ]. Systematic reviews rapidly became popular, but many were very poorly conceived, conducted, and reported. These problems remain highly prevalent [ 23 ] despite development of guidelines and tools to standardize and improve the performance and reporting of evidence syntheses [ 22 , 28 ]. Table 3.1  provides some historical perspective on the evolution of tools developed specifically for the evaluation of systematic reviews, with or without meta-analysis.

Table 3.1 Tools specifying standards for systematic reviews with and without meta-analysis

Quality of Reporting of Meta-analyses (QUOROM) Statement | Moher 1999 [ ]
Meta-analyses Of Observational Studies in Epidemiology (MOOSE) | Stroup 2000 [ ]
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) | Moher 2009 [ ]
PRISMA 2020 (a) | Page 2021 [ ]
Overview Quality Assessment Questionnaire (OQAQ) | Oxman and Guyatt 1991 [ ]
Systematic Review Critical Appraisal Sheet | Centre for Evidence-based Medicine 2005 [ ]
A Measurement Tool to Assess Systematic Reviews (AMSTAR) | Shea 2007 [ ]
AMSTAR-2 (a) | Shea 2017 [ ]
Risk of Bias in Systematic Reviews (ROBIS) (a, b) | Whiting 2016 [ ]

a Currently recommended

b Validated tool for systematic reviews of interventions developed for use by authors of overviews or umbrella reviews

These tools are often interchangeably invoked when referring to the “quality” of an evidence synthesis. However, quality is a vague term that is frequently misused and misunderstood; more precisely, these tools specify different standards for evidence syntheses. Methodological standards address how well a systematic review was designed and performed [ 5 ]. RoB assessments refer to systematic flaws or limitations in the design, conduct, or analysis of research that distort the findings of the review [ 4 ]. Reporting standards help systematic review authors describe the methodology they used and the results of their synthesis in sufficient detail [ 92 ]. It is essential to distinguish between these evaluations: a systematic review may be biased, it may fail to report sufficient information on essential features, or it may exhibit both problems; a thoroughly reported systematic review may still be biased and flawed, while an otherwise unbiased one may suffer from deficient documentation.

We direct attention to the currently recommended tools listed in Table 3.1  but concentrate on AMSTAR-2 (update of AMSTAR [A Measurement Tool to Assess Systematic Reviews]) and ROBIS (Risk of Bias in Systematic Reviews), which evaluate methodological quality and RoB, respectively. For comparison and completeness, we include PRISMA 2020 (update of the 2009 Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement), which offers guidance on reporting standards. The exclusive focus on these three tools is by design; it addresses concerns related to the considerable variability in tools used for the evaluation of systematic reviews [ 28 , 88 , 96 , 97 ]. We highlight the underlying constructs these tools were designed to assess, then describe their components and applications. Their known (or potential) uptake and impact and limitations are also discussed.

Evaluation of conduct

Development.

AMSTAR [ 5 ] was in use for a decade prior to the 2017 publication of AMSTAR-2; both provide a broad evaluation of methodological quality of intervention systematic reviews, including flaws arising through poor conduct of the review [ 6 ]. ROBIS, published in 2016, was developed to specifically assess RoB introduced by the conduct of the review; it is applicable to systematic reviews of interventions and several other types of reviews [ 4 ]. Both tools reflect a shift to a domain-based approach as opposed to generic quality checklists. There are a few items unique to each tool; however, similarities between items have been demonstrated [ 98 , 99 ]. AMSTAR-2 and ROBIS are recommended for use by: 1) authors of overviews or umbrella reviews and CPGs to evaluate systematic reviews considered as evidence; 2) authors of methodological research studies to appraise included systematic reviews; and 3) peer reviewers for appraisal of submitted systematic review manuscripts. For authors, these tools may function as teaching aids and inform conduct of their review during its development.

Description

Systematic reviews that include randomized and/or non-randomized studies as evidence can be appraised with AMSTAR-2 and ROBIS. Other characteristics of AMSTAR-2 and ROBIS are summarized in Table 3.2 . Both tools define categories for an overall rating; however, neither tool is intended to generate a total score by simply calculating the number of responses satisfying criteria for individual items [ 4 , 6 ]. AMSTAR-2 focuses on the rigor of a review’s methods irrespective of the specific subject matter. ROBIS places emphasis on a review’s results section; this suggests it may be optimally applied by appraisers with some knowledge of the review’s topic as they may be better equipped to determine if certain procedures (or lack thereof) would impact the validity of a review’s findings [ 98 , 100 ]. Reliability studies show AMSTAR-2 overall confidence ratings strongly correlate with the overall RoB ratings in ROBIS [ 100 , 101 ].
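To illustrate why an overall AMSTAR-2 rating is not a summed score, the decision rules published by the tool's developers (high: no or one non-critical weakness; moderate: more than one non-critical weakness; low: one critical flaw; critically low: more than one critical flaw) can be sketched as follows; the code is our illustration, not part of the tool itself:

```python
# Illustrative sketch (not from the AMSTAR-2 tool) of how counts of
# weaknesses in critical and non-critical items map to the four overall
# confidence categories described by the tool's developers.

def amstar2_overall(critical_flaws: int, noncritical_weaknesses: int) -> str:
    """Map weakness counts to an AMSTAR-2 overall confidence category."""
    if critical_flaws > 1:
        return "critically low"  # more than one critical flaw
    if critical_flaws == 1:
        return "low"             # one critical flaw dominates everything else
    if noncritical_weaknesses > 1:
        return "moderate"        # no critical flaws, several non-critical weaknesses
    return "high"                # at most one non-critical weakness

print(amstar2_overall(1, 0))
# → low
```

Note how a single critical flaw dominates any number of non-critical weaknesses, which is why a simple total score would be misleading.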

Table 3.2 Comparison of AMSTAR-2 and ROBIS

Characteristic | AMSTAR-2 | ROBIS
Available guidance | Extensive | Extensive
Review types covered | Intervention | Intervention, diagnostic, etiology, prognostic (a)
Items/domains | 7 critical, 9 non-critical | 4
Total number | 16 | 29
Response options | Items # 1, 3, 5, 6, 10, 13, 14, 16: rated Yes or No. Items # 2, 4, 7, 8, 9 (b): rated Yes, Partial Yes, or No. Items # 11 (b), 12, 15: rated Yes, No, or No meta-analysis conducted | 24 assessment items: rated Yes, Probably Yes, Probably No, No, or No Information. 5 items regarding level of concern: rated Low, High, or Unclear
Construct | Confidence based on weaknesses in critical domains | Level of concern for risk of bias
Categories | High, moderate, low, critically low | Low, high, unclear

a ROBIS includes an optional first phase to assess the applicability of the review to the research question of interest. The tool may be applicable to other review types in addition to the four specified, although modification of this initial phase will be needed (Personal Communication via email, Penny Whiting, 28 Jan 2022)

b AMSTAR-2 item #9 and #11 require separate responses for RCTs and NRSI

Interrater reliability has been shown to be acceptable for AMSTAR-2 [ 6 , 11 , 102 ] and ROBIS [ 4 , 98 , 103 ] but neither tool has been shown to be superior in this regard [ 100 , 101 , 104 , 105 ]. Overall, variability in reliability for both tools has been reported across items, between pairs of raters, and between centers [ 6 , 100 , 101 , 104 ]. The effects of appraiser experience on the results of AMSTAR-2 and ROBIS require further evaluation [ 101 , 105 ]. Updates to both tools should address items shown to be prone to individual appraisers’ subjective biases and opinions [ 11 , 100 ]; this may involve modifications of the current domains and signaling questions as well as incorporation of methods to make an appraiser’s judgments more explicit. Future revisions of these tools may also consider the addition of standards for aspects of systematic review development currently lacking (eg, rating overall certainty of evidence, [ 99 ] methods for synthesis without meta-analysis [ 105 ]) and removal of items that assess aspects of reporting that are thoroughly evaluated by PRISMA 2020.

Application

A good understanding of what is required to satisfy the standards of AMSTAR-2 and ROBIS involves study of the accompanying guidance documents written by the tools’ developers; these contain detailed descriptions of each item’s standards. In addition, accurate appraisal of a systematic review with either tool requires training. Most experts recommend independent assessment by at least two appraisers with a process for resolving discrepancies as well as procedures to establish interrater reliability, such as pilot testing, a calibration phase or exercise, and development of predefined decision rules [ 35 , 99 – 101 , 103 , 104 , 106 ]. These methods may, to some extent, address the challenges associated with the diversity in methodological training, subject matter expertise, and experience using the tools that are likely to exist among appraisers.

The standards of AMSTAR, AMSTAR-2, and ROBIS have been used in many methodological studies and epidemiological investigations. However, the increased publication of overviews or umbrella reviews and CPGs has likely been a greater influence on the widening acceptance of these tools. Critical appraisal of the secondary studies considered evidence is essential to the trustworthiness of both the recommendations of CPGs and the conclusions of overviews. Currently both Cochrane [ 55 ] and JBI [ 107 ] recommend AMSTAR-2 and ROBIS in their guidance for authors of overviews or umbrella reviews. However, ROBIS and AMSTAR-2 were released in 2016 and 2017, respectively; thus, to date, limited data have been reported about the uptake of these tools or which of the two may be preferred [ 21 , 106 ]. Currently, in relation to CPGs, AMSTAR-2 appears to be overwhelmingly popular compared to ROBIS. A Google Scholar search of this topic (search terms “AMSTAR 2 AND clinical practice guidelines” and “ROBIS AND clinical practice guidelines”; 13 May 2022) found 12,700 hits for AMSTAR-2 and 1,280 for ROBIS. The apparent greater appeal of AMSTAR-2 may relate to its longer track record given the original version of the tool was in use for 10 years prior to its update in 2017.

Barriers to the uptake of AMSTAR-2 and ROBIS include the real or perceived time and resources necessary to complete the items they include and appraisers’ confidence in their own ratings [ 104 ]. Reports from comparative studies available to date indicate that appraisers find AMSTAR-2 questions, responses, and guidance to be clearer and simpler compared with ROBIS [ 11 , 101 , 104 , 105 ]. This suggests that for appraisal of intervention systematic reviews, AMSTAR-2 may be a more practical tool than ROBIS, especially for novice appraisers [ 101 , 103 – 105 ]. The unique characteristics of each tool, as well as their potential advantages and disadvantages, should be taken into consideration when deciding which tool should be used for an appraisal of a systematic review. In addition, the choice of one or the other may depend on how the results of an appraisal will be used; for example, a peer reviewer’s appraisal of a single manuscript versus an appraisal of multiple systematic reviews in an overview or umbrella review, CPG, or systematic methodological study.

Authors of overviews and CPGs report results of AMSTAR-2 and ROBIS appraisals for each of the systematic reviews they include as evidence. Ideally, an independent judgment of their appraisals can be made by the end users of overviews and CPGs; however, most stakeholders, including clinicians, are unlikely to have a sophisticated understanding of these tools. Nevertheless, they should at least be aware that AMSTAR-2 and ROBIS ratings reported in overviews and CPGs may be inaccurate because the tools are not applied as intended by their developers. This can result from inadequate training of the overview or CPG authors who perform the appraisals, or from modifications of the appraisal tools imposed by them. The potential variability in overall confidence and RoB ratings highlights why appraisers applying these tools need to support their judgments with explicit documentation; this allows readers to judge for themselves whether they agree with the criteria used by appraisers [ 4 , 108 ]. When these judgments are explicit, the underlying rationale used when applying these tools can be assessed [ 109 ].

Theoretically, we would expect an association of AMSTAR-2 with improved methodological rigor and an association of ROBIS with lower RoB in recent systematic reviews compared to those published before 2017. To our knowledge, this has not yet been demonstrated; however, like reports about the actual uptake of these tools, time will tell. Additional data on user experience are also needed to further elucidate the practical challenges and methodological nuances encountered with the application of these tools. This information could potentially inform the creation of unifying criteria to guide and standardize the appraisal of evidence syntheses [ 109 ].

Evaluation of reporting

Complete reporting is essential for users to establish the trustworthiness and applicability of a systematic review’s findings. Efforts to standardize and improve the reporting of systematic reviews resulted in the 2009 publication of the PRISMA statement [ 92 ] with its accompanying explanation and elaboration document [ 110 ]. This guideline was designed to help authors prepare a complete and transparent report of their systematic review. In addition, adherence to PRISMA is often used to evaluate the thoroughness of reporting of published systematic reviews [ 111 ]. The updated version, PRISMA 2020 [ 93 ], and its guidance document [ 112 ] were published in 2021. Items on the original and updated versions of PRISMA are organized by the six basic review components they address (title, abstract, introduction, methods, results, discussion). The PRISMA 2020 update is a considerably expanded version of the original; it includes standards and examples for the 27 original and 13 additional reporting items that capture methodological advances and may enhance the replicability of reviews [ 113 ].

The original PRISMA statement fostered the development of various PRISMA extensions (Table 3.3 ). These include reporting guidance for scoping reviews and reviews of diagnostic test accuracy and for intervention reviews that report on the following: harms outcomes, equity issues, the effects of acupuncture, the results of network meta-analyses and analyses of individual participant data. Detailed reporting guidance for specific systematic review components (abstracts, protocols, literature searches) is also available.

PRISMA extensions

PRISMA for systematic reviews with a focus on health equity [ ]: PRISMA-E (2012)
Reporting systematic reviews in journal and conference abstracts [ ]: PRISMA for Abstracts (2015; 2020) a
PRISMA for systematic review protocols [ ]: PRISMA-P (2015)
PRISMA for Network Meta-Analyses [ ]: PRISMA-NMA (2015)
PRISMA for Individual Participant Data [ ]: PRISMA-IPD (2015)
PRISMA for reviews including harms outcomes [ ]: PRISMA-Harms (2016)
PRISMA for diagnostic test accuracy [ ]: PRISMA-DTA (2018)
PRISMA for scoping reviews [ ]: PRISMA-ScR (2018)
PRISMA for acupuncture [ ]: PRISMA-A (2019)
PRISMA for reporting literature searches [ ]: PRISMA-S (2021)

PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses

a Note the abstract reporting checklist is now incorporated into PRISMA 2020 [ 93 ]

Uptake and impact

The 2009 PRISMA standards [ 92 ] for reporting have been widely endorsed by authors, journals, and EBM-related organizations. We anticipate the same for PRISMA 2020 [ 93 ] given its co-publication in multiple high-impact journals. However, to date, there is a lack of strong evidence for an association between improved systematic review reporting and endorsement of PRISMA 2009 standards [ 43 , 111 ]. Most journals require that a PRISMA checklist accompany submissions of systematic review manuscripts. However, the accuracy of information presented on these self-reported checklists is not necessarily verified. It remains unclear which strategies (eg, authors’ self-report of checklists, peer reviewer checks) might improve adherence to the PRISMA reporting standards; in addition, the feasibility of any potentially effective strategies must be taken into consideration given the structure and limitations of current research and publication practices [ 124 ].

Pitfalls and limitations of PRISMA, AMSTAR-2, and ROBIS

Misunderstanding of the roles of these tools and their misapplication may be widespread problems. PRISMA 2020 is a reporting guideline that is most beneficial if consulted when developing a review as opposed to merely completing a checklist when submitting to a journal; at that point, the review is finished, with good or bad methodological choices. Moreover, PRISMA checklists evaluate how completely an element of review conduct was reported; they do not evaluate the caliber of conduct or performance of a review. Thus, review authors and readers should not think that a rigorous systematic review can be produced by simply following the PRISMA 2020 guidelines. Similarly, it is important to recognize that AMSTAR-2 and ROBIS are tools to evaluate the conduct of a review but do not substitute for conceptual methodological guidance. In addition, they are not intended to be simple checklists. In fact, they have the potential for misuse or abuse if applied as such; for example, by calculating a total score to make a judgment about a review’s overall confidence or RoB. Proper selection of a response for the individual items on AMSTAR-2 and ROBIS requires training or at least reference to their accompanying guidance documents.

Not surprisingly, it has been shown that compliance with the PRISMA checklist is not necessarily associated with satisfying the standards of ROBIS [ 125 ]. AMSTAR-2 and ROBIS were not available when PRISMA 2009 was developed; however, they were considered in the development of PRISMA 2020 [ 113 ]. Therefore, future studies may show a positive relationship between fulfillment of PRISMA 2020 standards for reporting and meeting the standards of tools evaluating methodological quality and RoB.

Choice of an appropriate tool for the evaluation of a systematic review first involves identification of the underlying construct to be assessed. For systematic reviews of interventions, recommended tools include AMSTAR-2 and ROBIS for appraisal of conduct and PRISMA 2020 for completeness of reporting. All three tools were developed rigorously and provide easily accessible and detailed user guidance, which is necessary for their proper application and interpretation. When considering a manuscript for publication, training in these tools can sensitize peer reviewers and editors to major issues that may affect the review’s trustworthiness and completeness of reporting. Judgment of the overall certainty of a body of evidence and formulation of recommendations rely, in part, on AMSTAR-2 or ROBIS appraisals of systematic reviews. Therefore, training on the application of these tools is essential for authors of overviews and developers of CPGs. Peer reviewers and editors considering an overview or CPG for publication must hold their authors to a high standard of transparency regarding both the conduct and reporting of these appraisals.

Part 4. Meeting conduct standards

Many authors, peer reviewers, and editors erroneously equate fulfillment of the items on the PRISMA checklist with superior methodological rigor. For direction on methodology, we refer them to available resources that provide comprehensive conceptual guidance [ 59 , 60 ] as well as primers with basic step-by-step instructions [ 1 , 126 , 127 ]. This section is intended to complement study of such resources by facilitating use of AMSTAR-2 and ROBIS, tools specifically developed to evaluate methodological rigor of systematic reviews. These tools are widely accepted by methodologists; however, in the general medical literature, they are not uniformly selected for the critical appraisal of systematic reviews [ 88 , 96 ].

To enable their uptake, Table 4.1  links review components to the corresponding appraisal tool items. Expectations of AMSTAR-2 and ROBIS are concisely stated, and reasoning provided.

Systematic review components linked to appraisal with AMSTAR-2 and ROBIS a

Methods for study selection (AMSTAR-2 #5; ROBIS #2.5), methods for data extraction (AMSTAR-2 #6; ROBIS #3.1), and methods for RoB assessment (AMSTAR-2 NA; ROBIS #3.5): all three components must be done in duplicate, and methods fully described. Helps to mitigate CoI and bias; also may improve accuracy.
Study description (AMSTAR-2 #8; ROBIS #3.2): research design features, components of research question (eg, PICO), setting, funding sources. Allows readers to understand the individual studies in detail.
Sources of funding (AMSTAR-2 #10; ROBIS NA): identified for all included studies. Can reveal CoI or bias.
Publication bias (AMSTAR-2 #15*; ROBIS #4.5): explored, diagrammed, and discussed. Publication and other selective reporting biases are major threats to the validity of systematic reviews.
Author CoI (AMSTAR-2 #16; ROBIS NA): disclosed, with management strategies described. If CoI is identified, management strategies must be described to ensure confidence in the review.

CoI conflict of interest, MA meta-analysis, NA not addressed, PICO participant, intervention, comparison, outcome, PRISMA-P Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols, RoB risk of bias

a Components shown in bold are chosen for elaboration in Part 4 for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors; and/or 2) the component is evaluated by standards of an AMSTAR-2 “critical” domain

b Critical domains of AMSTAR-2 are indicated by *

Issues involved in meeting the standards for seven review components (identified in bold in Table 4.1 ) are addressed in detail. These were chosen for elaboration for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors based on consistent reports of their frequent AMSTAR-2 or ROBIS deficiencies [ 9 , 11 , 15 , 88 , 128 , 129 ]; and/or 2) the review component is judged by standards of an AMSTAR-2 “critical” domain. These have the greatest implications for how a systematic review will be appraised: if standards for any one of these critical domains are not met, the review is rated as having “critically low confidence.”

Research question

Specific and unambiguous research questions may have more value for reviews that deal with hypothesis testing. Mnemonics for the various elements of research questions are suggested by JBI and Cochrane (Table 2.1 ). These prompt authors to consider the specialized methods involved for developing different types of systematic reviews; however, while inclusion of the suggested elements makes a review compliant with a particular review’s methods, it does not necessarily make a research question appropriate. Table 4.2  lists acronyms that may aid in developing the research question. They include overlapping concepts of importance in this time of proliferating reviews of uncertain value [ 130 ]. If these issues are not prospectively contemplated, systematic review authors may establish an overly broad scope, or develop runaway scope allowing them to stray from predefined choices relating to key comparisons and outcomes.

Research question development

Acronym: Meaning
FINER a: feasible, interesting, novel, ethical, and relevant
SMART b: specific, measurable, attainable, relevant, timely
TOPICS + M c: time, outcomes, population, intervention, context, study design, plus (effect) moderators

a Cummings SR, Browner WS, Hulley SB. Conceiving the research question and developing the study plan. In: Hulley SB, Cummings SR, Browner WS, editors. Designing clinical research: an epidemiological approach; 4th edn. Lippincott Williams & Wilkins; 2007. p. 14–22

b Doran GT. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manage Rev. 1981;70:35–6

c Johnson BT, Hennessy EA. Systematic reviews and meta-analyses in the health sciences: best practice methods for research syntheses. Soc Sci Med. 2019;233:237–51

Once a research question is established, searching on registry sites and databases for existing systematic reviews addressing the same or a similar topic is necessary in order to avoid contributing to research waste [ 131 ]. Repeating an existing systematic review must be justified, for example, if previous reviews are out of date or methodologically flawed. A full discussion on replication of intervention systematic reviews, including a consensus checklist, can be found in the work of Tugwell and colleagues [ 84 ].

Protocol development is considered a core component of systematic reviews [ 125 , 126 , 132 ]. Review protocols may allow researchers to plan and anticipate potential issues, assess validity of methods, prevent arbitrary decision-making, and minimize bias that can be introduced by the conduct of the review. Registration of a protocol that allows public access promotes transparency of the systematic review’s methods and processes and reduces the potential for duplication [ 132 ]. Thinking early and carefully about all the steps of a systematic review is pragmatic and logical and may mitigate the influence of the authors’ prior knowledge of the evidence [ 133 ]. In addition, the protocol stage is when the scope of the review can be carefully considered by authors, reviewers, and editors; this may help to avoid production of overly ambitious reviews that include excessive numbers of comparisons and outcomes or are undisciplined in their study selection.

Systematic reviews with published prospective protocols have been reported to be more likely to meet AMSTAR standards [ 134 ]. However, completeness of reporting does not seem to be different in reviews with a protocol compared to those without one [ 135 ]. PRISMA-P [ 116 ] and its accompanying elaboration and explanation document [ 136 ] can be used to guide and assess the reporting of protocols. A final version of the review should fully describe any protocol deviations. Peer reviewers may compare the submitted manuscript with any available pre-registered protocol; this is required if AMSTAR-2 or ROBIS are used for critical appraisal.

There are multiple options for the recording of protocols (Table 4.3 ). Some journals will peer review and publish protocols. In addition, many online sites offer date-stamped and publicly accessible protocol registration. Some of these are exclusively for protocols of evidence syntheses; others are less restrictive and offer researchers the capacity for data storage, sharing, and other workflow features. These sites document protocol details to varying extents and have different requirements [ 137 ]. The most popular site for systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), for example, only registers reviews that report on an outcome with direct relevance to human health. The PROSPERO record documents protocols for all types of reviews except literature and scoping reviews. Of note, PROSPERO requires that authors register their review protocols prior to any data extraction [ 133 , 138 ]. The electronic records of most of these registry sites allow authors to update their protocols and facilitate transparent tracking of protocol changes, which are not unexpected during the progress of the review [ 139 ].

Options for protocol registration of evidence syntheses

Journals a
 BMJ Open
 BioMed Central
 JMIR Research Protocols
 World Journal of Meta-analysis

Registries d
 Cochrane b
 JBI c
 PROSPERO
 Research Registry: Registry of Systematic Reviews/Meta-Analyses
 International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY)

Repositories e
 Center for Open Science
 Protocols.io
 Figshare
 Open Science Framework
 Zenodo

a Authors are advised to contact their target journal regarding submission of systematic review protocols

b Registration is restricted to approved review projects

c The JBI registry lists review projects currently underway by JBI-affiliated entities. These records include a review’s title, primary author, research question, and PICO elements. JBI recommends that authors register eligible protocols with PROSPERO

d See Pieper and Rombey [ 137 ] for detailed characteristics of these five registries

e See Pieper and Rombey [ 137 ] for other systematic review data repository options

Study design inclusion

For most systematic reviews, broad inclusion of study designs is recommended [ 126 ]. This may allow comparison of results between contrasting study design types [ 126 ]. Certain study designs may be considered preferable depending on the type of review and nature of the research question. However, prevailing stereotypes about what each study design does best may not be accurate. For example, in systematic reviews of interventions, randomized designs are typically thought to answer highly specific questions while non-randomized designs often are expected to reveal greater information about harms or real-world evidence [ 126 , 140 , 141 ]. This may be a false distinction; randomized trials may be pragmatic [ 142 ], they may offer important (and more unbiased) information on harms [ 143 ], and data from non-randomized trials may not necessarily be more real-world-oriented [ 144 ].

Moreover, there may not be any available evidence reported by RCTs for certain research questions; in some cases, there may not be any RCTs or NRSI. When the available evidence is limited to case reports and case series, it is not possible to test hypotheses or provide descriptive estimates or associations; however, a systematic review of these studies can still offer important insights [ 81 , 145 ]. When authors anticipate that limited evidence of any kind may be available to inform their research questions, a scoping review can be considered. Alternatively, decisions regarding inclusion of indirect as opposed to direct evidence can be addressed during protocol development [ 146 ]. Including indirect evidence at an early stage of intervention systematic review development allows authors to decide if such studies offer any additional and/or different understanding of treatment effects for their population or comparison of interest. Issues of indirectness of included studies are accounted for later in the process, during determination of the overall certainty of evidence (see Part 5 for details).

Evidence search

Both AMSTAR-2 and ROBIS require systematic and comprehensive searches for evidence. This is essential for any systematic review. Both tools discourage search restrictions based on language and publication source. Given increasing globalism in health care, the practice of including English-only literature should be avoided [ 126 ]. There are many examples in which language bias (different results in studies published in different languages) has been documented [ 147 , 148 ]. This does not mean that all literature, in all languages, is equally trustworthy [ 148 ]; however, the only way to formally probe for the potential of such biases is to consider all languages in the initial search. The gray literature and a search of trial registries may also reveal important details about topics that would otherwise be missed [ 149 – 151 ]. Again, inclusiveness will allow review authors to investigate whether results differ in gray literature and trial registries [ 41 , 151 – 153 ].

Authors should make every attempt to complete their review within one year, as that is the likely viable life of a search. If that is not possible, the search should be updated close to the time of completion [ 154 ]. Different research topics may warrant less of a delay; in rapidly changing fields (as in the case of the COVID-19 pandemic), even one month may radically change the available evidence.

Excluded studies

AMSTAR-2 requires authors to provide references for any studies excluded at the full text phase of study selection along with reasons for exclusion; this allows readers to feel confident that all relevant literature has been considered for inclusion and that exclusions are defensible.

Risk of bias assessment of included studies

The design of the studies included in a systematic review (eg, RCT, cohort, case series) should not be equated with appraisal of its RoB. To meet AMSTAR-2 and ROBIS standards, systematic review authors must examine RoB issues specific to the design of each primary study they include as evidence. It is unlikely that a single RoB appraisal tool will be suitable for all research designs. In addition to tools for randomized and non-randomized studies, specific tools are available for evaluation of RoB in case reports and case series [ 82 ] and single-case experimental designs [ 155 , 156 ]. Note the RoB tools selected must meet the standards of the appraisal tool used to judge the conduct of the review. For example, AMSTAR-2 identifies four sources of bias specific to RCTs and NRSI that must be addressed by the RoB tool(s) chosen by the review authors. The Cochrane RoB-2 tool [ 157 ] for RCTs and ROBINS-I [ 158 ] for NRSI meet the AMSTAR-2 standards for RoB assessment. Appraisers on the review team should not modify any RoB tool without complete transparency and acknowledgment that they have invalidated the interpretation of the tool as intended by its developers [ 159 ]. Conduct of RoB assessments is not addressed by AMSTAR-2; to meet ROBIS standards, two independent reviewers should complete RoB assessments of included primary studies.

Implications of the RoB assessments must be explicitly discussed and considered in the conclusions of the review. Discussion of the overall RoB of included studies may consider the weight of the studies at high RoB, the importance of the sources of bias in the studies being summarized, and if their importance differs in relationship to the outcomes reported. If a meta-analysis is performed, serious concerns for RoB of individual studies should be accounted for in these results as well. If the results of the meta-analysis for a specific outcome change when studies at high RoB are excluded, readers will have a more accurate understanding of this body of evidence. However, while investigating the potential impact of specific biases is a useful exercise, it is important to avoid over-interpretation, especially when there are sparse data.
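One practical way to examine whether high-RoB studies drive a pooled result is a sensitivity re-analysis that re-pools the data after excluding them. The sketch below is purely illustrative: the study data are fabricated, the `pool_fixed` helper is hypothetical, and a fixed-effect inverse-variance model is used only for brevity (a real analysis would follow the review's prespecified synthesis model).

```python
import math

def pool_fixed(effects, variances):
    """Fixed-effect inverse-variance pooling of study effect estimates
    (eg, log odds ratios). Returns the pooled estimate and its 95% CI."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical studies: (log odds ratio, variance, judged high RoB?)
studies = [(-0.60, 0.04, True), (-0.10, 0.02, False),
           (-0.05, 0.03, False), (-0.70, 0.05, True)]

# Pool all studies, then only those at low/unclear RoB
all_est, all_ci = pool_fixed([e for e, _, _ in studies],
                             [v for _, v, _ in studies])
low_rob = [(e, v) for e, v, high in studies if not high]
low_est, low_ci = pool_fixed([e for e, _ in low_rob],
                             [v for _, v in low_rob])
# Here the pooled effect moves toward the null once high-RoB studies are
# excluded, signaling that the conclusion may rest on less trustworthy studies.
```

With these fabricated numbers the pooled log odds ratio shrinks from about -0.28 to -0.08 after exclusion, exactly the kind of change readers need to see discussed; as the text cautions, such explorations should not be over-interpreted when data are sparse.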

Synthesis methods for quantitative data

Syntheses of quantitative data reported by primary studies are broadly categorized as one of two types: meta-analysis, and synthesis without meta-analysis (Table 4.4 ). Before deciding on one of these methods, authors should seek methodological advice about whether reported data can be transformed or used in other ways to provide a consistent effect measure across studies [ 160 , 161 ].

Common methods for quantitative synthesis

Meta-analysis: aggregate data; individual participant data c
 Data/method: weighted average of effect estimates; pairwise comparisons of effect estimates, CI
 Results: overall effect estimate, CI, p value; evaluation of heterogeneity
 Display: forest plot b with summary statistic for average effect estimate

Network meta-analysis
 Data/method: variable; the interventions are compared directly and indirectly
 Results: comparisons of relative effects between any pair of interventions (effect estimates for intervention pairings); summary relative effects for pair-wise comparisons with evaluations of inconsistency and heterogeneity; treatment rankings (ie, probability that an intervention is among the best options)
 Display: network diagram or graph, tabular presentations; forest plot, other methods; rankogram plot

Synthesis without meta-analysis e
 Summarizing effect estimates from separate studies (without combination that would provide an average effect estimate). Results: range and distribution of observed effects such as median, interquartile range, range. Display: box-and-whisker plot, bubble plot, forest plot (without summary effect estimate)
 Combining p values. Results: combined p value, number of studies. Display: albatross plot (study sample size against p values per outcome)
 Vote counting by direction of effect (eg, favors intervention over the comparator). Results: proportion of studies with an effect in the direction of interest, CI, p value. Display: harvest plot, effect direction plot

CI confidence interval (or credible interval, if analysis is done in Bayesian framework)

a See text for descriptions of the types of data combined in each of these approaches

b See Additional File 4  for guidance on the structure and presentation of forest plots

c General approach is similar to aggregate data meta-analysis but there are substantial differences relating to data collection and checking and analysis [ 162 ]. This approach to syntheses is applicable to intervention, diagnostic, and prognostic systematic reviews [ 163 ]

d Examples include meta-regression, hierarchical and multivariate approaches [ 164 ]

e In-depth guidance and illustrations of these methods are provided in Chapter 12 of the Cochrane Handbook [ 160 ]

Meta-analysis

Systematic reviews that employ meta-analysis should not be referred to simply as “meta-analyses.” The term meta-analysis strictly refers to a specific statistical technique used when study effect estimates and their variances are available, yielding a quantitative summary of results. In general, methods for meta-analysis involve use of a weighted average of effect estimates from two or more studies. If considered carefully, meta-analysis increases the precision of the estimated magnitude of effect and can offer useful insights about heterogeneity and estimates of effects. We refer to standard references for a thorough introduction and formal training [ 165 – 167 ].
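For readers unfamiliar with the mechanics, the weighted-average principle can be written compactly. In a generic fixed-effect inverse-variance formulation (one common weighting scheme, shown here only as an illustration, not as the method of any particular review), k studies with effect estimates and variances are combined as:

```latex
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad w_i = \frac{1}{v_i},
\qquad \operatorname{SE}\bigl(\hat{\theta}\bigr) = \sqrt{\frac{1}{\sum_{i=1}^{k} w_i}},
\qquad Q = \sum_{i=1}^{k} w_i \bigl(\hat{\theta}_i - \hat{\theta}\bigr)^2
```

Precise studies (small variance v_i) receive larger weights, which is how meta-analysis increases precision; Cochran's Q informs the evaluation of heterogeneity. Random-effects models modify the weights to incorporate between-study variance, and the choice of model should be justified in the protocol.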

There are three common approaches to meta-analysis in current health care–related systematic reviews (Table 4.4 ). Aggregate data meta-analysis is the most familiar to authors of evidence syntheses and their end users. This standard meta-analysis combines data on effect estimates reported by studies that investigate similar research questions involving direct comparisons of an intervention and comparator. Results of these analyses provide a single summary intervention effect estimate. If the included studies in a systematic review measure an outcome differently, their reported results may be transformed to make them comparable [ 161 ]. Forest plots visually present essential information about the individual studies and the overall pooled analysis (see Additional File 4  for details).

Less familiar and more challenging meta-analytical approaches used in secondary research include individual participant data (IPD) and network meta-analyses (NMA); PRISMA extensions provide reporting guidelines for both [ 117 , 118 ]. In IPD, the raw data on each participant from each eligible study are re-analyzed as opposed to the study-level data analyzed in aggregate data meta-analyses [ 168 ]. This may offer advantages, including the potential for limiting concerns about bias and allowing more robust analyses [ 163 ]. As suggested by the description in Table 4.4 , NMA is a complex statistical approach. It combines aggregate data [ 169 ] or IPD [ 170 ] for effect estimates from direct and indirect comparisons reported in two or more studies of three or more interventions. This makes it a potentially powerful statistical tool; while multiple interventions are typically available to treat a condition, few have been evaluated in head-to-head trials [ 171 ]. Both IPD and NMA facilitate a broader scope, and potentially provide more reliable and/or detailed results; however, compared with standard aggregate data meta-analyses, their methods are more complicated, time-consuming, and resource-intensive, and they have their own biases, so one needs sufficient funding, technical expertise, and preparation to employ them successfully [ 41 , 172 , 173 ].

Several items in AMSTAR-2 and ROBIS address meta-analysis; thus, understanding the strengths, weaknesses, assumptions, and limitations of methods for meta-analyses is important. According to the standards of both tools, plans for a meta-analysis must be addressed in the review protocol, including reasoning, description of the type of quantitative data to be synthesized, and the methods planned for combining the data. This should not consist of stock statements describing conventional meta-analysis techniques; rather, authors are expected to anticipate issues specific to their research questions. Concern for the lack of training in meta-analysis methods among systematic review authors cannot be overstated. For those with training, the use of popular software (eg, RevMan [ 174 ], MetaXL [ 175 ], JBI SUMARI [ 176 ]) may facilitate exploration of these methods; however, such programs cannot substitute for the accurate interpretation of the results of meta-analyses, especially for more complex meta-analytical approaches.

Synthesis without meta-analysis

There are varied reasons a meta-analysis may not be appropriate or desirable [ 160 , 161 ]. Syntheses that informally use statistical methods other than meta-analysis are variably referred to as descriptive, narrative, or qualitative syntheses or summaries; these terms are also applied to syntheses that make no attempt to statistically combine data from individual studies. However, use of such imprecise terminology is discouraged; in order to fully explore the results of any type of synthesis, some narration or description is needed to supplement the data visually presented in tabular or graphic forms [ 63 , 177 ]. In addition, the term “qualitative synthesis” is easily confused with a synthesis of qualitative data in a qualitative or mixed methods review. “Synthesis without meta-analysis” is currently the preferred description of other ways to combine quantitative data from two or more studies. Use of this specific terminology when referring to these types of syntheses also implies the application of formal methods (Table 4.4 ).

Methods for syntheses without meta-analysis involve structured presentations of the data in tables and plots. In comparison to narrative descriptions of each study, these are designed to more effectively and transparently show patterns and convey detailed information about the data; they also allow informal exploration of heterogeneity [ 178 ]. In addition, acceptable quantitative statistical methods (Table 4.4 ) are formally applied; however, it is important to recognize these methods have significant limitations for the interpretation of the effectiveness of an intervention [ 160 ]. Nevertheless, when meta-analysis is not possible, the application of these methods is less prone to bias compared with an unstructured narrative description of included studies [ 178 , 179 ].

Vote counting is commonly used in systematic reviews and involves a tally of studies reporting results that meet some threshold of importance applied by review authors. Until recently, it has not typically been identified as a method for synthesis without meta-analysis. Guidance on an acceptable vote counting method based on direction of effect is currently available [ 160 ] and should be used instead of narrative descriptions of such results (eg, “more than half the studies showed improvement”; “only a few studies reported adverse effects”; “7 out of 10 studies favored the intervention”). Unacceptable methods include vote counting by statistical significance or magnitude of effect or some subjective rule applied by the authors.
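The acceptable direction-of-effect approach can be made concrete with a small sketch. The function name and interface below are hypothetical, and the numbers are fabricated; a real analysis would follow the Cochrane Handbook's guidance, including a confidence interval for the proportion.

```python
from math import comb

def vote_count(directions):
    """Vote counting by direction of effect (sketch).
    directions: one entry per study, +1 if the effect favors the
    intervention, -1 if it favors the comparator (studies with no
    clear direction are assumed to have been excluded beforehand).
    Magnitude and statistical significance are deliberately ignored."""
    n = len(directions)
    k = sum(1 for d in directions if d > 0)
    proportion = k / n
    # Two-sided sign-test p value under H0: effects are equally
    # likely to fall in either direction (binomial with p = 0.5)
    tail = sum(comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
    p_value = min(1.0, 2 * tail)
    return proportion, p_value

# Hypothetical example: 7 of 10 studies favor the intervention
prop, p = vote_count([+1] * 7 + [-1] * 3)
print(prop, p)  # prints: 0.7 0.34375
```

Note how the formal result (a proportion with a sign-test p value) replaces the vague narrative statement "7 out of 10 studies favored the intervention", and how counting by direction differs from the unacceptable practice of counting statistically significant results.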

AMSTAR-2 and ROBIS standards do not explicitly address conduct of syntheses without meta-analysis, although AMSTAR-2 items 13 and 14 might be considered relevant. Guidance for the complete reporting of syntheses without meta-analysis for systematic reviews of interventions is available in the Synthesis without Meta-analysis (SWiM) guideline [ 180 ] and methodological guidance is available in the Cochrane Handbook [ 160 , 181 ].

Familiarity with AMSTAR-2 and ROBIS makes sense for authors of systematic reviews as these appraisal tools will be used to judge their work; however, training is necessary for authors to truly appreciate and apply methodological rigor. Moreover, judgment of the potential contribution of a systematic review to the current knowledge base goes beyond meeting the standards of AMSTAR-2 and ROBIS. These tools do not explicitly address some crucial concepts involved in the development of a systematic review; this further emphasizes the need for author training.

We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team’s awareness of how to prevent research and resource waste [ 84 , 130 ] and to stimulate careful contemplation of the scope of the review [ 30 ]. Authors’ training should also focus on justifiably choosing a formal method for the synthesis of quantitative and/or qualitative data from primary research; both types of data require specific expertise. For typical reviews that involve syntheses of quantitative data, statistical expertise is necessary, initially for decisions about appropriate methods [ 160 , 161 ] and then to inform any meta-analyses [ 167 ] or other statistical methods applied [ 160 ].

Part 5. Rating overall certainty of evidence

Reporting an overall certainty of evidence assessment in a systematic review is an important new standard of the updated PRISMA 2020 guidelines [ 93 ]. Systematic review authors are well acquainted with assessing RoB in individual primary studies, but much less familiar with assessment of overall certainty across an entire body of evidence. Yet a reliable way to evaluate this broader concept is now recognized as a vital part of interpreting the evidence.

Historical systems for rating evidence are based on study design and usually involve hierarchical levels or classes of evidence that use numbers and/or letters to designate the level/class. These systems were endorsed by various EBM-related organizations. Professional societies and regulatory groups then widely adopted them, often with modifications for application to the available primary research base in specific clinical areas. In 2002, a report issued by the AHRQ identified 40 systems to rate quality of a body of evidence [ 182 ]. A critical appraisal of systems used by prominent health care organizations published in 2004 revealed limitations in sensibility, reproducibility, applicability to different questions, and usability for different end users [ 183 ]. Persistent use of hierarchical rating schemes to describe overall quality continues to complicate the interpretation of evidence. This is indicated by recent reports of poor interpretability of systematic review results by readers [ 184 – 186 ] and misleading interpretations of the evidence related to the “spin” that systematic review authors may put on their conclusions [ 50 , 187 ].

Recognition of the shortcomings of hierarchical rating systems raised concerns that misleading clinical recommendations could result even if based on a rigorous systematic review. In addition, the number and variability of these systems were considered obstacles to quick and accurate interpretations of the evidence by clinicians, patients, and policymakers [ 183 ]. These issues contributed to the development of the GRADE approach. An international working group, which continues to actively evaluate and refine it, first introduced GRADE in 2004 [ 188 ]. Currently, more than 110 organizations from 19 countries around the world have endorsed or are using GRADE [ 189 ].

GRADE approach to rating overall certainty

GRADE offers a consistent and sensible approach for two separate processes: rating the overall certainty of a body of evidence and the strength of recommendations. The former is the expected conclusion of a systematic review, while the latter is pertinent to the development of CPGs. As such, GRADE provides a mechanism to bridge the gap from evidence synthesis to application of the evidence for informed clinical decision-making [ 27 , 190 ]. We briefly examine the GRADE approach but only as it applies to rating overall certainty of evidence in systematic reviews.

In GRADE, use of “certainty” of a body of evidence is preferred over the term “quality.” [ 191 ] Certainty refers to the level of confidence systematic review authors have that, for each outcome, an effect estimate represents the true effect. The GRADE approach to rating confidence in estimates begins with identifying the study type (RCT or NRSI) and then systematically considers criteria to rate the certainty of evidence up or down (Table 5.1 ).

GRADE criteria for rating certainty of evidence

Rate down for a :
  • Risk of bias
  • Imprecision
  • Inconsistency
  • Indirectness
  • Publication bias

Rate up for b :
  • Large magnitude of effect
  • Dose–response gradient
  • All residual confounding would decrease magnitude of effect (in situations with an effect)

a Applies to randomized studies

b Applies to non-randomized studies
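The logic of Table 5.1 can be sketched as a toy function: bodies of randomized evidence start at high certainty and bodies of non-randomized evidence start at low, with each serious concern in a rating-down domain lowering the level and each rating-up criterion raising it. This sketch is purely illustrative; in practice GRADE judgments are made on a continuum by the review team, not computed mechanically, and the function names here are invented:

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(study_type: str, downgrades: int, upgrades: int) -> str:
    """Toy model of GRADE starting levels and up/down adjustments.
    study_type: "RCT" or "NRSI"; downgrades/upgrades: number of levels
    moved for the criteria in Table 5.1."""
    start = 3 if study_type == "RCT" else 1  # RCTs start high; NRSI start low
    index = max(0, min(3, start - downgrades + upgrades))
    return LEVELS[index]

# A body of RCT evidence rated down one level (eg, for imprecision):
rating = grade_certainty("RCT", downgrades=1, upgrades=0)
# A body of NRSI evidence rated up one level (eg, large magnitude of effect):
nrsi_rating = grade_certainty("NRSI", downgrades=0, upgrades=1)
```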

This process results in assignment of one of the four GRADE certainty ratings to each outcome; these are clearly conveyed with the use of basic interpretation symbols (Table 5.2 ) [ 192 ]. Notably, when multiple outcomes are reported in a systematic review, each outcome is assigned a unique certainty rating; thus different levels of certainty may exist in the body of evidence being examined.

GRADE certainty ratings and their interpretation symbols a

 ⊕  ⊕  ⊕  ⊕ High: We are very confident that the true effect lies close to that of the estimate of the effect
 ⊕  ⊕  ⊕ Moderate: We are moderately confident in the effect estimate: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different
 ⊕  ⊕ Low: Our confidence in the effect estimate is limited: the true effect may be substantially different from the estimate of the effect
 ⊕ Very low: We have very little confidence in the effect estimate: the true effect is likely to be substantially different from the estimate of effect

a From the GRADE Handbook [ 192 ]

GRADE’s developers acknowledge some subjectivity is involved in this process [ 193 ]. In addition, they emphasize that both the criteria for rating evidence up and down (Table 5.1 ) as well as the four overall certainty ratings (Table 5.2 ) reflect a continuum as opposed to discrete categories [ 194 ]. Consequently, deciding whether a study falls above or below the threshold for rating up or down may not be straightforward, and preliminary overall certainty ratings may be intermediate (eg, between low and moderate). Thus, the proper application of GRADE requires systematic review authors to take an overall view of the body of evidence and explicitly describe the rationale for their final ratings.

Advantages of GRADE

Outcomes important to the individuals who experience the problem of interest maintain a prominent role throughout the GRADE process [ 191 ]. These outcomes must inform the research questions (eg, PICO [population, intervention, comparator, outcome]) that are specified a priori in a systematic review protocol. Evidence for these outcomes is then investigated and each critical or important outcome is ultimately assigned a certainty of evidence as the end point of the review. Notably, limitations of the included studies have an impact at the outcome level. Ultimately, the certainty ratings for each outcome reported in a systematic review are considered by guideline panels. They use a different process to formulate recommendations that involves assessment of the evidence across outcomes [ 201 ]. It is beyond our scope to describe the GRADE process for formulating recommendations; however, it is critical to understand how these two outcome-centric concepts of certainty of evidence in the GRADE framework are related and distinguished. An in-depth illustration using examples from recently published evidence syntheses and CPGs is provided in Additional File 5 A (Table AF5A-1).

The GRADE approach is applicable irrespective of whether the certainty of the primary research evidence is high or very low; in some circumstances, indirect evidence of higher certainty may be considered if direct evidence is unavailable or of low certainty [ 27 ]. In fact, most interventions and outcomes in medicine have low or very low certainty of evidence based on GRADE and there seems to be no major improvement over time [ 202 , 203 ]. This is still a very important (even if sobering) realization for calibrating our understanding of medical evidence. A major appeal of the GRADE approach is that it offers a common framework that enables authors of evidence syntheses to make complex judgments about evidence certainty and to convey these with unambiguous terminology. This prevents some common mistakes made by review authors, including overstating results (or under-reporting harms) [ 187 ] and making recommendations for treatment. This is illustrated in Table AF5A-2 (Additional File 5 A), which compares the concluding statements made about overall certainty in a systematic review with and without application of the GRADE approach.

Theoretically, application of GRADE should improve consistency of judgments about certainty of evidence, both between authors and across systematic reviews. In one empirical evaluation conducted by the GRADE Working Group, interrater reliability of two individual raters assessing certainty of the evidence for a specific outcome increased from ~ 0.3 without using GRADE to ~ 0.7 by using GRADE [ 204 ]. However, others report variable agreement among those experienced in GRADE assessments of evidence certainty [ 190 ]. Like any other tool, GRADE requires training in order to be properly applied. The intricacies of the GRADE approach and the necessary subjectivity involved suggest that improving agreement may require strict rules for its application; alternatively, use of general guidance and consensus among review authors may result in less consistency but provide important information for the end user [ 190 ].
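The interrater reliability values cited above are chance-corrected agreement statistics; Cohen's kappa is the usual choice for two raters. A minimal sketch follows, with hypothetical certainty ratings invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same items:
    (observed agreement - expected agreement) / (1 - expected agreement)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[c] * c2[c] for c in set(rater1) | set(rater2)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical GRADE certainty ratings assigned to 8 outcomes by two raters:
r1 = ["high", "moderate", "moderate", "low", "low", "low", "very low", "high"]
r2 = ["high", "moderate", "low", "low", "low", "low", "very low", "moderate"]
kappa = cohens_kappa(r1, r2)  # raw agreement is 6/8, but kappa is lower
```

Kappa is lower than raw percent agreement because some agreement is expected by chance alone, which is why it is preferred for reporting rater consistency.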

GRADE caveats

Simply invoking “the GRADE approach” does not automatically ensure GRADE methods were employed by authors of a systematic review (or developers of a CPG). Table 5.3 lists the criteria the GRADE working group has established for this purpose. These criteria highlight the specific terminology and methods that apply to rating the certainty of evidence for outcomes reported in a systematic review [ 191 ], which is different from rating overall certainty across outcomes considered in the formulation of recommendations [ 205 ]. Modifications of standard GRADE methods and terminology are discouraged as these may detract from GRADE’s objectives to minimize conceptual confusion and maximize clear communication [ 206 ].

Criteria for using GRADE in a systematic review a

1. The certainty in the evidence (also known as quality of evidence or confidence in the estimates) should be defined consistently with the definitions used by the GRADE Working Group.
2. Explicit consideration should be given to each of the GRADE domains for assessing the certainty in the evidence (although different terminology may be used).
3. The overall certainty in the evidence should be assessed for each important outcome using four or three categories (such as high, moderate, low and/or very low) and definitions for each category that are consistent with the definitions used by the GRADE Working Group.
4. Evidence summaries … should be used as the basis for judgments about the certainty in the evidence.

a Adapted from the GRADE working group [ 206 ]; this list does not contain the additional criteria that apply to the development of a clinical practice guideline

Nevertheless, GRADE is prone to misapplications [ 207 , 208 ], which can distort a systematic review’s conclusions about the certainty of evidence. Systematic review authors without proper GRADE training are likely to misinterpret the terms “quality” and “grade” and to misunderstand the constructs assessed by GRADE versus other appraisal tools. For example, review authors may reference the standard GRADE certainty ratings (Table 5.2 ) to describe evidence for their outcome(s) of interest. However, these ratings are invalidated if authors omit or inadequately perform RoB evaluations of each included primary study. Such deficiencies in RoB assessments are unacceptable but not uncommon, as reported in methodological studies of systematic reviews and overviews [ 104 , 186 , 209 , 210 ]. GRADE ratings are also invalidated if review authors do not formally address and report on the other criteria (Table 5.1 ) necessary for a GRADE certainty rating.

Other caveats pertain to application of a GRADE certainty of evidence rating in various types of evidence syntheses. Current adaptations of GRADE are described in Additional File 5 B and included on Table 6.3 , which is introduced in the next section.

Concise Guide to best practices for evidence syntheses, version 1.0 a

[The Concise Guide is a multi-column matrix that does not survive conversion to plain text; its recoverable content is summarized here.]

Reporting guidance: PRISMA-P for protocols of all review types; PRISMA 2020 for intervention reviews; PRISMA-DTA for diagnostic test accuracy reviews; eMERGe and ENTREQ for qualitative reviews; PRIOR for umbrella reviews; PRISMA-ScR for scoping reviews; SWiM for syntheses without meta-analysis.

Critical appraisal of included studies: Cochrane RoB 2 for RCTs; ROBINS-I for NRSI; for other primary research, QUADAS-2 (diagnostic accuracy studies), QUIPS (prognostic factor studies), PROBAST (prediction model studies), the CASP qualitative checklist and JBI Critical Appraisal Checklist (qualitative studies), the JBI checklist for studies reporting prevalence data, and the COSMIN RoB Checklist (measurement properties); AMSTAR-2 or ROBIS for the systematic reviews included in umbrella reviews; not required for scoping reviews.

Rating certainty of evidence: GRADE for intervention reviews; GRADE adaptations for other quantitative review types (eg, diagnostic test accuracy, prevalence and incidence, risk factors, measurement properties); CERQual and ConQual for qualitative syntheses; not applicable to scoping reviews.
AMSTAR A MeaSurement Tool to Assess Systematic Reviews, CASP Critical Appraisal Skills Programme, CERQual Confidence in the Evidence from Reviews of Qualitative research, ConQual Establishing Confidence in the output of Qualitative research synthesis, COSMIN COnsensus-based Standards for the selection of health Measurement Instruments, DTA diagnostic test accuracy, eMERGe meta-ethnography reporting guidance, ENTREQ enhancing transparency in reporting the synthesis of qualitative research, GRADE Grading of Recommendations Assessment, Development and Evaluation, MA meta-analysis, NRSI non-randomized studies of interventions, P protocol, PRIOR Preferred Reporting Items for Overviews of Reviews, PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses, PROBAST Prediction model Risk Of Bias ASsessment Tool, QUADAS quality assessment of studies of diagnostic accuracy included in systematic reviews, QUIPS Quality In Prognosis Studies, RCT randomized controlled trial, RoB risk of bias, ROBINS-I Risk Of Bias In Non-randomised Studies of Interventions, ROBIS Risk of Bias in Systematic Reviews, ScR scoping review, SWiM systematic review without meta-analysis

a Superscript numbers represent citations provided in the main reference list. Additional File 6 lists links to available online resources for the methods and tools included in the Concise Guide

b The MECIR manual [ 30 ] provides Cochrane’s specific standards for both reporting and conduct of intervention systematic reviews and protocols

c Editorial and peer reviewers can evaluate completeness of reporting in submitted manuscripts using these tools. Authors may be required to submit a self-reported checklist for the applicable tools

d The decision flowchart described by Flemming and colleagues [ 223 ] is recommended for guidance on how to choose the best approach to reporting for qualitative reviews

e SWiM was developed for intervention studies reporting quantitative data. However, if there is not a more directly relevant reporting guideline, SWiM may prompt reviewers to consider the important details to report. (Personal Communication via email, Mhairi Campbell, 14 Dec 2022)

f JBI recommends their own tools for the critical appraisal of various quantitative primary study designs included in systematic reviews of intervention effectiveness, prevalence and incidence, and etiology and risk as well as for the critical appraisal of systematic reviews included in umbrella reviews. However, except for the JBI Checklists for studies reporting prevalence data and qualitative research, the development, validity, and reliability of these tools are not well documented

g Studies that are not RCTs or NRSI require tools developed specifically to evaluate their design features. Examples include single case experimental design [ 155 , 156 ] and case reports and series [ 82 ]

h The evaluation of methodological quality of studies included in a synthesis of qualitative research is debatable [ 224 ]. Authors may select a tool appropriate for the type of qualitative synthesis methodology employed. The CASP Qualitative Checklist [ 218 ] is an example of a published, commonly used tool that focuses on assessment of the methodological strengths and limitations of qualitative studies. The JBI Critical Appraisal Checklist for Qualitative Research [ 219 ] is recommended for reviews using a meta-aggregative approach

i Consider including risk of bias assessment of included studies if this information is relevant to the research question; however, scoping reviews do not include an assessment of the overall certainty of a body of evidence

j Guidance available from the GRADE working group [ 225 , 226 ]; also recommend consultation with the Cochrane diagnostic methods group

k Guidance available from the GRADE working group [ 227 ]; also recommend consultation with Cochrane prognostic methods group

l Used for syntheses in reviews with a meta-aggregative approach [ 224 ]

m Chapter 5 in the JBI Manual offers guidance on how to adapt GRADE to prevalence and incidence reviews [ 69 ]

n Janiaud and colleagues suggest criteria for evaluating evidence certainty for meta-analyses of non-randomized studies evaluating risk factors [ 228 ]

o The COSMIN user manual provides details on how to apply GRADE in systematic reviews of measurement properties [ 229 ]

The expected culmination of a systematic review should be a rating of overall certainty of a body of evidence for each outcome reported. The GRADE approach is recommended for making these judgments for outcomes reported in systematic reviews of interventions and can be adapted for other types of reviews. This represents the initial step in the process of making recommendations based on evidence syntheses. Peer reviewers should ensure authors meet the minimal criteria for supporting the GRADE approach when reviewing any evidence synthesis that reports certainty ratings derived using GRADE. Authors and peer reviewers of evidence syntheses unfamiliar with GRADE are encouraged to seek formal training and take advantage of the resources available on the GRADE website [ 211 , 212 ].

Part 6. Concise Guide to best practices

Accumulating data in recent years suggest that many evidence syntheses (with or without meta-analysis) are not reliable. This relates in part to the fact that their authors, who are often clinicians, can be overwhelmed by the plethora of ways to evaluate evidence. They tend to resort to familiar but often inadequate, inappropriate, or obsolete methods and tools and, as a result, produce unreliable reviews. These manuscripts may not be recognized as such by peer reviewers and journal editors who may disregard current standards. When such a systematic review is published or included in a CPG, clinicians and stakeholders tend to believe that it is trustworthy. A vicious cycle in which inadequate methodology is rewarded and potentially misleading conclusions are accepted is thus supported. There is no quick or easy way to break this cycle; however, increasing awareness of best practices among all these stakeholder groups, who often have minimal (if any) training in methodology, may begin to mitigate it. This is the rationale for inclusion of Parts 2 through 5 in this guidance document. These sections present core concepts and important methodological developments that inform current standards and recommendations. We conclude by taking a direct and practical approach.

Inconsistent and imprecise terminology used in the context of development and evaluation of evidence syntheses is problematic for authors, peer reviewers and editors, and may lead to the application of inappropriate methods and tools. In response, we endorse use of the basic terms (Table 6.1 ) defined in the PRISMA 2020 statement [ 93 ]. In addition, we have identified several problematic expressions and nomenclature. In Table 6.2 , we compile suggestions for preferred terms less likely to be misinterpreted.

Terms relevant to the reporting of health care–related evidence syntheses a

Systematic review: A review that uses explicit, systematic methods to collate and synthesize findings of studies that address a clearly formulated question.
Statistical synthesis: The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates and other methods, such as combining p values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect.
Meta-analysis of effect estimates: A statistical technique used to synthesize results when study effect estimates and their variances are available, yielding a quantitative summary of results.
Outcome: An event or measurement collected for participants in a study (such as quality of life, mortality).
Result: The combination of a point estimate (such as a mean difference, risk ratio or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome.
Report: A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information.
Record: The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.
Study: An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses.

a Reproduced from Page and colleagues [ 93 ]
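The "statistical technique" referred to in the definition of meta-analysis of effect estimates above is most commonly inverse-variance weighting. A minimal fixed-effect sketch follows; the study data (log risk ratios and their variances) are invented for illustration:

```python
from math import exp, sqrt

def fixed_effect_pool(estimates, variances):
    """Inverse-variance fixed-effect pooling of study effect estimates
    (eg, log risk ratios): each study is weighted by 1/variance.
    Returns the pooled estimate and its 95% confidence interval."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = sqrt(1 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical studies, each reporting a log risk ratio and its variance:
log_rr = [-0.22, -0.11, -0.35]
var = [0.04, 0.09, 0.16]
pooled, (lo, hi) = fixed_effect_pool(log_rr, var)
risk_ratio = exp(pooled)  # back-transform to the risk ratio scale
```

A random-effects model would add an estimate of between-study variance to each weight; the fixed-effect form is shown only because it is the simplest instance of the definition.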

Terminology suggestions for health care–related evidence syntheses

Preferred | Potentially problematic
Evidence synthesis with meta-analysis; Systematic review with meta-analysis | Meta-analysis
Overview or umbrella review | Systematic review of systematic reviews; Review of reviews; Meta-review
Randomized | Experimental
Non-randomized | Observational
Single case experimental design | Single-subject research; N-of-1 design
Case report or case series | Descriptive study
Methodological quality | Quality
Certainty of evidence | Quality of evidence; Grade of evidence; Level of evidence; Strength of evidence
Qualitative systematic review a | Qualitative synthesis
Synthesis of qualitative data b | Qualitative synthesis
Synthesis without meta-analysis | Narrative synthesis; Narrative summary; Qualitative synthesis; Descriptive synthesis; Descriptive summary

a For example, meta-aggregation, meta-ethnography, critical interpretative synthesis, realist synthesis

b This term may best apply to the synthesis in a mixed methods systematic review in which data from different types of evidence (eg, qualitative, quantitative, economic) are summarized [ 64 ]

We also propose a Concise Guide (Table 6.3 ) that summarizes the methods and tools recommended for the development and evaluation of nine types of evidence syntheses. Suggestions for specific tools are based on the rigor of their development as well as the availability of detailed guidance from their developers to ensure their proper application. The formatting of the Concise Guide addresses a well-known source of confusion by clearly distinguishing the underlying methodological constructs that these tools were designed to assess. Important clarifications and explanations follow in the guide’s footnotes; associated websites, if available, are listed in Additional File 6 .

To encourage uptake of best practices, journal editors may consider adopting or adapting the Concise Guide in their instructions to authors and peer reviewers of evidence syntheses. Given the evolving nature of evidence synthesis methodology, the suggested methods and tools are likely to require regular updates. Authors of evidence syntheses should monitor the literature to ensure they are employing current methods and tools. Some types of evidence syntheses (eg, rapid, economic, methodological) are not included in the Concise Guide; for these, authors are advised to obtain recommendations for acceptable methods by consulting with their target journal.

We encourage the appropriate and informed use of the methods and tools discussed throughout this commentary and summarized in the Concise Guide (Table 6.3 ). However, we caution against their application in a perfunctory or superficial fashion. This is a common pitfall among authors of evidence syntheses, especially as the standards of such tools become associated with acceptance of a manuscript by a journal. Consequently, published evidence syntheses may show improved adherence to the requirements of these tools without necessarily making genuine improvements in their performance.

In line with our main objective, the suggested tools in the Concise Guide address the reliability of evidence syntheses; however, we recognize that the utility of systematic reviews is an equally important concern. An unbiased and thoroughly reported evidence synthesis may still not be highly informative if the evidence itself that is summarized is sparse, weak and/or biased [ 24 ]. Many intervention systematic reviews, including those developed by Cochrane [ 203 ] and those applying GRADE [ 202 ], ultimately find no evidence, or find the evidence to be inconclusive (eg, “weak,” “mixed,” or of “low certainty”). This often reflects the primary research base; however, it is important to know what is known (or not known) about a topic when considering an intervention for patients and discussing treatment options with them.

Alternatively, the frequency of “empty” and inconclusive reviews published in the medical literature may relate to limitations of conventional methods that focus on hypothesis testing; these have emphasized the importance of statistical significance in primary research and effect sizes from aggregate meta-analyses [ 183 ]. It is becoming increasingly apparent that this approach may not be appropriate for all topics [ 130 ]. Development of the GRADE approach has facilitated a better understanding of significant factors (beyond effect size) that contribute to the overall certainty of evidence. Other notable responses include the development of integrative synthesis methods for the evaluation of complex interventions [ 230 , 231 ], the incorporation of crowdsourcing and machine learning into systematic review workflows (eg, the Cochrane Evidence Pipeline) [ 2 ], the paradigm shift to living systematic review and NMA platforms [ 232 , 233 ], and the proposal of a new evidence ecosystem that fosters bidirectional collaborations and interactions among a global network of evidence synthesis stakeholders [ 234 ]. These evolutions in data sources and methods may ultimately make evidence syntheses more streamlined, less duplicative, and, most importantly, more useful for timely policy and clinical decision-making; however, that will only be the case if they are rigorously conducted and reported.

We look forward to others’ ideas and proposals for the advancement of methods for evidence syntheses. For now, we encourage dissemination and uptake of the currently accepted best tools and practices for their development and evaluation; at the same time, we stress that uptake of appraisal tools, checklists, and software programs cannot substitute for proper education in the methodology of evidence syntheses and meta-analysis. Authors, peer reviewers, and editors must strive to make accurate and reliable contributions to the present evidence knowledge base; online alerts, upcoming technology, and accessible education may make this more feasible than ever before. Our intention is to improve the trustworthiness of evidence syntheses across disciplines, topics, and types of evidence syntheses. All of us must continue to study, teach, and act cooperatively for that to happen.

Acknowledgements

Michelle Oakman Hayes for her assistance with the graphics, Mike Clarke for his willingness to answer our seemingly arbitrary questions, and Bernard Dan for his encouragement of this project.

Authors’ contributions

All authors participated in the development of the ideas, writing, and review of this manuscript. The author(s) read and approved the final manuscript.

The work of John Ioannidis has been supported by an unrestricted gift from Sue and Bob O’Donnell to Stanford University.

Declarations

The authors declare no competing interests.

This article has been published simultaneously in BMC Systematic Reviews, Acta Anaesthesiologica Scandinavica, BMC Infectious Diseases, British Journal of Pharmacology, JBI Evidence Synthesis, the Journal of Bone and Joint Surgery Reviews , and the Journal of Pediatric Rehabilitation Medicine .

Publisher’ s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A Guide to Writing a Qualitative Systematic Review Protocol to Enhance Evidence-Based Practice in Nursing and Health Care

Affiliations.

  • 1 PhD candidate, School of Nursing and Midwifery, Monash University, and Clinical Nurse Specialist, Adult and Pediatric Intensive Care Unit, Monash Health, Melbourne, Victoria, Australia.
  • 2 Lecturer, School of Nursing and Midwifery, Monash University, Melbourne, Victoria, Australia.
  • 3 Senior Lecturer, School of Nursing and Midwifery, Monash University, Melbourne, Victoria, Australia.
  • PMID: 26790142
  • DOI: 10.1111/wvn.12134

Background: The qualitative systematic review is a rapidly developing area of nursing research. In order to present trustworthy, high-quality recommendations, such reviews should be based on a review protocol to minimize bias and enhance transparency and reproducibility. Although there are a number of resources available to guide researchers in developing a quantitative review protocol, very few resources exist for qualitative reviews.

Aims: To guide researchers through the process of developing a qualitative systematic review protocol, using an example review question.

Methodology: The key elements required in a systematic review protocol are discussed, with a focus on application to qualitative reviews: Development of a research question; formulation of key search terms and strategies; designing a multistage review process; critical appraisal of qualitative literature; development of data extraction techniques; and data synthesis. The paper highlights important considerations during the protocol development process, and uses a previously developed review question as a working example.

Implications for research: This paper will assist novice researchers in developing a qualitative systematic review protocol. By providing a worked example of a protocol, the paper encourages the development of review protocols, enhancing the trustworthiness and value of the completed qualitative systematic review findings.

Linking evidence to action: Qualitative systematic reviews should be based on well planned, peer reviewed protocols to enhance the trustworthiness of results and thus their usefulness in clinical practice. Protocols should outline, in detail, the processes which will be used to undertake the review, including key search terms, inclusion and exclusion criteria, and the methods used for critical appraisal, data extraction and data analysis to facilitate transparency of the review process. Additionally, journals should encourage and support the publication of review protocols, and should require reference to a protocol prior to publication of the review results.

Keywords: guidelines; meta synthesis; qualitative; systematic review protocol.

© 2016 Sigma Theta Tau International.


Systematic Review Protocols and Protocol Registries

Systematic review protocols.

  • A good systematic review starts with a protocol, which serves as a road map for your review
  • A protocol specifies the objectives, methods, and outcomes of primary interest of the systematic review
  • A protocol promotes transparency of methods
  • A protocol allows your peers to review how you will extract information to quantitatively summarize your outcome data

About Systematic Review Protocol Registries

  • Various protocol registries exist
  • Anyone can register their protocol
  • Registering your protocol helps establish that your group is conducting this review
  • Registering increases potential communication with interested researchers
  • Registering may reduce the risk of multiple reviews addressing the same question
  • Registering may provide greater transparency when updating a systematic review

Protocol Reporting Guidelines

  • MECIR (Methodological Expectations of Cochrane Intervention Reviews) Manual - guidelines on reporting protocols for Cochrane intervention reviews
  • PRISMA-P - Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for systematic review protocols

Systematic Review/Protocol Registries

  • Campbell Collaboration - produces systematic reviews of the effects of social interventions
  • Cochrane Collaboration - international organization that produces and disseminates systematic reviews of health care interventions
  • PROSPERO - international prospective register of systematic reviews


Graduate student research support.

  • Citation Management
  • Literature Reviews
  • Graduate-level Writing Help
  • Research Ethics & Integrity
  • Collaborative Project Management
  • Managing Research Data
  • Data Storage and Backup
  • Data Collection
  • Qualitative Data
  • Quantitative Data
  • Sharing and Archiving Data
  • Digital Humanities
  • Mapping your Data (GIS)
  • Data Visualization
  • Systematic Reviews and Other Evidence Synthesis Methods

Learn more about evidence synthesis methods

  • What are evidence syntheses?
  • How does a traditional literature review differ from evidence synthesis?
  • Types of evidence synthesis

  • Developing a Scholarly Identity
  • Author Rights & Copyright
  • Open Access & the Publishing Landscape
  • Understanding Peer Review
  • Publishing Ethics & Retractions
  • Preparing your Thesis / Dissertations

Do you want to learn more about systematic reviews or other types of evidence synthesis methods? Check out our detailed guide on this topic, which provides a deeper overview, and reviews the various steps involved in these methods. This guide will also review UCI Libraries' Evidence Synthesis Service, and let you know how our librarians can help. The information below is a quick overview of the methodology.

  • Systematic Reviews & Evidence Synthesis Methods guide

According to the Royal Society, 'evidence synthesis' refers to the process of bringing together information from a range of sources and disciplines to inform debates and decisions on specific issues. They generally include a methodical and comprehensive literature synthesis focused on a well-formulated research question. Their aim is to identify and synthesize all of the scholarly research on a particular topic, including both published and unpublished studies. Evidence syntheses are conducted in an unbiased, reproducible way to provide evidence for practice and policy-making, as well as to identify gaps in the research. Evidence syntheses may also include a meta-analysis, a more quantitative process of synthesizing and visualizing data retrieved from various studies.

Evidence syntheses are much more time-intensive than traditional literature reviews and require a multi-person research team. See this PredicTER tool to get a sense of a systematic review timeline (one type of evidence synthesis). Before embarking on an evidence synthesis, it's important to clearly identify your reasons for conducting one. For a list of types of evidence synthesis projects, see the Types of Evidence Synthesis tab.

One commonly used form of evidence synthesis is a systematic review. This table compares a traditional literature review with a systematic review.

 

Review Question/Topic

  • Traditional literature review: Topics may be broad in scope; the goal of the review may be to place one's own research within the existing body of knowledge, or to gather information that supports a particular viewpoint.
  • Systematic review: Starts with a well-defined research question to be answered by the review. Reviews are conducted with the aim of finding all existing evidence in an unbiased, transparent, and reproducible way.

Searching for Studies

  • Traditional literature review: Searches may be ad hoc and based on what the author is already familiar with. Searches are not exhaustive or fully comprehensive.
  • Systematic review: Attempts are made to find all existing published and unpublished literature on the research question. The process is well-documented and reported.

Study Selection

  • Traditional literature review: Often lacks clear reasons for why studies were included or excluded from the review.
  • Systematic review: Reasons for including or excluding studies are explicit and informed by the research question.

Assessing the Quality of Included Studies

  • Traditional literature review: Often does not consider study quality or potential biases in study design.
  • Systematic review: Systematically assesses risk of bias of individual studies and overall quality of the evidence, including sources of heterogeneity between study results.

Synthesis of Existing Research

  • Traditional literature review: Conclusions are more qualitative and may not be based on study quality.
  • Systematic review: Bases conclusions on the quality of the studies and provides recommendations for practice or to address knowledge gaps.

Evidence synthesis refers to any method of identifying, selecting, and combining results from multiple studies. For help selecting a methodology, please refer to:

  • A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies - for help differentiating between the various types of review, consult this article by Grant & Booth (2009).
  • Methodology Decision Tree - from our colleagues at Cornell, a decision tree with questions leading to various review types.

Types of evidence synthesis include: 

Systematic Review

  • Systematically and transparently collects and synthesizes existing evidence to answer a well-defined question of scientific, policy, or management importance
  • Compares, evaluates, and synthesizes evidence in a search for the effect of an intervention
  • Time-intensive; often takes months to a year or more to complete
  • The most commonly referred-to type of evidence synthesis; sometimes mistakenly used as a blanket term for other types of reviews

Systematized Literature Review

  • Not a true evidence synthesis review, but employs certain elements of a systematic review
  • No specific methodology; does not require a protocol or critical appraisal of the evidence
  • Conducted by only 1 or 2 people
  • May be completed in about 2-6 months

Literature (Narrative) Review

  • Not a true evidence synthesis review, but a broad term referring to reviews with a wide scope and non-standardized methodology
  • Search strategies, comprehensiveness, and time range covered will vary and do not follow an established protocol

Scoping Review or Evidence Map

  • Systematically and transparently collects and categorizes existing evidence on a broad question of scientific, policy, or management importance
  • Seeks to identify research gaps and opportunities for evidence synthesis rather than searching for the effect of an intervention
  • May critically evaluate existing evidence, but does not attempt to synthesize the results in the way a systematic review would (see EE Journal and CIFOR)
  • May take longer than a systematic review
  • See Arksey and O'Malley (2005) or Peters et al. (2020) for methodological guidance

Rapid Review

  • Applies systematic review methodology within a time-constrained setting
  • Employs methodological "shortcuts" (for example, limiting search terms) at the risk of introducing bias
  • Useful for addressing issues needing quick decisions, such as developing policy recommendations
  • See Evidence Summaries: The Evolution of a Rapid Review Approach

Umbrella Review

  • Reviews other systematic reviews on a topic
  • Often defines a broader question than is typical of a traditional systematic review
  • Most useful when there are competing interventions to consider

Meta-analysis

  • Statistical technique for combining the findings from disparate quantitative studies
  • Uses statistical methods to objectively evaluate, synthesize, and summarize results
  • Conducted as an additional step of a systematic review
  • << Previous: Data Visualization
  • Next: Publication Process >>
  • Last Updated: Sep 20, 2024 8:28 AM
  • URL: https://guides.lib.uci.edu/graduate-student-support


  • Open access
  • Published: 18 September 2024

Mapping the evaluation of the electronic health system PEC e-SUS APS in Brazil: a scoping review protocol

  • Mariano Felisberto 1,2,
  • Júlia Meller Dias de Oliveira 1,3,
  • Eduarda Talita Bramorski Mohr 1,2,
  • Daniel Henrique Scandolara 1,4,
  • Ianka Cristina Celuppi 1,5,
  • Miliane dos Santos Fantonelli 1,
  • Raul Sidnei Wazlawick 1,6 &
  • Eduardo Monguilhott Dalmarco ORCID: orcid.org/0000-0002-5220-5396 1,7

Systematic Reviews volume 13, Article number: 237 (2024)


The Brazilian Ministry of Health has developed and provided the Citizen’s Electronic Health Record (PEC e-SUS APS), a health information system freely available for utilization by all municipalities. Given the substantial financial investment being made to enhance the quality of health services in the country, it is crucial to understand how users evaluate this product. Consequently, this scoping review aims to map studies that have evaluated the PEC e-SUS APS.

This scoping review is guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) framework, as well as by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Checklist extension for scoping reviews (PRISMA-ScR). The research question was framed based on the “CoCoPop” mnemonic (Condition, Context, Population). The final question posed is, “How has the Citizen’s Electronic Health Record (PEC e-SUS APS) been evaluated?” The search strategy will be executed across various databases (LILACS, PubMed/MEDLINE, Scopus, Web of Science, ACM Digital Library, and IEEE Digital Library), along with gray literature from ProQuest Dissertation and Theses Global and Google Scholar, with assistance from a professional healthcare librarian skilled in supporting systematic reviews. The database search will encompass the period from 2013 to 2024. Articles included will be selected by three independent reviewers in two stages, and the findings will undergo a descriptive analysis and synthesis following a “narrative review” approach. Independent reviewers will chart the data as outlined in the literature.

The implementation process for the PEC e-SUS APS can be influenced by the varying characteristics of the over 5500 Brazilian municipalities. These factors and other challenges encountered by health professionals and managers may prove pivotal for a municipality’s adoption of the PEC e-SUS APS system. With the literature mapping to be obtained from this review, vital insights into how users have evaluated the PEC will be obtained.

Systematic review registration

The protocol has been registered prospectively at the Open Science Framework platform under the number 10.17605/OSF.IO/NPKRU.


The Brazilian Unified Health System (SUS) was launched in Brazil in 1990 [ 1 , 2 , 3 ]. Its structure adheres to a triad of principles: integrality, universality, and equity of health services offered to the nation’s population [ 4 ]. In 1996, Primary Health Care (PHC) was established as a national policy under Basic Operational Standard 96, which provided support for the implementation of the Family Health and Community Health Agents programs throughout Brazil [ 5 , 6 ]. Currently, PHC is a central component of the organization of the health care network and is considered the main entry point to the Brazilian health system, extending healthcare provision throughout the entire territory [ 3 , 7 , 8 ].

Examining Brazil’s demographic and epidemiological aspects is crucial to ensure these services reach all citizens. Hence, health policy planning depends on this information, which is typically sourced from healthcare system data [ 9 ]. This data may represent the reality and needs of a specific community, municipality, state, or country and, thus, directly influences health surveillance activities, forming the basis of health service management [ 10 ]. Health information systems aim to generate, organize, and analyze health indicators, thereby producing knowledge about the health status of the population [ 11 ].

To digitize SUS and facilitate health professionals’ efforts in care coordination, the Brazilian Ministry of Health instituted the e-SUS Primary Care Strategy in 2013. Its key objectives were to individualize records, integrate data between official systems, reduce redundancy in data collection, and computerize health units [ 12 ]. It is worth noting that this strategy extends beyond a federal management and national information system context; it touches on the daily routines of professionals, the challenges faced, and the information essential for individual care in territories [ 13 ]. To further facilitate this process, the Ministry introduced the Citizen’s Electronic Health Record (PEC), which is a freely available health information system for municipalities, aiding the computerization of Basic Health Units throughout Brazil [ 1 , 14 ].

Software products and software-intensive computer systems have become essential for a broad array of business and personal operations. Consequently, personal satisfaction, business success, and human security increasingly rely on the quality of these software products and systems [ 15 ]. The development and implementation of these technologies are fundamental; however, they require substantial financial resources, and their success hinges on user acceptance [ 16 ]. Therefore, it is critical for those investing in technology to understand what factors affect acceptance and usage, aiding organizations in implementing user-level interventions [ 17 ].

Understanding how users evaluate a software product is critical in a nation of continental proportions like Brazil, especially given the significant financial investment to enhance health services’ quality. Given this context, this scoping review aims to map the studies that have evaluated the PEC e-SUS APS using various quality models. ISO/IEC 25010, which defines in-depth quality models for computer systems, software products, data quality, and quality in use, will serve as the theoretical foundation for defining these models.

The protocol and its registration have been adapted based on elements taken from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Checklist extension for scoping reviews (PRISMA-ScR) [ 18 , 19 ]. The adapted protocol was subsequently registered on the Open Science Framework under the DOI https://doi.org/10.17605/OSF.IO/NPKRU . The research question was formulated and structured around the CoCoPop approach (Condition, Context, and Population), as shown in Table 1.

Inclusion and exclusion criteria

All studies evaluating the PEC e-SUS APS will be considered for inclusion. Given the myriad aspects of electronic health record systems open to analysis (e.g., user experience, usability, efficiency, accessibility, security, and economic aspects), this review will include studies evaluating the general function and effectiveness of the PEC e-SUS APS, regardless of language. Studies will be excluded if they do not clearly outline the evaluation method used for the health information system; do not employ an evaluative tool or method; focus solely on medical record systems other than the PEC e-SUS APS; were published before 2013 (the PEC e-SUS APS was first distributed to municipalities in 2013); were conducted by authors from the Bridge Laboratory (the group responsible for the PEC implementation); are review articles, letters, book chapters, conference abstracts, opinion articles, brief communications, editorials, or clinical guidelines; or if the full text cannot be obtained and the corresponding authors do not reply to contact attempts.

Sources of information and search strategy

A comprehensive search strategy will be deployed across various databases: LILACS, PubMed/MEDLINE, Scopus, Web of Science, ACM Digital Library, and IEEE Digital Library. Moreover, the gray literature will also be explored using the ProQuest Dissertation and Theses Global and Google Scholar databases with support from a healthcare librarian experienced in systematic reviews. The search strategy developed for the PubMed/MEDLINE databases is presented in Table  2 .
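Table 2 is not reproduced here, but the conventional pattern behind such database strategies (synonyms OR-ed within each concept block, concept blocks AND-ed together) can be sketched. The term lists below are illustrative assumptions, not the authors' actual strategy:

```python
# Illustrative sketch of assembling a boolean database search string:
# synonyms are OR-ed within each concept block, and the blocks are
# AND-ed together. Term lists are hypothetical, not those of Table 2.

def build_query(concepts):
    """concepts: list of synonym lists; OR within a block, AND across blocks."""
    blocks = ['(' + ' OR '.join(f'"{term}"' for term in terms) + ')'
              for terms in concepts]
    return ' AND '.join(blocks)

system_terms = ['e-SUS', 'PEC e-SUS APS', "Citizen's Electronic Health Record"]
evaluation_terms = ['evaluation', 'usability', 'software quality']
query = build_query([system_terms, evaluation_terms])
# query joins the two quoted synonym blocks with AND
```

The same block structure can then be translated into each database's own field tags and syntax, which is where a librarian's support is typically most valuable.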

Furthermore, experts will be contacted for the potential inclusion of more studies, with manual searches of bibliographies from included studies and key journals also conducted. The database search will cover the period from 2013 until 2024. The search will be implemented in March 2024, and the results will be imported into the EndNote Online reference software (Thomson Reuters, USA).

Methods to select the sources of evidence

Three independent reviewers will decide which studies will be included. In the first stage, the three reviewers will assess the titles and abstracts for eligibility. In the second stage, they will examine the full texts of the articles, applying the same criteria as in the first stage. The reviewers will then cross-validate all the information gathered during both stages. If disagreements occur, an arbitrator not involved in the initial article selection stage will be brought in before a final decision is reached. If review-critical data are missing or ambiguous, the study's corresponding author will be contacted for resolution or clarification. The data mapping process will involve the same three independent reviewers.
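The vote-collation step of the screening process described above can be sketched as follows (function and record names are hypothetical):

```python
# Hypothetical sketch of collating three reviewers' screening votes:
# unanimous records are included or excluded directly, and any split
# vote is flagged for an arbitrator not involved in initial selection.

def triage(decisions):
    """decisions: record id -> list of three include (True) / exclude (False) votes."""
    included, excluded, arbitrate = [], [], []
    for record, votes in decisions.items():
        if all(votes):
            included.append(record)
        elif not any(votes):
            excluded.append(record)
        else:
            arbitrate.append(record)  # reviewers disagree -> arbitrator decides
    return included, excluded, arbitrate

votes = {'rec1': [True, True, True],
         'rec2': [False, False, False],
         'rec3': [True, False, True]}
inc, exc, arb = triage(votes)  # arb == ['rec3']
```

In practice this bookkeeping is usually handled by screening software, but the decision rule is the same at both the title/abstract and full-text stages.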

Data extraction and synthesis

A descriptive analysis will synthesize the results, following the narrative review approach of Pawson and Bellamy [ 20 ]. Independent reviewers will chart the data based on the method of Hilary Arksey and Lisa O’Malley (2005), as depicted in Table  3 .

In the event of discrepancies, a consensus discussion will ensue and, if necessary, an arbitrator will be brought in to reach a final decision. The corresponding author will be contacted if any crucial information is unclear or missing. The included studies will be grouped according to the various characteristics and sub-characteristics pertinent to all software products and computer systems, as defined by the ISO/IEC 25010:2011 standard.
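The grouping step can be sketched as below. The characteristic names are the eight product-quality characteristics of ISO/IEC 25010:2011; the study entries are hypothetical placeholders:

```python
from collections import defaultdict

# Sketch of grouping included studies by the ISO/IEC 25010:2011
# product-quality characteristic they evaluate. Characteristic names
# come from the standard; the study pairs are placeholders.

CHARACTERISTICS = {
    'functional suitability', 'performance efficiency', 'compatibility',
    'usability', 'reliability', 'security', 'maintainability', 'portability',
}

def group_by_characteristic(studies):
    """studies: iterable of (study_id, characteristic) pairs."""
    groups = defaultdict(list)
    for study, characteristic in studies:
        # Anything outside the product-quality model (e.g., data quality
        # or quality-in-use aspects) is collected under 'other'.
        key = characteristic if characteristic in CHARACTERISTICS else 'other'
        groups[key].append(study)
    return dict(groups)

grouped = group_by_characteristic([('Study A', 'usability'),
                                   ('Study B', 'usability'),
                                   ('Study C', 'data quality')])
# grouped == {'usability': ['Study A', 'Study B'], 'other': ['Study C']}
```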

Tabular summaries will be employed to present the findings and cover study characteristics, methodologies, and aspects evaluated. Subsequently, a narrative synthesis will be carried out to elucidate the evidence found relating to the review objective.

The success of PEC implementation can be influenced by various characteristics of municipalities, including their location, population density, level of urbanization, municipal management assistance, computerization levels, and technological infrastructure, among others [ 21 ]. These factors, coupled with the challenges confronted by healthcare professionals and managers, may determine a municipality’s adoption of the PEC. The literature emphasizes several barriers or difficulties encountered during implementation and usage, such as inadequate material resources in municipalities, lack of professional technology training, and poor internet connectivity [ 22 , 23 , 24 ].

Considering the myriad software product quality assessment models available, this review will utilize ISO/IEC 25010:2011 as its theoretical foundation. This model provides precise definitions of the attributes that must be evaluated. It is crucial to note that this international standard underwent rigorous evaluation by numerous international organizations before publication, reinforcing its suitability for assessing software product quality.

The literature map derived from this review will provide crucial insights into user evaluations of the PEC. Through these insights, it will be possible to identify the strengths and weaknesses of this software product. This knowledge will empower those responsible for developing and implementing this system to make significant improvements, thereby ensuring a substantial return on investment.

Availability of data and materials

Not applicable.

Avila GS, Cavalcante RB, Almeida NG, Gontijo TL, DE Souza Barbosa S, Brito MJ. Diffusion of the Electronic Citizen's Record in Family Health Teams. REME-Revista Mineira de Enfermagem. 2021. 25(1). https://periodicos.ufmg.br/index.php/reme/article/view/44494 .

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.


Barros RD, Aquino R, Souza LE. Evolution of the structure and results of Primary Health Care in Brazil between 2008 and 2019. Cien Saude Colet. 2022;27:4289–301. https://doi.org/10.1590/1413-812320222711.02272022EN .


Brasil - Ministério da Saúde. Subchefia para Assuntos Jurídicos. Lei nº 8.080, de 19 de setembro de 1990: Lei Orgânica da Saúde. Dispõe sobre as condições para a promoção, proteção e recuperação da saúde, a organização e o funcionamento dos serviços correspondentes e dá outras providências. Brasília, 1990. http://www.planalto.gov.br/ccivil_03/leis/L8080.htm .

Sousa AN, Shimizu HE. Integrality and comprehensiveness of service provision in Primary Health Care in Brazil (2012-2018). Revista Brasileira de Enfermagem. 2021 https://doi.org/10.1590/0034-7167-2020-0500

Tasca R, Massuda A, Carvalho WM, Buchweitz C, Harzheim E. Recommendations to strengthen primary health care in Brazil. Revista Panamericana de Salud Pública. 2020 https://doi.org/10.37774/9789275726426 .

Brasil - Ministério da Saúde. Portaria nº 2.436, de 21 de setembro de 2017. Aprova a Política Nacional de Atenção Básica, estabelecendo a revisão de diretrizes para a organização da Atenção Básica, no âmbito do Sistema Único de Saúde (SUS). Diário Oficial da União; 2017. https://bvsms.saude.gov.br/bvs/saudelegis/gm/2017/prt2436_22_09_2017.html .

Mendonça MH, Matta GC, Gondim R, Giovanella L. Atenção primária à saúde no Brasil: conceitos, práticas e pesquisa. SciELO-Editora Fiocruz; 2018.

Mendes, E. V. As redes de atenção à saúde. Brasília: Organização Pan-Americana da Saúde, 2011. 549p. http://bvsms.saude.gov.br/bvs/publicacoes/redes_de_atencao_saude.pdf .

Brasil - Ministério da Saúde. Secretaria de Vigilância em Saúde. Guia de vigilância epidemiológica. 6 ed. Brasília, 2005. http://bvsms.saude.gov.br/bvs/publicacoes/Guia_Vig_Epid_novo2.pdf .

Mota E, Carvalho D. Sistemas de informação em saúde. Epidemiología & saúde. Rio de Janeiro: Editora Médica e Científica (MEDSI). 2003.

Brasil - Ministério da Saúde. Secretaria de Atenção à Saúde. Departamento de Atenção Básica. Diretrizes Nacionais de Implantação da Estratégia e-SUS AB. Brasília, 2014. http://bvsms.saude.gov.br/bvs/publicacoes/diretrizes_nacionais_implantacao_estrategia_esus.pdf .

Gaete RAC, Leite TA. Estratégia e-SUS Atenção Básica: o processo de reestruturação do sistema de informação da atenção básica. In: Congresso Brasileiro em Informática em Saúde – CBIS, 14, 2014, Santos. [s.n.], 2014.

Brasil. Nota Técnica 07/2013: Estratégia e-SUS Atenção Básica e Sistema de Informação em Saúde da Atenção Básica-SISAB. 2013. Disponível em: https://www.conass.org.br/biblioteca/wp-content/uploads/2013/01/NT-07-2013-e-SUS-e-SISAB.pdf .

ISO/IEC 25010:2011: Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — System and software quality models. Geneva, Switzerland.: ISO Copyright Office, 2011.

Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: Toward a unified view. MIS Quarterly, 2003. 27(3), 425–478. https://doi.org/10.2307/30036540

Pinho C, Franco M, Mendes L. Web portals as tools to support information management in higher education institutions: A systematic literature review. International Journal of Information Management, 2018. 41, 80–92. https://doi.org/10.1016/j.ijinfomgt.2018.04.002

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA, Group, P.-P. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1–9.


Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, Moher D, Peters MD, Horsley T, Weeks L. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

Pawson R, & Bellamy JL. Realist synthesis: an explanatory focus for systematic review. In: POPAY, J. (Ed.). Moving beyond effectiveness in evidence synthesis: Methodological issues in the synthesis of diverse sources of evidence, 2006. 83–94.

Cielo AC, Raiol T, Silva EN, Barreto JO. Implementation of the e-SUS Primary Care Strategy: an analysis based on official data. Revista de Saúde Pública. 2022 https://doi.org/10.11606/s1518-8787.2022056003405 .

Gontijo TL, Lima PKM, Guimarães EAA, Oliveira VC, Quites HFO, Belo VS, et al. Computerization of primary health care: the manager as a change agent. Rev Bras Enferm. 2021;74(2):e20180855. https://doi.org/10.1590/0034-7167-2018-0855 .

Santos LPR, Pereira AG, Graever L, Guimarães RM. e-SUS AB na cidade do Rio de Janeiro: projeto e implantação do sistema de informação em saúde. Cad saúde colet. 2021. 29(spe):199–204. https://doi.org/10.1590/1414-462X202199010232 .

Zacharias FC, Schönholzer TE, Oliveira VC, Gaete RA, Perez G, Fabriz LA, Amaral GG, Pinto IC. Primary Healthcare e-SUS: determinant attributes for the adoption and use of a technological innovation. Cad Saude Publica. 2021;37:e00219520. https://doi.org/10.1590/0102-311X00219520 .


Acknowledgements

The authors would like to thank Mrs. Karyn Munik Lehmkuhl for her support with the search strategies.

This study will be financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior [Coordination for the Improvement of Higher Education Personnel] – Brazil (CAPES) – Finance Code 001, and by the Brazilian Ministry of Health (e-SUS PHC Project Stage 6). RSW and EMD are productivity fellows in technological development and innovative extension of CNPq.

Author information

Authors and Affiliations

Bridge Laboratory, Federal University of Santa Catarina, Florianópolis, Brazil

Mariano Felisberto, Júlia Meller Dias de Oliveira, Eduarda Talita Bramorski Mohr, Daniel Henrique Scandolara, Ianka Cristina Celuppi, Miliane dos Santos Fantonelli, Raul Sidnei Wazlawick & Eduardo Monguilhott Dalmarco

Graduate Program in Pharmacy, Federal University of Santa Catarina, Florianópolis, Brazil

Mariano Felisberto & Eduarda Talita Bramorski Mohr

Graduate Program in Dentistry, Federal University of Santa Catarina, Florianópolis, Brazil

Júlia Meller Dias de Oliveira

Graduate Program in Engineering, Management, and Knowledge Media, Federal University of Santa Catarina, Florianópolis, Brazil

Daniel Henrique Scandolara

Department of Nursing, Federal University of Santa Catarina, Florianópolis, Brazil

Ianka Cristina Celuppi

Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil

Raul Sidnei Wazlawick

Department of Clinical Analysis, Federal University of Santa Catarina, Florianópolis, Brazil

Eduardo Monguilhott Dalmarco

Contributions

All co-authors constructed, read, and approved the final manuscript.

Corresponding author

Correspondence to Eduardo Monguilhott Dalmarco.

Ethics declarations

Ethics approval and consent to participate, consent for publication, and competing interests

The authors declare the presence of financial and political conflicts of interest related to the content of this study protocol. The Laboratório Bridge is involved in the development and maintenance of the PEC e-SUS APS, a health information system, in collaboration with the Brazilian Ministry of Health. This collaboration entails financial and political agreements, as the Ministry of Health is a governmental institution responsible for public health in Brazil. We acknowledge that these conflicts may influence our research and analysis. However, we are committed to reporting the results impartially and transparently in the future, following the ethical and editorial guidelines of the international scientific journal in which this article will be published. Our financial and political interests will not compromise the integrity of the research or the objectivity in presenting the results.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article

Felisberto, M., de Oliveira, J.M.D., Mohr, E.T.B. et al. Mapping the evaluation of the electronic health system PEC e-SUS APS in Brazil: a scoping review protocol. Syst Rev 13, 237 (2024). https://doi.org/10.1186/s13643-024-02648-4

Received: 09 April 2024

Accepted: 23 August 2024

Published: 18 September 2024

DOI: https://doi.org/10.1186/s13643-024-02648-4

  • Electronic health records
  • Management in health
  • Primary health care
  • Health information systems
  • Diffusion of innovation
  • Scoping review

Systematic Reviews

ISSN: 2046-4053

E-participation in policy-making for health: a scoping review protocol

BMJ Open, Volume 14, Issue 9

  • Hamid Esmailzadeh 1,2,
  • Shiva Mafimoradi 3,
  • Masoumeh Gholami 4,
  • Mohammad Javad Mansourzadeh 5, http://orcid.org/0000-0002-0666-7928,
  • Fatemeh Rajabi 6
  • 1 Health Information Management Research Center, Tehran University of Medical Sciences, Tehran, Iran (the Islamic Republic of)
  • 2 University Research and Development Center, Tehran University of Medical Sciences, Tehran, Iran (the Islamic Republic of)
  • 3 Secretariat of Supreme Council of Health and Food Security, Iran Ministry of Health and Medical Education, Tehran, Iran (the Islamic Republic of)
  • 4 School of Public Health, Tehran University of Medical Sciences, Tehran, Iran (the Islamic Republic of)
  • 5 Osteoporosis Research Center, Endocrinology and Metabolism Clinical Sciences Institute, Tehran University of Medical Sciences, Tehran, Iran (the Islamic Republic of)
  • 6 Community Based Participatory Research Center, Tehran University of Medical Sciences, Tehran, Iran (the Islamic Republic of)
  • Correspondence to Dr Shiva Mafimoradi; mafimoradis@yahoo.com

Introduction For the general public, e-participation represents a potential solution to the challenges associated with in-person participation in health policy-making processes. By fostering democratic engagement, e-participation can enhance civic legitimacy and trust in public institutions. However, despite its importance, there is currently a gap in the literature regarding a comprehensive synthesis of studies on various aspects of e-participation in the health policy domain. These aspects include levels of participation, underlying mechanisms, barriers, facilitators, values and outcomes. To address this gap, our proposed scoping review aims to systematically investigate and classify the available literature related to e-participation in policy-making for health.

Methods and analysis We will employ the Population, Concept and Context framework, following the scoping review approach of Arksey and O’Malley (2005). Our population of interest will consist of participants involved in policy-making for health, including both government organisers of e-participation and participating citizens (the governed). To identify relevant studies, we will systematically search databases such as CINAHL (EBSCO), Academic Search Premier (EBSCO), Social Services Abstracts (ProQuest), Scopus (Elsevier), EMBASE (Elsevier), The Cochrane Database of Systematic Reviews, Campbell Collaboration, JBI Evidence Synthesis and PubMed using a predefined search strategy. Two independent reviewers will conduct a three-tiered screening process for identified articles, with a third reviewer resolving any discrepancies. Data extraction will follow a predefined yet flexible form. The results will be summarised in a narrative format, presented either in tabular or diagrammatic form.

Ethics and dissemination The National Institute of Health Research of the Islamic Republic of Iran’s ethics committee has approved this review study. Our findings will be disseminated through peer-reviewed publications, conference presentations and targeted knowledge-sharing sessions with relevant stakeholders.

  • Health policy
  • Social Interaction
  • Clinical governance
  • PUBLIC HEALTH

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjopen-2023-080538

STRENGTHS AND LIMITATIONS OF THIS STUDY

A robust design has been established for this protocol, incorporating a well-established review framework. The design includes a comprehensive search strategy developed in consultation with an information specialist, and an inductive approach to data collection and charting.

In the context of this study, knowledge users—comprising representatives from public administrations, health policy-making bodies and community-based organisations—will actively participate. Their engagement aims to significantly enhance the scoping review processes and contribute to more robust outcomes.

This review will not incorporate any quality assessment or grading of evidence, as it falls outside the chosen methodology.

The review will be restricted to English-language publications, potentially resulting in relevant studies being overlooked due to language bias.

The inclusion criteria will focus solely on peer-reviewed published literature and international reports, which may limit the comprehensiveness of our findings.

Introduction

Public participation is a concept frequently associated with democratic ideals and empowerment. Within the context of health policy, it represents a deliberate process through which governments actively seek input from the public. This engagement aims to gather diverse perspectives on decisions related to health policy formulation. For civil society, the primary objective of public participation is to hold the government accountable for fulfilling its obligations toward the population. Conversely, governments view participation as a means to enhance stakeholder ownership and responsiveness, particularly during critical situations such as the COVID-19 pandemic. 1 2

Public participation in the policy-making process occurs across multiple levels, each corresponding to the degree of citizens’ influence in shaping the final outcomes. These levels encompass manipulation, therapy, informing, consultation, placation, partnership, delegated power and citizen control. 3 The nature, mechanisms and purpose of participation vary significantly among these levels. At the passive end of the spectrum, citizens provide input through mechanisms 2 such as in-person forums, open-for-all consultations, policy dialogues, focus groups and citizen panels. 4 Moving towards more active engagement, citizens participate directly in decision-making processes 5 6 —for instance, through health councils, assemblies or representation on steering committees. 1 The International Association for Public Participation (IAPP) outlines a five-step framework for participation: information sharing, consultation, collaboration, involvement and empowerment. 7

In the context of participation dynamics, manipulation and therapy (often referred to as non-participation) are employed as substitutes for authentic engagement. Rather than facilitating genuine involvement in programme planning and execution, the primary aim is to empower those in positions of authority to educate or address the needs of participants. As the process unfolds, information sharing and consultation may reach levels of tokenism, granting individuals a nominal voice while lacking the influence necessary to sway decision-makers. Placation represents a more advanced form of tokenism, wherein individuals are allowed to provide advice but ultimate decision-making authority remains with those in power. In contrast, partnership models enable negotiation and trade-offs between participants and decision-makers. Finally, in systems of delegated power and citizen control, citizens hold the majority of decision-making seats or even full managerial authority. 3 8

In the realm of health policy, public participation plays a pivotal role. Its significance has drawn the attention of health planners, policy-makers and activists, gaining prominence in mainstream health discourse following the 1978 Alma Ata declaration. 9 Originally introduced as an international mandate within the context of primary healthcare, public participation has evolved. 10 Civil society organisations and community engagement are now recognised as potent tools for enhancing health services worldwide. 9 Notably, community engagement interventions, as highlighted by O’Mara-Eves et al , yield positive effects across a spectrum of health outcomes—ranging from health behaviours and consequences to self-efficacy, perceived social support and community well-being. 11 These impacts are particularly pronounced at local levels and among marginalised populations. 12

Public e-participation, a subset of broader public participation, emerged as a distinct field of study and practice during the early stages of e-government transformation. It refers to the process of involving the public in (health) policy formulation, decision-making and service design and delivery through information and communication technologies (ICTs) or digital media. The overarching goal is to create a participatory, inclusive and deliberative environment. 1 7 Notably, the adoption of public e-participation has followed different trajectories in developing vs developed countries. By 2005, the term had gained widespread usage, and numerous public e-participation initiatives—such as multifunction local e-participation platforms, I Paid A Bribe, Change.org, and citizens-to-citizens platforms—in different sectors including health had been established worldwide. These initiatives leverage various technologies, including geographic information systems integrated with web or mobile functions, as well as gamification strategies. 13 14

Despite the rapid proliferation of online service and participation platforms, the demand for public e-participation among citizens exhibits considerable variability across contexts, including countries and sectors. This variation is influenced by the institutional features of nation-states, particularly the nature of the relationship between society and the state. Such features significantly impact the policy process and outcomes, affecting the scope and quality of public participation. In general, optimistic expectations placed on public e-participation two decades ago have not been fully realised, even within the health sector. 7 15 16

Several challenges and barriers must be addressed by governments and health authorities seeking to promote public e-participation. These include token participation, the capture of participatory processes by elites and the lack of voice for marginalised groups. Additionally, ‘participation fatigue’ arises from the proliferation of participatory processes that yield little meaningful impact. The high but low-visibility costs of maintaining participation processes, coupled with insufficient resources, further complicate efforts. There is also a lack of capacity within public administration to effectively manage these processes, exacerbated by the digital divide. Clear objectives for e-participation are often absent, and there is inconsistency in stakeholders’ expectations and motivations to participate. Trust issues in government, the internet and participation platforms are prevalent, as is a lack of transparency regarding the relationship between participation mechanisms and the policy-making process. Furthermore, there is often a lack of systematic evaluation, insufficient attention to the legal and regulatory framework and a failure to understand the values of public administration and the political system within a country. 1 7 10 15

While public e-participation platforms leveraging new technologies have proliferated globally since 2000, it remains unclear whether this multiplication has translated into broader or deeper citizen engagement. 3 Existing review studies suggest that electronic platforms designed to support (health) policy-making have often fallen short of achieving meaningful public participation. 17 18

Furthermore, measuring the benefits of e-participation is challenging due to unclear objectives and hard-to-measure outcomes related to citizen education, increased civic engagement and trust in public institutions. 1 7 19 This lack of information hinders a comprehensive understanding of the conditions under which increased investment in specific participatory mechanisms makes sense for governments. 7

Overall, public e-participation research spans multiple disciplines, including public administration, organisation studies, communication and media studies, political science, and information systems research. 19

Despite the existence of various synthesised evidence articles on e-services, such as consultation, health assessment and triage, as well as e-participation in policy and decision-making—including systematic reviews on the diffusion of e-participation in public administrations, 19 the challenges of social media for citizen e-participation, 20 participation tools in urban design, 21 barriers and facilitators of e-consultation/services in healthcare, 22–25 digital shared decision-making in healthcare, 26 systematic mapping on the gamification of e-participation 27 and scoping reviews on patient engagement activities during the COVID-19 pandemic 28 —there is no comprehensive review that investigates and scopes the body of literature on e-participation in policy-making for health by charting and classifying the available literature. Indeed, with the exception of synthesis reviews on e-participation in non-health fields, the synthesis reviews on e-participation in the health field have primarily focused on e-services and patient involvement in treatment plan design. Consequently, it remains challenging for researchers and policy-makers to fully comprehend the current body of knowledge on e-participation in policy-making for health, beyond just e-services.

Given the ambiguous and inconsistent information on various aspects of e-participation, particularly its benefits, costs and outcomes—largely due to the ‘deliberation-to-policy gap’ in the health sector 4 —and the multidisciplinary and fragmented nature of its research, 19 there is insufficient rigorous evidence to offer health authorities practical recommendations. This is especially true regarding the use of e-participation with a focus on specific communities, participation mechanisms and models of engagement.

In this context, our scoping review is both timely and essential, aiming to provide a comprehensive overview of the current status of the field across various levels and approaches. Specifically, we seek to elucidate the stages of policy-making that have been studied, including agenda setting, formulation, adoption, implementation, monitoring and evaluation. Additionally, we aim to clarify the levels of policy-making, encompassing microlevel, mesolevel and macrolevel. In detailing these stages, we also intend to identify the e-mechanisms through which public participation has occurred, ranging from in-person to group-based and representative methods.

As previously described, the demand for public e-participation from citizens appears to be highly variable and less optimistic than anticipated. Additionally, the outcomes and values of public e-participation, particularly direct forms, have not received sufficient attention due to unclear objectives and performance indicators. This scoping review aims to identify the challenges, barriers, facilitators and the direct outcomes and values of public e-participation in the health sector.

In addressing these questions, this scoping review ultimately aims to provide a concept map of public e-participation in policy-making for health. The goal is to guide future national and international research on public e-participation, particularly in the context of developing countries where the diffusion of e-participation in policy-making for health is still in its infancy compared with developed nations.

A preliminary search of Google Scholar, PubMed, PROSPERO, the Cochrane Database of Systematic Reviews and the JBI Evidence Synthesis did not reveal any published or ongoing scoping reviews that address this topic in the manner proposed by our current protocol.

Methods and analysis

Review question(s)

What is the current state of knowledge in the literature regarding public e-participation in policy-making for health, particularly concerning the stakeholders involved (including policy-makers, organisers and the general population)?

Subquestions include:

How is public e-participation defined in the literature within the context of policy-making for health?

What levels of public e-participation (information, consultation, collaboration, involvement and empowerment) have been examined in the context of policy-making for health?

Which ICT-based participatory spaces and mechanisms are used for public e-participation in policy-making for health, and for what purposes and approaches?

At which stages (agenda setting, formulation, adoption, implementation, monitoring and evaluation) and levels of analysis (micro, meso and macro) has public e-participation in policy-making for health been most frequently employed?

What barriers and facilitators to public e-participation are reported in the policy-making for health literature?

What benefits and costs (values) of public e-participation are documented in the policy-making for health literature?

What outcomes (eg, quality of policies or decisions made, improvements in public service quality) of public e-participation are reported in the policy-making for health literature and based on what criteria are these successes measured?

Inclusion criteria

The inclusion criteria, developed using the Participants, Concept and Context framework, 29 are outlined below:

Participants

This review will consider studies examining participants involved in policy-making for health, either as organisers of e-participation (government) or as participating citizens (the governed). Relevant participants include civil society, societal organisations, non-governmental organisations (NGOs), communities, community-based organisations (CBOs), vulnerable groups, patients, politicians, public administrations (including but not limited to parliaments, the cabinet, supreme councils and commissions), healthcare professionals, bureaucrats and civil servants.

Concept

The concept to be explored in this mapping activity is public e-participation in policy-making for health. Generally, public e-participation is a social activity mediated by ICT, involving interaction or informed dialogue between citizens, public administration, and politicians in policy-making, or even in service design and delivery. This process encourages participants to share ideas or options and engage in collaborative policy-making. 7 This will serve as our working definition. Public participation is typically described as a spectrum, and we have chosen to use the IAPP framework (inform, consult, involve, collaborate and empower) to determine what qualifies as a public e-participation study. Although the literature presents various levels of public involvement, it is not guaranteed that these levels are applied consistently across different studies. Consequently, a study’s use of the term public e-participation can refer to a range of ICT-based initiatives with diverse purposes. This scoping review aims to clarify what is being studied in the public e-participation literature within the context of policy-making for health.

By ICT, we refer to any communication device, including but not limited to radio, television, cell phones, computer and network hardware, emails, robots, social media, and satellite systems, as well as the various services and applications associated with them, such as video conferencing and distance learning. This scoping review aims to identify the range of ICT-based participatory mechanisms within the field of policy-making for health.

The concept of policy-making for health also requires a clear definition and common language for this scoping review. For the purposes of this review, we will define public policy as a web of decisions, plans, actions or practices adopted and pursued by a government, party, ruler or statesman to achieve specific health or health-related goals within a society. We will define health according to the WHO as ‘a state of complete physical, mental and social well-being, rather than merely the absence of disease or infirmity’. To delineate the scope of our term ‘health’, we will encompass all types of health policies, including public health, mental health and healthcare while excluding aspects of medical care that focus on individual outcomes and the patient–physician relationship. We will include studies in health public policy at any stage of the policy-making cycle, commonly described as agenda setting, formulation, adoption, implementation, monitoring and evaluation. Another important aspect related to this concept is the level of policy-making for health, which in this review includes three specific levels: micro (front-line clinician), meso (regional, eg, district/county, or institutional, eg, hospital) and macro (national).

For the purposes of this review, a barrier to policy-making for health will be defined as any factor that might impede the formation of participation among politicians, public administrations and citizens. Conversely, facilitators are any factors that might enhance participation or aid in the distribution of power.

The concept of the value of public e-participation will be defined as the overall costs and benefits to the organisers of any initiative aimed at involving people in policy-making for health.

Finally, the ambiguous concept of public e-participation outcomes, which addresses the deliberation-to-policy gap, will be defined as the extent to which the desired goals of public e-participation organisers are realised once the participatory mechanisms are concluded.

Context

This review aims to capture public e-participation in policy-making for health across various participatory spaces where health policies and decisions are made with public involvement. By participatory spaces, we refer to any physical or virtual venues where individuals come together to interact. 4 In these spaces, organisers employ various mechanisms, including ICT-based ones, to engage the public.

For the purposes of this review, we will consider a broad array of participatory spaces, regardless of the policy issue or intervention level. These include national sectoral or intersectoral councils, committees, workgroups, parliaments, cabinets, government technical commissions, health ministry managerial councils, health ministry technical deputies, national or provincial health assemblies, policy networks or communities, health CBOs and other existing or designed venues for policy-making for health. These spaces may aim to attract public participation or the participation of all key stakeholders, including the public.

Additionally, we will include all countries with various power structures (democratic, monarchical and autocratic regimes) where public e-participation has taken place. However, we will exclude places where healthcare occurs, such as acute care hospitals, urgent care centres, rehabilitation centres, nursing homes and other long-term care facilities.

Types of sources

This scoping review will consider peer-reviewed academic journal articles, excluding opinion pieces, and will include studies employing qualitative, quantitative and mixed methods of data collection. Additionally, grey literature will be limited to reports from international organisations. Table 1 outlines our inclusion and exclusion criteria.

Table 1 Screening inclusion and exclusion criteria

Patient and public involvement

To enhance the conceptualisation of this review and actively involve knowledge users, we established a consultative committee. Comprising nine individuals from public administrations, two representatives from health policy-making bodies and two members from CBOs, this committee played a crucial role in shaping the review’s purpose and research questions. Their input was instrumental in refining and approving the review protocol, ensuring alignment with their specific needs and concerns. Throughout the review process, the committee will continue to be consulted by the reviewers. The practice and impact of these consultation exercises, which challenge conventional perspectives and existing knowledge, will be thoroughly documented in the final scoping review. The details of the committee members are provided in table 2.

Table 2 Summary of stakeholder consultants

Search strategy

The search strategy aims to identify peer-reviewed sources. In collaboration with a research librarian (MJM), we conducted an initial search of databases including Embase (Elsevier), PubMed, Scopus (Elsevier) and Web of Science (Clarivate) to develop the search strategy. We (SM and FR) used primary keywords related to our population, context and relevant concepts. Subsequently, we compiled a list of text words found in the titles and abstracts of relevant articles, along with index terms describing those words. This compilation formed the basis for our comprehensive search strategy (see online supplemental file 1: search strategy). After finalising the search strategy (SM and MG), we (MG) subjected it to a Peer Review of Electronic Search Strategy (PRESS) (see online supplemental file 2: PRESS checklist) before adapting it for each relevant database and information source. Additionally, we will screen the reference lists of selected articles for further relevant papers. The study reviewers will contact study authors if information relevant to our planned data extraction is missing. If necessary, we will also search Google and Google Scholar to identify reports produced by international organisations such as the WHO, NGOs and industry.
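
The block-building logic behind such a strategy (synonyms OR-ed within each concept, concepts AND-ed together) can be sketched as follows; the concept groups and terms here are illustrative placeholders, not the protocol's actual strategy, which is provided in online supplemental file 1:

```python
# Illustrative only: the protocol's real strategy is in online supplemental
# file 1; the concept groups and terms below are hypothetical placeholders.

concept_blocks = {
    "e-participation": ["e-participation", "electronic participation",
                        "digital participation", "online participation"],
    "health policy": ["health policy", "policy-making", "policy making",
                      "health decision-making"],
}

def build_query(blocks):
    """OR the synonyms within each concept, then AND the concepts together."""
    ors = [" OR ".join(f'"{t}"' for t in terms) for terms in blocks.values()]
    return " AND ".join(f"({o})" for o in ors)

print(build_query(concept_blocks))
```

The resulting string would still need per-database adaptation (field tags, truncation, controlled vocabulary), which is what adapting the strategy "for each relevant database and information source" entails.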

Given the absence of translation services, our study will exclusively incorporate research published in English. However, to mitigate language bias, our initial search will encompass articles written in any language. This approach allows us to assess the extent of non-English literature excluded from our analysis. Furthermore, we will include studies published from 2000 up to the search date.

Study/source of evidence selection

All identified records will be compiled and uploaded into the Zotero reference manager ( https://www.zotero.org/ ). Duplicate entries will be systematically removed. Subsequently, two primary reviewers will independently assess all articles across three stages: title screening, abstract screening and full-text screening. We will apply predefined inclusion criteria to identify potentially relevant papers. These relevant papers will be retrieved in full and imported into EndNote. The full text of selected citations will undergo a detailed assessment against the inclusion criteria by two independent reviewers. Any sources that do not meet the inclusion criteria at the full-text stage will be excluded, and the reasons for exclusion will be documented and reported in the scoping review. In cases of disagreement between the reviewers (SM and NR) during the selection process, resolution will occur through discussion or consultation with a third reviewer (HE or FR). Articles meeting the following criteria will be included in our analysis: relevance to e-participation in policy-making for health, association with a full-text peer-reviewed study (excluding abstract-only search results and opinions), and publication in English from 2000 onward.
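
The duplicate-removal step described above can be sketched roughly as follows; the record structure and field names are assumptions for illustration, since real exports from Zotero or the individual databases differ:

```python
import re

def norm_title(title):
    """Normalise a title so punctuation and case differences do not hide duplicates."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Drop records whose DOI or normalised title has already been seen."""
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        title = norm_title(rec.get("title", ""))
        if (doi and doi in seen_dois) or (title and title in seen_titles):
            continue  # duplicate entry
        if doi:
            seen_dois.add(doi)
        if title:
            seen_titles.add(title)
        unique.append(rec)
    return unique

records = [
    {"title": "A Study of E-Participation", "doi": "10.1/x"},
    {"title": "A study of e-participation.", "doi": ""},  # same paper, no DOI
    {"title": "Another Paper", "doi": "10.1/y"},
]
print(len(deduplicate(records)))  # 2 unique records
```

In practice, reference managers apply fuzzier matching (authors, year, pagination); this sketch only shows the shape of the step.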

Our search will focus on studies published from the year 2000 onward, aligning with the proliferation of e-participation platforms that use new technologies. This trend emerged in developed countries during the first decade of the 2000s and in developing countries over the past 10 years. 7

The comprehensive results of our search and the study inclusion process will be fully documented in the final scoping review. Additionally, we will present these findings using a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram. 30 Throughout the review process, we will adhere to the guidance provided by the PRISMA extension for Scoping Reviews. 31
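The PRISMA flow-diagram documentation reduces to simple arithmetic over the screening stages. All counts below are placeholders, since the review has not yet been conducted.

```python
# Sketch: PRISMA flow-diagram bookkeeping with placeholder counts.
identified = 1200                                        # records from all databases
after_dedup = identified - 300                           # duplicates removed
excluded_screening = 750                                 # excluded at title/abstract stage
full_text_assessed = after_dedup - excluded_screening    # retrieved in full
excluded_full_text = 120                                 # excluded with documented reasons
included = full_text_assessed - excluded_full_text       # studies in the review

print(f"Included in review: {included}")  # → Included in review: 30
```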

Data extraction

Data extraction from materials and papers included in the scoping review will be conducted by two independent reviewers (SM and NR). They will use a data extraction form in Microsoft Excel, employing an inductive approach (see online supplemental file 3: draft of data extraction form). Initially, the two reviewers will pilot the form on the first 10 papers. Subsequently, a consultative committee discussion will address any issues, and the form will be revised iteratively during data extraction for each included paper. Detailed modifications will be documented in the full scoping review.

The extracted data will encompass specific details related to the population, e-participation definition, levels, mechanisms, barriers, facilitators, values and outcomes, as well as contextual information. Additionally, we will collect author details, publication year, publication type, country of origin, study objectives, methodology and key findings. In cases of reviewer disagreements, resolution will occur through discussion or consultation with a third reviewer (HE or FR). Furthermore, we will proactively contact authors to request any missing or additional data, as needed.
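A minimal extraction record covering the fields listed above might look like the following sketch. The field names and example values are illustrative; the actual form is the draft in online supplemental file 3.

```python
# Sketch: one row of a data-extraction form written to CSV with the standard
# library. Field names and values are hypothetical placeholders.
import csv, io

FIELDS = ["author", "year", "country", "publication_type", "objectives",
          "methodology", "e_participation_level", "mechanisms",
          "barriers", "facilitators", "outcomes", "key_findings"]

row = {f: "" for f in FIELDS}                       # blank record
row.update({"author": "Doe et al.", "year": "2015", "country": "Example"})

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```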

Data analysis and presentation

The analysis for this scoping review will primarily adopt a summative approach, focusing on the data extracted from the literature. We will present the extracted data in diagrammatic or tabular formats, aligning with the scoping review’s objectives and research questions. Additionally, a descriptive summary will accompany the tabulated and/or charted results, providing context on how these findings relate to the review’s objectives.
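A summative tabulation of the extracted data, ahead of charting, might be sketched as follows; the rows and the "mechanism" field are hypothetical examples of the counting described above.

```python
# Sketch: counting included studies per e-participation mechanism before
# presenting the results in tabular or diagrammatic form. Rows are hypothetical.
from collections import Counter

extracted = [
    {"study": "A", "mechanism": "online consultation"},
    {"study": "B", "mechanism": "e-petition"},
    {"study": "C", "mechanism": "online consultation"},
]

counts = Counter(row["mechanism"] for row in extracted)
for mechanism, n in counts.most_common():
    print(f"{mechanism}: {n}")  # → online consultation: 2 / e-petition: 1
```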

To disseminate the review results, we plan to engage with various key stakeholders through workshops, reports and academic publications. Prior to submitting the final review, we will seek input from members of the consultative committee.

Ethics and dissemination

The National Institute of Health Research of the Islamic Republic of Iran ethics committee approved this review study (ethics code number IR.TUMS.NIHR.REC.1400.019). We will disseminate the findings through peer-reviewed publications, conference presentations and practical recommendations to relevant knowledge users (eg, politicians, policy-makers and managers) at general or private meetings at the national level. Additionally, we will integrate the findings into the future research plans of the relevant research centres at TUMS to guide future research endeavours.

Ethics statements

Patient consent for publication.

Not applicable.

Acknowledgments

This review is the protocol of a research study entitled 'Investigating the application of information and communication technology to involve people in the policy-making for health: a scoping review', funded and supported by Tehran University of Medical Sciences grant No 1400-3-126-56722 with ethics code No. IR.TUMS.NIHR.REC.1400.019.

We thank our consultative committee members, Dr Reza Majdzadeh, Dr Habibullah Farid, Dr Ali Akhavan, Dr Azadeh Sayarifard, Dr Maryam Rahbari, Dr Narges Rostamigooran, Dr Mostafa Rezaee, Dr Bohlol Rahimi, Dr Ahmad Rezaee, Dr Hossein Bozarjomehri, Dr Saeed Harasani, Dr S. Mahdi Shariatzadeh and Dr Davoud Pirani, for their contributions, which greatly improved the protocol. We would also like to express our gratitude to Dr Mohamad-Ismaeel Motlagh, head of the Secretariat of the Supreme Council of Health and Food Security, for his support during this study.


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

Contributors SM and FR co-conducted the primary search. SM, MG and MJM codeveloped the search strategy. MJM and MG facilitated the PRESS for the search strategy. SM wrote the manuscript. SM and HE created the data extraction form. HE, FR and SM scrutinised the design of the protocol with the help of consultative committee members across nine core team meetings and one consultative committee meeting. All authors provided feedback on the prefinal version of the protocol and search strategy. SM is responsible for the overall content as the guarantor.

Funding This work was supported by Tehran University of Medical Sciences (TUMS) grant number 1400-3-126-56722.

Competing interests None declared.

Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.


Breaking the cycle of pain: the role of graded motor imagery and mirror therapy in complex regional pain syndrome.


1. Introduction

  • 2.1. Review Question
  • 2.2. Eligibility Criteria
  • 2.3. Exclusion Criteria
  • 2.4. Search Strategy
  • 2.5. Study Selection
  • 2.6. Data Extraction and Data Synthesis
  • 3.1. Pain Reduction
  • 3.2. Functional Improvement
  • 3.3. Swelling Reduction
  • 4. Discussion
  • 4.1. Study Selection and Publication Year Distribution
  • 4.2. Challenges in Conducting RCTs in CRPS
  • 5. Clinical Practice Implications
  • 6. Conclusions
  • Author Contributions
  • Institutional Review Board Statement
  • Informed Consent Statement
  • Data Availability Statement
  • Conflicts of Interest

  • Michael d‘A, S.H. CRPS: What’s in a Name? Taxonomy, Epidemiology, Neurologic, Immune and Autoimmune Considerations. Reg. Anesth. Pain Med. 2019 , 44 , 376–387. [ Google Scholar ] [ CrossRef ]
  • de Mos, M.; de Bruijn, A.G.J.; Huygen, F.J.P.M.; Dieleman, J.P.; Stricker, B.H.C.; Sturkenboom, M.C.J.M. The Incidence of Complex Regional Pain Syndrome: A Population-Based Study. Pain 2007 , 129 , 12–20. [ Google Scholar ] [ CrossRef ]
  • Goebel, A. Complex Regional Pain Syndrome in Adults. Rheumatology 2011 , 50 , 1739–1750. [ Google Scholar ] [ CrossRef ]
  • de Rooij, A.M.; Perez, R.S.G.M.; Huygen, F.J.; van Eijs, F.; van Kleef, M.; Bauer, M.C.R.; van Hilten, J.J.; Marinus, J. Spontaneous Onset of Complex Regional Pain Syndrome. Eur. J. Pain 2010 , 14 , 510–513. [ Google Scholar ] [ CrossRef ]
  • Yvon, A.; Faroni, A.; Reid, A.J.; Lees, V.C. Selective Fiber Degeneration in the Peripheral Nerve of a Patient with Severe Complex Regional Pain Syndrome. Front. Neurosci. 2018 , 12 , 207. [ Google Scholar ] [ CrossRef ]
  • Lanfranchi, E.; Fairplay, T.; Tedeschi, R. A Case Report: Pain in the Hand and Tingling of the Upper Limb May Be a Symptom of a Schwannoma in the Supraclavicular Region. Int. J. Surg. Case Rep. 2023 , 110 , 108664. [ Google Scholar ] [ CrossRef ]
  • Harden, R.N.; Bruehl, S.; Stanton-Hicks, M.; Wilson, P.R. Proposed New Diagnostic Criteria for Complex Regional Pain Syndrome. Pain Med. 2007 , 8 , 326–331. [ Google Scholar ] [ CrossRef ]
  • Harden, N.R.; Bruehl, S.; Perez, R.S.G.M.; Birklein, F.; Marinus, J.; Maihofner, C.; Lubenow, T.; Buvanendran, A.; Mackey, S.; Graciosa, J.; et al. Validation of Proposed Diagnostic Criteria (the “Budapest Criteria”) for Complex Regional Pain Syndrome. Pain 2010 , 150 , 268–274. [ Google Scholar ] [ CrossRef ]
  • Galer, B.S.; Henderson, J.; Perander, J.; Jensen, M.P. Course of Symptoms and Quality of Life Measurement in Complex Regional Pain Syndrome: A Pilot Survey. J Pain Symptom Manag. 2000 , 20 , 286–292. [ Google Scholar ] [ CrossRef ]
  • Lee, J.W.; Lee, S.K.; Choy, W.S. Complex Regional Pain Syndrome Type 1: Diagnosis and Management. J. Hand Surg. Asian Pac. Vol. 2018 , 23 , 1–10. [ Google Scholar ] [ CrossRef ]
  • Lanfranchi, E.; Vandelli, S.; Boccolari, P.; Donati, D.; Platano, D.; Tedeschi, R. Efficacy and Patient Acceptability of 3 Orthosis Models for Radial Nerve Palsy. Hand Surg. Rehabil. 2024 , 43 , 101677. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Shim, H.; Rose, J.; Halle, S.; Shekane, P. Complex Regional Pain Syndrome: A Narrative Review for the Practising Clinician. Br. J. Anaesth. 2019 , 123 , e424–e433. [ Google Scholar ] [ CrossRef ]
  • Schwartzman, R.J.; Alexander, G.M.; Grothusen, J. Pathophysiology of Complex Regional Pain Syndrome. Expert. Rev. Neurother. 2006 , 6 , 669–681. [ Google Scholar ] [ CrossRef ]
  • Woolf, C.J. Central Sensitization: Implications for the Diagnosis and Treatment of Pain. Pain 2011 , 152 , S2–S15. [ Google Scholar ] [ CrossRef ]
  • Di Pietro, F.; McAuley, J.H.; Parkitny, L.; Lotze, M.; Wand, B.M.; Moseley, G.L.; Stanton, T.R. Primary Somatosensory Cortex Function in Complex Regional Pain Syndrome: A Systematic Review and Meta-Analysis. J. Pain 2013 , 14 , 1001–1018. [ Google Scholar ] [ CrossRef ]
  • Tedeschi, R.; Platano, D.; Donati, D.; Giorgi, F. Integrating the Drucebo Effect into PM&R: Enhancing Outcomes through Expectation Management. Am. J. Phys. Med. Rehabil. 2024 . [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Bruehl, S.; Harden, R.N.; Galer, B.S.; Saltz, S.; Bertram, M.; Backonja, M.; Gayles, R.; Rudin, N.; Bhugra, M.K.; Stanton-Hicks, M. External Validation of IASP Diagnostic Criteria for Complex Regional Pain Syndrome and Proposed Research Diagnostic Criteria. International Association for the Study of Pain. Pain 1999 , 81 , 147–154. [ Google Scholar ] [ CrossRef ]
  • Giostri, G.S.; Souza, C.D.A. Complex Regional Pain Syndrome. Rev. Bras. Ortop. 2024 , 59 , e497–e503. [ Google Scholar ] [ CrossRef ]
  • Hájek, M.; Chmelař, D.; Tlapák, J.; Klugar, M. The Effectiveness of Hyperbaric Oxygen Treatment in Patients with Complex Regional Pain Syndrome: A Retrospective Case Series. Int. J. Med. Sci. 2024 , 21 , 2021–2030. [ Google Scholar ] [ CrossRef ]
  • Bleckwenn, M.; Weckbecker, K. Did the taking of a blood sample cause a complex regional pain syndrome? MMW Fortschr. Med. 2024 , 166 , 48–50. [ Google Scholar ] [ CrossRef ]
  • Bussa, M.; Mascaro, A.; Cuffaro, L.; Rinaldi, S. Adult Complex Regional Pain Syndrome Type I: A Narrative Review. PMR 2017 , 9 , 707–719. [ Google Scholar ] [ CrossRef ]
  • Mailis-Gagnon, A.; Bennett, G.J. Abnormal Contralateral Pain Responses from an Intradermal Injection of Phenylephrine in a Subset of Patients with Complex Regional Pain Syndrome (CRPS). Pain 2004 , 111 , 378–384. [ Google Scholar ] [ CrossRef ]
  • Goh, E.L.; Chidambaram, S.; Ma, D. Complex Regional Pain Syndrome: A Recent Update. Burn. Trauma. 2017 , 5 , 2. [ Google Scholar ] [ CrossRef ]
  • Tedeschi, R. Reevaluating the Drucebo Effect: Implications for Physiotherapy Practice. J. Psychosoc. Rehabil. Ment. Health 2024 . [ Google Scholar ] [ CrossRef ]
  • Tedeschi, R. Effet Drucebo Dans La Fibromyalgie: Un Nouveau Paradigme En Rhumatologie. Rev. Du Rhum. 2024; in press . [ Google Scholar ] [ CrossRef ]
  • Boccolari, P.; Giurati, D.; Tedeschi, R.; Arcuri, P.; Donati, D. Tailored Rehabilitation for Interphalangeal Joint Rigidity: A Case Report on Novel Noninvasive Techniques. Man. Med. 2024 . [ Google Scholar ] [ CrossRef ]
  • Urits, I.; Shen, A.H.; Jones, M.R.; Viswanath, O.; Kaye, A.D. Complex Regional Pain Syndrome, Current Concepts and Treatment Options. Curr. Pain Headache Rep. 2018 , 22 , 10. [ Google Scholar ] [ CrossRef ]
  • Harden, R.N.; McCabe, C.S.; Goebel, A.; Massey, M.; Suvar, T.; Grieve, S.; Bruehl, S. Complex Regional Pain Syndrome: Practical Diagnostic and Treatment Guidelines, 5th Edition. Pain Med. 2022 , 23 , S1–S53. [ Google Scholar ] [ CrossRef ]
  • Bruehl, S. Complex Regional Pain Syndrome. BMJ 2015 , 351 , h2730. [ Google Scholar ] [ CrossRef ]
  • Nelson, D.V.; Stacey, B.R. Interventional Therapies in the Management of Complex Regional Pain Syndrome. Clin. J. Pain 2006 , 22 , 438–442. [ Google Scholar ] [ CrossRef ]
  • van der Spek, D.P.C.; Dirckx, M.; Mangnus, T.J.P.; Cohen, S.P.; Huygen, F.J.P.M. 10. Complex Regional Pain Syndrome. Pain Pract. 2024; early view . [ Google Scholar ] [ CrossRef ]
  • Vargas, A.J.; Elkhateb, R.; Tobey-Moore, L.; Van Hemert, R.L.; Fuccello, A.; Goree, J.H. Dorsal Root Ganglion Size in Patients With Complex Regional Pain Syndrome of the Lower Extremity: A Retrospective Pilot Study. Neuromodulation 2024 . [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Her, Y.F.; Churchill, R.A. Case Report: Rescue of Relapsed Pain in a Patient with Complex Regional Pain Syndrome Type II by Adding Another Dorsal Root Ganglion Lead. Int. Med. Case Rep. J. 2024 , 17 , 765–769. [ Google Scholar ] [ CrossRef ]
  • Priganc, V.W.; Stralka, S.W. Graded Motor Imagery. J. Hand Ther. 2011 , 24 , 164–168, quiz 169. [ Google Scholar ] [ CrossRef ]
  • Kranczioch, C.; Mathews, S.; Dean, P.J.; Sterr, A. On the Equivalence of Executed and Imagined Movements: Evidence from Lateralized Motor and Nonmotor Potentials. Hum. Brain Mapp. 2009 , 30 , 3275–3286. [ Google Scholar ] [ CrossRef ]
  • Moseley, G.L.; Zalucki, N.; Birklein, F.; Marinus, J.; van Hilten, J.J.; Luomajoki, H. Thinking about Movement Hurts: The Effect of Motor Imagery on Pain and Swelling in People with Chronic Arm Pain. Arthritis Rheum. 2008 , 59 , 623–631. [ Google Scholar ] [ CrossRef ]
  • Cacchio, A.; De Blasis, E.; De Blasis, V.; Santilli, V.; Spacca, G. Mirror Therapy in Complex Regional Pain Syndrome Type 1 of the Upper Limb in Stroke Patients. Neurorehabil. Neural Repair. 2009 , 23 , 792–799. [ Google Scholar ] [ CrossRef ]
  • Pervane Vural, S.; Nakipoglu Yuzer, G.F.; Sezgin Ozcan, D.; Demir Ozbudak, S.; Ozgirgin, N. Effects of Mirror Therapy in Stroke Patients With Complex Regional Pain Syndrome Type 1: A Randomized Controlled Study. Arch. Phys. Med. Rehabil. 2016 , 97 , 575–581. [ Google Scholar ] [ CrossRef ]
  • Sarkar, B.; Goswami, S.; Mukherjee, D.; Basu, S. Efficacy of Motor Imagery through Mirror Visual Feedback Therapy in Complex Regional Pain Syndrome: A Comparative Study. Indian. J. Pain 2017 , 31 , 164. [ Google Scholar ] [ CrossRef ]
  • Strauss, N.L.; Goldfarb, C.A. Surgical Correction of Clinodactyly: Two Straightforward Techniques. Tech. Hand Up. Extrem. Surg. 2010 , 14 , 54–57. [ Google Scholar ] [ CrossRef ]
  • Tedeschi, R.; Platano, D.; Donati, D.; Giorgi, F. Harnessing Mirror Neurons: A New Frontier in Parkinson’s Disease Rehabilitation—A Scoping Review of the Literature. J. Clin. Med. 2024 , 13 , 4539. [ Google Scholar ] [ CrossRef ]
  • Donati, D.; Vita, F.; Tedeschi, R.; Galletti, S.; Biglia, A.; Gistri, T.; Arcuri, P.; Origlio, F.; Castagnini, F.; Faldini, C.; et al. Ultrasound-Guided Infiltrative Treatment Associated with Early Rehabilitation in Adhesive Capsulitis Developed in Post-COVID-19 Syndrome. Medicina 2023 , 59 , 1211. [ Google Scholar ] [ CrossRef ]
  • McCabe, C. Mirror Visual Feedback Therapy. A Practical Approach. J. Hand Ther. 2011 , 24 , 170–178, quiz 179. [ Google Scholar ] [ CrossRef ]
  • McCabe, C.S.; Haigh, R.C.; Blake, D.R. Mirror Visual Feedback for the Treatment of Complex Regional Pain Syndrome (Type 1). Curr. Pain Headache Rep. 2008 , 12 , 103–107. [ Google Scholar ] [ CrossRef ]
  • Moseley, G.L. Graded Motor Imagery Is Effective for Long-Standing Complex Regional Pain Syndrome: A Randomised Controlled Trial. Pain 2004 , 108 , 192–198. [ Google Scholar ] [ CrossRef ]
  • Moseley, G.L. Is Successful Rehabilitation of Complex Regional Pain Syndrome Due to Sustained Attention to the Affected Limb? A Randomised Clinical Trial. Pain 2005 , 114 , 54–61. [ Google Scholar ] [ CrossRef ]
  • Moseley, G.L. Graded Motor Imagery for Pathologic Pain: A Randomized Controlled Trial. Neurology 2006 , 67 , 2129–2134. [ Google Scholar ] [ CrossRef ]
  • Tedeschi, R. Unlocking the Power of Motor Imagery: A Comprehensive Review on Its Application in Alleviating Foot Pain. Acta Neurol. Belg. 2024 . [ Google Scholar ] [ CrossRef ]
  • Peters, M.D.J.; Godfrey, C.; McInerney, P.; Munn, Z.; Tricco, A.C.; Khalil, H. Joanna Briggs Institute Reviewer’s Manual; JBI, 2020. [ Google Scholar ]
  • Tedeschi, R. An Overview and Critical Analysis of the Graston Technique for Foot-Related Conditions: A Scoping Review. Man. Med. 2024 . [ Google Scholar ] [ CrossRef ]
  • Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018 , 169 , 467–473. [ Google Scholar ] [ CrossRef ]
  • Tedeschi, R.; Giorgi, F. What Is Known about the RegentK Regenerative Treatment for Ruptured Anterior Cruciate Ligament? A Scoping Review. Man. Med. 2023 , 61 , 181–187. [ Google Scholar ] [ CrossRef ]
  • Santandrea, S.; Benassi, M.; Tedeschi, R. Comparison of Short-Stretch Bandage and Long-Stretch Bandage for Post-Traumatic Hand Edema. Int. J. Surg. Case Rep. 2023 , 111 , 108854. [ Google Scholar ] [ CrossRef ]
  • Forouzanfar, T.; Weber, W.E.J.; Kemler, M.; van Kleef, M. What Is a Meaningful Pain Reduction in Patients with Complex Regional Pain Syndrome Type 1? Clin. J. Pain 2003 , 19 , 281–285. [ Google Scholar ] [ CrossRef ]
  • Strauss, S.; Barby, S.; Härtner, J.; Pfannmöller, J.P.; Neumann, N.; Moseley, G.L.; Lotze, M. Graded Motor Imagery Modifies Movement Pain, Cortical Excitability and Sensorimotor Function in Complex Regional Pain Syndrome. Brain Commun. 2021 , 3 , fcab216. [ Google Scholar ] [ CrossRef ]
  • Kindl, G.-K.; Reinhold, A.-K.; Escolano-Lozano, F.; Degenbeck, J.; Birklein, F.; Rittner, H.L.; Teichmüller, K. Monitoring Everyday Upper Extremity Function in Patients with Complex Regional Pain Syndrome: A Secondary, Retrospective Analysis from ncRNAPain. Pain Res. Manag. 2024 , 2024 , 9993438. [ Google Scholar ] [ CrossRef ]
  • Candan, B.; Gungor, S. Temperature Difference between the Affected and Unaffected Limbs in Complex Regional Pain Syndrome. Pain Manag. 2024 , 14 , 293–303. [ Google Scholar ] [ CrossRef ]
  • Turner, J.A.; Loeser, J.D.; Deyo, R.A.; Sanders, S.B. Spinal Cord Stimulation for Patients with Failed Back Surgery Syndrome or Complex Regional Pain Syndrome: A Systematic Review of Effectiveness and Complications. Pain 2004 , 108 , 137–147. [ Google Scholar ] [ CrossRef ]
  • Kemler, M.A.; Barendse, G.A.; van Kleef, M.; de Vet, H.C.; Rijks, C.P.; Furnée, C.A.; van den Wildenberg, F.A. Spinal Cord Stimulation in Patients with Chronic Reflex Sympathetic Dystrophy. N. Engl. J. Med. 2000 , 343 , 618–624. [ Google Scholar ] [ CrossRef ]
  • Atalay, N.S.; Ercidogan, O.; Akkaya, N.; Sahin, F. Prednisolone in Complex Regional Pain Syndrome. Pain Physician 2014 , 17 , 179–185. [ Google Scholar ] [ CrossRef ]
  • de Jong, J.R.; Vlaeyen, J.W.S.; de Gelder, J.M.; Patijn, J. Pain-Related Fear, Perceived Harmfulness of Activities, and Functional Limitations in Complex Regional Pain Syndrome Type I. J. Pain 2011 , 12 , 1209–1218. [ Google Scholar ] [ CrossRef ]
  • Ayad, A.E.; Agiza, N.A.; Elrifay, A.H.; Mortada, A.M.; Girgis, M.Y.; Varrassi, G. Lumbar Sympathetic Block to Treat CRPS in an 18-Month-Old Girl: A Breaking Barriers Case Report and Review of Literature. Pain Ther. 2024 . [ Google Scholar ] [ CrossRef ]
  • Chua, M.; Ratnagandhi, A.; Seth, I.; Lim, B.; Cevik, J.; Rozen, W.M. The Evidence for Perioperative Anesthetic Techniques in the Prevention of New-Onset or Recurrent Complex Regional Pain Syndrome in Hand Surgery. J. Pers. Med. 2024 , 14 , 825. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Boccolari, P.; Pantaleoni, F.; Donati, D.; Tedeschi, R. Non-Surgical Treatment of Oblique Diaphyseal Fractures of the Fourth and Fifth Metacarpals in a Professional Athlete: A Case Report. Int. J. Surg. Case Rep. 2024 , 115 , 109256. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Boccolari, P.; Pantaleoni, F.; Tedeschi, R.; Donati, D. The Mechanics of the Collateral Ligaments in the Metacarpophalangeal Joints: A Scoping Review. Morphologie 2024 , 108 , 100770. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Boccolari, P.; Tedeschi, R.; Platano, D.; Donati, D. Review of Contemporary Non-Surgical Management Techniques for Metacarpal Fractures: Anatomy and Rehabilitation Strategies. Orthoplastic Surg. 2024 , 15 , 21–23. [ Google Scholar ] [ CrossRef ]


| Author(s) | Title | Year | Methods | Results | Outcomes Achieved |
|---|---|---|---|---|---|
| Moseley [ ] | Graded motor imagery is effective for long-standing complex regional pain syndrome | 2004 | Single-blind, randomized controlled trial with crossover. 13 participants with CRPS Type I. | Significant reduction in pain and swelling post-GMI. Sustained functional improvement. | Pain reduction (NPS), decrease in swelling (circumference measurement), improved task-specific function (NRS). |
| Moseley [ ] | Graded motor imagery for pathologic pain | 2005 | Parallel-group, single-blind RCT with 3 arms. 21 participants with CRPS Type I. | Pain reduction was greater in the GMI group compared to control groups. Functional improvements also noted. | Pain reduction (NPS), functional improvement (task-specific NRS). |
| Moseley [ ] | Graded motor imagery for pathologic pain: A randomized controlled trial | 2006 | Parallel-group, single-blind RCT with 2 arms. 37 participants with CRPS Type I. | Significant pain reduction and improvement in function maintained at 6-month follow-up. | Pain reduction (VAS), functional improvement (task-specific NRS), maintained at follow-up. |
| Cacchio et al. [ ] | Mirror therapy for functional improvement outcome in patients with post-stroke CRPS | 2009 | Single-blind RCT, 48 post-stroke CRPS patients, 2 groups: mirror therapy vs. placebo mirror therapy. | Mirror therapy group showed significant improvement in pain and function compared to placebo. | Pain reduction (VAS), functional improvement (WMFT), and quality of movement (MAL-QOM). |
| Vural et al. [ ] | Effectiveness of mirror therapy on pain and hand function in stroke patients with CRPS | 2016 | Single-blind RCT with 2 arms. 30 post-stroke CRPS patients. | Mirror therapy resulted in significant pain reduction and improvement in hand function. | Pain reduction (VAS), improved hand function (FMA hand-wrist subsection). |
| Sarkar et al. [ ] | Effect of graded motor imagery and mirror therapy on pain and functional outcome in CRPS | 2017 | Single-blind RCT with 3 arms. 30 participants with CRPS Type I. | Both GMI and MT groups showed significant reductions in pain and functional improvements compared to placebo. | Pain reduction (NPRS), decreased swelling, functional improvement (FMA, task-specific NRS). |
| Author(s) | Sample Size | Age (Mean ± SD) | Gender (M/F) | Duration of CRPS (Mean ± SD) | Type of CRPS |
|---|---|---|---|---|---|
| Moseley [ ] | 13 | 35 ± 15 years | 2/11 | 6 months | Type I |
| Moseley [ ] | 21 | 36 ± 8 years | 6/15 | 12 ± 6 months | Type I |
| Moseley [ ] | 37 | 45 ± 14 years | 11/26 | 14 ± 10 months | Type I |
| Cacchio et al. [ ] | 48 | 57.9 ± 9.9 years | 11/37 | 6 months | Post-stroke CRPS |
| Vural et al. [ ] | 30 | 68.9 ± 10.5 years | 15/15 | Not reported | Post-stroke CRPS |
| Sarkar et al. [ ] | 30 | Not reported | Not reported | Not reported | Type I |
| Author(s) | Intervention | Duration of Intervention | Comparator | Follow-Up |
|---|---|---|---|---|
| Moseley [ ] | Graded Motor Imagery (GMI) | 6 weeks (daily sessions) | Pharmacological treatment | 6 weeks and 12–18 weeks post-intervention |
| Moseley [ ] | Graded Motor Imagery (GMI) | 6 weeks (daily sessions) | Pharmacological treatment | 6 weeks and 12 weeks post-intervention |
| Moseley [ ] | Graded Motor Imagery (GMI) | 6 weeks (daily sessions) | Pharmacological treatment | 6 weeks and 24 weeks post-intervention |
| Cacchio et al. [ ] | Mirror Therapy (MT) | 4 weeks (5 sessions/week) | Placebo mirror therapy | 4 weeks and 24 weeks post-intervention |
| Vural et al. [ ] | Mirror Therapy (MT) | 4 weeks (5 sessions/week) | Conventional rehabilitation | No follow-up |
| Sarkar et al. [ ] | Graded Motor Imagery (GMI) and Mirror Therapy (MT) | 4 weeks (twice-daily sessions) | Placebo mirror therapy | No follow-up |
| Author(s) | PEDro Score (Out of 10) | RoB-2: Bias Due to Randomization | RoB-2: Bias Due to Deviations from Intended Interventions | RoB-2: Bias Due to Missing Outcome Data | RoB-2: Bias in Measurement of the Outcome | RoB-2: Bias in Selection of the Reported Result | Overall RoB-2 Judgment |
|---|---|---|---|---|---|---|---|
| Moseley (2004) [ ] | 6 | Low | Low | Low | Low | Low | Low |
| Moseley (2005) [ ] | 6 | Low | Low | Low | Low | Low | Low |
| Moseley (2006) [ ] | 5 | Low | Low | Low | Low | Low | Low |
| Cacchio et al. (2009) [ ] | 7 | Low | Low | Low | Low | Low | Low |
| Vural et al. (2016) [ ] | 6 | Low | Low | Low | Low | Low | Low |
| Sarkar et al. (2017) [ ] | 4 | Some concerns | High | Low | Low | Low | High |

Share and Cite

Donati, D.; Boccolari, P.; Giorgi, F.; Berti, L.; Platano, D.; Tedeschi, R. Breaking the Cycle of Pain: The Role of Graded Motor Imagery and Mirror Therapy in Complex Regional Pain Syndrome. Biomedicines 2024 , 12 , 2140. https://doi.org/10.3390/biomedicines12092140




COMMENTS

  1. Guides: Systematic Reviews: Writing the Protocol

    The protocol serves as a roadmap for your review and specifies the objectives, methods, and outcomes of primary interest of the systematic review. Having a protocol promotes transparency and can be helpful for project management. Some journals require you to submit your protocol along with your manuscript.

  2. Review Protocols

    Examples. Living Systematic Review. Carole Mitnick, Molly Franke, Celia Fung, Andrew Lindeborg. ... et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1. PMID: 25554246.) Countway Protocol Template. Why a Protocol ... Beyond acting as a roadmap for your research ...

  3. PDF Writing your Protocol for a Cochrane Review

    Writing the review protocol), and there is a webinar on Common errors and best practice ... unambiguous eligibility criteria are a fundamental pre-requisite for a systematic review. This is particularly important when non-randomized studies are considered. Some labels ... for example when an age cut-off is used in the reviews eligibility ...

  4. Systematic Review

    A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer. Example: Systematic review. In 2008, Dr. Robert Boyle and his colleagues published a systematic review in ...

  5. Systematic Reviews: Step 2: Develop a Protocol

    Develop and refine your research plan according to systematic review best practices; ... Many elements of a systematic review will need to be detailed in advance in the protocol. Examples of items included in the protocol are: ... Alternatively, some journals publish systematic review protocols. If you plan to publish your protocol in a ...

  6. How to write a systematic review or meta-analysis protocol

    The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement provides a useful checklist on what should be included in a systematic review. In this paper, we have explained a simple and clear approach to writing a research study protocol for a systematic review or meta-analysis.

  7. Creating the Systematic Review Protocol

    Creating a systematic review protocol is an important step in the planning process for your review. A review protocol is beneficial for a number of reasons: It helps to ensure that all team members are on the same page when it comes to the research question, inclusion/exclusion criteria, etc.

  8. How to do a systematic review

    A systematic review aims to bring evidence together to answer a pre-defined research question. This involves the identification of all primary research relevant to the defined review question, the critical appraisal of this research, and the synthesis of the findings. Systematic reviews may combine data from different ...

  9. Systematic reviews: Structure, form and content

    Topic selection and planning. In recent years, there has been an explosion in the number of systematic reviews conducted and published (Chalmers & Fox 2016, Fontelo & Liu 2018, Page et al 2015), although a systematic review may be an inappropriate or unnecessary research methodology for answering many research questions. Systematic reviews can be inadvisable for a variety of reasons.

  10. Systematic Reviews: Create a Protocol

    Step 2: Create a Protocol. A systematic review protocol states your rationale, hypothesis, and planned methodology. Members of the team then use the protocol as a guide for conducting the research. It is recommended that you register your protocol before conducting your review. Registering your protocol will improve transparency as well as ...

  11. Module 2: Writing the review protocol

    This module will teach you to: Recognize the importance of Cochrane Protocols. Identify the eligibility criteria for studies to be included in a Cochrane Review. Identify the information that should be included in the background of a Cochrane Review. Recognize the key components of a well-written objective. Recognize the structure of a protocol.

  12. Developing a Protocol for Systematic and Scoping Reviews

    The following resources offer templates for authors to develop a systematic review protocol. PRISMA-P for Systematic Review Protocols Developed in 2015, the PRISMA-P (Preferred Reporting Items for Systematic review and Meta-Analysis Protocols) checklist provides guidance on what should be included in an SR protocol.

  13. How to Do a Systematic Review: A Best Practice Guide for Conducting and

    Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question.

  14. LibGuides: Systematic Reviews: 3. Write and Register a Protocol

    A protocol is your planning document and roadmap for the project. It allows you to complete a systematic review efficiently and accurately, ensures greater understanding among team members, and makes writing the manuscript far easier. Many journals now require submitted systematic reviews to have registered protocols.

  15. Appendix 1: Systematic Review Protocol Example: Smoking Cessation

    ... extraction tool has been developed specifically for quantitative research data extraction based on the work of the Cochrane Collaboration and the Centre for Reviews and Dissemination (Appendix 3). Qualitative research data and expert opinion will be extracted using the data extraction tools developed ...

  16. Ten Steps to Conduct a Systematic Review

    The systematic review process is a rigorous and methodical approach to synthesizing and evaluating existing research on a specific topic. The 10 steps we followed, from defining the research question to interpreting the results, ensured a comprehensive and unbiased review of the available literature.

  17. Research Guides: Systematic Reviews: Creating a Protocol

    This will improve transparency and reproducibility, but will also ensure that other research teams do not duplicate efforts. A protocol documents the key points of your systematic review. A protocol should include a conceptual discussion of the problem and include the following: Rationale, background. Definitions of your subject/topics.

  18. Guidance to best tools and practices for systematic reviews

    We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team's awareness of how to prevent research and resource waste [84, 130] and to stimulate careful contemplation of the scope of the review. Authors ...

  19. A Guide to Writing a Qualitative Systematic Review Protocol to Enhance

    The paper highlights important considerations during the protocol development process, and uses a previously developed review question as a working example. Implications for Research. This paper will assist novice researchers in developing a qualitative systematic review protocol. By providing a worked example of a protocol, the paper ...

  20. Guidelines for writing a systematic review

    Example; Systematic review: The most robust review method, usually with the involvement of more than one author, intends to systematically search for and appraise literature with pre-existing inclusion criteria. (Salem et al., 2023) Rapid review: Utilises Systematic Review methods but may be time limited. (Randles and Finnegan, 2022) Meta-analysis

  22. Systematic Review Protocols and Protocol Registries

    Systematic Review Protocols. A good systematic review can start with a protocol; it can serve as a road map for your review. A protocol specifies the objectives, methods, and outcomes of primary interest of the systematic review. A protocol promotes transparency of methods and allows your peers to review how you will extract information to ...

  23. Research Guides: Graduate Student Research Support: Systematic Reviews

    Review Question/Topic. Traditional Literature Review: topics may be broad in scope; the goal of the review may be to place one's own research within the existing body of knowledge, or to gather information that supports a particular viewpoint. Systematic Review: starts with a well-defined research question to be answered by the review.

  24. Mapping the evaluation of the electronic health system PEC e-SUS APS in

    The protocol and its registration have been adapted based on elements taken from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Checklist extension for scoping reviews (PRISMA-ScR) [18, 19]. The adapted protocol was subsequently registered on Open Science Framework under the ...

  25. E-participation in policy-making for health: a scoping review protocol

    This review is the protocol of a research study entitled 'Investigating the application of information and communication technology to involve people in the policy-making for health: a scoping review', funded and supported by Tehran University of Medical Sciences grant No 1400-3-126-56722 with ethics code No. IR.TUMS.NIHR.REC.1400.019.

  26. Breaking the Cycle of Pain: The Role of Graded Motor Imagery and ...

    Background: Complex Regional Pain Syndrome (CRPS) is a chronic condition characterized by severe pain and functional impairment. Graded Motor Imagery (GMI) and Mirror Therapy (MT) have emerged as potential non-invasive treatments; this review evaluates the effectiveness of these therapies in reducing pain, improving function, and managing swelling in CRPS patients. Methods: A systematic review ...

  27. Business, Conflict, and Peace: A Systematic Literature Review and

    Finally, systems analysis research presents a unique line of inquiry that can extend the organizational-level conclusions drawn from our review. Research on conflict systems uses macro-level processes as the unit of analysis, theorizing an organization's peace and conflict effects based on a network of interconnected processes that underpin ...

  28. A systematic review and narrative analysis of the evidence for

    Dissociative Identity Disorder (DID) is a highly disabling diagnosis, characterized by the presence of two or more personality states which impact global functioning, with a substantial risk of suicide. The International Society for the Study of Trauma and Dissociation (ISSTD) published guidelines for treating DID in 2011 that noted individual Psychodynamically Informed Psychotherapy (PDIP ...