
How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions


What is the purpose of a literature review?

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Examples of literature reviews

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

Download Word doc Download Google doc

Step 1 – Search for relevant literature

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions .

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

For example, if you are researching the effects of social media on body image among teenagers, your keyword list might include:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators (AND, OR, NOT) to help narrow down your search.
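For instance, a Boolean search built from the keyword list above might look like the sketch below. This is an illustrative example; the exact quoting and operator syntax varies by database, so check the search help of the database you use:

```
("social media" OR Instagram OR TikTok)
AND ("body image" OR "self-esteem" OR "mental health")
AND (teenagers OR adolescents OR "Generation Z")
```

Grouping synonyms with OR inside parentheses and joining the concept groups with AND returns only results that mention at least one term from each group.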

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

Step 2 – Evaluate and select sources

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using. Click on either button below to download.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism . It can be helpful to make an annotated bibliography , where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.


Step 3 – Identify themes, debates, and gaps

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

For example, in the literature on social media and body image, you might find that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

Step 4 – Outline your literature review’s structure

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Step 5 – Write your literature review

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting. Not a language expert? Check out Scribbr’s professional proofreading services!

Free lecture slides

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.

Open Google Slides Download PowerPoint

Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved September 27, 2024, from https://www.scribbr.com/dissertation/literature-review/


Shona McCombes




Literature Review – Types Writing Guide and Examples


Literature Review


Definition:

A literature review is a comprehensive and critical analysis of the existing literature on a particular topic or research question. It involves identifying, evaluating, and synthesizing relevant literature, including scholarly articles, books, and other sources, to provide a summary and critical assessment of what is known about the topic.

Types of Literature Review

Types of Literature Review are as follows:

  • Narrative literature review: This type of review involves a comprehensive summary and critical analysis of the available literature on a particular topic or research question. It is often used as an introductory section of a research paper.
  • Systematic literature review: This is a rigorous and structured review that follows a pre-defined protocol to identify, evaluate, and synthesize all relevant studies on a specific research question. It is often used in evidence-based practice and systematic reviews.
  • Meta-analysis: This is a quantitative review that uses statistical methods to combine data from multiple studies to derive a summary effect size. It provides a more precise estimate of the overall effect than any individual study.
  • Scoping review: This is a preliminary review that aims to map the existing literature on a broad topic area to identify research gaps and areas for further investigation.
  • Critical literature review: This type of review evaluates the strengths and weaknesses of the existing literature on a particular topic or research question. It aims to provide a critical analysis of the literature and identify areas where further research is needed.
  • Conceptual literature review: This review synthesizes and integrates theories and concepts from multiple sources to provide a new perspective on a particular topic. It aims to provide a theoretical framework for understanding a particular research question.
  • Rapid literature review: This is a quick review that provides a snapshot of the current state of knowledge on a specific research question or topic. It is often used when time and resources are limited.
  • Thematic literature review: This review identifies and analyzes common themes and patterns across a body of literature on a particular topic. It aims to provide a comprehensive overview of the literature and identify key themes and concepts.
  • Realist literature review: This review is often used in social science research and aims to identify how and why certain interventions work in certain contexts. It takes into account the context and complexities of real-world situations.
  • State-of-the-art literature review: This type of review provides an overview of the current state of knowledge in a particular field, highlighting the most recent and relevant research. It is often used in fields where knowledge is rapidly evolving, such as technology or medicine.
  • Integrative literature review: This type of review synthesizes and integrates findings from multiple studies on a particular topic to identify patterns, themes, and gaps in the literature. It aims to provide a comprehensive understanding of the current state of knowledge on a particular topic.
  • Umbrella literature review: This review is used to provide a broad overview of a large and diverse body of literature on a particular topic. It aims to identify common themes and patterns across different areas of research.
  • Historical literature review: This type of review examines the historical development of research on a particular topic or research question. It aims to provide a historical context for understanding the current state of knowledge on a particular topic.
  • Problem-oriented literature review: This review focuses on a specific problem or issue and examines the literature to identify potential solutions or interventions. It aims to provide practical recommendations for addressing a particular problem or issue.
  • Mixed-methods literature review: This type of review combines quantitative and qualitative methods to synthesize and analyze the available literature on a particular topic. It aims to provide a more comprehensive understanding of the research question by combining different types of evidence.

Parts of Literature Review

Parts of a literature review are as follows:

Introduction

The introduction of a literature review typically provides background information on the research topic and why it is important. It outlines the objectives of the review, the research question or hypothesis, and the scope of the review.

Literature Search

This section outlines the search strategy and databases used to identify relevant literature. The search terms used, inclusion and exclusion criteria, and any limitations of the search are described.

Literature Analysis

The literature analysis is the main body of the literature review. This section summarizes and synthesizes the literature that is relevant to the research question or hypothesis. The review should be organized thematically, chronologically, or by methodology, depending on the research objectives.

Critical Evaluation

Critical evaluation involves assessing the quality and validity of the literature. This includes evaluating the reliability and validity of the studies reviewed, the methodology used, and the strength of the evidence.

Conclusion

The conclusion of the literature review should summarize the main findings, identify any gaps in the literature, and suggest areas for future research. It should also reiterate the importance of the research question or hypothesis and the contribution of the literature review to the overall research project.

References

The references list includes all the sources cited in the literature review, and follows a specific referencing style (e.g., APA, MLA, Harvard).

How to Write a Literature Review

Here are some steps to follow when writing a literature review:

  • Define your research question or topic: Before starting your literature review, it is essential to define your research question or topic. This will help you identify relevant literature and determine the scope of your review.
  • Conduct a comprehensive search: Use databases and search engines to find relevant literature. Look for peer-reviewed articles, books, and other academic sources that are relevant to your research question or topic.
  • Evaluate the sources: Once you have found potential sources, evaluate them critically to determine their relevance, credibility, and quality. Look for recent publications, reputable authors, and reliable sources of data and evidence.
  • Organize your sources: Group the sources by theme, method, or research question. This will help you identify similarities and differences among the literature, and provide a structure for your literature review.
  • Analyze and synthesize the literature: Analyze each source in depth, identifying the key findings, methodologies, and conclusions. Then, synthesize the information from the sources, identifying patterns and themes in the literature.
  • Write the literature review: Start with an introduction that provides an overview of the topic and the purpose of the literature review. Then, organize the literature according to your chosen structure, and analyze and synthesize the sources. Finally, provide a conclusion that summarizes the key findings of the literature review, identifies gaps in knowledge, and suggests areas for future research.
  • Edit and proofread: Once you have written your literature review, edit and proofread it carefully to ensure that it is well-organized, clear, and concise.

Examples of Literature Review

Here’s an example of how a literature review can be conducted for a thesis on the topic of “The Impact of Social Media on Teenagers’ Mental Health”:

  • Start by identifying the key terms related to your research topic. In this case, the key terms are “social media,” “teenagers,” and “mental health.”
  • Use academic databases like Google Scholar, JSTOR, or PubMed to search for relevant articles, books, and other publications. Use these keywords in your search to narrow down your results.
  • Evaluate the sources you find to determine if they are relevant to your research question. You may want to consider the publication date, author’s credentials, and the journal or book publisher.
  • Begin reading and taking notes on each source, paying attention to key findings, methodologies used, and any gaps in the research.
  • Organize your findings into themes or categories. For example, you might categorize your sources into those that examine the impact of social media on self-esteem, those that explore the effects of cyberbullying, and those that investigate the relationship between social media use and depression.
  • Synthesize your findings by summarizing the key themes and highlighting any gaps or inconsistencies in the research. Identify areas where further research is needed.
  • Use your literature review to inform your research questions and hypotheses for your thesis.

For example, after conducting a literature review on the impact of social media on teenagers’ mental health, a thesis might look like this:

“Using a mixed-methods approach, this study aims to investigate the relationship between social media use and mental health outcomes in teenagers. Specifically, the study will examine the effects of cyberbullying, social comparison, and excessive social media use on self-esteem, anxiety, and depression. Through an analysis of survey data and qualitative interviews with teenagers, the study will provide insight into the complex relationship between social media use and mental health outcomes, and identify strategies for promoting positive mental health outcomes in young people.”

Reference: Smith, J., Jones, M., & Lee, S. (2019). The effects of social media use on adolescent mental health: A systematic review. Journal of Adolescent Health, 65(2), 154-165. doi:10.1016/j.jadohealth.2019.03.024

Reference Example: Author, A. A., Author, B. B., & Author, C. C. (Year). Title of article. Title of Journal, volume number(issue number), page range. doi:0000000/000000000000 or URL

Applications of Literature Review

Some applications of literature review in different fields are as follows:

  • Social Sciences: In social sciences, literature reviews are used to identify gaps in existing research, to develop research questions, and to provide a theoretical framework for research. Literature reviews are commonly used in fields such as sociology, psychology, anthropology, and political science.
  • Natural Sciences: In natural sciences, literature reviews are used to summarize and evaluate the current state of knowledge in a particular field or subfield. Literature reviews can help researchers identify areas where more research is needed and provide insights into the latest developments in a particular field. Fields such as biology, chemistry, and physics commonly use literature reviews.
  • Health Sciences: In health sciences, literature reviews are used to evaluate the effectiveness of treatments, identify best practices, and determine areas where more research is needed. Literature reviews are commonly used in fields such as medicine, nursing, and public health.
  • Humanities: In humanities, literature reviews are used to identify gaps in existing knowledge, develop new interpretations of texts or cultural artifacts, and provide a theoretical framework for research. Literature reviews are commonly used in fields such as history, literary studies, and philosophy.

Role of Literature Review in Research

Here are some applications of literature review in research:

  • Identifying Research Gaps: Literature review helps researchers identify gaps in existing research and literature related to their research question. This allows them to develop new research questions and hypotheses to fill those gaps.
  • Developing Theoretical Framework: Literature review helps researchers develop a theoretical framework for their research. By analyzing and synthesizing existing literature, researchers can identify the key concepts, theories, and models that are relevant to their research.
  • Selecting Research Methods: Literature review helps researchers select appropriate research methods and techniques based on previous research. It also helps researchers to identify potential biases or limitations of certain methods and techniques.
  • Data Collection and Analysis: Literature review helps researchers in data collection and analysis by providing a foundation for the development of data collection instruments and methods. It also helps researchers to identify relevant data sources and identify potential data analysis techniques.
  • Communicating Results: Literature review helps researchers to communicate their results effectively by providing a context for their research. It also helps to justify the significance of their findings in relation to existing research and literature.

Purpose of Literature Review

Some of the specific purposes of a literature review are as follows:

  • To provide context: A literature review helps to provide context for your research by situating it within the broader body of literature on the topic.
  • To identify gaps and inconsistencies: A literature review helps to identify areas where further research is needed or where there are inconsistencies in the existing literature.
  • To synthesize information: A literature review helps to synthesize the information from multiple sources and present a coherent and comprehensive picture of the current state of knowledge on the topic.
  • To identify key concepts and theories: A literature review helps to identify key concepts and theories that are relevant to your research question and provide a theoretical framework for your study.
  • To inform research design: A literature review can inform the design of your research study by identifying appropriate research methods, data sources, and research questions.

Characteristics of Literature Review

Some Characteristics of Literature Review are as follows:

  • Identifying gaps in knowledge: A literature review helps to identify gaps in the existing knowledge and research on a specific topic or research question. By analyzing and synthesizing the literature, you can identify areas where further research is needed and where new insights can be gained.
  • Establishing the significance of your research: A literature review helps to establish the significance of your own research by placing it in the context of existing research. By demonstrating the relevance of your research to the existing literature, you can establish its importance and value.
  • Informing research design and methodology: A literature review helps to inform research design and methodology by identifying the most appropriate research methods, techniques, and instruments. By reviewing the literature, you can identify the strengths and limitations of different research methods and techniques, and select the most appropriate ones for your own research.
  • Supporting arguments and claims: A literature review provides evidence to support arguments and claims made in academic writing. By citing and analyzing the literature, you can provide a solid foundation for your own arguments and claims.
  • Identifying potential collaborators and mentors: A literature review can help identify potential collaborators and mentors by identifying researchers and practitioners who are working on related topics or using similar methods. By building relationships with these individuals, you can gain valuable insights and support for your own research and practice.
  • Keeping up-to-date with the latest research: A literature review helps to keep you up-to-date with the latest research on a specific topic or research question. By regularly reviewing the literature, you can stay informed about the latest findings and developments in your field.

Advantages of Literature Review

There are several advantages to conducting a literature review as part of a research project, including:

  • Establishing the significance of the research: A literature review helps to establish the significance of the research by demonstrating the gap or problem in the existing literature that the study aims to address.
  • Identifying key concepts and theories: A literature review can help to identify key concepts and theories that are relevant to the research question, and provide a theoretical framework for the study.
  • Supporting the research methodology: A literature review can inform the research methodology by identifying appropriate research methods, data sources, and research questions.
  • Providing a comprehensive overview of the literature: A literature review provides a comprehensive overview of the current state of knowledge on a topic, allowing the researcher to identify key themes, debates, and areas of agreement or disagreement.
  • Identifying potential research questions: A literature review can help to identify potential research questions and areas for further investigation.
  • Avoiding duplication of research: A literature review can help to avoid duplication of research by identifying what has already been done on a topic, and what remains to be done.
  • Enhancing the credibility of the research : A literature review helps to enhance the credibility of the research by demonstrating the researcher’s knowledge of the existing literature and their ability to situate their research within a broader context.

Limitations of Literature Review

The limitations of a literature review are as follows:

  • Limited scope: Literature reviews can only cover the existing literature on a particular topic, which may be limited in scope or depth.
  • Publication bias: Literature reviews may be influenced by publication bias, which occurs when researchers are more likely to publish positive results than negative ones. This can lead to an incomplete or biased picture of the literature.
  • Quality of sources: The quality of the literature reviewed can vary widely, and not all sources may be reliable or valid.
  • Time-limited: Literature reviews can quickly become outdated as new research is published, making it difficult to keep up with the latest developments in a field.
  • Subjective interpretation: Literature reviews can be subjective, and the interpretation of the findings can vary depending on the researcher’s perspective or bias.
  • Lack of original data: Literature reviews do not generate new data, but rather rely on the analysis of existing studies.
  • Risk of plagiarism: It is important to ensure that literature reviews do not inadvertently contain plagiarism, which can occur when researchers use the work of others without proper attribution.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer




What is a Literature Review? How to Write It (with Examples)


A literature review is a critical analysis and synthesis of existing research on a particular topic. It provides an overview of the current state of knowledge, identifies gaps, and highlights key findings in the literature. 1 The purpose of a literature review is to situate your own research within the context of existing scholarship, demonstrating your understanding of the topic and showing how your work contributes to the ongoing conversation in the field. Learning how to write a literature review is a critical skill for successful research. Your ability to summarize and synthesize prior research pertaining to a certain topic demonstrates your grasp of the topic of study and assists in the learning process.


What is a literature review?

A well-conducted literature review demonstrates the researcher’s familiarity with the existing literature, establishes the context for their own research, and contributes to scholarly conversations on the topic. One of the purposes of a literature review is also to help researchers avoid duplicating previous work and ensure that their research is informed by and builds upon the existing body of knowledge.

What is the purpose of a literature review?

A literature review serves several important purposes within academic and research contexts. Here are some key objectives and functions of a literature review: 2  

1. Contextualizing the Research Problem: The literature review provides a background and context for the research problem under investigation. It helps to situate the study within the existing body of knowledge. 

2. Identifying Gaps in Knowledge: By identifying gaps, contradictions, or areas requiring further research, the researcher can shape the research question and justify the significance of the study. This is crucial for ensuring that the new research contributes something novel to the field.


3. Understanding Theoretical and Conceptual Frameworks: Literature reviews help researchers gain an understanding of the theoretical and conceptual frameworks used in previous studies. This aids in the development of a theoretical framework for the current research. 

4. Providing Methodological Insights: Another purpose of a literature review is to familiarize researchers with the methodologies employed in previous studies. This can help in choosing appropriate research methods for the current study and avoiding pitfalls that others may have encountered. 

5. Establishing Credibility: A well-conducted literature review demonstrates the researcher’s familiarity with existing scholarship, establishing their credibility and expertise in the field. It also helps in building a solid foundation for the new research. 

6. Informing Hypotheses or Research Questions: The literature review guides the formulation of hypotheses or research questions by highlighting relevant findings and areas of uncertainty in existing literature. 

Literature review example 

Let’s delve deeper with a literature review example: Let’s say your literature review is about the impact of climate change on biodiversity. You might format your literature review into sections such as the effects of climate change on habitat loss and species extinction, phenological changes, and marine biodiversity. Each section would then summarize and analyze relevant studies in those areas, highlighting key findings and identifying gaps in the research. The review would conclude by emphasizing the need for further research on specific aspects of the relationship between climate change and biodiversity. The following literature review template provides a glimpse into the recommended literature review structure and content, demonstrating how research findings are organized around specific themes within a broader topic. 

Literature Review on Climate Change Impacts on Biodiversity:  

Climate change is a global phenomenon with far-reaching consequences, including significant impacts on biodiversity. This literature review synthesizes key findings from various studies: 

a. Habitat loss and species extinction

Climate change-induced alterations in temperature and precipitation patterns contribute to habitat loss, affecting numerous species (Thomas et al., 2004). The review discusses how these changes increase the risk of extinction, particularly for species with specific habitat requirements. 

b. Range shifts and phenological changes

Observations of range shifts and changes in the timing of biological events (phenology) are documented in response to changing climatic conditions (Parmesan & Yohe, 2003). These shifts affect ecosystems and may lead to mismatches between species and their resources. 

c. Ocean acidification and coral reefs

The review explores the impact of climate change on marine biodiversity, emphasizing ocean acidification’s threat to coral reefs (Hoegh-Guldberg et al., 2007). Changes in pH levels negatively affect coral calcification, disrupting the delicate balance of marine ecosystems. 

d. Adaptive strategies and conservation efforts

Recognizing the urgency of the situation, the literature review discusses various adaptive strategies adopted by species and conservation efforts aimed at mitigating the impacts of climate change on biodiversity (Hannah et al., 2007). It emphasizes the importance of interdisciplinary approaches for effective conservation planning. 


How to write a good literature review 

Writing a literature review involves summarizing and synthesizing existing research on a particular topic. A good literature review format should include the following elements. 

Introduction: The introduction sets the stage for your literature review, providing context and introducing the main focus of your review. 

  • Opening Statement: Begin with a general statement about the broader topic and its significance in the field. 
  • Scope and Purpose: Clearly define the scope of your literature review. Explain the specific research question or objective you aim to address. 
  • Organizational Framework: Briefly outline the structure of your literature review, indicating how you will categorize and discuss the existing research. 
  • Significance of the Study: Highlight why your literature review is important and how it contributes to the understanding of the chosen topic. 
  • Thesis Statement: Conclude the introduction with a concise thesis statement that outlines the main argument or perspective you will develop in the body of the literature review. 

Body: The body of the literature review is where you provide a comprehensive analysis of existing literature, grouping studies based on themes, methodologies, or other relevant criteria. 

  • Organize by Theme or Concept: Group studies that share common themes, concepts, or methodologies. Discuss each theme or concept in detail, summarizing key findings and identifying gaps or areas of disagreement. 
  • Critical Analysis: Evaluate the strengths and weaknesses of each study. Discuss the methodologies used, the quality of evidence, and the overall contribution of each work to the understanding of the topic. 
  • Synthesis of Findings: Synthesize the information from different studies to highlight trends, patterns, or areas of consensus in the literature. 
  • Identification of Gaps: Discuss any gaps or limitations in the existing research and explain how your review contributes to filling these gaps. 
  • Transition between Sections: Provide smooth transitions between different themes or concepts to maintain the flow of your literature review. 

Conclusion: The conclusion of your literature review should summarize the main findings, highlight the contributions of the review, and suggest avenues for future research. 

  • Summary of Key Findings: Recap the main findings from the literature and restate how they contribute to your research question or objective. 
  • Contributions to the Field: Discuss the overall contribution of your literature review to the existing knowledge in the field. 
  • Implications and Applications: Explore the practical implications of the findings and suggest how they might impact future research or practice. 
  • Recommendations for Future Research: Identify areas that require further investigation and propose potential directions for future research in the field. 
  • Final Thoughts: Conclude with a final reflection on the importance of your literature review and its relevance to the broader academic community. 


Conducting a literature review 

Conducting a literature review is an essential step in research that involves reviewing and analyzing existing literature on a specific topic. It’s important to know how to do a literature review effectively, so here are the steps to follow: 1  

Choose a Topic and Define the Research Question:  

  • Select a topic that is relevant to your field of study. 
  • Clearly define your research question or objective. Determine what specific aspect of the topic you want to explore. 

Decide on the Scope of Your Review:  

  • Determine the timeframe for your literature review. Are you focusing on recent developments, or do you want a historical overview? 
  • Consider the geographical scope. Is your review global, or are you focusing on a specific region? 
  • Define the inclusion and exclusion criteria. What types of sources will you include? Are there specific types of studies or publications you will exclude? 

Select Databases for Searches:  

  • Identify relevant databases for your field. Examples include PubMed, IEEE Xplore, Scopus, Web of Science, and Google Scholar. 
  • Consider searching in library catalogs, institutional repositories, and specialized databases related to your topic. 

Conduct Searches and Keep Track:  

  • Develop a systematic search strategy using keywords, Boolean operators (AND, OR, NOT), and other search techniques. 
  • Record and document your search strategy for transparency and replicability. 
  • Keep track of the articles, including publication details, abstracts, and links. Use citation management tools like EndNote, Zotero, or Mendeley to organize your references. 
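The search-strategy step above can be sketched in code. The `build_query` helper below is hypothetical (it is not part of any database's API, and exact operator syntax varies between databases such as PubMed and Scopus); it simply shows how synonyms for one concept are joined with OR while distinct concepts are joined with AND:

```python
# Hypothetical sketch: compose a Boolean search string from groups of
# synonyms. Synonyms within a group are joined with OR; groups (distinct
# concepts) are joined with AND; unwanted terms are appended with NOT.

def build_query(synonym_groups, exclude=()):
    """Return a Boolean search string, quoting multi-word terms."""
    def term(t):
        return f'"{t}"' if " " in t else t

    groups = ["(" + " OR ".join(term(t) for t in g) + ")" for g in synonym_groups]
    query = " AND ".join(groups)
    for t in exclude:
        query += f" NOT {term(t)}"
    return query

print(build_query(
    [["climate change", "global warming"], ["biodiversity", "species richness"]],
    exclude=["policy"],
))
# → ("climate change" OR "global warming") AND (biodiversity OR "species richness") NOT policy
```

Quoting multi-word phrases keeps the database from splitting them into separate keywords; recording the final string verbatim makes the search transparent and replicable.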

Review the Literature:  

  • Evaluate the relevance and quality of each source. Consider the methodology, sample size, and results of studies. 
  • Organize the literature by themes or key concepts. Identify patterns, trends, and gaps in the existing research. 
  • Summarize key findings and arguments from each source. Compare and contrast different perspectives. 
  • Identify areas where there is a consensus in the literature and where there are conflicting opinions. 
  • Provide critical analysis and synthesis of the literature. What are the strengths and weaknesses of existing research? 

Organize and Write Your Literature Review:  

  • Base your literature review outline on themes, chronological order, or methodological approaches. 
  • Write a clear and coherent narrative that synthesizes the information gathered. 
  • Use proper citations for each source and ensure consistency in your citation style (APA, MLA, Chicago, etc.). 
  • Conclude your literature review by summarizing key findings, identifying gaps, and suggesting areas for future research. 

Whether you’re exploring a new research field or finding new angles to develop an existing topic, sifting through hundreds of papers can take more time than you have to spare. But what if you could find science-backed insights with verified citations in seconds? That’s the power of Paperpal’s new Research feature!  

How to write a literature review faster with Paperpal?  

Paperpal, an AI writing assistant, integrates powerful academic search capabilities within its writing platform. With the Research | Cite feature, you get 100% factual insights, with citations backed by 250M+ verified research articles, directly within your writing interface. It also allows you to auto-cite references in 10,000+ styles and save relevant references in your Citation Library. By eliminating the need to switch tabs to find answers to all your research questions, Paperpal saves time and helps you stay focused on your writing. 

Here’s how to use the Research feature:  

  • Ask a question: Get started with a new document on paperpal.com. Click on the “Research | Cite” feature and type your question in plain English. Paperpal will scour over 250 million research articles, including conference papers and preprints, to provide you with accurate insights and citations. 


  • Review and Save: Paperpal summarizes the information, while citing sources and listing relevant reads. You can quickly scan the results to identify relevant references and save these directly to your built-in citations library for later access. 
  • Cite with Confidence: Paperpal makes it easy to incorporate relevant citations and references in 10,000+ styles into your writing, ensuring your arguments are well-supported by credible sources. This translates to a polished, well-researched literature review. 


The literature review sample and detailed advice on writing and conducting a review will help you produce a well-structured report. But remember that a good literature review is an ongoing process, and it may be necessary to revisit and update it as your research progresses. By combining effortless research with an easy citation process, Paperpal Research streamlines the literature review process and empowers you to write faster and with more confidence. Try Paperpal Research now and see for yourself.  

A literature review is a critical and comprehensive analysis of existing literature (published and unpublished works) on a specific topic or research question and provides a synthesis of the current state of knowledge in a particular field. A well-conducted literature review is crucial for researchers to build upon existing knowledge, avoid duplication of efforts, and contribute to the advancement of their field. It also helps researchers situate their work within a broader context and facilitates the development of a sound theoretical and conceptual framework for their studies.

A literature review is a crucial component of research writing, providing a solid background for a research paper’s investigation. The aim is to keep professionals up to date by providing an understanding of ongoing developments within a specific field, including the research methods and experimental techniques used in that field, and to present that knowledge in the form of a written report. The depth and breadth of the literature review also emphasize the credibility of the scholar in his or her field. 

Before writing a literature review, it’s essential to undertake several preparatory steps to ensure that your review is well-researched, organized, and focused. This includes choosing a topic of general interest to you and doing exploratory research on that topic, writing an annotated bibliography, and noting major points, especially those that relate to the position you have taken on the topic. 

Literature reviews and academic research papers are essential components of scholarly work but serve different purposes within the academic realm. 3 A literature review aims to provide a foundation for understanding the current state of research on a particular topic, identify gaps or controversies, and lay the groundwork for future research. Therefore, it draws heavily from existing academic sources, including books, journal articles, and other scholarly publications. In contrast, an academic research paper aims to present new knowledge, contribute to the academic discourse, and advance the understanding of a specific research question. Therefore, it involves a mix of existing literature (in the introduction and literature review sections) and original data or findings obtained through research methods. 

Literature reviews are essential components of academic and research papers, and various strategies can be employed to conduct them effectively. If you want to know how to write a literature review for a research paper, here are four common approaches that are often used by researchers. 

  • Chronological Review: This strategy involves organizing the literature based on the chronological order of publication. It helps to trace the development of a topic over time, showing how ideas, theories, and research have evolved. 
  • Thematic Review: Thematic reviews focus on identifying and analyzing themes or topics that cut across different studies. Instead of organizing the literature chronologically, it is grouped by key themes or concepts, allowing for a comprehensive exploration of various aspects of the topic. 
  • Methodological Review: This strategy involves organizing the literature based on the research methods employed in different studies. It helps to highlight the strengths and weaknesses of various methodologies and allows the reader to evaluate the reliability and validity of the research findings. 
  • Theoretical Review: A theoretical review examines the literature based on the theoretical frameworks used in different studies. This approach helps to identify the key theories that have been applied to the topic and assess their contributions to the understanding of the subject. 

It’s important to note that these strategies are not mutually exclusive, and a literature review may combine elements of more than one approach. The choice of strategy depends on the research question, the nature of the literature available, and the goals of the review. Additionally, other strategies, such as integrative reviews or systematic reviews, may be employed depending on the specific requirements of the research.

The literature review format can vary depending on the specific publication guidelines. However, there are some common elements and structures that are often followed. Here is a general guideline for the format of a literature review: 

Introduction: 
  • Provide an overview of the topic. 
  • Define the scope and purpose of the literature review. 
  • State the research question or objective. 

Body: 
  • Organize the literature by themes, concepts, or chronology. 
  • Critically analyze and evaluate each source. 
  • Discuss the strengths and weaknesses of the studies. 
  • Highlight any methodological limitations or biases. 
  • Identify patterns, connections, or contradictions in the existing research. 

Conclusion: 
  • Summarize the key points discussed in the literature review. 
  • Highlight the research gap. 
  • Address the research question or objective stated in the introduction. 
  • Highlight the contributions of the review and suggest directions for future research.

Both annotated bibliographies and literature reviews involve the examination of scholarly sources. While annotated bibliographies focus on individual sources with brief annotations, literature reviews provide a more in-depth, integrated, and comprehensive analysis of existing literature on a specific topic. The key differences are as follows: 

  • Purpose: An annotated bibliography is a list of citations of books, articles, and other sources, with a brief description (annotation) of each source. A literature review is a comprehensive and critical analysis of existing literature on a specific topic. 
  • Focus: An annotated bibliography summarizes and evaluates each source, including its relevance, methodology, and key findings. A literature review provides an overview of the current state of knowledge on a particular subject and identifies gaps, trends, and patterns in existing literature. 
  • Structure: In an annotated bibliography, each citation is followed by a concise paragraph (annotation) that describes the source’s content, methodology, and its contribution to the topic. A literature review is organized thematically or chronologically and synthesizes the findings from different sources to build a narrative or argument. 
  • Length: Annotations are typically 100-200 words, while a literature review ranges from a few pages to several chapters. 
  • Independence: In an annotated bibliography, each source is treated separately, with less emphasis on synthesizing information across sources. In a literature review, the writer synthesizes information from multiple sources to present a cohesive overview of the topic. 

References 

  • Denney, A. S., & Tewksbury, R. (2013). How to write a literature review. Journal of Criminal Justice Education, 24(2), 218-234. 
  • Pan, M. L. (2016).  Preparing literature reviews: Qualitative and quantitative approaches . Taylor & Francis. 
  • Cantero, C. (2019). How to write a literature review.  San José State University Writing Center . 



Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Writing a Literature Review


A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis is
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)
Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If you draw your sources from different disciplines or fields that use a variety of research methods, you can compare the results and conclusions that emerge from different approaches. For example:
    • Qualitative versus quantitative research
    • Empirical versus theoretical scholarship
    • Research divided by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .

As you’re doing your research, create an annotated bibliography ( see our page on this type of document ). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll not only be partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).
  • Read more about synthesis here.
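The citation-counting check in the strategies above can be automated as a rough first pass over a draft. The sketch below is purely illustrative and not part of the original guidance: the regular expression matches only simple parenthetical author-year citations such as (Smith, 2019), so you would need to adapt it to your citation style.

```python
import re

# Rough pattern for parenthetical author-year citations, e.g. (Smith, 2019)
# or (Smith & Jones, 2019). Adjust for your citation style.
CITATION = re.compile(r"\(([A-Z][A-Za-z'&.\- ]+?),\s*(\d{4})\)")

def flag_summary_paragraphs(text: str) -> list[tuple[int, int]]:
    """Return (paragraph_index, distinct_citation_count) for paragraphs
    citing fewer than two distinct sources -- a hint of summary-only writing."""
    flagged = []
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs):
        sources = {m.group(1) for m in CITATION.finditer(para)}
        if len(sources) < 2:
            flagged.append((i, len(sources)))
    return flagged

draft = (
    "Early work framed sampling narrowly (Patton, 1990).\n\n"
    "Later accounts diverge: saturation is contested (Morse, 1995) "
    "and purposeful strategies proliferate (Coyne, 1997)."
)
print(flag_summary_paragraphs(draft))  # → [(0, 1)]: the first paragraph cites only one source
```

A flagged paragraph is not necessarily a problem, but it is worth rereading: is it summarizing one source, or could it connect that source to others in the conversation?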

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.


Literature Review Example/Sample

Detailed Walkthrough + Free Literature Review Template

If you’re working on a dissertation or thesis and are looking for an example of a strong literature review chapter , you’ve come to the right place.

In this video, we walk you through an A-grade literature review from a dissertation that earned full distinction . We start off by discussing the five core sections of a literature review chapter by unpacking our free literature review template . This includes:

  • The literature review opening/ introduction section
  • The theoretical framework (or foundation of theory)
  • The empirical research
  • The research gap
  • The closing section

We then progress to the sample literature review (from an A-grade Master’s-level dissertation) to show how these concepts are applied in the literature review chapter. You can access the free resources mentioned in this video below.

PS – If you’re working on a dissertation, be sure to also check out our collection of dissertation and thesis examples here .

FAQ: Literature Review Example

Is the sample literature review real?

Yes. The literature review example is an extract from a Master’s-level dissertation for an MBA program. It has not been edited in any way.

Can I replicate this literature review for my dissertation?

As we discuss in the video, every literature review will be slightly different, depending on the university’s unique requirements, as well as the nature of the research itself. Therefore, you’ll need to tailor your literature review to suit your specific context.

You can learn more about the basics of writing a literature review here .

Where can I find more examples of literature reviews?

The best place to find more examples of literature review chapters would be within dissertation/thesis databases. These databases include dissertations, theses and research projects that have successfully passed the assessment criteria for the respective university, meaning that you have at least some sort of quality assurance. 

The Open Access Thesis Database (OATD) is a good starting point. 

How do I get the literature review template?

You can access our free literature review chapter template here .

Is the template really free?

Yes. There is no cost for the template and you are free to use it as you wish. 

Literature Review Course

Psst… there’s more!

This post is an extract from our bestselling short course, Literature Review Bootcamp . If you want to work smart, you don't want to miss this .





Method Article How-to conduct a systematic literature review: A quick guide for computer science research

  • Clearly defined strategies to follow for a systematic literature review in computer science research, and
  • An algorithmic method to tackle a systematic literature review.



Data availability

  • No data was used for the research described in the article.


  • Methodology
  • Open access
  • Published: 11 October 2016

Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research

  • Stephen J. Gentles 1 , 4 ,
  • Cathy Charles 1 ,
  • David B. Nicholas 2 ,
  • Jenny Ploeg 3 &
  • K. Ann McKibbon 1  

Systematic Reviews, volume 5, Article number: 172 (2016)


Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews , might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.

The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type.

Conclusions

We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.


While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of few examples to describe it is a chapter containing reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [ 1 ]. Notable is their observation of how the differences between the methods reviews and conventional quantitative systematic reviews, specifically attributable to their varying content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of “systematicity” (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.

The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.

Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [ 12 ] or evaluating qualitative research reports [ 13 ]. A second objective is to survey the methods-relevant sections of empirical research reports to establish current practices on methods use and reporting practices, which Moher and colleagues [ 14 ] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [ 15 , 16 ]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyze randomized controlled trial data) that have been offered in the methods literature and propose a solution for improving conceptual clarity [ 17 ]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.

While systematic methods overviews , as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.

The example systematic methods overview on sampling in qualitative research

The principles and strategies we propose in this paper are derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity and deeper understanding of the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.

The specific methods and procedures for the overview on sampling [ 18 ] from which our proposals are derived were developed both after soliciting initial input from local experts in qualitative research and an expert health librarian (KAM) and through ongoing careful deliberation throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to search the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.

For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.

Organization of the guidance into principles and strategies

For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional and flexible approaches for implementing the previous principle outlined. Thus, generic challenges give rise to principles, which in turn give rise to strategies.

We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection ; data abstraction from the publications selected for inclusion; and analysis , including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.

Results and discussion

Literature identification and selection.

The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.

Delimiting a manageable set of publications

One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature , where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports , where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.

Principle #1:

Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.

Strategy #1:

To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.

We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.

In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).

It is worth recognizing that other authors have advocated broader positions regarding the scope of literature to be considered in a review, expanding on our perspective. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers use an initial approach to conduct a broad overview of the field—for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second more focused stage in which practical examples are purposefully selected—for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is seductive in its capacity to generate more in-depth and interpretive analytic findings, some reviewers may consider it too resource-intensive to include the second step no matter how selective the purposeful sampling. In the overview on sampling where we stopped after the first stage [ 18 ], we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, that theoretical sampling was a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.

Ultimately, reviewers must make pragmatic decisions that balance resource considerations, combined with informed predictions about the depth and complexity of literature available on their topic, with the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.

Searching beyond standard bibliographic databases

An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood for relevant publications to be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters did not exist electronically, their full text had to be physically retrieved in hardcopy, while 11 publications were retrievable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Since a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval thus become complicated processes.

Principle #2:

Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.

Strategy #2:

To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.

In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over other standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics; and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines including the social sciences where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful to identify more recent editions of methods books identified by experts.

Searching without relevant metadata

Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.

Principle #3:

Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.

Strategy #3:

One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.

In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in the general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion, their full text obtained, and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.
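Where tables of contents or index sections are available as plain text (for example, extracted from publisher pages or scanned front matter), the hand-searching step described above can be partially assisted by a simple keyword scan. This is an illustrative sketch, not a procedure from the paper; the directory layout and keyword list are assumptions, and a keyword hit still requires reading the full text to confirm eligibility.

```python
from pathlib import Path

# Keywords that suggest a methods manual discusses the topic of interest
# (here, sampling in qualitative research). Hypothetical list for illustration.
KEYWORDS = ("sampling", "sample size", "saturation", "purposeful selection")

def screen_toc_files(toc_dir: str) -> dict[str, list[str]]:
    """Map each table-of-contents text file to the keywords found in it.
    Files with no hits are candidates for exclusion, pending full-text review."""
    hits: dict[str, list[str]] = {}
    for path in sorted(Path(toc_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8").lower()
        hits[path.name] = [kw for kw in KEYWORDS if kw in text]
    return hits
```

This only narrows the pile; as the overview's authors note, eligibility in such reviews ultimately depends on reading the relevant sections, since indexes and contents pages vary widely in detail.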

Purposefully selecting literature on conceptual grounds

A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.

Principle #4:

Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.

Strategy #4:

One strategy for choosing the purposeful approach to use in selecting the literature according to the review objectives is to consider whether those objectives imply exploring concepts either at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate; or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.

In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.

At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [ 18 ]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence , which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad scope review to a manageable amount.

To operationalize this strategy of sampling for influence , we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (h-index for author influence [ 22 ]; number of cites for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; thus, purely secondary sources were excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish ) was used to generate bibliometric data via the Google Scholar database. Figure  1 illustrates how identification and selection in the methods overview on sampling was a multi-faceted and iterative process. The authors selected as influential, and the publications selected for inclusion or exclusion are listed in Additional file 1 (Matrices 1, 2a, 2b).
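The h-index used here as an author-level indicator of influence has a simple definition: an author has index *h* if *h* of their publications have each been cited at least *h* times [22]. As an illustration only (the review itself obtained these figures from Publish or Perish rather than custom code), the index can be computed from a list of citation counts like this:

```python
def h_index(citation_counts):
    """Largest h such that the author has h publications
    with at least h citations each (Hirsch's index)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times give h = 4:
# four papers have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))
```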

Fig. 1 Literature identification and selection process used in the methods overview on sampling [ 18 ]

In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.

Data abstraction

The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct conceptually defined fields to which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, to the processes of developing the abstraction form and abstracting the data itself when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.

Iteratively defining conceptual information to abstract

In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.

Principle #5:

Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.

Strategy #5:

Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.

In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and reviewing a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions, whether before (a priori) or after (ongoing) starting data collection, or whether this was unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.

The logic of this strategy is analogous to the logic used in a form of research synthesis called best fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed and the BFFS approach to research synthesis are notable for their rigorous and transparent means to adapt a final set of concepts to the content under review.

Accounting for inconsistent terminology

An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.

Principle #6:

Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.

Strategy #6:

An effective means to systematically identify relevant information is to develop and iteratively adjust written definitions for key concepts (corresponding to abstraction fields) that are consistent with and as inclusive of as much of the literature reviewed as possible. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.

In the abstraction process for the sampling overview [ 18 ], we noted the several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling , purposeful sampling , sampling strategy , and saturation (for examples, see Additional file 1 , Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived, which was recorded in fields dedicated to the concept’s definition. Using a method of constant comparison, we used text from definition fields to inform and modify a centrally maintained definition of the corresponding concept to optimize its fit and inclusiveness with the literature reviewed. Table  1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling .

We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept, saturation , where the relevant text available for abstraction in one publication [ 26 ]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.

This comparative analytic strategy (and our approach to analysis more broadly as described in strategy #7, below) is analogous to the process of reciprocal translation —a technique first introduced for meta-ethnography by Noblit and Hare [ 27 ] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [ 28 ]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [ 28 ]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.

Generating credible and verifiable analytic interpretations

The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretative analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [ 18 ], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote credibility of the review, however, not only should one seek a qualitative analytic approach that facilitates reaching varying levels of abstraction but that approach must also ensure that abstract interpretations are supported and justified by the source data and not solely the product of the analyst’s speculative thinking.

Principle #7:

Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.

Strategy #7:

We suggest employing the constant comparative method of analysis [ 29 ] because it supports developing and verifying analytic links to the source data throughout progressively interpretive or abstract levels. In applying this approach, we advise a rigorous approach, documenting how supportive quotes or references to the original texts are carried forward in the successive steps of analysis to allow for easy verification.

The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction—data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig.  2 ). While we have positioned data abstraction as the second stage of the generic review process (prior to Analysis), above, we also considered it as an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparisons and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding in which textual data from publications was compared to conceptual fields (equivalent to codes) or to other instances of data previously abstracted when constructing definitions to optimize their fit with the overall literature as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.

Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [ 18 ]

In the second step of the analysis, we constructed topic-specific matrices , or tables, by copying relevant quotes from abstraction forms into the appropriate cells of matrices (for the complete set of analytic matrices developed in the sampling review, see Additional file 1 (matrices 3 to 10)). Each matrix ranged from one to five pages; row headings, nested three-deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions, and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries , in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions based on the narrative summaries about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically from the lower levels, enabling us to easily verify analytic conclusions by tracing the support for claims by comparing the original text of publications reviewed.
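The matrix structure described above (row headings nested three deep by tradition, author, and publication; column headings corresponding to concepts; cells holding abstracted quotes) can be pictured as a simple nested mapping. This sketch is purely illustrative; the actual matrices were maintained as word-processor tables, and every name below is hypothetical:

```python
# Hypothetical topic-specific analytic matrix: rows keyed by
# (tradition, author, publication), columns keyed by concept,
# each cell a list of quotes abstracted from the source text.
matrix = {
    ("grounded theory", "Author A", "Publication 1"): {
        "sampling strategy": ["...quote abstracted from the text..."],
        "saturation": ["...another relevant quote..."],
    },
    ("phenomenology", "Author B", "Publication 2"): {
        "sampling strategy": ["...quote..."],
    },
}

# Reading down one "column" supports comparison of a single concept
# across traditions and across authors within a tradition:
for (tradition, author, publication), cells in matrix.items():
    for quote in cells.get("saturation", []):
        print(tradition, author, publication, "->", quote)
```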

Integrative versus interpretive methods overviews

The analytic product of systematic methods overviews is comparable to qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; they do not seek as their primary focus to develop or specify new concepts, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].

The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.

In this paper, we have outlined tentative guidance in the form of seven principles and strategies on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in future.

As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of both qualitative and quantitative methods topics alike. However, it is expected that additional challenges and insights for conducting such reviews have yet to be defined. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit, and improve the clarity and precision of our understandings of problematic qualitative or quantitative methods issues.

A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal . The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have. 
Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.

To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.

Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.


Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.

Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination; 2009.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.


Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12(1):1–1.


Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. In: Integrate-HTA. 2016.

Booth A, Sutton A, Papaioannou D. Systematic approaches to successful literature review. 2nd ed. London: Sage; 2016.

Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.

Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.

Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.

Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.


Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.

Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.

Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.


Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.

Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.

Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.

Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi‐discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.

Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Quality Safety. 2015;24(11):700–8.

Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.

Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.

Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.


Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synthesis Methods. 2015;6(4):357–71.


Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.

Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. In: UK National Health Service. 2004. p. 1–44.


Acknowledgements

Not applicable.

Funding

There was no funding for this work.

Availability of data and materials

The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. The Qual Rep 2015, 20(11):1772-1789) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5 .

Authors’ contributions

SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAB, and JP were involved in developing methods for the systematic methods overview on sampling.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Authors and affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada

Stephen J. Gentles, Cathy Charles & K. Ann McKibbon

Faculty of Social Work, University of Calgary, Alberta, Canada

David B. Nicholas

School of Nursing, McMaster University, Hamilton, Ontario, Canada

Jenny Ploeg

CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada

Stephen J. Gentles


Corresponding author

Correspondence to Stephen J. Gentles .

Additional information

Cathy Charles is deceased

Additional file

Additional file 1:

Submitted: Analysis_matrices. (DOC 330 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5 , 172 (2016). https://doi.org/10.1186/s13643-016-0343-0


Received : 06 June 2016

Accepted : 14 September 2016

Published : 11 October 2016

DOI : https://doi.org/10.1186/s13643-016-0343-0


Keywords

  • Systematic review
  • Literature selection
  • Research methods
  • Research methodology
  • Overview of methods
  • Systematic methods overview
  • Review methods

Systematic Reviews

ISSN: 2046-4053



Harvey Cushing/John Hay Whitney Medical Library


YSN Doctoral Programs: Steps in Conducting a Literature Review


What is a literature review?

A literature review is an integrated analysis -- not just a summary -- of scholarly writings and other relevant evidence related directly to your research question.  That is, it represents a synthesis of the evidence that provides background information on your topic and shows an association between the evidence and your research question.

A literature review may be a stand-alone work or the introduction to a larger research paper, depending on the assignment.  Rely heavily on the guidelines your instructor has given you.

Why is it important?

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Discovers relationships between research studies/ideas.
  • Identifies major themes, concepts, and researchers on a topic.
  • Identifies critical gaps and points of disagreement.
  • Discusses further research questions that logically come out of the previous studies.

APA7 Style resources

APA Style Blog - for those harder-to-find answers

1. Choose a topic. Define your research question.

Your literature review should be guided by your central research question.  The literature represents background and research developments related to a specific research question, interpreted and analyzed by you in a synthesized way.

  • Make sure your research question is not too broad or too narrow.  Is it manageable?
  • Begin writing down terms that are related to your question. These will be useful for searches later.
  • If you have the opportunity, discuss your topic with your professor and your classmates.

2. Decide on the scope of your review

How many studies do you need to look at? How comprehensive should it be? How many years should it cover? 

  • This may depend on your assignment.  How many sources does the assignment require?

3. Select the databases you will use to conduct your searches.

Make a list of the databases you will search. 

Where to find databases:

  • use the tabs on this guide
  • Find other databases in the Nursing Information Resources web page
  • More on the Medical Library web page
  • ... and more on the Yale University Library web page

4. Conduct your searches to find the evidence. Keep track of your searches.

  • Use the key words in your question, as well as synonyms for those words, as terms in your search. Use the database tutorials for help.
  • Save the searches in the databases. This saves time when you want to redo, or modify, the searches. It is also helpful to use them as a guide if the searches are not finding any useful results.
  • Review the abstracts of research studies carefully. This will save you time.
  • Use the bibliographies and references of research studies you find to locate others.
  • Check with your professor, or a subject expert in the field, if you are missing any key works in the field.
  • Ask your librarian for help at any time.
  • Use a citation manager, such as EndNote, as the repository for your citations. See the EndNote tutorials for help.

Review the literature

Some questions to help you analyze the research:

  • What was the research question of the study you are reviewing? What were the authors trying to discover?
  • Was the research funded by a source that could influence the findings?
  • What were the research methodologies? Analyze the study's literature review, the samples and variables used, the results, and the conclusions.
  • Does the research seem to be complete? Could it have been conducted more soundly? What further questions does it raise?
  • If there are conflicting studies, why do you think that is?
  • How are the authors viewed in the field? Has this study been cited? If so, how has it been analyzed?

Tips: 

  • Review the abstracts carefully.  
  • Keep careful notes so that you may track your thought processes during the research process.
  • Create a matrix of the studies for easy analysis, and synthesis, across all of the studies.


Which review is that? A guide to review types

  • Which review is that?
  • Review Comparison Chart
  • Decision Tool
  • Critical Review
  • Integrative Review
  • Narrative Review
  • State of the Art Review
  • Narrative Summary
  • Systematic Review
  • Meta-analysis
  • Comparative Effectiveness Review
  • Diagnostic Systematic Review
  • Network Meta-analysis
  • Prognostic Review
  • Psychometric Review
  • Review of Economic Evaluations
  • Systematic Review of Epidemiology Studies
  • Living Systematic Reviews
  • Umbrella Review
  • Review of Reviews
  • Rapid Review
  • Rapid Evidence Assessment
  • Rapid Realist Review
  • Qualitative Evidence Synthesis
  • Qualitative Interpretive Meta-synthesis
  • Qualitative Meta-synthesis
  • Qualitative Research Synthesis
  • Framework Synthesis - Best-fit Framework Synthesis
  • Meta-aggregation
  • Meta-ethnography
  • Meta-interpretation
  • Meta-narrative Review
  • Meta-summary
  • Thematic Synthesis
  • Mixed Methods Synthesis
  • Narrative Synthesis
  • Bayesian Meta-analysis
  • EPPI-Centre Review
  • Critical Interpretive Synthesis
  • Realist Synthesis - Realist Review
  • Scoping Review
  • Mapping Review
  • Systematised Review
  • Concept Synthesis
  • Expert Opinion - Policy Review
  • Technology Assessment Review

Methodological Review

  • Systematic Search and Review

A methodological review is "a type of systematic secondary research (i.e., research synthesis) which focuses on summarising the state-of-the-art methodological practices of research in a substantive field or topic" (Chong et al., 2021).

Methodological reviews "can be performed to examine any methodological issues relating to the design, conduct and review of research studies and also evidence syntheses" (Munn et al., 2018).

Further Reading/Resources

Clarke, M., Oxman, A. D., Paulsen, E., Higgins, J. P. T., & Green, S. (2011). Appendix A: Guide to the contents of a Cochrane Methodology protocol and review. Cochrane Handbook for systematic reviews of interventions . Full Text PDF

Aguinis, H., Ramani, R. S., & Alabduljader, N. (2023). Best-Practice Recommendations for Producers, Evaluators, and Users of Methodological Literature Reviews. Organizational Research Methods, 26(1), 46-76. https://doi.org/10.1177/1094428120943281 Full Text

Jha, C. K., & Kolekar, M. H. (2021). Electrocardiogram data compression techniques for cardiac healthcare systems: A methodological review. IRBM . Full Text

References

Munn, Z., Stern, C., Aromataris, E., Lockwood, C., & Jordan, Z. (2018). What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Medical Research Methodology, 18(1), 1-9. Full Text

Chong, S. W., & Reinders, H. (2021). A methodological review of qualitative research syntheses in CALL: The state-of-the-art. System, 103, 102646. Full Text



Research Methods and Design

  • Action Research
  • Case Study Design

Literature Review

  • Quantitative Research Methods
  • Qualitative Research Methods
  • Mixed Methods Study
  • Indigenous Research and Ethics This link opens in a new window
  • Identifying Empirical Research Articles This link opens in a new window
  • Research Ethics and Quality
  • Data Literacy
  • Get Help with Writing Assignments

A literature review is a discussion of the literature (aka. the "research" or "scholarship") surrounding a certain topic. A good literature review doesn't simply summarize the existing material, but provides thoughtful synthesis and analysis. The purpose of a literature review is to orient your own work within an existing body of knowledge. A literature review may be written as a standalone piece or be included in a larger body of work.

You can read more about literature reviews, what they entail, and how to write one, using the resources below. 

Am I the only one struggling to write a literature review?

Dr. Zina O'Leary explains the misconceptions and struggles students often have with writing a literature review. She also provides step-by-step guidance on writing a persuasive literature review.

An Introduction to Literature Reviews

Dr. Eric Jensen, Professor of Sociology at the University of Warwick, and Dr. Charles Laurie, Director of Research at Verisk Maplecroft, explain how to write a literature review, and why researchers need to do so. Literature reviews can be stand-alone research or part of a larger project. They communicate the state of academic knowledge on a given topic, specifically detailing what is still unknown.

This is the first video in a whole series about literature reviews. You can find the rest of the series in our SAGE database, Research Methods:

Videos

Videos covering research methods and statistics

Identify Themes and Gaps in Literature (with real examples) | Scribbr

Finding connections between sources is key to organizing the arguments and structure of a good literature review. In this video, you'll learn how to identify themes, debates, and gaps between sources, using examples from real papers.

4 Tips for Writing a Literature Review's Intro, Body, and Conclusion | Scribbr

While each review will be unique in its structure -- based on both the existing body of literature and the overall goals of your own paper, dissertation, or research -- this video from Scribbr does a good job simplifying the goals of writing a literature review for those who are new to the process. In this video, you’ll learn what to include in each section, as well as 4 tips for the main body, illustrated with an example.


  • Literature Review This chapter in SAGE's Encyclopedia of Research Design describes the types of literature reviews and scientific standards for conducting literature reviews.
  • UNC Writing Center: Literature Reviews This handout from the Writing Center at UNC will explain what literature reviews are and offer insights into the form and construction of literature reviews in the humanities, social sciences, and sciences.
  • Purdue OWL: Writing a Literature Review This overview of literature reviews comes from Purdue's Online Writing Lab. It explains the basic why, what, and how of writing a literature review.

Organizational Tools for Literature Reviews

One of the most daunting aspects of writing a literature review is organizing your research. There are a variety of strategies that you can use to help you in this task. We've highlighted just a few ways writers keep track of all that information! You can use a combination of these tools or come up with your own organizational process. The key is choosing something that works with your own learning style.

Citation Managers

Citation managers are great tools, in general, for organizing research, but can be especially helpful when writing a literature review. You can keep all of your research in one place, take notes, and organize your materials into different folders or categories. Read more about citation managers here:

  • Manage Citations & Sources

Concept Mapping

Some writers use concept mapping (sometimes called flow or bubble charts or "mind maps") to help them visualize the ways in which the research they found connects.


There is no right or wrong way to make a concept map. There are a variety of online tools that can help you create a concept map or you can simply put pen to paper. To read more about concept mapping, take a look at the following help guides:

  • Using Concept Maps From Williams College's guide, Literature Review: A Self-guided Tutorial

Synthesis Matrix

A synthesis matrix is a chart you can use to help you organize your research into thematic categories. Organizing your research into a matrix, like the examples below, can help you visualize the ways in which your sources connect. 

  • Walden University Writing Center: Literature Review Matrix Find a variety of literature review matrix examples and templates from Walden University.
  • Writing A Literature Review and Using a Synthesis Matrix An example synthesis matrix created by NC State University Writing and Speaking Tutorial Service Tutors. If you would like a copy of this synthesis matrix in a different format, like a Word document, please ask a librarian. CC-BY-SA 3.0

Methodological Approaches to Literature Review

  • Living reference work entry
  • First Online: 09 May 2023

  • Dennis Thomas 2 ,
  • Elida Zairina 3 &
  • Johnson George 4  


The literature review can serve various functions in the contexts of education and research. It aids in identifying knowledge gaps, informing research methodology, and developing a theoretical framework during the planning stages of a research study or project, as well as reporting of review findings in the context of the existing literature. This chapter discusses the methodological approaches to conducting a literature review and offers an overview of different types of reviews. There are various types of reviews, including narrative reviews, scoping reviews, and systematic reviews with reporting strategies such as meta-analysis and meta-synthesis. Review authors should consider the scope of the literature review when selecting a type and method. Being focused is essential for a successful review; however, this must be balanced against the relevance of the review to a broad audience.



Author information

Authors and Affiliations

Centre of Excellence in Treatable Traits, College of Health, Medicine and Wellbeing, University of Newcastle, Hunter Medical Research Institute Asthma and Breathing Programme, Newcastle, NSW, Australia

Dennis Thomas

Department of Pharmacy Practice, Faculty of Pharmacy, Universitas Airlangga, Surabaya, Indonesia

Elida Zairina

Centre for Medicine Use and Safety, Monash Institute of Pharmaceutical Sciences, Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, VIC, Australia

Johnson George


Corresponding author

Correspondence to Johnson George.

Section Editor information

College of Pharmacy, Qatar University, Doha, Qatar

Derek Charles Stewart

Department of Pharmacy, University of Huddersfield, Huddersfield, United Kingdom

Zaheer-Ud-Din Babar


Copyright information

© 2023 Springer Nature Switzerland AG

About this entry

Cite this entry.

Thomas, D., Zairina, E., George, J. (2023). Methodological Approaches to Literature Review. In: Encyclopedia of Evidence in Pharmaceutical Public Health and Health Services Research in Pharmacy. Springer, Cham. https://doi.org/10.1007/978-3-030-50247-8_57-1


DOI : https://doi.org/10.1007/978-3-030-50247-8_57-1

Received: 22 February 2023

Accepted: 22 February 2023

Published: 09 May 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-50247-8

Online ISBN: 978-3-030-50247-8

eBook Packages: Springer Reference Biomedicine and Life Sciences; Reference Module Biomedical and Life Sciences



What is a Literature Review? | Guide, Template, & Examples

Published on 22 February 2022 by Shona McCombes . Revised on 7 June 2022.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarise sources – it analyses, synthesises, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • Introduction
  • Why write a literature review?
  • Examples of literature reviews
  • Step 1: Search for relevant literature
  • Step 2: Evaluate and select sources
  • Step 3: Identify themes, debates and gaps
  • Step 4: Outline your literature review’s structure
  • Step 5: Write your literature review
  • Frequently asked questions about literature reviews


When you write a dissertation or thesis, you will have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position yourself in relation to other researchers and theorists
  • Show how your dissertation addresses a gap or contributes to a debate

You might also have to write a literature review as a stand-alone assignment. In this case, the purpose is to evaluate the current state of research and demonstrate your knowledge of scholarly debates around a topic.

The content will look slightly different in each case, but the process of conducting a literature review follows the same steps. We’ve written a step-by-step guide that you can follow below.


Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” ( Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” ( Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” ( Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” ( Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

Download Word doc Download Google doc

Before you begin searching for literature, you need a clearly defined topic .

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research objectives and questions .

If you are writing a literature review as a stand-alone assignment, you will have to choose a focus and develop a central question to direct your search. Unlike a dissertation research question, this question has to be answerable without collecting original data. You should be able to answer it based only on a review of existing publications.

Make a list of keywords

Start by creating a list of keywords related to your research topic. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list if you discover new keywords in the process of your literature search.

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can use Boolean operators (AND to combine concepts, OR to include synonyms, NOT to exclude terms) to help narrow down your search.

Read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

To identify the most important publications on your topic, take note of recurring citations. If the same authors, books or articles keep appearing in your reading, make sure to seek them out.

You probably won’t be able to read absolutely everything that has been written on the topic – you’ll have to evaluate which sources are most relevant to your questions.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models and methods? Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • How does the publication contribute to your understanding of the topic? What are its key insights and arguments?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can find out how many times an article has been cited on Google Scholar – a high citation count suggests the article has been influential in the field and is likely worth including in your literature review.

The scope of your review will depend on your topic and discipline: in the sciences you usually only review recent literature, but in the humanities you might take a long historical perspective (for example, to trace how a concept has changed in meaning over time).

Remember that you can use our template to summarise and evaluate sources you’re thinking about using!

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It’s important to keep track of your sources with references to avoid plagiarism . It can be helpful to make an annotated bibliography, where you compile full reference information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.
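If you prefer to keep such notes in a structured form, one possible sketch (in Python, with entirely illustrative field names and example data) is a simple record per source that also tags each entry with themes for later grouping:

```python
# Illustrative sketch of structured note-taking for an annotated bibliography.
# Field names and example data are hypothetical, not a standard format.
from dataclasses import dataclass, field

@dataclass
class SourceNote:
    citation: str          # full reference in your citation style
    summary: str           # what the study asked and what it found
    evaluation: str = ""   # strengths, weaknesses, relevance to your question
    themes: list = field(default_factory=list)  # tags for later grouping

notes = [
    SourceNote(
        citation="Author, A. (2020). Example title. Journal, 1(1), 1-10.",
        summary="Surveyed 500 teenagers on Instagram use and self-esteem.",
        evaluation="Large sample, but usage data were self-reported.",
        themes=["body image", "survey methods"],
    ),
]

# Later, group citations by theme to plan a thematic structure:
by_theme = {}
for note in notes:
    for theme in note.themes:
        by_theme.setdefault(theme, []).append(note.citation)
```

The theme tags pay off in the next step, when you look for patterns and debates across your sources.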

You can use our free APA Reference Generator for quick, correct, consistent citations.

To begin organising your literature review’s argument and structure, you need to understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly-visual platforms like Instagram and Snapchat – this is a gap that you could address in your own research.

There are various approaches to organising the body of a literature review. You should have a rough idea of your strategy before you start writing.

Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarising sources in order.

Try to analyse patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

If you have found some recurring central themes, you can organise your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework . You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction , a main body, and a conclusion . What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

If you are writing the literature review as part of your dissertation or thesis, reiterate your central problem or research question and give a brief summary of the scholarly context. You can emphasise the timeliness of the topic (“many recent studies have focused on the problem of x”) or highlight a gap in the literature (“while there has been much research on x, few researchers have taken y into consideration”).

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, make sure to follow these tips:

  • Summarise and synthesise: give an overview of the main points of each source and combine them into a coherent whole.
  • Analyse and interpret: don’t just paraphrase other researchers – add your own interpretations, discussing the significance of findings in relation to the literature as a whole.
  • Critically evaluate: mention the strengths and weaknesses of your sources.
  • Write in well-structured paragraphs: use transitions and topic sentences to draw connections, comparisons and contrasts.

In the conclusion, you should summarise the key findings you have taken from the literature and emphasise their significance.

If the literature review is part of your dissertation or thesis, reiterate how your research addresses gaps and contributes new knowledge, or discuss how you have drawn on existing theories and methods to build a framework for your research. This can lead directly into your methodology section.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a dissertation , thesis, research paper , or proposal .

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your  dissertation . After the introduction , it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology .

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2022, June 07). What is a Literature Review? | Guide, Template, & Examples. Scribbr. Retrieved 27 September 2024, from https://www.scribbr.co.uk/thesis-dissertation/literature-review/

Shona McCombes

Marshall University

SOC 200 - Sims: How to Write a Lit Review

  • What are Literature Reviews?
  • How to Write a Lit Review
  • How to Choose a Topic
  • Finding the Literature

How to write a literature review

Below are the steps you should follow when crafting a lit review for your class assignment.

  • It's preferable if you can select a topic that you find interesting, because this will make the work seem less like work. 
  • It's also important to select a topic that many researchers have already explored. This way, you'll actually have "literature" to "review."
  • Sometimes, doing a very general search and reading other literature reviews can reveal a topic or avenue of research to you. 
  • It's important to gain an understanding of your topic's research history, in order to properly comprehend how and why the current (emerging) research exists.
  • One trick is to look at the References (aka Bibliographies aka Works Cited pages) of any especially relevant articles in order to find additional sources. There is often overlap between works, and if you're paying attention, one source can point you to several others.
  • One method is to start with the most recently-published research and then use their citations to identify older research, allowing you to piece together a timeline and work backwards. 
  • Chronologically : discuss the literature in order of its writing/publication. This will demonstrate a change in trends over time, and/or detail a history of controversy in the field, and/or illustrate developments in the field.
  • Thematically : group your sources by subject or theme. This will show the variety of angles from which your topic has been studied. This method works well if you are trying to identify a sub-topic that has so far been overlooked by other researchers.
  • Methodologically : group your sources by methodology. For example, divide the literature into categories like qualitative versus quantitative, or by population or geographical region, etc. 
  • Theoretically : group your sources by theoretical lens. Your textbook should have a section(s) dedicated to the various theories in your field. If you're unsure, you should ask your professor.
  • Are there disagreements on some issues, and consensus on others?
  • How does this impact the path of research and discovery?
  • Many articles will have a Limitations section, or a Discussion section, wherein suggestions are provided for next steps to further the research.
  • These are goldmines for identifying possible directions for future research. 
  • Identifying any gaps in the literature that are of a particular interest to your research goals will help you justify why your own research should be performed. 
  • Be selective about which points from the source you use. The information should be the most important and the most relevant. 
  • Use direct quotes sparingly, and don't rely too heavily on summaries and paraphrasing. You should be drawing conclusions about how the literature relates to your own analysis or the other literature. 
  • Synthesize your sources. The goal is not to make a list of summaries, but to show how the sources relate to one another and to your own analysis. 
  • At the end, make suggestions for future research. What subjects, populations, methodologies, or theoretical lenses warrant further exploration? What common flaws or biases did you identify that could be corrected in future studies? 
  • Common citation styles for sociology classes include APA and ASA.

Understanding how a literature review is structured will help you as you craft your own. 

Below are example articles that you should review in order to understand why they are written the way they are.

Below are some very good examples of Literature Reviews:

Cyberbullying: How Physical Intimidation Influences the Way People are Bullied

Use of Propofol and Emergence Agitation in Children

Eternity and Immortality in Spinoza's 'Ethics'

As you read these, take note of the sections that comprise the main structure of each one:

  • Introduction 
  • Summarize sources
  • Synthesize sources

Below are some articles that provide very good examples of an "Introduction" section, which includes a "Review of the Literature."

  • Sometimes, there is both an Introduction section, and a separate Review of the Literature section (oftentimes, it simply depends on the publication)

Krimm, H., & Lund, E. (2021). Efficacy of online learning modules for teaching dialogic reading strategies and phonemic awareness.  Language, Speech & Hearing Services in Schools,  52 (4), 1020-1030.  https://doi.org/10.1044/2021_LSHSS-21-00011


Melfsen, S., Jans, T., Romanos, M., & Walitza, S. (2022). Emotion regulation in selective mutism: A comparison group study in children and adolescents with selective mutism.  Journal of Psychiatric Research,  151 , 710-715.  https://doi.org/10.1016/j.jpsychires.2022.05.040

Citation Resources

  • MU Library's Citing Sources page
  • Purdue OWL's APA Guide
  • APA Citation Style - Quick Guide
  • Purdue OWL's ASA Guide
  • ASA Citation Style - Quick Tips

Suggested Reading

  • How to: Conduct a Lit Review (from Central Michigan University)
  • Purdue OWL Writing Lab's Advice for Writing a Lit Review

How to Read a Scholarly Article

Read:

  • Things to consider when reading a scholarly article This helpful guide, from Meriam Library at California State University in Chico, explains what a scholarly article is and provides tips for reading them.

  Watch:

  • How to read a scholarly article (YouTube) This tutorial, from Western University, quickly and efficiently describes how to read a scholarly article.
  • Last Updated: Sep 27, 2024 3:57 PM
  • URL: https://libguides.marshall.edu/soc200-sims

Research Methods

  • Getting Started
  • Literature Review Research
  • Research Design
  • Research Design By Discipline
  • SAGE Research Methods
  • Teaching with SAGE Research Methods

Literature Review

  • What is a Literature Review?
  • What is NOT a Literature Review?
  • Purposes of a Literature Review
  • Types of Literature Reviews
  • Literature Reviews vs. Systematic Reviews
  • Systematic vs. Meta-Analysis

Literature Review  is a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works.

Also, we can define a literature review as the collected body of scholarly works related to a topic:

  • Summarizes and analyzes previous research relevant to a topic
  • Includes scholarly books and articles published in academic journals
  • Can be a specific scholarly paper or a section in a research paper

The objective of a Literature Review is to find previously published scholarly works relevant to a specific topic.

  • Help gather ideas or information
  • Keep up to date with current trends and findings
  • Help develop new questions

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Helps focus your own research questions or problems
  • Discovers relationships between research studies/ideas.
  • Suggests unexplored ideas or populations
  • Identifies major themes, concepts, and researchers on a topic.
  • Tests assumptions; may help counter preconceived ideas and remove unconscious bias.
  • Identifies critical gaps, points of disagreement, or potentially flawed methodology or theoretical approaches.
  • Indicates potential directions for future research.

All content in this section is from Literature Review Research from Old Dominion University 

Keep in mind the following, a literature review is NOT:

Not an essay 

Not an annotated bibliography  in which you summarize each article that you have reviewed.  A literature review goes beyond basic summarizing to focus on the critical analysis of the reviewed works and their relationship to your research question.

Not a research paper   where you select resources to support one side of an issue versus another.  A lit review should explain and consider all sides of an argument in order to avoid bias, and areas of agreement and disagreement should be highlighted.

A literature review serves several purposes. For example, it

  • provides thorough knowledge of previous studies; introduces seminal works.
  • helps focus one’s own research topic.
  • identifies a conceptual framework for one’s own research questions or problems; indicates potential directions for future research.
  • suggests previously unused or underused methodologies, designs, quantitative and qualitative strategies.
  • identifies gaps in previous studies; identifies flawed methodologies and/or theoretical approaches; avoids replication of mistakes.
  • helps the researcher avoid repetition of earlier research.
  • suggests unexplored populations.
  • determines whether past studies agree or disagree; identifies controversy in the literature.
  • tests assumptions; may help counter preconceived ideas and remove unconscious bias.

As Kennedy (2007) notes*, it is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the original studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally and become part of the lore of the field. In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews.

Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are several approaches to how they can be done, depending upon the type of analysis underpinning your study. Listed below are definitions of types of literature reviews:

Argumentative Review      This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews.

Integrative Review      Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication.

Historical Review      Few things rest in isolation from historical precedent. Historical reviews are focused on examining research throughout a period of time, often starting with the first time an issue, concept, theory, phenomena emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.

Methodological Review      A review does not always focus on what someone said [content], but on how they said it [method of analysis]. This approach provides a framework of understanding at different levels (i.e., theory, substantive fields, research approaches, and data collection and analysis techniques). It enables researchers to draw on a wide variety of knowledge, ranging from the conceptual level to practical documents for use in fieldwork, in areas such as ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. It also helps highlight many ethical issues that we should be aware of and consider as we go through our study.

Systematic Review      This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyse data from the studies that are included in the review. Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?"

Theoretical Review      The purpose of this form is to concretely examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and helps develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.

* Kennedy, Mary M. "Defining a Literature."  Educational Researcher  36 (April 2007): 139-147.

All content in this section is from The Literature Review created by Dr. Robert Larabee USC

Robinson, P. and Lowe, J. (2015),  Literature reviews vs systematic reviews.  Australian and New Zealand Journal of Public Health, 39: 103-103. doi: 10.1111/1753-6405.12393


What's in the name? The difference between a Systematic Review and a Literature Review, and why it matters . By Lynn Kysh from University of Southern California


Systematic review or meta-analysis?

A  systematic review  answers a defined research question by collecting and summarizing all empirical evidence that fits pre-specified eligibility criteria.

A  meta-analysis  is the use of statistical methods to summarize the results of these studies.

Systematic reviews, just like other research articles, can be of varying quality. They are a significant piece of work (the Centre for Reviews and Dissemination at York estimates that a team will take 9-24 months), and to be useful to other researchers and practitioners they should have:

  • clearly stated objectives with pre-defined eligibility criteria for studies
  • explicit, reproducible methodology
  • a systematic search that attempts to identify all studies
  • assessment of the validity of the findings of the included studies (e.g. risk of bias)
  • systematic presentation, and synthesis, of the characteristics and findings of the included studies

Not all systematic reviews contain meta-analysis. 

Meta-analysis is the use of statistical methods to summarize the results of independent studies. By combining information from all relevant studies, meta-analysis can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review.  More information on meta-analyses can be found in  Cochrane Handbook, Chapter 9 .

A meta-analysis goes beyond critique and integration and conducts secondary statistical analysis on the outcomes of similar studies.  It is a systematic review that uses quantitative methods to synthesize and summarize the results.

An advantage of a meta-analysis is the ability to be completely objective in evaluating research findings.  Not all topics, however, have sufficient research evidence to allow a meta-analysis to be conducted.  In that case, an integrative review is an appropriate strategy. 
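To make the statistical idea concrete, the sketch below shows the core arithmetic of a fixed-effect meta-analysis using inverse-variance weighting. The weighting formula is standard, but the effect sizes and variances here are invented purely for illustration; real analyses rely on dedicated tools and the guidance in the Cochrane Handbook:

```python
# Illustrative sketch: fixed-effect meta-analysis via inverse-variance weights.
# Effect sizes and variances below are invented example numbers.
import math

def fixed_effect_pool(effects, variances):
    """Pooled effect = sum(w_i * y_i) / sum(w_i), where w_i = 1 / v_i."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

effects = [0.30, 0.45, 0.25]    # hypothetical standardized mean differences
variances = [0.04, 0.09, 0.02]  # hypothetical within-study variances
pooled, se = fixed_effect_pool(effects, variances)
print(f"pooled effect = {pooled:.3f}, "
      f"95% CI = ({pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f})")
```

Note how the most precise study (smallest variance) dominates the pooled estimate; this is exactly the sense in which meta-analysis yields more precise estimates than any single included study.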

Some of the content in this section is from Systematic reviews and meta-analyses: step by step guide created by Kate McAllister.

  • Last Updated: Jul 15, 2024 10:34 AM
  • URL: https://guides.lib.udel.edu/researchmethods


Research Methods: Literature Reviews

  • Annotated Bibliographies
  • Literature Reviews
  • Scoping Reviews
  • Systematic Reviews
  • Scholarship of Teaching and Learning
  • Persuasive Arguments
  • Subject Specific Methodology

A literature review involves researching, reading, analyzing, evaluating, and summarizing scholarly literature (typically journals and articles) about a specific topic. The results of a literature review may be an entire report or article, OR may be part of an article, thesis, dissertation, or grant proposal. A literature review helps the author learn about the history and nature of their topic, and identify research gaps and problems.

Steps & Elements

Problem formulation

  • Determine your topic and its components by asking a question
  • Research: locate literature related to your topic to identify the gap(s) that can be addressed
  • Read: read the articles or other sources of information
  • Analyze: assess the findings for relevancy
  • Evaluate: determine how the articles are relevant to your research and what their key findings are
  • Synthesize: write about the key findings and how they relate to your research

Elements of a Literature Review

  • Summarize subject, issue or theory under consideration, along with objectives of the review
  • Divide works under review into categories (e.g. those in support of a particular position, those against, those offering alternative theories entirely)
  • Explain how each work is similar to and how it varies from the others
  • Conclude which works make the strongest argument, are most convincing, and make the greatest contribution to the understanding and development of the area of research

Writing a Literature Review Resources

  • How to Write a Literature Review From the Wesleyan University Library
  • Write a Literature Review From the University of California Santa Cruz Library. A Brief overview of a literature review, includes a list of stages for writing a lit review.
  • Literature Reviews From the University of North Carolina Writing Center. Detailed information about writing a literature review.
  • Undertaking a literature review: a step-by-step approach Cronin, P., Ryan, F., & Coughan, M. (2008). Undertaking a literature review: A step-by-step approach. British Journal of Nursing, 17(1), p.38-43


Literature Review Tutorial

  • Last Updated: Jul 8, 2024 3:13 PM
  • URL: https://guides.auraria.edu/researchmethods


  • Perspect Med Educ
  • v.11(5); 2022 Oct


State-of-the-art literature review methodology: A six-step approach for knowledge synthesis

Erin S. Barry

1 Department of Anesthesiology, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD USA

2 School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands

Jerusalem Merkebu

3 Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Bethesda, MD USA

Lara Varpio

Introduction

Researchers and practitioners rely on literature reviews to synthesize large bodies of knowledge. Many types of literature reviews have been developed, each targeting a specific purpose. However, these syntheses are hampered if the review type’s paradigmatic roots, methods, and markers of rigor are only vaguely understood. One literature review type whose methodology has yet to be elucidated is the state-of-the-art (SotA) review. If medical educators are to harness SotA reviews to generate knowledge syntheses, we must understand and articulate the paradigmatic roots of, and methods for, conducting SotA reviews.

We reviewed 940 articles published between 2014–2021 labeled as SotA reviews. We (a) identified all SotA methods-related resources, (b) examined the foundational principles and techniques underpinning the reviews, and (c) combined our findings to inductively analyze and articulate the philosophical foundations, process steps, and markers of rigor.

In the 940 articles reviewed, nearly all manuscripts (98%) lacked citations for how to conduct a SotA review. The term “state of the art” was used in 4 different ways. Analysis revealed that SotA articles are grounded in relativism and subjectivism.

This article provides a 6-step approach for conducting SotA reviews. SotA reviews offer an interpretive synthesis that describes: This is where we are now. This is how we got here. This is where we could be going. This chronologically rooted narrative synthesis provides a methodology for reviewing large bodies of literature to explore why and how our current knowledge has developed and to offer new research directions.

Supplementary Information

The online version of this article (10.1007/s40037-022-00725-9) contains supplementary material, which is available to authorized users.

Literature reviews play a foundational role in scientific research; they support knowledge advancement by collecting, describing, analyzing, and integrating large bodies of information and data [ 1 , 2 ]. Indeed, as Snyder [ 3 ] argues, all scientific disciplines require literature reviews grounded in a methodology that is accurate and clearly reported. Many types of literature reviews have been developed, each with a unique purpose, distinct methods, and distinguishing characteristics of quality and rigor [ 4 , 5 ].

Each review type offers valuable insights if rigorously conducted [ 3 , 6 ]. Problematically, this is not consistently the case, and the consequences can be dire. Medical education’s policy makers and institutional leaders rely on knowledge syntheses to inform decision making [ 7 ]. Medical education curricula are shaped by these syntheses. Our accreditation standards are informed by these integrations. Our patient care is guided by these knowledge consolidations [ 8 ]. Clearly, it is important for knowledge syntheses to be held to the highest standards of rigor. And yet, that standard is not always maintained. Sometimes scholars fail to meet the review’s specified standards of rigor; other times the markers of rigor have never been explicitly articulated. While we can do little about the former, we can address the latter. One popular literature review type whose methodology has yet to be fully described, vetted, and justified is the state-of-the-art (SotA) review.

While many types of literature reviews amalgamate bodies of literature, SotA reviews offer something unique. By looking across the historical development of a body of knowledge, SotA reviews delve into questions like: Why did our knowledge evolve in this way? What other directions might our investigations have taken? What turning points in our thinking should we revisit to gain new insights? A SotA review—a form of narrative knowledge synthesis [ 5 , 9 ]—acknowledges that history reflects a series of decisions and then asks what different decisions might have been made.

SotA reviews are frequently used in many fields including the biomedical sciences [ 10 , 11 ], medicine [ 12 – 14 ], and engineering [ 15 , 16 ]. However, SotA reviews are rarely seen in medical education; indeed, a bibliometric analysis of literature reviews published in 14 core medical education journals between 1999 and 2019 reported only 5 SotA reviews out of the 963 knowledge syntheses identified [ 17 ]. This is not to say that SotA reviews are absent; we suggest that they are often unlabeled. For instance, Schuwirth and van der Vleuten’s article “A history of assessment in medical education” [ 14 ] offers a temporally organized overview of the field’s evolving thinking about assessment. Similarly, McGaghie et al. published a chronologically structured review of simulation-based medical education research that “reviews and critically evaluates historical and contemporary research on simulation-based medical education” [ 18 , p. 50]. SotA reviews certainly have a place in medical education, even if that place is not explicitly signaled.

This lack of labeling is problematic since it conceals the purpose of, and work involved in, the SotA review synthesis. In a SotA review, the author(s) collects and analyzes the historical development of a field’s knowledge about a phenomenon, deconstructs how that understanding evolved, questions why it unfolded in specific ways, and posits new directions for research. Senior medical education scholars use SotA reviews to share their insights based on decades of work on a topic [ 14 , 18 ]; their junior counterparts use them to critique that history and propose new directions [ 19 ]. And yet, SotA reviews are generally not explicitly signaled in medical education. We suggest that at least two factors contribute to this problem. First, it may be that medical education scholars have yet to fully grasp the unique contributions SotA reviews provide. Second, the methodology and methods of SotA reviews are poorly reported, making this form of knowledge synthesis appear to lack rigor. Both factors are rooted in the same foundational problem: insufficient clarity about SotA reviews. In this study, we describe SotA review methodology so that medical educators can explicitly use this form of knowledge synthesis to further advance the field.

We developed a four-step research design to meet this goal, illustrated in Fig.  1 .


Four-step research design process used for developing a State-of-the-Art literature review methodology

Step 1: Collect SotA articles

To build our initial corpus of articles reporting SotA reviews, we searched PubMed using the strategy ("state of the art review"[ti] OR "state of the art review*") and limited our search to English-language articles published between 2014 and 2021. We strategically focused on PubMed, which includes MEDLINE, is considered the National Library of Medicine’s premier database of biomedical literature, and indexes health professions education and practice literature [ 20 ]. We limited our search to 2014–2021 to capture modern use of SotA reviews. Of the 960 articles identified, nine were excluded as duplicate, erratum, or corrigendum records, and full-text copies were unavailable for 11 records. The remaining articles ( n  = 940) constituted the corpus for analysis.
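For readers who want to trace the numbers, the search and exclusions above can be sketched as follows. The query builder is a hypothetical helper: the `[ti]` title tag comes from the reported strategy, but the `[dp]` date and `english[la]` filters are our assumption about how the limits were applied, not the authors' exact syntax. The exclusion arithmetic uses the counts from the text.

```python
# Sketch of the Step 1 corpus construction. Exclusion counts are from the
# text; the date/language filter syntax is an assumption, not the authors'
# exact search string.

def build_pubmed_query(phrase: str, start_year: int, end_year: int) -> str:
    """Combine a title-restricted phrase search with date and language limits."""
    term = f'("{phrase}"[ti] OR "{phrase}*")'
    date_filter = f'("{start_year}"[dp] : "{end_year}"[dp])'
    return f"{term} AND {date_filter} AND english[la]"

def corpus_size(identified: int, excluded: int, unavailable: int) -> int:
    """Articles remaining after removing duplicates/errata and missing full texts."""
    return identified - excluded - unavailable

query = build_pubmed_query("state of the art review", 2014, 2021)
n_corpus = corpus_size(identified=960, excluded=9, unavailable=11)  # 940
```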

Step 2: Compile all methods-related resources

EB, JM, or LV independently reviewed the 940 full-text articles to identify all references to resources that explained, informed, described, or otherwise supported the methods used for conducting the SotA review. Articles that met our criteria were obtained for analysis.

To ensure comprehensive retrieval, we also searched Scopus and Web of Science. Additionally, to find resources not indexed by these academic databases, we searched Google (see Electronic Supplementary Material [ESM] for the search strategies used for each database). EB also reviewed the first 50 items retrieved from each search, looking for additional relevant resources. None were identified. Via these strategies, nine articles were identified and added to the collection of methods-related resources for analysis.

Step 3: Extract data for analysis

In Step 3, we extracted three kinds of information from the 940 articles identified in Step 1. First, descriptive data on each article were compiled (i.e., year of publication and the academic domain targeted by the journal). Second, each article was examined and excerpts collected about how the term state-of-the-art review was used (i.e., as a label for a methodology in-and-of itself; as an adjective qualifying another type of literature review; as a term included in the paper’s title only; or in some other way). Finally, we extracted excerpts describing: the purposes and/or aims of the SotA review; the methodology informing and methods processes used to carry out the SotA review; outcomes of analyses; and markers of rigor for the SotA review.

Two researchers (EB and JM) independently coded 69 articles, achieving an interrater reliability of 94.2%. Any discrepancies were discussed and resolved. Given the high interrater reliability, the two authors split the remaining articles and coded them independently.
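The reliability figure can be illustrated with simple percent agreement; a sketch assuming percent agreement was the measure used. The 65-of-69 matching split below is our inference to reproduce 94.2% and is not reported in the text.

```python
# Percent agreement between two coders. The example split (65 matches out of
# 69 articles) is inferred for illustration only.

def percent_agreement(codes_a, codes_b):
    """Share of items (in %) on which both coders assigned the same code."""
    if len(codes_a) != len(codes_b):
        raise ValueError("both coders must rate the same set of articles")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

coder_a = ["methodology"] * 65 + ["adjective"] * 4
coder_b = ["methodology"] * 69
agreement = percent_agreement(coder_a, coder_b)  # ~94.2
```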

Step 4: Construct the SotA review methodology

The methods-related resources identified in Step 2 and the data extractions from Step 3 were inductively analyzed by LV and EB to identify statements and research processes that revealed the ontology (i.e., the nature of reality that was reflected) and the epistemology (i.e., the nature of knowledge) underpinning the descriptions of the reviews. These authors studied these data to determine if the synthesis adhered to an objectivist or a subjectivist orientation, and to synthesize the purposes realized in these papers.

To confirm these interpretations, LV and EB compared their ontology, epistemology, and purpose determinations against two expectations commonly required of objectivist synthesis methods (e.g., systematic reviews): an exhaustive search strategy and an appraisal of the quality of the research data. These expectations were considered indicators of a realist ontology and objectivist epistemology [ 21 ] (i.e., that a single correct understanding of the topic can be sought through objective data collection {e.g., systematic reviews [ 22 ]}). Conversely, the inverse of these expectations was considered an indicator of a relativist ontology and subjectivist epistemology [ 21 ] (i.e., that no single correct understanding of the topic is available; there are multiple valid understandings that can be generated, and so a subjective interpretation of the literature is sought {e.g., narrative reviews [ 9 ]}).

Once these interpretations were confirmed, LV and EB reviewed and consolidated the methods steps described in these data. Markers of rigor were then developed that aligned with the ontology, epistemology, and methods of SotA reviews.

Of the 940 articles identified in Step 1, 98% ( n  = 923) lacked citations or other references to resources that explained, informed, or otherwise supported the SotA review process. Of the 17 articles that included supporting information, 16 cited Grant and Booth’s description [ 4 ], which consists of five sentences describing the overall purpose of SotA reviews, three sentences noting perceived strengths, and four sentences articulating perceived weaknesses. This resource offers no guidance on how to conduct a SotA review, nor does it identify markers of rigor. The one article not referencing Grant and Booth used “an adapted comparative effectiveness research search strategy that was adapted by a health sciences librarian” [ 23 , p. 381]. One website citation was listed in support of this strategy; however, the page was no longer available in summer 2021. We determined that the corpus was uninformed by a cardinal resource or a publicly available methodology description.

In Step 2 we identified nine resources [ 4 , 5 , 24 – 28 ]; none described the methodology and/or processes of carrying out SotA reviews. Nor did they offer explicit descriptions of the ontology or epistemology underpinning SotA reviews. Instead, these resources provided short overview statements (none longer than one paragraph) about the review type [ 4 , 5 , 24 – 28 ]. Thus, we determined that, to date, there are no available methodology papers describing how to conduct a SotA review.

Step 3 revealed that “state of the art” was used in 4 different ways across the 940 articles (see Fig.  2 for the frequency with which each was used). In 71% ( n  = 665 articles), the phrase was used only in the title, abstract, and/or purpose statement of the article; the phrase did not appear elsewhere in the paper and no SotA methodology was discussed. Nine percent ( n  = 84) used the phrase as an adjective to qualify another literature review type and so relied entirely on the methodology of a different knowledge synthesis approach (e.g., “a state of the art systematic review [ 29 ]”). In 5% ( n  = 52) of the articles, the phrase was not used anywhere within the article; instead, “state of the art” was the type of article within a journal. In the remaining 15% ( n  = 139), the phrase denoted a specific methodology (see ESM for all methodology articles). Via Step 4’s inductive analysis, the following foundational principles of SotA reviews were developed: (1) the ontology, (2) epistemology, and (3) purpose of SotA reviews.


Four ways the term “state of the art” is used in the corpus and how frequently each is used
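As a quick arithmetic check, the four usage categories reported above account for the whole 940-article corpus; the category labels below are our shorthand for the descriptions in the text.

```python
# Distribution of the four uses of "state of the art" across the corpus
# (counts from the text; labels are shorthand).
counts = {
    "title/abstract/purpose statement only": 665,
    "adjective qualifying another review type": 84,
    "journal article type only": 52,
    "denotes a specific methodology": 139,
}
total = sum(counts.values())  # 940, the full corpus
percent = {label: round(100 * n / total, 1) for label, n in counts.items()}
```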

Ontology of SotA reviews: Relativism

SotA reviews rest on four propositions:

  • The literature addressing a phenomenon offers multiple perspectives on that topic (i.e., different groups of researchers may hold differing opinions and/or interpretations of data about a phenomenon).
  • The reality of the phenomenon itself cannot be completely perceived or understood (i.e., due to limitations [e.g., the capabilities of current technologies, a research team’s disciplinary orientation] we can only perceive a limited part of the phenomenon).
  • The reality of the phenomenon is a subjective and inter-subjective construction (i.e., what we understand about a phenomenon is built by individuals and so their individual subjectivities shape that understanding).
  • The context in which the review was conducted informs the review (e.g., a SotA review of literature about gender identity and sexual function will be synthesized differently by researchers in the domain of gender studies than by scholars working in sex reassignment surgery).

As these propositions suggest, SotA scholars bring their experiences, expectations, research purposes, and social (including academic) orientations to bear on the synthesis work. In other words, a SotA review synthesizes the literature based on a specific orientation to the topic being addressed. For instance, a SotA review written by senior scholars who are experts in the field of medical education may reflect on the turning points that shaped how our field evolved its modern practices of learner assessment, noting how the nature of the assessment problem has shifted: it was first a measurement problem, then a problem that embraced human judgment but required assessment expertise, and is now a whole-system problem to be addressed from an integrated, not a reductionist, perspective [ 12 ]. However, if other scholars were to examine this same history from a technological orientation, learner assessment could be framed as historically constrained by the media available through which to conduct assessment, pointing to how artificial intelligence is laying the foundation for the next wave of assessment in medical education [ 30 ].

Given these foundational propositions, SotA reviews are steeped in a relativist ontology—i.e., reality is socially and experientially informed and constructed, and so no single objective truth exists. Researchers’ interpretations reflect their conceptualization of the literature—a conceptualization that could change over time and that could conflict with the understandings of others.

Epistemology of SotA reviews: Subjectivism

SotA reviews embrace subjectivism. The knowledge generated through the review is value-dependent, growing out of the subjective interpretations of the researcher(s) who conducted the synthesis. The SotA review generates an interpretation of the data that is informed by the expertise, experiences, and social contexts of the researcher(s). Furthermore, the knowledge developed through SotA reviews is shaped by the historical point in time when the review was conducted. SotA reviews are thus steeped in the perspective that knowledge is shaped by individuals and their community, and is a synthesis that will change over time.

Purpose of SotA reviews

SotA reviews create a subjectively informed summary of modern thinking about a topic. As a chronologically ordered synthesis, SotA reviews describe the history of turning points in researchers’ understanding of a phenomenon to contextualize a description of modern scientific thinking on the topic. The review presents an argument about how the literature could be interpreted; it is not a definitive statement about how the literature should or must be interpreted. A SotA review explores: the pivotal points shaping the historical development of a topic, the factors that informed those changes in understanding, and the ways of thinking about and studying the topic that could inform the generation of further insights. In other words, the purpose of SotA reviews is to create a three-part argument: This is where we are now in our understanding of this topic. This is how we got here. This is where we could go next.

The SotA methodology

Based on study findings and analyses, we constructed a six-stage SotA review methodology. This six-stage approach is summarized and guiding questions are offered in Tab.  1 .

The six-stage approach to conducting a State-of-the-Art review

Stage 1: Determine initial research question and field of inquiry

Guiding questions:
  • What is (are) the research question(s) to be addressed?
  • What field of knowledge and/or practice will the search address?

IPE example: How has thinking about IPE evolved? What are the modern ways of thinking about and doing IPE?

Stage 2: Determine timeframe

Guiding questions:
  • Engage in a broad-scope overview around the topic to be addressed
  • What historical markers help demarcate the timeframe of now?
  • What timeframe can be justified to mark the beginning of the review?

IPE example: In 2010, the World Health Organization defined IPE [ ]. This is a sentinel moment that could be considered the start of modern, state-of-the-art thinking on IPE.

Stage 3: Finalize research question(s) to reflect timeframe

Guiding questions:
  • Do the broad-scope overview and historical markers change your research question(s)?
  • Does this information require you to adjust your research question(s)?

IPE example: What is the state-of-the-art way of conceptualizing and realizing IPE?

Stage 4: Develop search strategy to find relevant manuscripts

Guiding questions:
  • How far back on your timeframe do you need to go to report “this is how we got here”?
  • How could a librarian consultation enhance your search strategy?

IPE example: Given the Stage 2 finding, the search strategy will focus on (i) identifying changes in conceptualizing and realizing IPE pre-2010; and (ii) describing how IPE has been conceptualized and realized post-2010.

Stage 5: Analyses

Guiding questions:
  • Read the articles to become familiar with the literature
  • What are the similarities across articles?
  • What are the assumptions underpinning changes in understanding the topic over time?
  • What are the gaps and assumptions in current knowledge?
  • Which articles support/contradict your thinking?
  • Does the literature reflect the premise you set out to study?

This is how we got here & This is where we are now:
  • What is the history that gave rise to the modern way of thinking?
  • Which theories have shaped insights and understandings?

This is where we could be going:
  • What are the future directions of research?
  • Do certain authors dominate the literature?
  • Are there any marginalized points of view that should be considered?

IPE example: Analysis will identify pivotal moments in the IPE literature, focusing on what came before the 2010 definition, what came after 2010, and what future IPE researchers might consider.

Stage 6: Reflexivity

Guiding questions:
  • Provide a reflexivity description

IPE example: A robust reflexivity description is provided to explain how researcher subjectivities shaped interpretations of the IPE literature.

Stage 1: Determine initial research question and field of inquiry

In Stage 1, the researcher(s) creates an initial description of the topic to be summarized and so must determine what field of knowledge (and/or practice) the search will address. Knowledge developed through the SotA review process is shaped by the context informing it; thus, knowing the domain in which the review will be conducted is part of the review’s foundational work.

Stage 2: Determine timeframe

This stage involves determining the period of time that will be defined as SotA for the topic being summarized. The researcher(s) should engage in a broad-scope overview of the literature, reading across the range of literature available to develop insights into the historical development of knowledge on the topic, including the turning points that shape the current ways of thinking about a topic. Understanding the full body of literature is required to decide the dates or events that demarcate the timeframe of now in the first of the SotA’s three-part argument: where we are now . Stage 2 is complete when the researcher(s) can explicitly justify why a specific year or event is the right moment to mark the beginning of state-of-the-art thinking about the topic being summarized.

Stage 3: Finalize research question(s) to reflect timeframe

Based on the insights developed in Stage 2, the researcher(s) will likely need to revise their initial description of the topic to be summarized. The formal research question(s) framing the SotA review are finalized in Stage 3. The revised description of the topic, the research question(s), and the justification for the timeline start year must be reported in the review article. These are markers of rigor and prerequisites for moving to Stage 4.

Stage 4: Develop search strategy to find relevant articles

In Stage 4, the researcher(s) develops a search strategy to identify the literature that will be included in the SotA review. The researcher(s) needs to determine which literature databases contain articles from the domain of interest. Because the review describes how we got here , the review must include literature that predates the state-of-the-art timeframe, determined in Stage 2, to offer this historical perspective.

Developing the search strategy will be an iterative process of testing and revising the search strategy to enable the researcher(s) to capture the breadth of literature required to meet the SotA review purposes. A librarian should be consulted since their expertise can expedite the search processes and ensure that relevant resources are identified. The search strategy must be reported (e.g., in the manuscript itself or in a supplemental file) so that others may replicate the process if they so choose (e.g., to construct a different SotA review [and possible different interpretations] of the same literature). This too is a marker of rigor for SotA reviews: the search strategies informing the identification of literature must be reported.

Stage 5: Analyses

The literature analysis undertaken will reflect the subjective insights of the researcher(s); however, the foundational premises of inductive research should inform the analysis process. Therefore, the researcher(s) should begin by reading the articles in the corpus to become familiar with the literature. This familiarization work includes: noting similarities across articles, observing ways-of-thinking that have shaped current understandings of the topic, remarking on assumptions underpinning changes in understandings, identifying important decision points in the evolution of understanding, and taking notice of gaps and assumptions in current knowledge.

The researcher(s) can then generate premises for the state-of-the-art understanding of the history that gave rise to modern thinking, of the current body of knowledge, and of potential future directions for research. In this stage of the analysis, the researcher(s) should document the articles that support or contradict their premises, noting any collections of authors or schools of thinking that have dominated the literature, searching for marginalized points of view, and studying the factors that contributed to the dominance of particular ways of thinking. The researcher(s) should also observe historical decision points that could be revisited. Theory can be incorporated at this stage to help shape insights and understandings. It should be highlighted that not all corpus articles will be used in the SotA review; instead, the researcher(s) will sample across the corpus to construct a timeline that represents the seminal moments of the historical development of knowledge.

Next, the researcher(s) should verify the thoroughness and strength of their interpretations. To do this, the researcher(s) can select different articles included in the corpus and examine if those articles reflect the premises the researcher(s) set out. The researcher(s) may also seek out contradictory interpretations in the literature to be sure their summary refutes these positions. The goal of this verification work is not to engage in a triangulation process to ensure objectivity; instead, this process helps the researcher(s) ensure the interpretations made in the SotA review represent the articles being synthesized and respond to the interpretations offered by others. This is another marker of rigor for SotA reviews: the authors should engage in and report how they considered and accounted for differing interpretations of the literature, and how they verified the thoroughness of their interpretations.

Stage 6: Reflexivity

Given the relativist subjectivism of a SotA review, it is important that the manuscript offer insights into the subjectivity of the researcher(s). This reflexivity description should articulate how the subjectivity of the researcher(s) informed interpretations of the data. These reflections will also influence the suggested directions offered in the last part of the SotA three-part argument: where we could go next. This is the last marker of rigor for SotA reviews: researcher reflexivity must be considered and reported.
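The markers of rigor named across Stages 3 through 6 can be consolidated into a simple reporting checklist. A sketch only: the field names below are our own shorthand for the requirements stated in the text, not terminology from the methodology itself.

```python
# Consolidated checklist of the markers of rigor named across Stages 3-6.
# Marker phrasing is shorthand for the requirements stated in the text.

SOTA_MARKERS_OF_RIGOR = [
    "revised topic description and final research question(s) reported",
    "justification for the timeline start year reported",
    "search strategy reported (in the manuscript or a supplement)",
    "differing interpretations considered and verification process reported",
    "researcher reflexivity description reported",
]

def missing_markers(report: dict) -> list:
    """Return the markers a draft SotA review has not yet satisfied."""
    return [m for m in SOTA_MARKERS_OF_RIGOR if not report.get(m, False)]

# Example: a draft that has so far only reported its search strategy
draft = {"search strategy reported (in the manuscript or a supplement)": True}
outstanding = missing_markers(draft)  # four markers still unsatisfied
```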

SotA reviews have much to offer our field since they provide information on the historical progression of medical education’s understanding of a topic, the turning points that guided that understanding, and the potential next directions for future research. Those future directions may question the soundness of turning points and prior decisions, and thereby offer new paths of investigation. Since we were unable to find a description of the SotA review methodology, we inductively developed a description of the methodology—including its paradigmatic roots, the processes to be followed, and the markers of rigor—so that scholars can harness the unique affordances of this type of knowledge synthesis.

Given their chronology- and turning point-based orientation, SotA reviews are inherently different from other types of knowledge synthesis. For example, systematic reviews focus on specific research questions that are narrow in scope [ 32 , 33 ]; in contrast, SotA reviews present a broader historical overview of knowledge development and the decisions that gave rise to our modern understandings. Scoping reviews focus on mapping the present state of knowledge about a phenomenon including, for example, the data that are currently available, the nature of that data, and the gaps in knowledge [ 34 , 35 ]; conversely, SotA reviews offer interpretations of the historical progression of knowledge relating to a phenomenon centered on significant shifts that occurred during that history. SotA reviews focus on the turning points in the history of knowledge development to suggest how different decisions could give rise to new insights. Critical reviews draw on literature outside of the domain of focus to see if external literature can offer new ways of thinking about the phenomenon of interest (e.g., drawing on insights from insects’ swarm intelligence to better understand healthcare team adaptation [ 36 ]). SotA reviews focus on one domain’s body of literature to construct a timeline of knowledge development, demarcating where we are now, demonstrating how this understanding came to be via different turning points, and offering new research directions. Certainly, SotA reviews offer a unique kind of knowledge synthesis.

Our six-stage process for conducting these reviews reflects the subjectivist relativism that underpins the methodology. It aligns with the requirements proposed by others [ 24 – 27 ], what has been written about SotA reviews [ 4 , 5 ], and the current body of published SotA reviews. In contrast to existing guidance [ 4 , 5 , 20 – 23 ], our description offers a detailed reporting of the ontology, epistemology, and methodology processes for conducting the SotA review.

This explicit methodology description is essential since many academic journals list SotA reviews as an accepted type of literature review. For instance, Educational Research Review [ 24 ], the American Academy of Pediatrics [ 25 ], and Thorax all list SotA reviews as one of the types of knowledge syntheses they accept [ 27 ]. However, while SotA reviews are valued by academia, guidelines or specific methodology descriptions for researchers to follow when conducting this type of knowledge synthesis are conspicuously absent. If academics in general, and medical education scholars more specifically, are to take advantage of the insights that SotA reviews can offer, we need to rigorously engage in this synthesis work; to do that, we need clear descriptions of the methodology underpinning this review type. This article offers such a description. We hope that more medical educators will conduct SotA reviews to generate insights that will contribute to further advancing our field’s research and scholarship.

Acknowledgements

We thank Rhonda Allard for her help with the literature review and compiling all available articles. We also want to thank the PME editors who offered excellent development and refinement suggestions that greatly improved this manuscript.

Conflict of interest

E.S. Barry, J. Merkebu and L. Varpio declare that they have no competing interests.

The opinions and assertions contained in this article are solely those of the authors and are not to be construed as reflecting the views of the Uniformed Services University of the Health Sciences, the Department of Defense, or the Henry M. Jackson Foundation for the Advancement of Military Medicine.

  • UWF Libraries

Literature Review: Conducting & Writing

  • Sample Literature Reviews
  • Steps for Conducting a Lit Review
  • Finding "The Literature"
  • Organizing/Writing
  • APA Style
  • Chicago: Notes Bibliography
  • MLA Style

Sample Lit Reviews from Communication Arts

Have an exemplary literature review.

Note: These are sample literature reviews from a class that were given to us by an instructor when APA 6th edition was still in effect. These were excellent papers from her class, but it does not mean they are perfect or contain no errors. Thanks to the students who let us post!

  • Literature Review Sample 1
  • Literature Review Sample 2
  • Literature Review Sample 3

Have you written a stellar literature review you care to share for teaching purposes?

Are you an instructor who has received an exemplary literature review and have permission from the student to post?

Please contact Britt McGowan at [email protected] for inclusion in this guide. All disciplines welcome and encouraged.

  • Last Updated: Sep 11, 2024 1:37 PM
  • URL: https://libguides.uwf.edu/litreview

RMIT University

Teaching and Research guides

Literature reviews.

  • Introduction
  • Plan your search
  • Where to search
  • Refine and update your search
  • Finding grey literature
  • Writing the review
  • Referencing

  • Research methods overview
  • Finding literature on research methodologies
  • SAGE Research Methods Online

  • Get material not at RMIT
  • Further help

What are research methods?

Research methodology refers to the specific strategies, processes, or techniques utilised to collect and analyse information.

The methodology section of a research paper, or thesis, enables the reader to critically evaluate the study’s validity and reliability by addressing how the data was collected or generated, and how it was analysed.

Types of research methods

There are three main types of research methods, each using a different design for data collection.

(1) Qualitative research

Qualitative research gathers data about lived experiences, emotions or behaviours, and the meanings individuals attach to them. It helps researchers gain a better understanding of complex concepts, social interactions or cultural phenomena. This type of research is useful for exploring how or why things have occurred, interpreting events and describing actions.

Examples of qualitative research designs include:

  • focus groups
  • observations
  • document analysis
  • oral history or life stories  

(2) Quantitative research

Quantitative research gathers numerical data which can be ranked, measured or categorised through statistical analysis. It helps uncover patterns or relationships and supports making generalisations. This type of research is useful for finding out how many, how much, how often, or to what extent.

Examples of quantitative research designs include:

  • surveys or questionnaires
  • observation
  • document screening
  • experiments  

(3) Mixed method research

Mixed methods research integrates both qualitative and quantitative research. It provides a holistic approach, combining and analysing statistical data alongside deeper contextualised insights. Using mixed methods also enables triangulation, or verification, of the data from two or more sources.

Sometimes in your literature review, you might need to discuss and evaluate relevant research methodologies in order to justify your own choice of research methodology.

When searching for literature on research methodologies it is important to search across a range of sources. No single information source will supply all that you need. Selecting appropriate sources will depend upon your research topic.

Developing a robust search strategy will help reduce irrelevant results. It is good practice to plan a strategy before you start to search.

Search tips

(1) Free text keywords

Free text searching is the use of natural language words to conduct your search. Use selective free text keywords such as: phenomenological, "lived experience", "grounded theory", "life experiences", "focus groups", interview, quantitative, survey, validity, variance, correlation and statistical.

To locate books on your desired methodology, try LibrarySearch . Remember to use  refine  options such as books, ebooks, subject, and publication date.  
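A minimal sketch of how such free-text keywords might be combined into a boolean search string; `or_block` is a hypothetical helper, and the phrase-quoting convention should be adapted to the database you are searching.

```python
# Combine free-text keywords into a boolean search string; multi-word
# phrases are quoted so most databases treat them as exact phrases.

def or_block(terms):
    """OR-combine terms, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

methods_block = or_block(["phenomenological", "lived experience", "grounded theory"])
design_block = or_block(["interview", "focus groups"])
query = methods_block + " AND " + design_block
```

Combining an OR block per concept with AND between concepts keeps the search broad within each concept while still requiring every concept to appear.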

(2) Subject headings in Databases

Databases categorise their records using subject terms, or a controlled vocabulary (thesaurus). These subject headings may be useful to use, in addition to utilising free text keywords in a database search.

Subject headings will differ across databases. For example, the PubMed database uses 'Qualitative Research' whilst the CINAHL database uses 'Qualitative Studies'.

(3) Limiting search results

Databases enable sets of results to be limited or filtered by specific fields. Look for options such as Publication Type or Article Type and apply them to your search.

(4) Browse the Library shelves

To find books on research methods, browse the Library shelves at call number 001.42

  • SAGE Research Methods Online (SRMO) is a research tool supported by a newly devised taxonomy that links content and methods terms. It provides the most comprehensive picture available today of research methods (quantitative, qualitative and mixed methods) across the social and behavioural sciences.

SAGE Research Methods Overview (2:07 min) by SAGE Publishing (YouTube)


Creative Commons license: CC-BY-NC.

  • Last Updated: Sep 14, 2024 4:19 PM
  • URL: https://rmit.libguides.com/literature-review
  • Open access
  • Published: 25 September 2024

Sample size recalculation in three-stage clinical trials and its evaluation

  • Björn Bokelmann (ORCID: orcid.org/0000-0001-9049-2755) 1,
  • Geraldine Rauch (ORCID: orcid.org/0000-0002-2451-1660) 1, 3,
  • Jan Meis (ORCID: orcid.org/0000-0001-5407-7220) 2,
  • Meinhard Kieser (ORCID: orcid.org/0000-0003-2402-4333) 2 &
  • Carolin Herrmann (ORCID: orcid.org/0000-0003-2384-7303) 1

BMC Medical Research Methodology, volume 24, Article number: 214 (2024)


Background

In clinical trials, the determination of an adequate sample size is a challenging task, mainly due to the uncertainty about the value of the effect size and nuisance parameters. One method to deal with this uncertainty is a sample size recalculation. Thereby, an interim analysis is performed, based on which the sample size for the remaining trial is adapted. With few exceptions, previous literature has only examined the potential of recalculation in two-stage trials.

Methods

In our research, we address sample size recalculation in three-stage trials, i.e. trials with two pre-planned interim analyses. We show how recalculation rules from two-stage trials can be modified to be applicable to three-stage trials. We also illustrate how a performance measure, recently suggested for two-stage trial recalculation (the conditional performance score) can be applied to evaluate recalculation rules in three-stage trials, and we describe performance evaluation in those trials from the global point of view. To assess the potential of recalculation in three-stage trials, we compare, in a simulation study, two-stage group sequential designs with three-stage group sequential designs as well as multiple three-stage designs with recalculation.

Results

While we observe a notable favorable effect in terms of power and expected sample size by using three-stage designs compared to two-stage designs, the benefits of recalculation rules appear less clear and depend on the performance measures applied.

Conclusions

Sample size recalculation is also applicable in three-stage designs. However, the extent to which recalculation brings benefits depends on which trial characteristics are most important to the applicants.


Introduction

Choosing an adequate sample size is a crucial task when planning a clinical trial. One needs to recruit enough patients to obtain statistically significant evidence for a treatment effect. At the same time, there are multiple reasons why one should not recruit more patients than required: the cost and duration of the trial both grow with the number of patients, and the number of patients exposed to trial-related risks should be kept at a minimum. Hence, it is necessary to choose a number of patients which is neither too large nor too small. The number of patients required to obtain statistically significant evidence for a treatment effect depends on the size of the treatment effect and the endpoint’s variance. Unfortunately, these parameters are unknown when planning a trial. In the following, we call this the problem of effect size uncertainty. The sample size needs to be chosen based on assumed endpoint distribution parameters. If the assumptions are correct, the chosen sample size will be adequate. If the assumptions are wrong, two possible mistakes in sample size planning can be made: First, the sample size could be chosen too low. In this case, the trial has a smaller power than intended, which is called underpowered in the following. This mistake occurs when the assumed treatment effect is larger than the actual treatment effect or when the assumed variance is lower than the actual endpoint variance. Second, the sample size could be chosen too high. In this case, the power is high, but it would have been possible to achieve sufficient power even with fewer patients. We call such a trial oversized. Oversizing happens when the assumed treatment effect is lower than the actual treatment effect or when the assumed variance is larger than the actual variance. The fundamental problem in sample size planning is that, due to the problem of effect size uncertainty, there is always a risk of underpowering and oversizing.

To deal with the problem of effect size uncertainty, two methods have been developed which are, to some extent, robust against underpowering and oversizing. The first method is sequential testing. Trials with sequential testing unblind the data at different stages of the trial and offer the option to reject the null hypothesis \(H_0\) at these stages. Once \(H_0\) is rejected, the trial stops. The simplest design with sequential testing is the group sequential design, where the sample sizes of all stages are specified before the beginning of the trial. Such a design provides a remedy to effect size uncertainty, because for large effect sizes \(H_0\) likely gets rejected at an early stage, where only a small number of patients has been recruited. For small effect sizes, the design still offers the option to reject \(H_0\) at a later stage, thereby offering the opportunity to recruit enough patients to achieve a high power. A common approach to deal with effect size uncertainty is to specify the sample sizes of a group sequential design such that a targeted power would be achieved for the smallest clinically relevant effect size [ 1 ]. The second method providing robustness to effect size uncertainty is sample size recalculation (cf. e.g. [ 2 ] for an overview). It extends the method of group sequential designs insofar as the stage-wise sample sizes do not need to be specified before the trial but can be determined based on interim results from the previous stages or other studies published in the meantime. In this way, effect estimates from previous stages or other recent trials can be obtained, and based on these estimates the sample sizes for the remainder of the trial can be determined. Note that unblinded (adaptive) group sequential trial designs naturally come with the shortcoming of unblinding.
However, they still find frequent application, as they can be very appealing when there is large uncertainty about the underlying parameter values or a high interest in shortening the trial duration. Here, it is important that as few people as possible are unblinded and that the procedure for the sample size update is only available to the statistician responsible for the interim analysis. This limits the conclusions that can be drawn about the effect observed at interim.

One topic of high interest is the evaluation of sample size recalculation rules. There are two perspectives mentioned in the literature. The conditional perspective deems a recalculation rule good if it ensures stable and high values of the conditional power (the rejection probability conditional on the interim result) as well as no clear oversizing, despite the problem of effect size uncertainty [ 3 , 4 ]. An evaluation criterion following the conditional perspective is the conditional performance score proposed by Herrmann et al. [ 3 ]. In contrast, the global perspective measures the benefit of sample size recalculation in terms of global power (the rejection probability before the beginning of the trial) and sample size. According to the global perspective, a recalculation rule should ensure a certain robustness against underpowering (in terms of the global power) and oversizing with regard to the effect size uncertainty [ 5 ].

Hence, there are two different perspectives on evaluating sample size recalculation rules. Moreover, there are also different approaches to defining recalculation rules, all of which are plausible in their own way. The observed conditional power approach is motivated by the conditional perspective on recalculation rules. This recalculation approach uses interim results of a trial to estimate the treatment effect and then to choose a sample size for the remainder of the trial which guarantees a certain targeted conditional power (e.g. 80% or 90%). Another approach motivated by the conditional perspective is the promising zone approach, which works according to a similar principle as the observed conditional power approach but has some additional case distinctions for the choice of the sample size [ 6 ]. There are various studies in recent years proposing recalculation rules which yield optimal performance regarding the respectively applied global performance measures [ 7 , 8 , 9 , 10 ]. Currently, there is no agreement on which recalculation rule is the most favorable to apply.

In this work, we do not aim to provide a solution to the choice of the most favorable sample size recalculation rule. Instead, we aim to fill another gap in the literature about sample size recalculation: With very few exceptions [ 1 , 11 ] the literature on recalculation only focuses on the case of two-stage trials. However, clinical trial designs do not need to be restricted to two stages. Three-stage trials can offer benefits in terms of expected sample size compared to two- or one-stage trials [ 1 ] and can add even further flexibility than two-stage designs. The extent of the benefit, however, depends on the explicit trial designs, i.e., the time interval between the final patient (of a stage) reaching the final visit and the decision to stop, where patients are still enrolled in the trial. As the problem of effect size uncertainty is also relevant for three-stage trials, it is worthwhile to examine the potential benefits of recalculation for these designs. In this paper, we apply concepts from recent research on recalculation in two-stage trials to the case of recalculation in three-stage trials. In detail, this paper offers the following new contributions to the literature: We show how conditional and global performance measures can be applied to the case of recalculation in three-stage trials. Regarding the conditional performance measures, we extend the conditional performance score [ 3 ] to the case of three-stage trials with sample size recalculation. Regarding the global performance measures, we apply a performance measure which calculates a trade-off between (global) power and sample size and which is inspired by the approach by Jennison & Turnbull [ 7 ]. Having developed appropriate performance measures, we then demonstrate how recalculation rules can be extended to the case of three-stage trials. We demonstrate the application of the performance measures and the respective recalculation rules in a simulation study. 
Given the empirical results, we assess the potential benefits of applying a three-stage design instead of a two-stage design and of applying recalculation instead of a simple group sequential approach. In the discussion of our study, we elaborate on the different options of recalculation in three-stage trials.

Notation and setting

Three-stage trials.

In this paper, we consider the case of comparing an intervention group (I) with a control group (C), with endpoint distributions given by

\[X_{I}\sim N(\mu _{I},\sigma ^2), \qquad X_{C}\sim N(\mu _{C},\sigma ^2).\]
This means, we assume normally distributed endpoints with a common variance. We do not assume that the variance is known.

To test the alternative hypothesis \(H_1:\mu _{I}>\mu _{C}\) against the null hypothesis \(H_0:\mu _{I}\le \mu _{C}\) , we apply a two-sample t-test statistic, defined by

\[Z=\frac{\bar{X}_{I}-\bar{X}_{C}}{\hat{\sigma }\sqrt{2/n}},\]
where \(\bar{X}_{J}\) , with \(J=I,C\) , denotes a sample average, \(\hat{\sigma }\) denotes an empirical estimate of the standard deviation in each group and n denotes the per-group sample size. In this paper, we consider the case of a three-stage trial. A detailed description of the theory behind multi-stage trials can be found in the book by Wassmer & Brannath [ 12 ]. In this book, it is shown that the following sequential testing method maintains the type I error rate.
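A minimal stdlib-only sketch of this two-sample statistic, assuming equal per-group sample sizes and the pooled standard deviation described above (an illustration, not the authors' code):

```python
import math

def two_sample_statistic(x_i, x_c):
    """Two-sample statistic (mean_I - mean_C) / (sigma_hat * sqrt(2/n)),
    assuming equal per-group sample size n and a common variance,
    so sigma_hat pools the two empirical variances."""
    n = len(x_i)  # assumes len(x_i) == len(x_c)
    mean_i = sum(x_i) / n
    mean_c = sum(x_c) / n
    # pooled empirical standard deviation (common variance assumption)
    var_i = sum((x - mean_i) ** 2 for x in x_i) / (n - 1)
    var_c = sum((x - mean_c) ** 2 for x in x_c) / (n - 1)
    sigma_hat = math.sqrt((var_i + var_c) / 2)
    return (mean_i - mean_c) / (sigma_hat * math.sqrt(2 / n))
```

For real data one would typically use `scipy.stats.ttest_ind` instead; the explicit version above just mirrors the formula in the text.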

We apply the t-test statistic at each stage. Let \(n_1,n_2,n_3\) denote the respective sample sizes per group and stage. For simplicity, we consider the case of equal sample sizes in the intervention and control group. The resulting test statistics \(Z_{1},Z_{2},Z_{3}\) are independent and we assume large enough sample sizes, such that they asymptotically follow the distributions

\[Z_{i}\sim N\left( \delta \sqrt{\frac{n_i}{2}},\,1\right), \quad i=1,2,3.\]
Note that the test statistic distribution only depends on the endpoint distribution via the standardized treatment effect

\[\delta =\frac{\mu _{I}-\mu _{C}}{\sigma }.\]
At each stage i , a combination \(Z^{*}_i\) of these test statistics is applied for decision making. The respective test statistics are given by the inverse normal combination test [ 13 ]

\[Z^{*}_i=\frac{\sum _{j=1}^{i}w_jZ_j}{\sqrt{\sum _{j=1}^{i}w_j^2}}.\]
The weights \(w_i\) are defined in advance of the trial, with the condition \(w_1^2+w_2^2+w_3^2=1\) . In this paper, we define \(w_1=w_2=w_3=\frac{1}{\sqrt{3}}\) throughout. We choose Pocock’s critical values [ 14 ] for early rejection of \(H_0\) , which we denote by \(c_1,c_2,c_3\) , and futility stopping with \(f_1=f_2=0\) , i.e. when the effect points in the wrong direction at one of the first two stages, the trial stops with acceptance of \(H_0\) .
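The inverse normal combination and the stage-wise stopping decision can be sketched as follows; the boundary values passed to `decision` in the usage examples are illustrative, not the exact Pocock constants used in the paper:

```python
import math

# Sketch of the inverse normal combination test with equal weights
# w1 = w2 = w3 = 1/sqrt(3): the stage-wise statistics Z_j are combined as
# Z*_i = sum_{j<=i} w_j * Z_j / sqrt(sum_{j<=i} w_j^2).
W = [1 / math.sqrt(3)] * 3

def combined_statistic(z_stages):
    """Inverse normal combination of the stage-wise statistics observed so far."""
    i = len(z_stages)
    num = sum(w * z for w, z in zip(W[:i], z_stages))
    den = math.sqrt(sum(w ** 2 for w in W[:i]))
    return num / den

def decision(z_star, c, f):
    """Stage decision: reject H0 at or above c, stop for futility below f,
    otherwise continue to the next stage. c and f are the stage's critical
    and futility boundaries (e.g. Pocock values and f = 0)."""
    if z_star >= c:
        return "reject"
    if z_star < f:
        return "stop_futility"
    return "continue"
```

Note that for the first stage the combined statistic reduces to \(Z_1\) itself, since a single weight cancels out.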

Analysis of interim results

In three-stage trials, interim results are potentially examined at two time points: after having observed the outcome of the first \(n_1\) patients per group, and after having observed the following \(n_2\) patients per group.

Having observed the interim results, one can estimate “how far” the trial is from proving the treatment effect at the next interim analysis. More formally, this can be expressed by the probability to reject \(H_{0}\) , given the observed interim results. This probability is called the conditional rejection probability and will be denoted by

\[CRP_{\delta }^{(i+1)}(z_i^*,n_{i+1})=P_{\delta }\left( Z_{i+1}^*\ge c_{i+1}\,\middle |\,Z_i^*=z_i^*\right),\]
where \(i\in \{1,2\}\) denotes the first or second interim analysis. Given the distribution of the stage-wise test statistic, we can derive the following equations for the stage-wise conditional rejection probability. The equation for the conditional rejection probability at stage two is given by

\[CRP_{\delta }^{(2)}(z_1^*,n_2)=1-\Phi \left( \frac{c_2\sqrt{w_1^2+w_2^2}-w_1z_1^*}{w_2}-\delta \sqrt{\frac{n_2}{2}}\right),\]
where \(\Phi\) is the cumulative distribution function of the standard normal distribution. The equation for the conditional rejection probability at stage three, given the second-stage interim result \(z_2^*\) , is given by

\[CRP_{\delta }^{(3)}(z_2^*,n_3)=1-\Phi \left( \frac{c_3-\sqrt{w_1^2+w_2^2}\,z_2^*}{w_3}-\delta \sqrt{\frac{n_3}{2}}\right).\]
For recalculation at the first interim analysis, the probability to reject \(H_0\) in the remainder of the trial is important. This probability is called conditional power (CP) and consists of the conditional rejection probability at the second and third interim analysis in the following way:

\[CP_{\delta }(z_1^*,n_2,n_3)=CRP_{\delta }^{(2)}(z_1^*,n_2)+\int _{f_2}^{c_2}CRP_{\delta }^{(3)}(z_2^*,n_3)\,f_{Z_2^*|Z_1^*=z_1^*,N_2=n_2,\Delta =\delta }(z_2^*)\,dz_2^* \qquad (3)\]
Thereby, \(f_{Z_2^*|Z_1^*=z_1^*,N_2=n_2,\Delta =\delta }\) is the conditional density of \(Z_2^*\) , given first stage test statistic value \(z_1^*\) , second stage per-group sample size \(n_2\) , and effect size \(\delta\) . In this notation, \(\Delta\) is a random variable for the effect size, which takes the concrete realization \(\delta\) .

The concept of the conditional power can be used to decide upon the number of patients to recruit after the interim analysis. This number of patients can be chosen such that the conditional power reaches a certain value. Note, however, that the effect size \(\delta\) in the definition of the conditional power is unknown in practice. This is why recalculation rules based on the conditional power need to take uncertainty in the effect size \(\delta\) into account.
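A stdlib-only sketch that derives the stage-two and stage-three conditional rejection probabilities from the inverse normal combination test with equal weights and integrates them to approximate the conditional power. This is a reconstruction for illustration, not the authors' implementation, and the critical values used in the tests are only illustrative:

```python
import math

# Numerical sketch of the three-stage conditional power:
# CP = CRP2 + integral over the stage-two continuation region of
# CRP3 weighted by the conditional density of Z*_2 given z*_1.
W1 = W2 = W3 = 1 / math.sqrt(3)

def phi_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def crp2(z1, n2, delta, c2):
    """Conditional rejection probability at stage two, given Z*_1 = z1."""
    s = math.sqrt(W1 ** 2 + W2 ** 2)
    return 1 - phi_cdf((c2 * s - W1 * z1) / W2 - delta * math.sqrt(n2 / 2))

def crp3(z2_star, n3, delta, c3):
    """Conditional rejection probability at stage three, given Z*_2 = z2_star."""
    s = math.sqrt(W1 ** 2 + W2 ** 2)
    return 1 - phi_cdf((c3 - s * z2_star) / W3 - delta * math.sqrt(n3 / 2))

def conditional_power(z1, n2, n3, delta, c2, c3, f2=0.0, grid=2000):
    """CP = CRP2 + integral_{f2}^{c2} CRP3(z) f(z | z1) dz (trapezoidal rule).
    Given Z*_1 = z1, Z*_2 is normal with the mean/SD computed below."""
    s = math.sqrt(W1 ** 2 + W2 ** 2)
    mean = (W1 * z1 + W2 * delta * math.sqrt(n2 / 2)) / s
    sd = W2 / s  # conditional SD of Z*_2 given Z*_1 = z1

    def dens(z):
        return math.exp(-0.5 * ((z - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    h = (c2 - f2) / grid
    total = 0.0
    for k in range(grid + 1):
        z = f2 + k * h
        wgt = 0.5 if k in (0, grid) else 1.0
        total += wgt * crp3(z, n3, delta, c3) * dens(z)
    return crp2(z1, n2, delta, c2) + total * h
```

Because the true effect size is unknown at the interim analysis, in practice one would plug an interim estimate or a range of plausible values into `delta`.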

Group sequential designs and designs with sample size recalculation

In this study, we examine two kinds of designs: group sequential designs and designs with sample size recalculation. For group sequential designs, the per-group sample sizes of all three stages \(n_1,n_2,n_3\) are fixed before the beginning of the trial. For designs with recalculation, the per-group sample sizes \(n_2,n_3\) of the stages two and three can be determined during the trial, based on the interim results.

In a three-stage trial, recalculation could take place at the first or at the second interim analysis. In this paper, we only consider the case of recalculation at the first interim analysis in detail and leave the case of recalculation at the second interim analysis for the Discussion part, as it is very similar to the well-known two-stage adaptive trial with sample size recalculation. When recalculation at the first interim analysis takes place, a number of \(n_1\) patients per group has already been recruited and the value \(z_1^*\) of the first stage test statistic has been obtained. If the first stage test statistic lies within the continuation region \(z_1^*\in [f_1,c_1]\) , the trial will go into the second stage. At this point, recalculation allows us to determine the second-stage and third-stage sample sizes per group \(n_2,n_3\) . In line with Uesaka et al. [ 11 ], we highlight the similarity to the sample size calculation for a common two-stage trial: The sample sizes for the following two stages need to be determined such that \(H_0\) can get rejected with sufficient probability. Just like for a common two-stage trial, the interim results included in \(z_1^*\) are available and should provide information about the true standardized treatment effect. This should help to decide about an adequate sample size for the remainder of the trial.

Sample size recalculation at the first interim analysis in a three-stage trial differs from sample size recalculation in a two-stage trial in terms of the remaining sample size of the trial and in terms of conditional power: For recalculation in a two-stage trial, the remaining sample size of the trial is determined by \(n_2\) . In contrast, for recalculation in a three-stage trial, there is still a second interim analysis at which the trial could either stop for efficacy/futility or continue into the third stage. So, even though the second stage sample size \(n_2\) and the third stage sample size \(n_3\) (which is only applied if the trial continues in the third stage) have been determined at the first interim analysis, the remaining sample size remains stochastic at this point and is expressed by the term \(n_2+n_3\cdot I_{z_2^*\in [f_2,c_2]}\) . Similarly, the conditional power in a two-stage trial is simply given by \(CP_{\Delta }(z_1^*,n_2)=CRP_{\Delta }^{(2)}(z_1^*,n_2)\) , while the Formula ( 3 ) for the three-stage trial includes the conditional rejection probability at the third stage and the distribution of the second stage test statistic. It is precisely these differences in terms of remaining trial sample size and conditional power which need to be taken into account when transferring concepts from recalculation rules in two-stage trials to recalculation in three-stage trials.

In principle, it would be possible to choose unequal per-group sample sizes \(n_2\ne n_3\) for stages two and three in the same way as it is possible to choose unequal stage-wise sample sizes in a two-stage trial. To keep a clear scope of the research, we restrict our analysis to the case of equal per-group stage-wise sample size \(n_2=n_3\) and treat the case of unequal stage-wise sample sizes in the Discussion part.

Note that the choice of the first-stage sample size \(n_1\) affects both the benefits of the sequential testing procedure and the potential benefits of recalculation: For recalculation, on the one hand, a small first-stage sample size yielding little information about the underlying effect size can only be of limited value for the choice of the remaining sample size. On the other hand, a high first-stage sample size means that a large share of the required patients has already been recruited and the impact of recalculation to adjust the number of further recruitments is limited. The sequential testing procedure is also affected by the choice of \(n_1\) : a low choice of \(n_1\) reduces the probability to stop for efficacy at the first interim analysis, while a high choice of \(n_1\) makes an efficacy stop at the first interim analysis likely but reduces the benefits of sample size reduction, since \(n_1\) is already large. There is no consensus about the ideal choice of sample sizes for an interim analysis. In the two-stage recalculation literature, there exist approaches for optimizing the choice of \(n_1\) according to the criteria the applying statistician deems most important [ 8 ]. For three-stage trials, such approaches are so far missing. In our paper, we apply a group sequential design with equal sample sizes per stage, where the sample sizes are chosen in order to reach a pre-specified power \(1-\beta\) at an assumed effect size. The considered designs with recalculation use the same first-stage sample size but allow for a flexible sample size in the remainder of the trial.

Evaluation of recalculation rules

A recalculation rule is intended as a remedy for the problem of uncertainty about the standardized treatment effect. Ideally, the design with recalculation should perform well over a range of standardized treatment effects \(\delta\) of interest. Which values of \(\delta\) are of interest should thereby be decided based on considerations about the minimally clinically relevant effect and/or logistical restrictions in terms of sample sizes, a maximum plausible effect, and assumptions about the endpoint standard deviation. It is a common approach in the literature to evaluate designs with recalculation over an interval of values for the (standardized) treatment effect [ 1 , 7 , 15 , 16 ]. In the following, we provide performance measures \(S(\delta )\) , measuring how well the sample size is chosen if the underlying standardized treatment effect is \(\delta\) . We will then examine this performance over a range of effect sizes \(\delta\) . Ideally, a design with recalculation should provide good performance over the whole range of effect sizes considered.

With regard to the concepts of oversizing and underpowering, we consider a performance as good if the design achieves a high power at \(\delta\) while at the same time not requiring too many patients. What a “high” power is can be measured in comparison with a certain target power \(1-\beta\) (80% or 90% are usually targeted in practice). If the power falls below \(1-\beta\) , this should be indicated by a worse performance score. What “too many” patients are can be measured in comparison to the number of patients necessary to achieve a power of \(1-\beta\) at a standardized treatment effect \(\delta\) . If the design chooses a sample size which is larger than necessary to achieve a power of \(1-\beta\) , this should also be punished by the performance measure.

While the basic idea of how a performance measure S should work is clear (quantifying underpowering/oversizing over a range of \(\delta\) ), the literature does not agree on some aspects of the evaluation. In particular, there are two perspectives on the evaluation of recalculation rules: the global perspective [ 5 ] and the conditional perspective [ 3 ]. The global perspective evaluates a design at each \(\delta\) with regard to the (global) power and sample size. This can be interpreted as an assessment based on the information level before the beginning of the trial: there is an initial assumption about a clinically relevant effect size and the range in which the true effect size is likely to lie, there is agreement on how much sample size would be acceptable to use, and there are not yet any interim results. Given this level of information, the global measures indicate whether a design can achieve the required power over the range of plausible and relevant effect sizes while complying with sample size restrictions. The conditional perspective works slightly differently: it assumes that in the situation where the recalculation is performed, the interest lies in the conditional power rather than the global power (because the level of information has changed, e.g. with interim results being available). Accordingly, it evaluates the designs with regard to conditional power and sample size. As both evaluation perspectives have their advantages, we apply both evaluation principles in the following.

Global performance perspective

The most commonly applied global performance criteria for recalculation rules are the (global) power \(Pow_{\delta }\) and the expected sample size \(E_{\delta }[N]\) . In line with various studies in the literature on recalculation [ 1 , 15 , 16 ], we report these global performance criteria in our simulation study over a range of effect sizes \(\delta\) . Examining power and expected sample size is very helpful when evaluating the global performance. However, it is hard to judge how good or bad a design is when examining these criteria individually: An oversized design performs badly in terms of expected sample size, but most likely well in terms of power. But is its performance better or worse than that of a design with a lower sample size which is slightly underpowered?

Instead of examining power and sample size individually, it can be helpful to examine them in a combined performance score. Such a performance score can express the tradeoff

\[Pow_{\delta }-\gamma \cdot E_{\delta }[N]\]
between power and expected sample size. Optimization of global scores involving a linear tradeoff between power and sample size has already been studied by Jennison & Turnbull [ 7 ] and Kunzmann & Kieser [ 9 ]. In this research, we apply the tradeoff value

\[S^{G}(\delta )=Pow_{\delta }-\frac{1-\beta }{N_{fix}(\delta )}\,E_{\delta }[N],\]
where \(N_{fix}(\delta )\) is the fixed design sample size required to obtain a power of \(1-\beta\) at effect size \(\delta\) . In this way, the ideal score values at each \(\delta\) are achieved when the power is close to \(1-\beta\) and the expected sample size is close to \(N_{fix}(\delta )\) . We say “close to”, because multi-stage designs achieve the same power as fixed designs while having lower expected sample sizes. This leads to an ideal trade-off \(S^{G}(\delta )\) where the power is slightly above \(1-\beta\) and the expected sample size is slightly below \(N_{fix}(\delta )\) . For fixed designs, the optimal tradeoff would be exactly at \(1-\beta\) and \(N_{fix}(\delta )\) . To illustrate the idea of the score, Fig. 1 shows the performance of a fixed design regarding \(S^G\) .

Fig. 1: Global performance score \(S^G\) for a fixed design, having power \(1-\beta =0.8\) at effect size \(\delta =0.3\) . Effect sizes smaller than 0.2 are deemed clinically irrelevant or unfeasible in terms of the required sample size, and therefore performance is not evaluated there.
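The reference point \(N_{fix}(\delta )\) can be computed with the standard normal-approximation sample size formula. This is a sketch; the quantile routine and the default \(\alpha ,\beta\) values are choices made here, not taken from the paper:

```python
import math

def z_quantile(p):
    """Standard normal quantile via bisection on the erf-based CDF (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_fix(delta, alpha=0.025, beta=0.2):
    """Per-group fixed-design sample size 2*(z_{1-alpha} + z_{1-beta})^2 / delta^2,
    using the normal approximation for a one-sided two-sample comparison."""
    return math.ceil(2 * (z_quantile(1 - alpha) + z_quantile(1 - beta)) ** 2 / delta ** 2)
```

With `scipy` available, `z_quantile` could simply be replaced by `scipy.stats.norm.ppf`.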

The score \(S^G\) provides a good summary measure of power and expected sample size. For effect sizes where the designs are underpowering or oversizing, the values of \(S^G\) will drop. Another reason why we decided to apply this global performance score is that it is relatively simple to derive recalculation rules maximizing this score (as previously similarly done in e.g. [ 17 , 18 ]). Evaluating such score-optimized recalculation rules can help us to judge the potential of recalculation rules to prevent underpowering and oversizing. If the optimal recalculation rule does not increase \(S^G\) notably compared to a group sequential design, the potential of recalculation, considered from a global perspective, is limited.

Global performance measures, like the global power, the expected sample size, and the \(S^G\) score are highly useful criteria given the information level before the beginning of the trial (i.e. no interim results observed yet, some initial assumptions about the effect size). Given this level of information, the global performance measures then indicate whether a design ensures enough power over the range of plausible effect sizes and how much sample size it requires. Once the trial has started, some of these initial assumptions might change: even smaller effect sizes might become relevant (because a competing drug showed unexpected side effects) or the institution conducting a trial is willing and able to recruit more/less patients than initially deemed feasible. In addition, interim results become available at the first interim analysis. So, conditional performance measures gain importance at this point.

Conditional performance perspective

The conditional performance score \(S^{C}\) allows a comparison of different sample size recalculation rules when \(z_1 \in [f_1, c_1]\) [ 3 ]. The basic idea is to evaluate both the (observed) conditional power and total recalculated per-group sample size regarding their location ( l ) and variation ( v ). This leads to four components (here presented with an equal weighting), which together build a score of the form

\[S^{C}=\frac{1}{4}\left( l_{CP}+v_{CP}+l_{N}+v_{N}\right).\]
Here \(l_{CP}\) and \(v_{CP}\) are the location and variation components of the (observed) conditional power, while \(l_{N}\) and \(v_{N}\) denote the location and variation component of the sample size. The four components and thereby the whole score can take values between 0 and 1, where 1 refers to the ideal performance and 0 to the worst possible performance.

In the following, we describe the definition of the location and variation components in more detail. The location components measure the difference between the expected value of the (observed) conditional power, respectively the sample size, and a corresponding target value. This difference is then scaled by dividing by the maximal possible difference. The location component of the (observed) conditional power is given by

\[l_{CP}=1-\frac{\left| E_{\delta }[\widehat{CP}]-CP_{target,\delta }\right| }{1-\alpha }.\]
Thereby, \(CP_{target, \delta }\) denotes the target value for the observed conditional power. It is defined depending on the effect size: If \(\delta\) is large enough to reach a power of \(1-\beta\) with a sample size smaller or equal to the maximum allowed sample size \(n_{max}\) , the target value is \(1-\beta\) . If the effect size is too small, the target value is \(\alpha\) . The location component of the sample size is defined as

\[l_{N}=1-\frac{\left| E_{\delta }[N]-N_{target,\delta }\right| }{n_{max}-n_1}.\]
In this equation, \(N_{target,\delta }\) denotes the target value for the sample size. If the effect size is large enough such that \(n_{fix}(\delta )\) is smaller than \(n_{max}\) , the target value is \(n_{fix}(\delta )\) . If the effect size is too small, the target value is \(n_1\) .

The variation components of the score measure the variance of the observed conditional power and sample size, standardized by the maximum possible variance. They are defined by

\[v_{CP}=1-\sqrt{\frac{Var_{\delta }(\widehat{CP})}{1/4}}, \qquad v_{N}=1-\sqrt{\frac{Var_{\delta }(N)}{\left( (n_{max}-n_1)/2\right) ^2}}.\]
Thereby, the maximum possible variances are 1/4 and \({\left( (n_{max}-n_1)/2 \right) }^2\) , respectively.

Initially, the conditional performance score was defined for sample size recalculation in two-stage trials [ 3 ]. However, the definition of the conditional performance score can also be applied to recalculation at the second stage of three-stage trials. There are only minor changes necessary: A recalculation rule at the first interim analysis chooses the sample sizes \(n_2\) and \(n_3\) . Hence, the total sample size to evaluate becomes \(N=n_1+n_2+n_3\) . The effect of the choice of \(n_2\) and \(n_3\) on the observed conditional power can be calculated, using Eq. ( 3 ), as

\[\widehat{CP}=CP_{\hat{\delta }}\left( z_1^*,n_2,n_3\right),\]

where \(\hat{\delta }\) denotes the treatment effect estimated from the first-stage data.
Apart from these slight modifications, the conditional performance score can be applied to three-stage trials in the same way as to two-stage trials.
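A sketch of computing the conditional performance score from simulated recalculation outcomes (observed conditional power values and total per-group sample sizes). The maximum variances 1/4 and \(((n_{max}-n_1)/2)^2\) follow the text; the scaling of the location components by \(1-\alpha\) and \(n_{max}-n_1\) is an assumption made here:

```python
import math

# Sketch of S^C = (l_CP + v_CP + l_N + v_N) / 4 from simulated samples.
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def conditional_score(cp_samples, n_samples, cp_target, n_target,
                      n1, n_max, alpha=0.025):
    # Location components: distance of the mean from the target, scaled by the
    # assumed maximal possible difference (1 - alpha for CP, n_max - n1 for N).
    l_cp = 1 - abs(mean(cp_samples) - cp_target) / (1 - alpha)
    l_n = 1 - abs(mean(n_samples) - n_target) / (n_max - n1)
    # Variation components: SD scaled by the maximal possible SD
    # (1/2 for CP, (n_max - n1)/2 for N, as stated in the text).
    v_cp = 1 - math.sqrt(variance(cp_samples) / 0.25)
    v_n = 1 - math.sqrt(variance(n_samples) / ((n_max - n1) / 2) ** 2)
    return (l_cp + v_cp + l_n + v_n) / 4
```

A rule that always hits its conditional power and sample size targets without any variation scores exactly 1; deviations in location or spread pull the score towards 0.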

Recalculation

In this paper, we consider the principle of sample size recalculation based on an unblinded interim analysis. There is a vast literature on such recalculation procedures in two-stage trials [ 6 , 7 , 10 , 15 ]. Such trials start by recruiting \(n_1\) patients per group, a number fixed before the beginning of the trial. At an interim analysis based on these patients, the value \(z_1^*\) of the first-stage test statistic is calculated. The second-stage per-group sample size \(n_2\) is then chosen based on \(z_1^*\) . To represent this functional relationship, we apply the notation

For three-stage trials, recalculation is possible at both the first and the second interim analysis. We consider recalculation at the first interim analysis. Hence, not only the second-stage per-group sample size \(n_2\) but also the third-stage per-group sample size \(n_3\) can be determined at this point. Recalculation rules from two-stage trials therefore need to be modified so that they yield suitable sample sizes \(n_2=n_2(z_1^*)\) and \(n_3=n_3(z_1^*)\) , based on the interim result \(z_1^*\) .

For practical reasons, it is plausible to assume that the recalculated sample size cannot be chosen arbitrarily large. We therefore specified a maximum sample size \(n_{max}\) for the complete trial. All of the described recalculation rules may thus yield between 0 and \(n_{max}-n_1\) patients for the remaining two stages of the trial.

Sample size-optimized recalculation

Jennison & Turnbull [ 7 ] provide a recalculation rule for two-stage trials, which solves the following constrained optimization problem: minimize \(E_{\delta }[N]\) under the constraint \(Pow_{\delta }\ge 1-\beta\) , for a given effect size \(\delta\) . They show that the solution to this optimization problem maximizes the performance criterion

for a certain constant \(\gamma\) . The solution is given by

For each \(\gamma\) , a recalculation rule can be derived from the above equation. The smaller \(\gamma\) is chosen, the higher the power \(Pow_{\delta }\) will be. To solve the constrained optimization problem of minimizing \(E_{\delta }[N]\) under the constraint \(Pow_{\delta }\ge 1-\beta\) , one only needs to systematically try different values of \(\gamma\) until one finds the \(\gamma\) that yields a recalculation rule with \(Pow_{\delta }= 1-\beta\) . This recalculation rule necessarily solves the constrained optimization problem.
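Because the attained power is decreasing in \(\gamma\) , this calibration is a one-dimensional root search. A minimal sketch, assuming only that `power_of_rule` (a hypothetical stand-in for evaluating the power of the recalculation rule derived for a given \(\gamma\) ) is continuous and strictly decreasing:

```python
def calibrate_gamma(power_of_rule, target=0.8, lo=1e-5, hi=1.0, tol=1e-6):
    """Bisection for the cost parameter gamma. `power_of_rule` maps gamma to
    the power attained by the corresponding recalculation rule; it is assumed
    to be continuous and strictly decreasing in gamma (a higher cost of sample
    size yields smaller recalculated sample sizes and hence lower power)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if power_of_rule(mid) > target:
            lo = mid  # power still above target: the cost can be increased
        else:
            hi = mid
    return (lo + hi) / 2
```

With a placeholder power function such as `lambda g: 1 - g`, the search returns \(\gamma \approx 0.2\) ; in practice, `power_of_rule` would evaluate the power of the derived rule, e.g. by numerical integration or simulation.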

Jennison & Turnbull [ 7 ] also explained how to extend this principle to find a recalculation rule that minimizes the expected sample size while taking into account uncertainty in the effect size, represented by a prior \(f_{\Delta }\) . The resulting optimization criterion is

and the respective recalculation rule fulfills

where \(f_{\Delta |Z_{1}^*=z_{1}^*}(\delta )\) is the posterior density of the effect size. As for a fixed effect size, systematically trying different values of \(\gamma\) yields a solution to a constrained optimization problem: minimize the expected sample size \(\int E_{\delta }[N]f_{\Delta }(\delta )d\delta\) under the constraint \(\int Pow_{\delta }f_{\Delta }(\delta )d\delta \ge 1-\beta\) on the expected power.

In this work, we extend the approach by Jennison & Turnbull [ 7 ] to the case of recalculation at the first interim analysis of a three-stage trial. In the Appendix, we derive the sample size-optimized recalculation rule

for a fixed effect size \(\delta\) . \(TO^{(i+1)}_{z_i^*,\delta }(n_{i+1})\) can be interpreted as a trade-off between the rejection probability and the chosen sample size.

For uncertainty in the effect size, represented by the prior \(f_{\Delta }\) , we derive the recalculation rule

\(S^G\) score-optimized recalculation

The Jennison & Turnbull [ 7 ] approach and our derived modification for the case of three-stage trials yield a recalculation rule that maximizes the trade-off

for a fixed \(\gamma\) . A slight modification of the approach leads to an optimization of the global score \(S^G\) , defined by

We simply need to apply the effect-dependent value \(\gamma _{\delta }:=\frac{\partial Pow_{\delta }(N)}{\partial N}|_{N=N_{fix}(\delta )}\) in the respective equations. Hence, the trade-off functions in the recalculation rule definition of the last section become

This modified trade-off specification has the effect that the resulting recalculation rules optimize the global score defined in the “ Global performance perspective ” section. The main difference from the Jennison & Turnbull approach is that our performance criterion creates more incentive to maintain a high power at low effect sizes and to save sample size at high effect sizes.
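The effect-dependent cost \(\gamma _{\delta }\) can be computed numerically. The sketch below assumes a one-sided two-sample z-test at level \(\alpha =0.025\) with power function \(Pow_{\delta }(N)=\Phi (\delta \sqrt{N/2}-z_{1-\alpha })\) for per-group sample size N (an assumption consistent with the sample sizes reported in the simulation study), and approximates the derivative at \(N_{fix}(\delta )\) by a central finite difference:

```python
import math
from statistics import NormalDist

nd = NormalDist()

def power_fixed(n, delta, alpha=0.025):
    """Power of a one-sided fixed-design two-sample z-test, n patients per group."""
    return nd.cdf(delta * math.sqrt(n / 2) - nd.inv_cdf(1 - alpha))

def n_fix(delta, alpha=0.025, beta=0.2):
    """Unrounded per-group sample size at which the fixed design reaches power 1 - beta."""
    return 2 * (nd.inv_cdf(1 - alpha) + nd.inv_cdf(1 - beta)) ** 2 / delta ** 2

def gamma_delta(delta, h=1e-3):
    """Central finite difference for dPow/dN evaluated at N = n_fix(delta)."""
    n0 = n_fix(delta)
    return (power_fixed(n0 + h, delta) - power_fixed(n0 - h, delta)) / (2 * h)
```

Under these assumptions, `gamma_delta(0.3)` gives roughly 0.0022, i.e. of the same order as the constant cost used for the sample size-optimized rule in the simulation study.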

In our simulation study, we apply the restriction \(n_2=n_3\) when performing recalculation by score optimization. Note that it is also possible to apply the recalculation rule without the restriction of equal sample sizes for stages 2 and 3. In this case, one would need, for a given \(z_1^*\) , to calculate \(\int \left( TO^{(2)}_{z_1^*,\delta }(n_{2})+\int _{f_2}^{c_2} \left( TO^{(3)}_{z_2^*,\delta }(n_{3})\right) f_{Z_2^*|Z_1^*=z_1^*,N_2=n_2,\Delta =\delta }(z_2^*)dz_2^*\right) \cdot f_{Z_1^*|\Delta =\delta }(z_1^*)\cdot f_{\Delta }(\delta )\,d\delta\) for each possible combination of \(n_2\) and \(n_3\) and then choose the combination of sample sizes that yields the maximum.

Observed conditional power approach

An alternative way to choose the recalculated sample size is the observed conditional power approach. This approach uses an effect estimate \(\hat{\delta }\) at the first interim analysis and chooses the recalculated per-group sample size \(n_2\) such that the observed conditional power reaches a certain target value

In the previous literature, this approach has mostly been applied to recalculation in two-stage trials. An exception is the study by Uesaka et al. [ 11 ], where the observed conditional power approach is applied at the first interim analysis of a three-stage trial. We follow their approach in this paper. The sample size for each interim result \(z_1^*\) can be obtained by using Eq. ( 3 ). We only need to plug in the effect estimate \(\hat{\delta }=\sqrt{\frac{2}{n_1}}z_1^*\) for the true effect size \(\Delta\) and then calculate, for each possible per-group sample size n , the conditional power \(CP_{\hat{\delta }}(z_1^*,n,n)\) . We then choose the smallest per-group sample size n that fulfills \(CP_{\hat{\delta }}(z_1^*,n,n)\ge 1-\beta\) , or set \(n=\frac{n_{max}-n_{1}}{2}\) if no such n exists. We then set \(n_2=n_3=n\) . Note that there is also the option to use unequal sample sizes for stages 2 and 3.
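The search for the smallest sufficient sample size can be sketched as follows. This is a deliberately simplified stand-in, not the paper's Eq. ( 3 ): the two remaining stages are pooled into a single final analysis with a hypothetical critical value c, and the remaining per-group sample size m is searched directly.

```python
import math
from statistics import NormalDist

nd = NormalDist()

def ocp_sample_size(z1, n1, n_max, c, target=0.8):
    """Observed conditional power recalculation, simplified: the two remaining
    stages are pooled into one final analysis with critical value c, and the
    smallest remaining per-group sample size m whose conditional power under
    the interim estimate delta_hat reaches the target is returned (capped at
    n_max - n1)."""
    delta_hat = math.sqrt(2 / n1) * z1  # interim effect estimate
    cap = n_max - n1
    for m in range(1, cap + 1):
        # rejection threshold for the stage-wise statistic of the pooled remainder
        thresh = (c * math.sqrt(n1 + m) - math.sqrt(n1) * z1) / math.sqrt(m)
        cp = nd.cdf(delta_hat * math.sqrt(m / 2) - thresh)
        if cp >= target:
            return m
    return cap
```

Under these assumptions, a promising interim result such as `ocp_sample_size(2.0, 70, 393, 1.96)` yields a small remaining sample size, whereas a weak interim result such as `ocp_sample_size(0.1, 70, 393, 1.96)` runs into the cap; the critical value 1.96 is purely illustrative.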

Simulation study

In our simulation study, we compared five different designs: a two-stage group sequential design (i.e., with constant sample sizes per stage) and a three-stage group sequential design, both described in the “ Group sequential designs and designs with sample size recalculation ” section; a three-stage design with the observed conditional power approach for recalculation; a three-stage design with the expected sample size minimization approach for recalculation by Jennison & Turnbull [ 7 ] (see the “ Sample size-optimized recalculation ” section); and a three-stage design with the \(S^G\) -optimization approach for recalculation (see the “ \(S^G\) score-optimized recalculation ” section).

We evaluated the performance over the range [0, 0.6] of effect sizes \(\delta\) . For effect sizes of this magnitude, the required sample sizes are large enough that the application of multi-stage designs is conceivable in practice. The effect size range [0, 0.6] was also applied in previous studies on sample size recalculation [ 3 , 19 ]. We powered the group sequential designs for \(\delta =0.3\) as the assumed underlying effect size before the start of the trial. Therefore, the quality of a recalculation rule is judged by its ability to avoid underpowering for \(\delta <0.3\) and oversizing for \(\delta >0.3\) . We chose the maximum sample size \(n_{max}\) such that a fixed design could achieve a power of \(1-\beta =0.8\) with \(n_{max}\) patients per group under an effect size of \(\delta =0.2\) . In this way, the recalculation rules are theoretically able to prevent underpowering for effect sizes \(\delta \ge 0.2\) . Smaller effect sizes are deemed not feasible in terms of sample sizes and logistics.
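This choice of \(n_{max}\) can be reproduced from the standard fixed-design sample size formula; the sketch below assumes a one-sided two-sample z-test at level \(\alpha =0.025\) (an assumption consistent with the sample sizes reported below).

```python
import math
from statistics import NormalDist

nd = NormalDist()

def fixed_design_n(delta, alpha=0.025, beta=0.2):
    """Smallest per-group sample size of a one-sided fixed two-sample z-test
    reaching power 1 - beta at effect size delta."""
    z = nd.inv_cdf(1 - alpha) + nd.inv_cdf(1 - beta)
    return math.ceil(2 * z ** 2 / delta ** 2)
```

With these assumptions, `fixed_design_n(0.2)` returns 393, matching the maximum per-group sample size used in the simulation settings, and `fixed_design_n(0.3)` returns 175.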

The simulations were conducted using R version 4.2.2 [ 20 ], and the results for the group sequential designs were obtained using the package rpact [ 21 ].

Simulation settings

For the group sequential designs, we applied Pocock efficacy boundaries [ 14 ] and futility stops for interim test statistic results below zero. For both the two-stage and the three-stage design, we set the first-stage per-group sample size to \(n_1=70\) . For the two-stage group sequential design, we set the second-stage per-group sample size to 140. For the three-stage group sequential design, we set the second- and third-stage per-group sample sizes both to 70. In this way, both the two-stage and the three-stage group sequential design have their first interim analysis based on the results of 70 patients per group and recruit in total 210 patients per group if the trial enters the respective final stage. With these sample size choices, both group sequential designs achieve a power of \(1-\beta =0.8\) at an effect size of \(\delta =0.3\) . The fact that the sample size for the first interim analysis is equal for all designs considered is important because we applied the conditional performance score methodology, which compares designs conditional on the results at the interim analysis. For a meaningful comparison, it was necessary that this interim analysis took place at the same time point in the different designs.
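The stated power of the three-stage group sequential design can be checked by simulation. The sketch below assumes a common efficacy boundary of 2.289 (the Pocock-type constant for three looks at one-sided \(\alpha =0.025\) ; the exact boundaries in the paper were computed with rpact) and a futility stop below zero at both interim analyses.

```python
import math
import random

def gs3_power(delta, n=70, crit=2.289, futility=0.0, sims=100_000, seed=7):
    """Monte Carlo power of a three-stage group sequential design with n
    patients per group and stage, a common (Pocock-type) efficacy boundary
    `crit` at all three analyses, and a futility stop below `futility` at the
    two interim looks."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        running = 0.0  # accumulates sqrt(n) * stage-wise z-statistics
        for k in (1, 2, 3):
            running += math.sqrt(n) * rng.gauss(delta * math.sqrt(n / 2), 1)
            z_cum = running / math.sqrt(k * n)  # cumulative z after stage k
            if z_cum >= crit:
                rejections += 1
                break
            if k < 3 and z_cum < futility:
                break
    return rejections / sims
```

Under these assumptions, `gs3_power(0.3)` lands close to the nominal power of 0.8, while `gs3_power(0.0)` stays below the one-sided significance level.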

The recalculation rules were specified in the following way: We set the maximum per-group sample size for the whole trial to \(n_{max}=393\) . Therefore, after the first stage with \(n_1=70\) patients per group, the recalculation rules were restricted to recruiting between 0 and 323 more patients per group for the remainder of the trial. For the observed conditional power recalculation approach, we set the targeted conditional power to 0.8. For the sample size optimization approach, we set the cost of sample size to \(\gamma =0.0028\) ; in this way, the design obtains a power of \(1-\beta =0.8\) at the effect size \(\delta =0.3\) . For the optimization of \(S^G\) , the cost of sample size was set to \(\gamma _{\delta }:=\frac{\partial Pow_{\delta }(N)}{\partial N}|_{N=N_{fix}(\delta )}\) for \(\delta \in \{0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6\}\) . In this way, the score is optimized over the effect sizes \(\delta \in [0.2,0.6]\) under consideration.

The target parameters \(CP_{target,\delta }\) and \(N_{target,\delta }\) in the definition of the conditional performance score are functions depending on the effect size \(\delta\) . They are defined according to “ Conditional performance perspective ” section.

Results with respect to global performance

To examine the global performance of our designs, we calculated the power and the expected sample size for all designs at each effect size \(\delta\) . For the effect sizes deemed relevant ( \(\delta \in [0.2,0.6]\) ), we also calculated the global score \(S^G\) . The results are illustrated in Fig. 2 and provided in Table 1 . Regarding the global score, we can see that for effect sizes \(\delta \ge 0.3\) , the two-stage design performs worse than all three-stage designs. There is a simple explanation for this: the additional interim analysis with the possibility of an efficacy stop reduces the expected sample size compared to the two-stage design (see the right-hand plot), while the power curves of these designs are almost identical (see the middle plot).

Fig. 2 Global performance measures of the different designs. “gs 2” and “gs 3” denote the group sequential designs with 2 or 3 stages; “ocp”, “ \(optS^G\) ”, and “optN” denote the three-stage designs with recalculation at the first interim analysis using the observed conditional power approach, the optimization of \(S^G\) , or the Jennison & Turnbull approach to optimize the expected sample size. The red areas mark regions of underpowering (power of less than \(1-\beta\) ) and oversizing (expected sample size higher than \(N_{fix}(\delta )\) )

When examining the global performance score results of the three-stage designs, we note that the group sequential design and the design with sample size-optimized recalculation perform very similarly. The slight advantage of the sample size-optimized recalculation rule comes from a reduction of the expected sample size from 115.8 to 114.2 over the range of \(\delta\) between 0.2 and 0.6, compared to the group sequential design. The power curves of these two designs are almost identical. In contrast, the observed conditional power approach and the \(S^G\) -optimized recalculation approach perform notably differently from the group sequential design. With power values of 58% (OCP) and 54% ( \(S^G\) -optimized), they suffer notably less from underpowering at effect size \(\delta =0.2\) than the group sequential design, which has a power of 46%. This is why they show a better performance regarding \(S^G\) for small effect sizes. However, for effect sizes larger than \(\delta =0.3\) , they suffer more from oversizing than the group sequential trial.

Results with respect to conditional performance

To evaluate the conditional performance, we calculated the conditional performance score \(S^C\) as well as its components \(l_{CP},l_{N},v_{CP},v_{N}\) for the location and variation of the observed conditional power and sample size. The results are illustrated in Fig. 3 and provided in Table 2 . In contrast to the global performance, the two-stage group sequential design performs best according to the conditional performance score for effect sizes lower than 0.4. The reason for this can easily be found by examining the score components: the two-stage group sequential design has no variation in the second-stage sample size (given that the trial continues to the second stage), and hence the component \(v_N\) is constantly 1. In contrast, all of the three-stage designs have variation in their sample sizes after the first interim analysis, leading to values of \(v_N\) below 1. For effect sizes above 0.4, however, the two-stage design is inferior to the three-stage designs. The reason for this is the location of the sample size: due to its higher expected sample size, the two-stage design suffers more from oversizing than the three-stage designs.

Fig. 3 Conditional performance measures of the different designs. “gs 2” and “gs 3” denote the group sequential designs with 2 or 3 stages; “ocp”, “ \(optS^G\) ”, and “optN” denote the three-stage designs with recalculation at the first interim analysis using the observed conditional power approach, the optimization of \(S^G\) , or the Jennison & Turnbull approach to optimize the expected sample size

When comparing the three-stage designs with regard to the conditional performance score, the group sequential design performs best, especially for low effect sizes. The reason for this lies in the sample size components: the variation in the sample size is again lower for the group sequential design than for the designs with recalculation. In addition, the \(S^G\) -optimized and sample size-optimized recalculation rules perform much worse in terms of sample size location for effect sizes \(\delta <0.2\) . This is because they recruit a large number of patients even though the effect size is too small to achieve sufficient power. A slight benefit of the recalculation rules can be seen in the variation component \(v_{CP}\) of the observed conditional power. However, this benefit does not compensate for the disadvantage in terms of location and variation of the sample size.

Discussion

In this paper, we have analyzed sample size recalculation in three-stage clinical trials with a focus on different evaluation perspectives. To this end, we applied sample size recalculation methods from the literature for two-stage clinical trials to the case of recalculation at the first interim analysis in three-stage clinical trials. While an extension of the observed conditional power approach to three-stage trials has already been performed by Uesaka et al. [ 11 ], we are, to the best of our knowledge, the first to extend the sample size optimization approach of Jennison & Turnbull [ 7 ] to three-stage clinical trials. Apart from extending recalculation rules to three-stage trials, we have also extended an evaluation method for recalculation rules, namely the conditional performance score, to the case of three-stage trials.

In terms of global performance, measured by power and expected sample size, the three-stage designs performed notably better than the considered two-stage design. In contrast, the two-stage group sequential design outperformed the three-stage designs with regard to the conditional performance score. This shows that the performance of a recalculation rule strongly depends on the perspective one takes for evaluation. Does the three-stage designs' reduction in expected sample size from the global perspective outweigh the disadvantage of the higher variation of the sample size, which leads to inferior performance in terms of the conditional performance score? This question is up to the applicant. We note that the conditional performance score definition can be customized depending on which performance aspects the applicant finds most relevant. In the score definition ( 4 ), we applied an equal weighting scheme for the four performance components. However, if an applicant deems the expected sample size reduction due to the second interim analysis of a three-stage trial more relevant than the uncertainty in sample size that it implies, they could assign a higher weight to the \(l_N\) component and a lower weight to the \(v_N\) component in the score definition. Such a change in the weighting would improve the relative performance of three-stage trials compared to two-stage trials with regard to the conditional performance score.

We also compared the global and conditional performance of group sequential three-stage designs and three-stage designs with recalculation. Recalculation leads to a deterioration in terms of conditional performance. This is in line with similar comparisons in the context of two-stage designs [ 3 ]. Note that none of the recalculation rules applied in this study were optimized according to the conditional performance score, so there might still be a margin for improvement. However, even for conditional performance score-optimized recalculation rules, the two-stage literature has so far shown only limited potential for improvement over group sequential trials [ 18 ]. In terms of global performance, the recalculation rules achieved a slight advantage over the group sequential design. This is due to their potential to achieve the same power with a lower expected sample size and to their ability to counteract underpowering at low effect sizes. The relatively small performance gain compared to group sequential designs is not specific to three-stage trials but was also noted in the two-stage recalculation literature. For example, Jennison and Turnbull [ 7 ] showed that, given power constraints, recalculation can only marginally reduce the expected sample size compared to group sequential designs, and Pilz et al. [ 8 ] optimized recalculation rules to prevent underpowering at low effect sizes but found that these can lead to very high sample size choices. Given these results from the two-stage literature, it is not surprising that the simulation did not reveal larger performance gains for three-stage trials with recalculation.

Regarding any difference between the observed global and conditional performance of the designs, it should be noted that this is not only a matter of the performance perspective (conditional versus global) but also a matter of the definition of the two scores: the conditional performance score includes the variance of the sample size and power, while the global score does not. Regarding robustness against underpowering, a group sequential design powered for lower effect sizes would also suffer less from underpowering.

In this paper, we restricted our analysis to recalculation at the first interim analysis of a three-stage trial. In this way, the resulting design can be considered as a combination of an adaptive design (with flexible sample size choice at the first interim analysis) and a group sequential design (with the second interim analysis offering the option to stop for efficacy or futility, but not to adapt the sample size for the remainder of the trial). Stages two and three can thus be interpreted as an ordinary two-stage group sequential design whose sample size choice has been made in advance (at the first interim analysis). Consequently, stages two and three share typical limitations of group sequential trials. In particular, second-stage interim results may suggest that the chosen sample size for the third stage offers low power, and the trial nevertheless continues with the third stage without an increase in sample size. As there is an ongoing debate about the potential benefits of adaptive designs in terms of flexibility versus the benefits of group sequential designs in terms of simplicity and planning security [ 7 , 10 , 22 ], we deem the suggested three-stage designs of practical relevance, despite their limitations.

An alternative to the suggested design would be to allow recalculation at the second interim analysis instead of at the first interim analysis. Recalculation at the second interim analysis is, in principle, very similar to recalculation in two-stage trials because there is only one remaining stage after the recalculation is performed. Accordingly, recalculation rules from the two-stage trial literature would be applicable without much modification. Hence, an analysis of recalculation at the first interim analysis is arguably of higher research interest. In addition, we are of the opinion that the potential of sample size recalculation is higher at the first interim analysis than at the second. This is because the probability that a trial proceeds to the third stage is lower than the probability that it proceeds to the second stage. Hence, a recalculation rule for the third-stage sample size is less likely to be applied in the trial. Moreover, at the first interim analysis there is more remaining \(\alpha\) to spend than at the second interim analysis (where one additional option for an efficacy stop has already passed). Thus, there is a higher potential to affect the power of the design by modifying the sample size at the first interim analysis.

A possible extension of our suggested designs with recalculation at the first interim analysis would be to allow additional recalculation at the second interim analysis. This would sacrifice the advantage of the group sequential structure of stages two and three in favor of more flexibility. A proper definition of recalculation rules at the second interim analysis, when the design also allows recalculation at the first interim analysis, is, however, not trivial. This is due to the fact that, in this case, the sampling procedure for the effect estimate at the second interim analysis depends on the results of the first interim analysis [ 23 ]. This dependence can make effect estimates biased (which would be problematic for recalculation approaches like OCP that rely on effect estimates) and generally makes the distribution of such effect estimates, or of the corresponding second-stage test statistic, complex and difficult to express mathematically (which is problematic for optimization approaches like sample size-optimized or \(S^G\) score-optimized recalculation). Given this problem, we decided not to include recalculation at the second interim analysis in this study. We encourage further research in this direction that provides a proper solution for the definition of recalculation rules at the second interim analysis. Once such rules are defined, the recalculation rules considered here for the first interim analysis could be applied with modifications: the formulas for the conditional power and expected sample size, on which the observed conditional power, sample size-optimization, and \(S^G\) -optimization approaches rely, would need to be modified such that the third-stage per-group sample size \(n_3(z_2^*)\) is specified as a function of the second-stage interim result. Apart from this change, the recalculation rules for the first interim analysis can be derived in the same way as here.

Another possible extension of the considered methodology concerns different types of endpoints. Our paper focuses on continuous endpoints, but three-stage trials are also possible for binary and time-to-event endpoints. The considered recalculation rules could be extended as well, since they rely on the concept of conditional power, which is also available for these other endpoint types. Some aspects of the trial design would become more complicated in the case of time-to-event endpoints: In our paper, we made the standard assumption that the continuous outcome of a patient is observable at the time of recruitment, so that the amount of information at the first interim analysis depends directly on the number of recruited patients \(n_1\) . For time-to-event endpoints, however, the event rate is also important, so that the amount of information depends not only on \(n_1\) but also on the event rate. This makes it more complicated to find the right time point for an interim analysis, such that enough information for recalculation is available while not too many patients have already been recruited. This could also be an interesting aspect for future research.

With this work, we shed light on three-stage trials with sample size recalculation and their evaluation. However, for practical applications, extensive simulation studies are needed for a detailed comparison of realistic options and other sample size recalculation approaches (e.g., [ 6 ]).

Availability of data and materials

No original trial data are used in this work. Simulated data and the software source code that support the findings of the simulation study can be found in the GitHub repository https://github.com/bokelmab/three_stage_trials .

References

1. Jennison C, Turnbull BW. Adaptive and nonadaptive group sequential tests. Biometrika. 2006;93(1):1–21.

2. Kieser M. Methods and applications of sample size calculation and recalculation in clinical trials. Springer; 2020. pp. 48–51.

3. Herrmann C, Pilz M, Kieser M, Rauch G. A new conditional performance score for the evaluation of adaptive group sequential designs with sample size recalculation. Stat Med. 2020;39(15):2067–100.

4. Mehta C, Bhingare A, Liu L, Senchaudhuri P. Optimal adaptive promising zone designs. Stat Med. 2022;41(11):1950–70.

5. Liu GF, Zhu GR, Cui L. Evaluating the adaptive performance of flexible sample size designs with treatment difference in an interval. Stat Med. 2008;27(4):584–96.

6. Mehta CR, Pocock SJ. Adaptive increase in sample size when interim results are promising: a practical guide with examples. Stat Med. 2011;30(28):3267–84.

7. Jennison C, Turnbull BW. Adaptive sample size modification in clinical trials: start small then ask for more? Stat Med. 2015;34(29):3793–810.

8. Pilz M, Kunzmann K, Herrmann C, Rauch G, Kieser M. Optimal planning of adaptive two-stage designs. Stat Med. 2021;40(13):3196–213.

9. Kunzmann K, Kieser M. Optimal adaptive single-arm phase II trials under quantified uncertainty. J Biopharm Stat. 2020;30(1):89–103.

10. Kunzmann K, Grayling MJ, Lee KM, Robertson DS, Rufibach K, Wason JM. Conditional power and friends: the why and how of (un)planned, unblinded sample size recalculations in confirmatory trials. Stat Med. 2022;41(5):877–90.

11. Uesaka H, Morikawa T, Kada A. Two-phase, three-stage adaptive designs in clinical trials. Jpn J Biom. 2015;35(2):69–93.

12. Wassmer G, Brannath W. Group sequential and confirmatory adaptive designs in clinical trials. Springer; 2016. pp. 135–146.

13. Lehmacher W, Wassmer G. Adaptive sample size calculations in group sequential trials. Biometrics. 1999;55(4):1286–90.

14. Pocock SJ. Group sequential methods in the design and analysis of clinical trials. Biometrika. 1977;64(2):191–9.

15. Pilz M, Kunzmann K, Herrmann C, Rauch G, Kieser M. A variational approach to optimal two-stage designs. Stat Med. 2019;38(21):4159–71.

16. Cui L, Zhang L. On the efficiency of adaptive sample size design. Stat Med. 2019;38(6):933–44.

17. Kunzmann K, Pilz M, Herrmann C, Rauch G, Kieser M. The adoptr package: adaptive optimal designs for clinical trials in R. J Stat Softw. 2021;98:1–21.

18. Herrmann C, Kieser M, Rauch G, Pilz M. Optimization of adaptive designs with respect to a performance score. Biom J. 2022;64(6):989–1006.

19. Bokelmann B, Rauch G, Meis J, Kieser M, Herrmann C. Extension of a conditional performance score for sample size recalculation rules to the setting of binary endpoints. BMC Med Res Methodol. 2024;24(1):1–14.

20. R Core Team. R: a language and environment for statistical computing. Vienna; 2022. https://www.R-project.org/ . Accessed 03 Jan 2024.

21. Wassmer G, Pahlke F. rpact: confirmatory adaptive clinical trial design and analysis. 2023. R package version 3.3.4. https://CRAN.R-project.org/package=rpact . Accessed 03 Jan 2024.

22. Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:1–15.

23. Meis J, Pilz M, Bokelmann B, Herrmann C, Rauch G, Kieser M. Point estimation, confidence intervals, and P-values for optimal adaptive two-stage designs with normal endpoints. Stat Med. 2024;43(8):1577–603.


Funding

Open Access funding enabled and organized by Projekt DEAL. This work was supported by the German Research Foundation (grants RA 2347/4-2 and KI 708/4-2).

Author information

Authors and affiliations

Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Biometry and Clinical Epidemiology, Charitéplatz 1, Berlin, 10117, Germany

Björn Bokelmann, Geraldine Rauch & Carolin Herrmann

Institute of Medical Biometry, University Medical Center Ruprechts-Karls University Heidelberg, Im Neuenheimer Feld 130.3, Heidelberg, 69120, Germany

Jan Meis & Meinhard Kieser

Technische Universität Berlin, Straße des 17. Juni 135, Berlin, 10623, Germany

Geraldine Rauch


Contributions

BB derived the theoretical results, conducted the simulation study, drafted the article, and approved the final version. CH drafted parts of the article. GR and CH developed the research question, supported the conception of the study and the analysis, critically revised the manuscript draft, and approved the final version. MK and JM revised and approved the manuscript.

Corresponding author

Correspondence to Björn Bokelmann .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Bokelmann, B., Rauch, G., Meis, J. et al. Sample size recalculation in three-stage clinical trials and its evaluation. BMC Med Res Methodol 24 , 214 (2024). https://doi.org/10.1186/s12874-024-02337-9

Download citation

Received : 01 February 2024

Accepted : 10 September 2024

Published : 25 September 2024

DOI : https://doi.org/10.1186/s12874-024-02337-9


Keywords

  • Clinical trials
  • Adaptive trial design
  • Sample size adaptation
  • Performance evaluation

BMC Medical Research Methodology

ISSN: 1471-2288


Literature Review of Explainable Tabular Data Analysis


1. Introduction

The Objectives of This Survey

  • To analyze the various techniques, inputs, and methods used to build XAI models since 2021, aiming to identify any superior models for tabular data that have been created since Sahakyan et al.'s paper.
  • To identify and expand upon Sahakyan et al.’s description of need, challenges, gaps, and opportunities in XAI for tabular data.
  • To explore evaluation methods and metrics used to assess the effectiveness of XAI models specifically concerning tabular data and to see if any new metrics have been developed.

2. Background

Aspects of Transparency: Definitions and References

  • Transparency: Transparency does not ensure that a user will fully understand the system, but it does provide access to all relevant information regarding the training data, data preprocessing, system performance, and more. [ ]
  • Algorithmic transparency: The user's capacity to comprehend the process the model uses to generate a specific output from its input data. The main limitation of algorithmically transparent models is that they must be entirely accessible for exploration through mathematical analysis and techniques. [ ]
  • Decomposability: The capacity to explain each component of a model, including its inputs, parameters, and calculations. This enhances the understanding and interpretation of the model's behavior. However, as with algorithmic transparency, not all models can achieve this: for a model to be decomposable, each input must be easily interpretable, so complex features may hinder this ability. Additionally, for an algorithmically transparent model to be decomposable, all its parts must be comprehensible to a human without external tools. [ ]
  • Simulatability: A model's capacity to be understood and conceptualized by a human, with complexity being the main factor. Simple models such as single-perceptron neural networks fit this criterion; more complex rule-based systems with excessive rules do not. An interpretable model should be easily explained through text and visualizations, and must be sufficiently self-contained for a person to consider and reason about it as a whole. [ ]
  • Interaction transparency: The clarity and openness of the interactions between users and AI systems. It involves giving users understandable feedback about the system's actions, decisions, and processes, allowing them to see how their inputs influence outcomes. This transparency fosters trust and enables users to engage more effectively with the technology, as they can see and understand the rationale behind the AI's behavior. [ ]
  • Social transparency: The openness and clarity of an AI system's impact on social dynamics and user interactions. It involves making the system's intentions, decision-making processes, and effects on individuals and communities clear to users and stakeholders. Social transparency helps users understand how AI influences relationships, societal norms, and behaviors, fostering trust and the responsible use of technology. [ ]
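To make these transparency notions concrete, the following minimal sketch (an illustration only, not taken from any surveyed paper; the data are hypothetical) shows an algorithmically transparent and decomposable model: a one-feature linear model fitted by closed-form least squares, whose parameters and per-term contributions can be inspected directly by a human.

```python
# Sketch of a decomposable, simulatable model: a single-feature linear
# regression fitted by closed-form ordinary least squares. Every parameter
# is directly inspectable, so a prediction can be decomposed into
# per-term contributions without any external explanation tool.

def fit_linear(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical toy data generated by y = 2x + 1, so the fit is exact.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
slope, intercept = fit_linear(xs, ys)

# Decomposable explanation: the prediction for x = 4 is slope*4 + intercept,
# and each term can be reported to the user separately.
prediction = slope * 4.0 + intercept
print(slope, intercept, prediction)  # prints: 2.0 1.0 9.0
```

A black-box model, by contrast, offers no such direct decomposition, which is why the post-hoc techniques surveyed in the following sections are needed.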

3. Existing Techniques for Explainable Tabular Data Analysis

4. Challenges and Gaps in Explainable Tabular Data Analysis

4.1. Challenges of Tabular Data

4.2. Bias, Incomplete and Inaccurate Data

4.3. Explanation Quality

4.4. Scalability of Techniques

4.5. Neural Networks

4.6. XAI Methods

4.7. Benchmark Datasets

4.8. Scalability

4.9. Data Structure

4.10. Model Evaluation and Benchmarks

4.11. Review

5. Applications of Explainable Tabular Data Analysis

5.1. Financial Sector

5.2. Healthcare Sector

5.4. Retail Sector

5.5. Manufacturing Sector

5.6. Utility Sector

5.7. Education

5.8. Summary

6. Future Directions and Emerging Trends

7. Conclusions

Author Contributions

Data Availability Statement

Conflicts of Interest

  • Ali, S.; Abuhmed, T.; El-Sappagh, S.; Muhammad, K.; Alonso-Moral, J.M.; Confalonieri, R.; Guidotti, R.; Del Ser, J.; Díaz-Rodríguez, N.; Herrera, F. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Inf. Fusion 2023 , 99 , 101805. [ Google Scholar ] [ CrossRef ]
  • Burkart, N.; Huber, M.F. A Survey on the Explainability of Supervised Machine Learning. J. Artif. Intell. Res. 2021 , 70 , 245–317. [ Google Scholar ] [ CrossRef ]
  • Weber, L.; Lapuschkin, S.; Binder, A.; Samek, W. Beyond explaining: Opportunities and challenges of XAI-based model improvement. Inf. Fusion 2023 , 92 , 154–176. [ Google Scholar ] [ CrossRef ]
  • Marcinkevičs, R.; Vogt, J.E. Interpretable and explainable machine learning: A methods-centric overview with concrete examples. WIREs Data Min. Knowl. Discov. 2023 , 13 , e1493. [ Google Scholar ] [ CrossRef ]
  • Sahakyan, M.; Aung, Z.; Rahwan, T. Explainable Artificial Intelligence for Tabular Data: A Survey. IEEE Access 2021 , 9 , 135392–135422. [ Google Scholar ] [ CrossRef ]
  • Alicioglu, G.; Sun, B. A survey of visual analytics for Explainable Artificial Intelligence methods. Comput. Graph. 2021 , 102 , 502–520. [ Google Scholar ] [ CrossRef ]
  • Cambria, E.; Malandri, L.; Mercorio, F.; Mezzanzanica, M.; Nobani, N. A survey on XAI and natural language explanations. Inf. Process. Manag. 2023 , 60 , 103111. [ Google Scholar ] [ CrossRef ]
  • Chinu, U.; Bansal, U. Explainable AI: To Reveal the Logic of Black-Box Models. New Gener. Comput. 2023 , 42 , 53–87. [ Google Scholar ] [ CrossRef ]
  • Schwalbe, G.; Finzel, B. A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on methods and concepts. Data Min. Knowl. Discov. 2023 , 38 , 3043–3101. [ Google Scholar ] [ CrossRef ]
  • Yang, W.; Wei, Y.; Wei, H.; Chen, Y.; Huang, G.; Li, X.; Li, R.; Yao, N.; Wang, X.; Gu, X.; et al. Survey on Explainable AI: From Approaches, Limitations and Applications Aspects. Hum.-Centric Intell. Syst. 2023 , 3 , 161–188. [ Google Scholar ] [ CrossRef ]
  • Hamm, P.; Klesel, M.; Coberger, P.; Wittmann, H.F. Explanation matters: An experimental study on explainable AI. Electron. Mark. 2023 , 33 , 17. [ Google Scholar ] [ CrossRef ]
  • Lance, E. Ways That the GDPR Encompasses Stipulations for Explainable AI or XAI ; SSRN, Stanford Center for Legal Informatics: Stanford, CA, USA, 2022; pp. 1–7. Available online: https://ssrn.com/abstract=4085089 (accessed on 30 August 2023).
  • Gunning, D.; Vorm, E.; Wang, J.Y.; Turek, M. DARPA’s explainable AI (XAI) program: A retrospective. Appl. AI Lett. 2021 , 2 , e61. [ Google Scholar ] [ CrossRef ]
  • Allgaier, J.; Mulansky, L.; Draelos, R.L.; Pryss, R. How does the model make predictions? A systematic literature review on the explainability power of machine learning in healthcare. Artif. Intell. Med. 2023 , 143 , 102616. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Graziani, M.; Dutkiewicz, L.; Calvaresi, D.; Amorim, J.P.; Yordanova, K.; Vered, M.; Nair, R.; Abreu, P.H.; Blanke, T.; Pulignano, V.; et al. A global taxonomy of interpretable AI: Unifying the terminology for the technical and social sciences. Artif. Intell. Rev. 2022 , 56 , 3473–3504. [ Google Scholar ] [ CrossRef ]
  • Bellucci, M.; Delestre, N.; Malandain, N.; Zanni-Merk, C. Towards a terminology for a fully contextualized XAI. Procedia Comput. Sci. 2021 , 192 , 241–250. [ Google Scholar ] [ CrossRef ]
  • Barbiero, P.; Fioravanti, S.; Giannini, F.; Tonda, A.; Lio, P.; Di Lavore, E. Categorical Foundations of Explainable AI: A Unifying Formalism of Structures and Semantics. In Explainable Artificial Intelligence. xAI, Proceedings of the Communications in Computer and Information Science, Delhi, India, 21–24 May 2024 ; Springer: Cham, Switzerland, 2024; Volume 2155, pp. 185–206. [ Google Scholar ] [ CrossRef ]
  • Vilone, G.; Longo, L. Notions of explainability and evaluation approaches for explainable artificial intelligence. Inf. Fusion 2021 , 76 , 89–106. [ Google Scholar ] [ CrossRef ]
  • Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019 , 1 , 206–215. [ Google Scholar ] [ CrossRef ]
  • Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020 , 58 , 82–115. [ Google Scholar ] [ CrossRef ]
  • Haresamudram, K.; Larsson, S.; Heintz, F. Three Levels of AI Transparency. Computer 2023 , 56 , 93–100. [ Google Scholar ] [ CrossRef ]
  • Wadden, J.J. Defining the undefinable: The black box problem in healthcare artificial intelligence. J. Med Ethic 2021 , 48 , 764–768. [ Google Scholar ] [ CrossRef ]
  • Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 2016 , 3 , 1–12. [ Google Scholar ] [ CrossRef ]
  • Markus, A.F.; Kors, J.A.; Rijnbeek, P.R. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies. J. Biomed. Inform. 2021 , 113 , 103655. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Brożek, B.; Furman, M.; Jakubiec, M.; Kucharzyk, B. The black box problem revisited. Real and imaginary challenges for automated legal decision making. Artif. Intell. Law 2023 , 32 , 427–440. [ Google Scholar ] [ CrossRef ]
  • Li, D.; Liu, Y.; Huang, J.; Wang, Z. A Trustworthy View on Explainable Artificial Intelligence Method Evaluation. Computer 2023 , 56 , 50–60. [ Google Scholar ] [ CrossRef ]
  • Nauta, M.; Trienes, J.; Pathak, S.; Nguyen, E.; Peters, M.; Schmitt, Y.; Schlötterer, J.; van Keulen, M.; Seifert, C. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. ACM Comput. Surv. 2023 , 55 , 295. [ Google Scholar ] [ CrossRef ]
  • Lopes, P.; Silva, E.; Braga, C.; Oliveira, T.; Rosado, L. XAI Systems Evaluation: A Review of Human and Computer-Centred Methods. Appl. Sci. 2022 , 12 , 9423. [ Google Scholar ] [ CrossRef ]
  • Baptista, M.L.; Goebel, K.; Henriques, E.M. Relation between prognostics predictor evaluation metrics and local interpretability SHAP values. Artif. Intell. 2022 , 306 , 103667. [ Google Scholar ] [ CrossRef ]
  • Fouladgar, N.; Alirezaie, M.; Framling, K. Metrics and Evaluations of Time Series Explanations: An Application in Affect Computing. IEEE Access 2022 , 10 , 23995–24009. [ Google Scholar ] [ CrossRef ]
  • Oblizanov, A.; Shevskaya, N.; Kazak, A.; Rudenko, M.; Dorofeeva, A. Evaluation Metrics Research for Explainable Artificial Intelligence Global Methods Using Synthetic Data. Appl. Syst. Innov. 2023 , 6 , 26. [ Google Scholar ] [ CrossRef ]
  • Speith, T. A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods. In Proceedings of the FAccT ‘22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 2239–2250. [ Google Scholar ]
  • Kurdziolek, M. Explaining the Unexplainable: Explainable AI (XAI) for UX. User Experience Magazine . 2022. Available online: https://uxpamagazine.org/explaining-the-unexplainable-explainable-ai-xai-for-ux/ (accessed on 20 August 2023).
  • Kim, B.; Wattenberg, M.; Gilmer, J.; Cai, C.; Wexler, J.; Viegas, F.; Sayres, R. Interpretability beyond feature attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In Proceedings of the 35th International Conference on Machine Learning, ICML, Stockholm, Sweden, 10–15 July 2018; Volume 6, pp. 4186–4195. Available online: https://proceedings.mlr.press/v80/kim18d/kim18d.pdf (accessed on 30 July 2024).
  • Kenny, E.M.; Keane, M.T. Explaining Deep Learning using examples: Optimal feature weighting methods for twin systems using post-hoc, explanation-by-example in XAI. Knowl. Based Syst. 2021 , 233 , 107530. [ Google Scholar ] [ CrossRef ]
  • Alfeo, A.L.; Zippo, A.G.; Catrambone, V.; Cimino, M.G.; Toschi, N.; Valenza, G. From local counterfactuals to global feature importance: Efficient, robust, and model-agnostic explanations for brain connectivity networks. Comput. Methods Programs Biomed. 2023 , 236 , 107550. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • An, J.; Zhang, Y.; Joe, I. Specific-Input LIME Explanations for Tabular Data Based on Deep Learning Models. Appl. Sci. 2023 , 13 , 8782. [ Google Scholar ] [ CrossRef ]
  • Bharati, S.; Mondal, M.R.H.; Podder, P. A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When? IEEE Trans. Artif. Intell. 2023 , 5 , 1429–1442. [ Google Scholar ] [ CrossRef ]
  • Chaddad, A.; Peng, J.; Xu, J.; Bouridane, A. Survey of Explainable AI Techniques in Healthcare. Sensors 2023 , 23 , 634. [ Google Scholar ] [ CrossRef ]
  • Chamola, V.; Hassija, V.; Sulthana, A.R.; Ghosh, D.; Dhingra, D.; Sikdar, B. A Review of Trustworthy and Explainable Artificial Intelligence (XAI). IEEE Access 2023 , 11 , 78994–79015. [ Google Scholar ] [ CrossRef ]
  • Chen, X.-Q.; Ma, C.-Q.; Ren, Y.-S.; Lei, Y.-T.; Huynh, N.Q.A.; Narayan, S. Explainable artificial intelligence in finance: A bibliometric review. Financ. Res. Lett. 2023 , 56 , 104145. [ Google Scholar ] [ CrossRef ]
  • Di Martino, F.; Delmastro, F. Explainable AI for clinical and remote health applications: A survey on tabular and time series data. Artif. Intell. Rev. 2022 , 56 , 5261–5315. [ Google Scholar ] [ CrossRef ]
  • Kök, I.; Okay, F.Y.; Muyanlı, O.; Özdemir, S. Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey. IEEE Internet Things J. 2023 , 10 , 14764–14779. [ Google Scholar ] [ CrossRef ]
  • Haque, A.B.; Islam, A.N.; Mikalef, P. Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research. Technol. Forecast. Soc. Chang. 2023 , 186 , 122120. [ Google Scholar ] [ CrossRef ]
  • Sahoh, B.; Choksuriwong, A. The role of explainable Artificial Intelligence in high-stakes decision-making systems: A systematic review. J. Ambient. Intell. Humaniz. Comput. 2023 , 14 , 7827–7843. [ Google Scholar ] [ CrossRef ]
  • Saranya, A.; Subhashini, R. A systematic review of Explainable Artificial Intelligence models and applications: Recent developments and future trends. Decis. Anal. J. 2023 , 7 , 100230. [ Google Scholar ] [ CrossRef ]
  • Sosa-Espadas, C.E.; Orozco-Del-Castillo, M.G.; Cuevas-Cuevas, N.; Recio-Garcia, J.A. IREX: Iterative Refinement and Explanation of classification models for tabular datasets. SoftwareX 2023 , 23 , 101420. [ Google Scholar ] [ CrossRef ]
  • Meding, K.; Hagendorff, T. Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms. Philos. Technol. 2024 , 37 , 4. [ Google Scholar ] [ CrossRef ]
  • Batko, K.; Ślęzak, A. The use of Big Data Analytics in healthcare. J. Big Data 2022 , 9 , 3. [ Google Scholar ] [ CrossRef ]
  • Borisov, V.; Leemann, T.; Seßler, K.; Haug, J.; Pawelczyk, M.; Kasneci, G. Deep Neural Networks and Tabular Data: A Survey. IEEE Trans. Neural Netw. Learn. Syst. 2022 , 35 , 7499–7519. [ Google Scholar ] [ CrossRef ]
  • Mbanaso, M.U.; Abrahams, L.; Okafor, K.C. Data Collection, Presentation and Analysis. In Research Techniques for Computer Science, Information Systems and Cybersecurity ; Springer: Cham, Switzerland, 2023; pp. 115–138. [ Google Scholar ] [ CrossRef ]
  • Tjoa, E.; Guan, C. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Trans. Neural Networks Learn. Syst. 2021 , 32 , 4793–4813. [ Google Scholar ] [ CrossRef ]
  • Gajcin, J.; Dusparic, I. Redefining Counterfactual Explanations for Reinforcement Learning: Overview, Challenges and Opportunities. ACM Comput. Surv. 2024 , 56 , 219. [ Google Scholar ] [ CrossRef ]
  • Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cogn. Comput. 2023 , 16 , 45–74. [ Google Scholar ] [ CrossRef ]
  • Lötsch, J.; Kringel, D.; Ultsch, A. Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients. BioMedInformatics 2021 , 2 , 1–17. [ Google Scholar ] [ CrossRef ]
  • Hossain, I.; Zamzmi, G.; Mouton, P.R.; Salekin, S.; Sun, Y.; Goldgof, D. Explainable AI for Medical Data: Current Methods, Limitations, and Future Directions. ACM Comput. Surv. 2023 . [ Google Scholar ] [ CrossRef ]
  • Rudin, C.; Chen, C.; Chen, Z.; Huang, H.; Semenova, L.; Zhong, C. Interpretable machine learning: Fundamental principles and 10 grand challenges. Stat. Surv. 2022 , 16 , 1–85. [ Google Scholar ] [ CrossRef ]
  • Zhong, X.; Gallagher, B.; Liu, S.; Kailkhura, B.; Hiszpanski, A.; Han, T.Y.-J. Explainable machine learning in materials science. NPJ Comput. Mater. 2022 , 8 , 204. [ Google Scholar ] [ CrossRef ]
  • Ekanayake, I.; Meddage, D.; Rathnayake, U. A novel approach to explain the black-box nature of machine learning in compressive strength predictions of concrete using Shapley additive explanations (SHAP). Case Stud. Constr. Mater. 2022 , 16 , e01059. [ Google Scholar ] [ CrossRef ]
  • Černevičienė, J.; Kabašinskas, A. Explainable artificial intelligence (XAI) in finance: A systematic literature review. Artif. Intell. Rev. 2024 , 57 , 216. [ Google Scholar ] [ CrossRef ]
  • Weber, P.; Carl, K.V.; Hinz, O. Applications of Explainable Artificial Intelligence in Finance—A systematic review of Finance, Information Systems, and Computer Science literature. Manag. Rev. Q. 2024 , 74 , 867–907. [ Google Scholar ] [ CrossRef ]
  • Leijnen, S.; Kuiper, O.; van der Berg, M. Impact Your Future Xai in the Financial Sector a Conceptual Framework for Explainable Ai (Xai). Hogeschool Utrecht, Lectoraat Artificial Intelligence, Whitepaper, Version 1, 1–24. 2020. Available online: https://www.hu.nl/onderzoek/projecten/uitlegbare-ai-in-de-financiele-sector (accessed on 2 August 2024).
  • Dastile, X.; Celik, T. Counterfactual Explanations with Multiple Properties in Credit Scoring. IEEE Access 2024 , 12 , 110713–110728. [ Google Scholar ] [ CrossRef ]
  • Martins, T.; de Almeida, A.M.; Cardoso, E.; Nunes, L. Explainable Artificial Intelligence (XAI): A Systematic Literature Review on Taxonomies and Applications in Finance. IEEE Access 2023 , 12 , 618–629. [ Google Scholar ] [ CrossRef ]
  • Kalra, A.; Mittal, R. Explainable AI for Improved Financial Decision Support in Trading. In Proceedings of the 2024 11th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 14–15 March 2024; pp. 1–6. [ Google Scholar ]
  • Wani, N.A.; Kumar, R.; Mamta; Bedi, J.; Rida, I. Explainable AI-driven IoMT fusion: Unravelling techniques, opportunities, and challenges with Explainable AI in healthcare. Inf. Fusion 2024 , 110 , 102472. [ Google Scholar ] [ CrossRef ]
  • Li, Y.; Song, X.; Wei, T.; Zhu, B. Counterfactual learning in customer churn prediction under class imbalance. In Proceedings of the 2023 6th International Conference on Big Data Technologies (ICBDT ‘23), Qingdao, China, 22–24 September 2023; pp. 96–102. [ Google Scholar ]
  • Zhang, L.; Zhu, Y.; Ni, Q.; Zheng, X.; Gao, Z.; Zhao, Q. Local/Global explainability empowered expert-involved frameworks for essential tremor action recognition. Biomed. Signal Process. Control 2024 , 95 , 106457. [ Google Scholar ] [ CrossRef ]
  • Sadeghi, Z.; Alizadehsani, R.; Cifci, M.A.; Kausar, S.; Rehman, R.; Mahanta, P.; Bora, P.K.; Almasri, A.; Alkhawaldeh, R.S.; Hussain, S.; et al. A review of Explainable Artificial Intelligence in healthcare. Comput. Electr. Eng. 2024 , 118 , 109370. [ Google Scholar ] [ CrossRef ]
  • Alizadehsani, R.; Oyelere, S.S.; Hussain, S.; Jagatheesaperumal, S.K.; Calixto, R.R.; Rahouti, M.; Roshanzamir, M.; De Albuquerque, V.H.C. Explainable Artificial Intelligence for Drug Discovery and Development: A Comprehensive Survey. IEEE Access 2024 , 12 , 35796–35812. [ Google Scholar ] [ CrossRef ]
  • Murindanyi, S.; Mugalu, B.W.; Nakatumba-Nabende, J.; Marvin, G. Interpretable Machine Learning for Predicting Customer Churn in Retail Banking. In Proceedings of the 2023 7th International Conference on Trends in Electronics and Informatics (ICOEI)., Tirunelveli, India, 11–13 April 2023; pp. 967–974. [ Google Scholar ]
  • Mill, E.; Garn, W.; Ryman-Tubb, N.; Turner, C. Opportunities in Real Time Fraud Detection: An Explainable Artificial Intelligence (XAI) Research Agenda. Int. J. Adv. Comput. Sci. Appl. 2023 , 14 , 1172–1186. [ Google Scholar ] [ CrossRef ]
  • Dutta, J.; Puthal, D.; Yeun, C.Y. Next Generation Healthcare with Explainable AI: IoMT-Edge-Cloud Based Advanced eHealth. In Proceedings of the IEEE Global Communications Conference, GLOBECOM, Kuala Lumpur, Malaysia, 4–8 December 2023; pp. 7327–7332. [ Google Scholar ]
  • Njoku, J.N.; Nwakanma, C.I.; Lee, J.-M.; Kim, D.-S. Evaluating regression techniques for service advisor performance analysis in automotive dealerships. J. Retail. Consum. Serv. 2024 , 80 , 103933. [ Google Scholar ] [ CrossRef ]
  • Agostinho, C.; Dikopoulou, Z.; Lavasa, E.; Perakis, K.; Pitsios, S.; Branco, R.; Reji, S.; Hetterich, J.; Biliri, E.; Lampathaki, F.; et al. Explainability as the key ingredient for AI adoption in Industry 5.0 settings. Front. Artif. Intell. 2023 , 6 , 1264372. [ Google Scholar ] [ CrossRef ]
  • Finzel, B.; Tafler, D.E.; Thaler, A.M.; Schmid, U. Multimodal Explanations for User-centric Medical Decision Support Systems. CEUR Workshop Proc. 2021 , 3068 , 1–6. [ Google Scholar ]
  • Brochado, F.; Rocha, E.M.; Addo, E.; Silva, S. Performance Evaluation and Explainability of Last-Mile Delivery. Procedia Comput. Sci. 2024 , 232 , 2478–2487. [ Google Scholar ] [ CrossRef ]
  • Kostopoulos, G.; Davrazos, G.; Kotsiantis, S. Explainable Artificial Intelligence-Based Decision Support Systems: A Recent Review. Electronics 2024 , 13 , 2842. [ Google Scholar ] [ CrossRef ]
  • Nyrup, R.; Robinson, D. Explanatory pragmatism: A context-sensitive framework for explainable medical AI. Ethics Inf. Technol. 2022 , 24 , 13. [ Google Scholar ] [ CrossRef ]
  • Talaat, F.M.; Aljadani, A.; Alharthi, B.; Farsi, M.A.; Badawy, M.; Elhosseini, M. A Mathematical Model for Customer Segmentation Leveraging Deep Learning, Explainable AI, and RFM Analysis in Targeted Marketing. Mathematics 2023 , 11 , 3930. [ Google Scholar ] [ CrossRef ]
  • Kulkarni, S.; Rodd, S.F. Context Aware Recommendation Systems: A review of the state of the art techniques. Comput. Sci. Rev. 2020 , 37 , 100255. [ Google Scholar ] [ CrossRef ]
  • Sarker, A.A.; Shanmugam, B.; Azam, S.; Thennadil, S. Enhancing smart grid load forecasting: An attention-based deep learning model integrated with federated learning and XAI for security and interpretability. Intell. Syst. Appl. 2024 , 23 , 200422. [ Google Scholar ] [ CrossRef ]
  • Nnadi, L.C.; Watanobe, Y.; Rahman, M.; John-Otumu, A.M. Prediction of Students’ Adaptability Using Explainable AI in Educational Machine Learning Models. Appl. Sci. 2024 , 14 , 5141. [ Google Scholar ] [ CrossRef ]
  • Vellido, A.; Martín-Guerrero, J.D.; Lisboa, P.J.G. Making machine learning models interpretable. In Proceedings of the 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 25–27 April 2012; pp. 163–172. Available online: https://www.esann.org/sites/default/files/proceedings/legacy/es2012-7.pdf (accessed on 16 August 2024).
  • Alkhatib, A.; Ennadir, S.; Boström, H.; Vazirgiannis, M. Interpretable Graph Neural Networks for Tabular Data. In Proceedings of the ICLR 2024 Data-Centric Machine Learning Research (DMLR) Workshop, Vienna, Austria, 26–27 July 2024; pp. 1–35. Available online: https://openreview.net/pdf/60ce21fd5bcf7b6442b1c9138d40e45251d03791.pdf (accessed on 23 August 2024).
  • Saeed, W.; Omlin, C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl. Based Syst. 2023 , 263 , 110273. [ Google Scholar ] [ CrossRef ]
  • de Oliveira, R.M.B.; Martens, D. A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data. Appl. Sci. 2021 , 11 , 7274. [ Google Scholar ] [ CrossRef ]
  • Bienefeld, N.; Boss, J.M.; Lüthy, R.; Brodbeck, D.; Azzati, J.; Blaser, M.; Willms, J.; Keller, E. Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals. NPJ Digit. Med. 2023 , 6 , 94. [ Google Scholar ] [ CrossRef ]
  • Molnar, C.; Casalicchio, G.; Bischl, B. Interpretable Machine Learning—A Brief History, State-of-the-Art and Challenges. In ECML PKDD 2020 Workshops, Proceedings of the ECML PKDD 2020, Ghent, Belgium, 14–18 September 2020 ; Koprinska, I., Kamp, M., Appice, A., Loglisci, C., Antonie, L., Zimmermann, A., Guidotti, R., Özgöbek, Ö., Ribeiro, R.P., Gavaldà, R., et al., Eds.; Springer: Cham, Switzerland, 2021; Volume 1323, pp. 417–431. [ Google Scholar ] [ CrossRef ]
  • Pawlicki, M.; Pawlicka, A.; Kozik, R.; Choraś, M. Advanced insights through systematic analysis: Mapping future research directions and opportunities for xAI in deep learning and artificial intelligence used in cybersecurity. Neurocomputing 2024 , 590 , 127759. [ Google Scholar ] [ CrossRef ]
  • Hartog, P.B.R.; Krüger, F.; Genheden, S.; Tetko, I.V. Using test-time augmentation to investigate explainable AI: Inconsistencies between method, model and human intuition. J. Cheminform. 2024 , 16 , 39. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Srinivasu, P.N.; Sandhya, N.; Jhaveri, R.H.; Raut, R. From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies. Mob. Inf. Syst. 2022 , 2022 , 167821. [ Google Scholar ] [ CrossRef ]
  • Rong, Y.; Leemann, T.; Nguyen, T.-T.; Fiedler, L.; Qian, P.; Unhelkar, V.; Seidel, T.; Kasneci, G.; Kasneci, E. Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations. IEEE Trans. Pattern Anal. Mach. Intell. 2024 , 46 , 2104–2122. [ Google Scholar ] [ CrossRef ]
  • Baniecki, H.; Biecek, P. Adversarial attacks and defenses in explainable artificial intelligence: A survey. Inf. Fusion 2024 , 107 , 102303. [ Google Scholar ] [ CrossRef ]
  • Panigutti, C.; Hamon, R.; Hupont, I.; Llorca, D.F.; Yela, D.F.; Junklewitz, H.; Scalzo, S.; Mazzini, G.; Sanchez, I.; Garrido, J.S.; et al. The role of explainable AI in the context of the AI Act. In Proceedings of the FAccT ‘23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, 12–15 June 2023; pp. 1139–1150. [ Google Scholar ]
  • Madiega, T.; Chahri, S. EU Legislation in Progress: Artificial Intelligence Act, 1–12. 2024. Available online: https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf (accessed on 16 August 2024).


Domains and Examples of Applications of Explainable Tabular Data

  • Financial Sector: identity verification in client onboarding; transaction data analysis; fraud detection in claims management; anti-money laundering monitoring; financial trading; risk management; processing of loan applications; bankruptcy prediction
  • Insurance Industry: insurance premium calculation
  • Healthcare Sector: patient diagnosis; drug efficacy; personalized healthcare
  • Fraud: identification of fraudulent transactions
  • Retail Sector: customer churn prediction; improved product suggestions for customers; customer segmentation
  • Human Resources: employee churn prediction; evaluation of employee performance
  • Manufacturing Sector: logistics and supply chain management; order fulfilment; quality control; process control; planning and scheduling; predictive maintenance
  • Utility Sector: smart grid load balancing; forecasting energy consumption for customers
  • Education: predicting student adaptability; predicting student exam grades; course recommendations
Databases searched and reasons for their selection:

  • Google Scholar. Comprehensive coverage: accesses a wide range of disciplines and sources, including articles, theses, books, and conference papers, providing a broad view of available literature. User-friendly interface: easy to use and accessible. Citation tracking: shows how often articles have been cited, helping to gauge their influence and relevance.
  • IEEE Xplore. Specialised focus: electrical engineering, computer science, and electronics. High-quality publications: includes peer-reviewed journals and conference proceedings from reputable organizations. Cutting-edge research: provides access to the latest research published in technology and engineering.
  • ACM Digital Library. Focus on computing and information technology: resources specifically related to computing, software engineering, and information systems. Peer-reviewed content: high academic quality through rigorous peer review. Conference proceedings: covers important conferences in computing, giving the latest research developments.
  • PubMed. Biomedical focus: a vast collection of literature in medicine, life sciences, and health, often featuring innovative computing solutions. Free access: many articles are available for free. High-quality research: peer-reviewed journals; a trusted source for medical and clinical research.
  • Scopus. Extensive database: covers a wide range of disciplines. Citation analysis tools: provides metrics for authors and journals. Quality control: peer-reviewed literature, ensuring the reliability of the sources.
  • ScienceDirect. Multidisciplinary coverage: a vast collection of scientific and technical research. Quality journals: high-impact journals. Full-text access: access to a large number of full-text articles, facilitating in-depth research.
Search terms and the number of papers returned:

  • XAI AND explainable artificial intelligence: 128
  • XAI AND explainable artificial intelligence AND 2021: 28
  • XAI AND explainable artificial intelligence AND 2022: 43
  • XAI AND explainable artificial intelligence AND 2023: 57
  • 2021 AND tabular: 2
  • 2022 AND tabular: 5
  • 2023 AND tabular: 5
  • 2021 AND survey (in title): 5
  • 2022 AND survey (in title): 1
  • 2023 AND survey (in title): 8
  • 2021 AND survey AND tabular: 1
  • 2022 AND survey AND tabular: 6
  • 2023 AND survey AND tabular: 11
  • 2021 AND survey AND tabular AND Sahakyan (Sahakyan’s article): 1
  • 2022 AND survey AND tabular AND Sahakyan: 0
  • 2023 AND survey AND tabular AND Sahakyan: 2
Definitions of comprehensibility:

  • Comprehensibility: the clarity of the language employed by a method for providing explanations [ ].
  • Comprehensible systems: understandable systems produce symbols, allowing users to generate explanations for how a conclusion is derived [ ].
  • Degree of comprehensibility: a subjective evaluation, as the potential for understanding relies on the viewer’s background knowledge; the more specialized the AI application, the greater the reliance on domain knowledge for the comprehensibility of the XAI system [ ].
  • Comprehensibility of individual explanations: the length of explanations and how readable they are [ ].
Summary of XAI Types

  • Counterfactual explanations. Description: illustrate how minimal changes in input features can change the model’s prediction, e.g., “If income increases by £5000, the loan is approved”. Examples: DiCE, WatcherCF, GrowingSpheresCF. Pros: causal insight (understand the causal relationship between input features and predictions); personalized explanations (tailors individualized insights); decision support (aids decision making with actionable, outcome-focused changes); diverse outputs (capable of producing multiple diverse counterfactual explanations). Cons: complexity (generating counterfactuals is computationally intensive, particularly for complex models and high-dimensional data); model specificity (effectiveness is influenced by the underlying model’s characteristics); interpretation (conveying implications can necessitate domain expertise). Evaluation: alignment with predicted outcome (ensuring the generated counterfactual instances closely reflect the intended predicted outcome); proximity to original instance (maintaining similarity to the original instance whilst altering the fewest features possible); feasible feature values (the counterfactual features should be practical and adhere to the data distribution).
  • Feature importance. Description: feature importance methods assess how much each feature contributes to the model’s predictions. Examples: permutation importance, gain importance, SHAP, LIME. Pros: helps in feature selection and model interpretability; provides insight into the most influential features driving the model’s decisions. Cons: may not capture complex feature interactions; can be sensitive to data noise and model assumptions. Evaluation: relative importance (rank features based on their contribution to the model’s prediction); stability (ensure consistency of feature importance over different subsets of the data or re-trainings of the model); model impact (assessing the influence of individual features on the model’s predictive performance).
  • Feature interactions. Description: feature interaction analysis looks at how the combined effect of multiple input features influences the model’s predictions. Examples: partial dependence plots, accumulated local effects plots, individual conditional expectation plots, interaction values. Pros: reveals intricate and synergistic connections among features; enhances insight into the model’s decision-making mechanism. Cons: visualizing and interpreting features can be difficult, especially when dealing with high-dimensional data; the computational complexity grows as the number of interacting features increases. Evaluation: non-linear relationships (uncovers and visualizes complex, non-linear interactions among the features); holistic insight (provides a comprehensive understanding of how features collectively impact the model’s predictions); predictive power (evaluates the combined effects of interacting features on the model’s performance).
  • Decision rules. Description: decision rules provide clear, human-readable guidelines derived from the model, such as “If age > 30 and income > 50k, then approve loan”. Examples: decision trees, rule-based models, anchors. Pros: provides clear and intuitive insights into the model’s predictions; easily understood by non-technical stakeholders. Cons: might struggle to capture complex relationships in the data, leading to oversimplification; can be prone to overfitting, reducing generalization performance. Evaluation: transparency (offers clear and interpretable explanations of the conditions and criteria used for decision making); understandability (ensures ease of understanding by non-technical stakeholders and experts alike); model adherence (check that decision rules accurately capture the model’s decision logic without oversimplification).
  • Simplified models. Description: simplified models are interpretable machine learning models that approximate the behavior of a more complex black-box model. Examples: generalized additive models, interpretable tree ensembles. Pros: gives a balance between model interpretability and model complexity; offers global insights into the model’s decision-making process. Cons: might not capture the total complexity of the underlying data-generating process; needs careful model choice and tuning to maintain a good trade-off between interpretability and accuracy. Evaluation: balance of complexity (achieves an optimal compromise between model simplicity and predictive performance); interpretable representation (ensures the simplified model offers transparent and intuitive insights into the original complex model’s behavior); fidelity to original model (assesses the extent to which the simplified model captures the key characteristics and patterns of the original complex model).
Possible research areas and suggestions:

  • Hybrid explanations [ ]: combining multiple XAI techniques to provide more comprehensive and robust explanations for tabular data models [ ]; integrating global and local interpretability methods to offer both high-level and instance-specific insights.
  • Counterfactual explanations: generating counterfactual examples that show how the model’s predictions would change if certain feature values were altered [ ]; helping users understand the sensitivity of the model to different feature inputs and how to achieve desired outcomes.
  • Causal inference [ ]: incorporating causal reasoning into XAI methods to better understand the underlying relationships and dependencies in tabular data [ ]; identifying causal features that drive the model’s predictions, beyond just correlational relationships.
  • Interactive visualizations: developing interactive visualization tools that allow users to explore and interpret the model’s behavior on tabular data [ ]; enabling users to interactively adjust feature values and observe the corresponding changes in model outputs [ ].
  • Scalable XAI techniques [ ]: designing XAI methods that can handle the growing volume and complexity of tabular datasets across various domains [ ]; improving the computational efficiency and scalability of XAI techniques to support real-world applications.
  • Domain-specific XAI: tailoring XAI approaches to the specific needs and requirements of different industries and applications that rely on tabular data, such as finance, healthcare, and manufacturing; incorporating domain knowledge and constraints to enhance the relevance and interpretability of explanations [ ].
  • Automated explanation generation [ ]: developing AI-powered systems that can automatically generate natural language explanations for the model’s decisions on tabular data [ ]; bridging the gap between the technical aspects of the model and the end-user’s understanding [ ].

Share and Cite

O’Brien Quinn, H.; Sedky, M.; Francis, J.; Streeton, M. Literature Review of Explainable Tabular Data Analysis. Electronics 2024 , 13 , 3806. https://doi.org/10.3390/electronics13193806



  • Systematic Review
  • Open access
  • Published: 27 September 2024

Using best-worst scaling to inform policy decisions in Africa: a literature review

  • Laura K. Beres 1 ,
  • Nicola B. Campoamor 2 ,
  • Rachael Hawthorn 3 ,
  • Melissa L. Mugambi 4 ,
  • Musunge Mulabe 5 ,
  • Natlie Vhlakis 5 ,
  • Michael Kabongo 5 ,
  • Anne Schuster 2 &
  • John F. P. Bridges 2  

BMC Public Health volume 24, Article number: 2607 (2024)


Background

Stakeholder engagement in policy decision-making is critical to inform required trade-offs, especially in low-and-middle income settings, such as many African countries. Discrete-choice experiments are now commonly used to engage stakeholders in policy decisions, but other methods such as best-worst scaling (BWS), a theory-driven prioritization technique, could be equally important. We sought to document and explore applications of BWS to assess stakeholder priorities in the African context to bring attention to BWS as a method and to assess how and why it is being used to inform policy.

Methods

We conducted a literature review of published applications of BWS for prioritization in Africa.

Results

Our study identified 35 studies, the majority published in the past four years. BWS has most commonly been used in agriculture (43%) and health (34%), although its broad applicability is demonstrated through use in fields influencing social and economic determinants of health, including business, environment, and transportation. Published studies from eastern, western, southern, and northern Africa include a broad range of sample sizes, design choices, and analytical approaches. Most studies are of high quality and high policy relevance. Several studies cited benefits of using BWS; those noting limitations mostly described potential rather than observed limitations.

Conclusions

Growing use of the method across the African continent demonstrates its feasibility and utility, and supports its consideration by researchers, program implementers, policy makers, and funders when conducting preference research to influence policy and improve health systems.

Registration

The review was registered on PROSPERO (CRD42020209745).


Introduction

Health policies govern both health systems hardware (e.g., human resources, finance, medicines and technologies) and software (e.g., values, norms, power dynamics) by constraining or facilitating individual, organizational, and community actions and experiences [ 1 , 2 , 3 ]. Additionally, healthcare workers take numerous discretionary decisions each day to translate policy into practice and to fill gaps between policy guidance and implementation realities [ 4 , 5 , 6 ]. When guided by evidence, health policies and clinical decision-making facilitate optimized health practices and outcomes [ 6 , 7 , 8 ]. However, policy and practice decisions are often made in an evidence void due to a lack of data or poor evidence access and translation, resulting in inefficient, ineffective, or harmful health system outcomes [ 9 , 10 , 11 ].

Evidence about the preferences of those affected by health decision-making is particularly limited, but greatly needed for policy and practice. Internationally recognized processes for developing health guidelines and recommendations include incorporating the values and preferences of affected parties, such as patients and healthcare workers, into decision-making [ 12 , 13 ]. Limited availability of preference-based evidence downgrades the strength of recommendations [ 13 , 14 ]. Additionally, the welcomed and growing call for person-centered healthcare explicitly requires integration of patient preferences and perspectives into health practice [ 15 , 16 ]. Across all settings, policy makers must trade off services implemented with available resources. Required trade-offs are often more common and more challenging in more resource-limited settings, such as low-and-middle-income countries. Expanded use of tools to understand and systematically incorporate evidence on the preferences of patients, healthcare workers, and other stakeholders into policies and practices is needed to improve health systems at every level [ 17 , 18 , 19 , 20 ].

Researchers and practitioners use multiple methods to understand preferences and priorities, including deliberative processes such as testimony or community meetings, qualitative processes such as focus group discussions and interviews, and mixed-methods approaches such as human-centered participatory design processes, surveys, Likert scales, and community scorecards. Developing an even broader methodological toolkit allows for more influential data to facilitate policy and practice changes, as teams will be equipped to optimize the methods selected for the target audience and research question. Stated preference methods offer a theory-driven, structured approach to understanding preferences and priorities. They produce interpretable outcomes with clear relevance to the questions of interest. They have been used across various industries, including healthcare, transportation services, and grocery retailing, demonstrating their versatility and broad suitability. However, while a range of preference elicitation methods exist [ 21 , 22 , 23 ], discrete choice experiments (DCEs) predominate in the published health literature [ 24 ]. Recent international studies have shown the potential utility and appropriateness of other, lesser-known but valuable quantitative stated preference methods, such as best-worst scaling (BWS) [ 25 ]. BWS studies from eastern, western, southern, and northern Africa have demonstrated interesting advances [ 26 , 27 , 28 ], but have received less attention than BWS in other regions.

The goal of this study was to document and explore applications of BWS to assess stakeholder priorities in eastern, western, southern, and northern Africa to inform future preference assessment implementation. Such documentation of current BWS in the African setting is an important step in bringing attention to this increasingly important method and in stressing that there are other theory-driven alternatives to DCEs, which are now commonly applied in Africa [ 24 ]. The review presents study design, methods, quality, and policy relevance from extant studies to enable preference researchers to consider the appropriateness of similar BWS applications in their work. While several international reviews have been conducted on BWS in health [ 25 , 29 ] and more generally [ 30 ], it is important to document how this method is being used in the African context and what specific role it might have in informing policy there. Furthermore, there has been increased use of these methods in Africa since these previous reviews. It is important that the contributions of African preference researchers are well-documented, both to ensure their inclusion in international efforts around preference methods and to ensure that the presented strengths and weaknesses of these methods are well understood [ 31 ].

Best-worst scaling (BWS) is a choice experiment aimed at assessing how individuals or groups prioritize concepts. It offers a theory-driven alternative to descriptive rating, ranking, or Likert-scale preference measurement, leveraging the relative ease with which participants select extremes compared to mid-range rankings. Several types of BWS exist; however, they all share the same underlying concept: individuals choose the ‘best’ and ‘worst’ (or ‘most’ and ‘least’ important) items from a given sub-set of three or more items. Sub-sets of the items are shown repeatedly in different combinations, requiring choices of ‘best’ and ‘worst’ within each sub-set. This results in a prioritization of the items. Even if all options are preferred, participants are forced to prioritize among the choices. The purpose is to determine the most and least preferred options among items existing on a subjective, latent value continuum. The application of BWS for prioritizing objects has also been referred to as MaxDiff, object scaling, BWS case 1, or the BWS object case. Best-worst responses can also be used in other choice formats that are more similar to DCEs, which are not the focus of this paper [ 32 ].

BWS draws on random utility theory to identify perceived importance and priorities among a set of items (known as attributes) of a scenario, based on repeat choices. It can estimate the likelihood of preference selection and the heterogeneity of preferences between groups. BWS can be used with a relatively small sample size and analyzed with a range of methods, from simple count analyses to more complex probabilistic models. This range of analysis approaches makes the method particularly useful when working across a broad range of stakeholders, including policy makers, who want and need to understand how conclusions were drawn. Compared to DCEs, where participants select which of two or more presented profiles (specific item combinations) are preferred, BWS offers more information per choice task (i.e., best and worst choices instead of only best), allowing for a more efficient design with either a smaller sample size or more information per task. BWS may also impose a lower cognitive burden on participants than DCEs [ 33 , 34 ]. We refer the reader to additional resources for more detail on the theory, methods, and application of BWS [ 35 , 36 ].
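The "simple count analyses" mentioned above can be sketched in a few lines: an item's standardized best-worst score is the number of times it was chosen as best, minus the number of times it was chosen as worst, divided by the number of times it was shown. The responses below are fabricated purely for illustration:

```python
# Minimal sketch of a count-based BWS analysis on fabricated responses.
# Each response records the sub-set shown and the 'best'/'worst' choices.
from collections import Counter

responses = [
    {"shown": ["price", "safety", "access"], "best": "safety", "worst": "price"},
    {"shown": ["safety", "trust", "access"], "best": "safety", "worst": "access"},
    {"shown": ["price", "trust", "access"], "best": "trust", "worst": "price"},
    {"shown": ["price", "safety", "trust"], "best": "safety", "worst": "price"},
]

best = Counter(r["best"] for r in responses)
worst = Counter(r["worst"] for r in responses)
appearances = Counter(item for r in responses for item in r["shown"])

# Standardized best-worst score: (#best - #worst) / #appearances, in [-1, 1].
scores = {item: (best[item] - worst[item]) / appearances[item] for item in appearances}
for item, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {s:+.2f}")
```

A score near +1 means an item was almost always picked as best when shown, and a score near -1 that it was almost always picked as worst; probabilistic models (e.g., conditional logit) refine this same ordering.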

Methods

Our review of BWS for prioritization in published research from southern, eastern, western, and northern Africa drew from a broader database of BWS studies identified in previous reviews. While an earlier literature review on all types of BWS choice formats had previously been published [ 29 ], our team completed the first systematic review of BWS applications in health published prior to 2022 [ 25 ]. We then extended our review (PROSPERO: CRD42020209745) to include publications from any field (e.g., health, business, agriculture, etc.) published prior to 2023 [ 30 ]. We have continued to improve this database of articles, utilizing additional search strategies and including previously unidentified relevant articles sent directly to our team. Our database currently holds 623 published studies, from which we systematically extract data on application, development, design, administration/analysis, quality, and policy relevance. The study reported in this paper leverages this most expanded database. Review methods were detailed in prior publications [ 25 ].

We included all studies from the database that focused exclusively on, or incorporated into a multi-country study, participants from eastern, western, southern, or northern Africa. We then accessed extracted data in the database for each study to characterize BWS application types, study methods, context, quality, and policy relevance. Specific fields included: study year, country, topic, objective, sample size, perspective (i.e., whose preferences are measured), terminology used to describe BWS type (e.g., object case, MaxDiff), mode of survey administration (e.g., in-person), time frame of prioritization scenario (i.e., past, present, or future), methods of instrument development (e.g., formative research, literature review to determine attributes and survey tool), measurement scale, experimental design type, BWS anchor description (i.e., most/least, best/worst), directionality, total number of objects, number of objects per task, number of tasks in the experiment, number of tasks per respondent, analysis approach, and theoretical assumptions [ 25 ]. We re-classified the database ‘topic’ for three studies from ‘agriculture’ to ‘business’ after reviewing the study journal and conclusions. To understand study quality, we utilized the PREFS checklist, which measures the quality and validity of preference studies on a 0–5 point scale, assigning one point for each of the following: Purpose of the study clearly defined; Respondents similar to non-respondents (sampling); Explanation of preference assessment methods clear; Findings reported for all respondents; and Significance testing done [ 37 ]. We also present the validated subjective quality (range: 1–10) and policy relevance (range: 1–10) scores adjudicated by the prior review.
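Since a PREFS score is just a tally over five yes/no criteria, it can be sketched as a tiny checklist function. The study record and field names below are hypothetical illustrations, not the review's actual data structure:

```python
# Minimal sketch: tally a PREFS quality score (0-5) from five yes/no checks.
# The 'study' record and its field names are hypothetical illustrations.
PREFS_ITEMS = [
    "purpose_defined",      # Purpose of the study clearly defined
    "respondents_similar",  # Respondents similar to non-respondents (sampling)
    "explanation_clear",    # Explanation of preference assessment methods clear
    "findings_all",         # Findings reported for all respondents
    "significance_tested",  # Significance testing done
]

def prefs_score(study: dict) -> int:
    """Return the PREFS score: one point per satisfied criterion."""
    return sum(bool(study.get(item)) for item in PREFS_ITEMS)

example = {
    "purpose_defined": True,
    "respondents_similar": False,
    "explanation_clear": True,
    "findings_all": True,
    "significance_tested": True,
}
print(prefs_score(example))  # 4
```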

Identified strengths and weaknesses of BWS were extracted from the studies. This information primarily came from the background, methods, and discussion sections of the studies, specifically where the authors justified the use of BWS or highlighted study limitations. Seven domains of strengths and weaknesses were derived and modified from existing best-practice documents for preference research [ 38 , 39 ].

To highlight the application of BWS and improve understanding of the method, we include narrative case study descriptions of two of the included manuscripts. We chose health-related studies for their high quality (PREFS score ≥ 4) and policy relevance (score ≥ 7), with illustrative diversity across other factors including country, perspective, design, instrument development, and analysis approach. While diversity in other respects, such as assumptions, directionality, and heterogeneity analysis, is also interesting (see Tables 1, 2 and 3), the selected articles offered rich contrast among the articles in this review.

Results

The review identified 35 published studies using best-worst scaling for prioritization focused on (in full or in part) participants from Africa. As seen in Table  1 , studies originated from northern Africa ( N  = 1), eastern Africa ( N  = 7), southern Africa ( N  = 13), and western Africa ( N  = 14). This included 7 studies not identified by previous reviews. The majority of the papers (72%) from Africa were published between 2019 and 2023 [ 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 ] and referred to ‘Best Worst Scaling’ in their write-up (85%) [ 40 , 41 , 43 , 44 , 45 , 46 , 47 , 49 , 51 , 52 , 54 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 ]. Of the 14 countries where research was conducted, South Africa produced the most studies (34%) [ 41 , 42 , 46 , 51 , 52 , 55 , 56 , 60 , 61 , 66 , 67 , 70 , 74 ]. Most papers presented results from empirical research (97%) [ 40 , 41 , 42 , 44 , 45 , 46 , 47 , 48 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 70 , 71 , 72 , 73 , 74 ]. Agriculture (43%) [ 41 , 43 , 44 , 45 , 47 , 49 , 50 , 51 , 54 , 59 , 60 , 63 , 64 , 65 , 69 , 70 , 71 ] and health (34%) [ 40 , 46 , 48 , 52 , 55 , 56 , 57 , 62 , 67 , 72 , 73 , 74 ] were the most common research topics. Most studies (51%) measured preferences from the perspective of the patient / consumer [ 41 , 42 , 46 , 49 , 51 , 54 , 55 , 57 , 60 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 , 72 ], with over a third (37%) measuring provider / producer preferences [ 40 , 43 , 44 , 45 , 47 , 48 , 50 , 56 , 58 , 59 , 62 , 71 , 74 ].

Study design

All identified studies articulated their approach to developing their BWS instrument. The vast majority (74%) utilized a literature review [ 41 , 42 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 52 , 53 , 54 , 55 , 56 , 57 , 60 , 61 , 64 , 65 , 66 , 67 , 68 , 69 , 71 , 72 , 74 ] while less than a quarter conducted piloting (17%) [ 42 , 46 , 50 , 53 , 56 , 73 ] or pretesting (20%) [ 47 , 50 , 52 , 54 , 56 , 66 , 71 ] of the instrument prior to administration. Half of the studies reported utilizing formal qualitative methods during instrument development [ 40 , 42 , 43 , 46 , 48 , 50 , 53 , 56 , 57 , 60 , 62 , 67 , 68 , 69 , 73 , 74 ]. In-person, surveyor-administered surveys were the most common mode of administration (66%) [ 40 , 43 , 44 , 45 , 46 , 47 , 48 , 50 , 52 , 53 , 54 , 55 , 58 , 59 , 61 , 62 , 64 , 65 , 66 , 67 , 68 , 69 , 72 ] with online (9%) [ 51 , 56 , 74 ] and self-administered (11%) [ 41 , 57 , 70 , 73 ] formats less frequently utilized. The time horizon used to contextualize the survey was most frequently the present tense (89%) [ 40 , 41 , 42 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 , 71 , 72 , 74 ], with only four studies asking about the future (11%) [ 43 , 54 , 55 , 73 ] and no studies asking about the past. BWS most frequently measured importance (69%) [ 41 , 45 , 46 , 49 , 51 , 52 , 53 , 55 , 56 , 57 , 59 , 60 , 61 , 63 , 64 , 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 ], followed by preferences motivating responses (17%) [ 40 , 43 , 44 , 48 , 54 , 65 ]. The most common phrasing to anchor the experiment was asking participants to choose the “most” and “least” [important/preferred/concerning] (86%) [ 40 , 41 , 42 , 44 , 45 , 46 , 47 , 48 , 49 , 51 , 52 , 54 , 55 , 56 , 57 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 68 , 69 , 70 , 71 , 72 , 73 , 74 ], followed by asking participants to choose the “best” and “worst” (11%) [ 43 , 53 , 58 , 67 ].
The most common experimental design used was a Balanced Incomplete Block Design (BIBD) (69%) [ 40 , 41 , 42 , 43 , 44 , 45 , 46 , 48 , 49 , 50 , 51 , 53 , 54 , 56 , 58 , 59 , 60 , 64 , 65 , 67 , 70 , 71 , 72 , 73 ], with 14% using a design from Sawtooth software [ 47 , 57 , 66 , 68 , 69 ]. The mean number of total objects per experiment was 15.9 (standard deviation (sd): 9.1, min: 6, max: 48). Experiments had a mean of 5.2 objects per task (sd: 3.5, min: 3, max: 24), 22.4 choice tasks per experiment (sd: 38.4, min: 1, max: 210), and 13.7 choice tasks per respondent (sd: 8.2, min: 1, max: 51) (Table  2 ).
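The balance properties that make a BIBD attractive (every item shown equally often, every pair of items co-occurring equally often) are easy to verify programmatically. A small sketch using the classic design with 7 items in 7 blocks of size 3, where each item appears 3 times and each pair co-occurs exactly once; this particular design is illustrative and not taken from any reviewed study:

```python
# Minimal sketch: verify the balance properties of a BIBD with
# v=7 items, b=7 blocks, block size k=3, replication r=3, lambda=1.
# (The blocks form the classic Fano-plane design, shown for illustration.)
from itertools import combinations
from collections import Counter

blocks = [
    (1, 2, 3), (1, 4, 5), (1, 6, 7),
    (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6),
]

item_counts = Counter(i for b in blocks for i in b)
pair_counts = Counter(p for b in blocks for p in combinations(sorted(b), 2))

assert set(item_counts.values()) == {3}   # every item shown r=3 times
assert set(pair_counts.values()) == {1}   # every pair co-occurs lambda=1 time
print(f"{len(item_counts)} items, {len(blocks)} blocks: balanced")
```

In a BWS survey, each block would become one choice set from which respondents pick the best and worst item, so the balance checks above guarantee that every item gets an equal chance of being chosen.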

BWS administration and analysis

Median sample size was 282 participants (IQR: 150–451, min: 28, max: 1002), but only 17% of papers gave a formal sample size justification (Table  3 ) [ 40 , 47 , 48 , 57 , 62 , 66 ]. Stata was the most frequently reported data analysis program (14%) [ 46 , 50 , 53 , 67 , 73 ], followed by Excel [ 42 , 61 , 66 ], SAS [ 43 , 48 , 62 ], or SPSS (9% each) [ 47 , 66 , 69 ]. The remaining studies (60%) did not specify which statistical analysis program they used. Probability / ratio rescaling (43%) [ 40 , 42 , 43 , 45 , 48 , 49 , 51 , 54 , 59 , 60 , 64 , 65 , 67 , 68 , 71 ], counts (49%) [ 40 , 41 , 45 , 46 , 50 , 52 , 53 , 54 , 56 , 58 , 61 , 63 , 66 , 67 , 70 , 71 , 73 ], regression coefficients (43%) [ 43 , 44 , 48 , 49 , 52 , 53 , 55 , 63 , 64 , 65 , 68 , 69 , 71 , 72 , 73 ], and best-worst scores (46%) [ 41 , 42 , 45 , 46 , 47 , 52 , 54 , 58 , 59 , 60 , 61 , 64 , 66 , 70 , 71 , 73 ] were common analysis approaches. Two-fifths of the studies (40%) effects coded their data [ 40 , 42 , 49 , 50 , 52 , 55 , 64 , 65 , 69 , 71 , 72 ] while fewer (17%) used an omitted variable [ 43 , 44 , 48 , 53 , 54 , 62 ]. Heterogeneity analyses were conducted by most studies (63%) [ 42 , 43 , 44 , 45 , 46 , 49 , 50 , 51 , 52 , 54 , 55 , 57 , 58 , 59 , 64 , 66 , 67 , 68 , 69 , 72 ], with stratification being the most common heterogeneity analysis approach (40%) [ 42 , 44 , 49 , 51 , 52 , 55 , 57 , 63 , 64 , 65 , 66 , 67 , 68 , 72 ], followed by latent class analysis (20%) [ 45 , 46 , 50 , 54 , 58 , 59 , 69 ].

Study quality and policy relevance

Policy relevance of studies was high, with 66% scoring 7 or above on the 10-point scale [ 40 , 41 , 42 , 44 , 46 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 57 , 59 , 64 , 65 , 67 , 68 , 70 , 71 , 72 , 73 ]. Most studies scored in the upper half (3–5) of the PREFS scale [ 40 , 42 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 57 , 58 , 59 , 60 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 , 71 , 72 ].

Strengths and weaknesses of BWS

Strengths and weaknesses of using BWS were identified in most of the published studies from Africa. Most of these studies focused solely on strengths of using BWS, though some identified both strengths and weaknesses and a few focused solely on weaknesses. These strengths and weaknesses were categorized into seven domains: context, purpose, method, burden, results, comparisons, and bias (Table  4 ).

Strengths were noted across all seven domains. In the context domain, BWS was noted for eliciting priorities and preferences from populations with lower education levels [ 52 , 72 ] and lower income [ 72 , 73 ]. The purpose domain highlighted strengths such as engaging communities [ 42 ] and informing decision-making [ 55 ]. The method domain emphasized that BWS overcomes limitations of other ranking methods [ 41 , 42 , 59 , 61 , 71 ], including variations in interpretation related to Likert-type scales. The burden domain commonly cited that BWS reduces respondent burden [ 41 , 45 , 49 , 50 , 52 , 55 , 59 , 61 , 66 , 68 , 72 ]. The results domain stressed that BWS captures more information [ 49 , 59 , 64 , 66 , 68 , 72 ] and produces higher quality and more precise results than other methods [ 41 , 45 , 49 , 58 , 71 ]. The comparisons domain focused on the ability to discriminate between objects [ 45 , 49 , 59 , 65 , 68 , 73 ]. The bias domain noted a reduction of general bias [ 41 , 51 , 53 , 59 , 66 , 70 ].

Weaknesses or limitations of using BWS were provided for only five of the seven domains. In the context domain, it was suggested that BWS might be challenging to use in clinical practice as a decision-making tool [ 57 ] and, contrary to studies noting it as a strength, some identified it as challenging for populations with lower education levels [ 52 ]. The method domain pointed out that BWS involves hypothetical scenarios that may not be realistic [ 64 ]. The burden domain cited possible respondent burden associated with completing a series of BWS tasks [ 52 , 59 , 66 ]. The comparisons domain highlighted potential difficulties in making comparisons between populations within the sample [ 68 , 69 ]. The bias domain noted the possibility of desirability bias, where respondents report socially acceptable factors rather than genuine preferences [ 72 ]. Most of these weaknesses were posed as possibilities, rather than observed limitations.

Case studies

Policy relevance : Ozawa et al. [ 72 ] used BWS to inform message development and effective delivery strategies with the goal of improving childhood vaccination awareness and demand in Nahuche, Zamfara State, northern Nigeria, a region with low vaccination uptake. Instrument development : The survey items were developed through a review of published literature from Nigeria and other low- and middle-income countries on factors that affect childhood vaccine demand and uptake. Identified factors were categorized into four groups and each written out as a negative or positive influence based on the literature (e.g., vaccines may harm a child (negative); trust the views of leaders about vaccines (positive)), balancing equal numbers of positive and negative statements. A local study advisory board reviewed the items. Population : The survey was administered in-person to parents with children under 5 years old during a household survey from a representative sample of households. Administration : The survey was translated into Hausa and presented as a pictorial questionnaire with both photographs and words used to represent each factor. Perspective and time scale : Participants were asked to select the most and least important factors to them (consumer/patient) when deciding whether to vaccinate a 1-year-old child (present). Design : The study utilized a BIBD in which every participant was presented with 16 choice sets of 6 factors. The survey took approximately 1 h to complete. Analysis : They assumed sequential BWS and used conditional logistic regression with effects coding to determine factor rankings. Heterogeneity : They examined heterogeneity in the results by looking at different strata: male/female parent and previous diphtheria, tetanus, and pertussis (DTaP) vaccination status (yes/no). Participation : 198 parents participated. 
Results : The most important motivating factor for vaccinating children was the perception that vaccination makes one a good parent. Trust and norms were found to be more important than benefits and risks in vaccination decisions. They identified differences in rankings between fathers and mothers and in families with and without prior DTaP vaccination.
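A BIBD like the one Ozawa et al. used guarantees that every object appears in the same number of choice sets and every pair of objects co-occurs equally often. A minimal checker for these balance properties is sketched below, illustrated with the classic 7-object design rather than the 16-set design used in the study:

```python
from collections import Counter
from itertools import combinations

def is_bibd(blocks):
    """Check BIBD balance: equal block sizes, every object appears
    equally often (r), and every pair of objects co-occurs in the
    same number of blocks (lambda)."""
    objects = {o for b in blocks for o in b}
    sizes = {len(b) for b in blocks}
    occurrences = Counter(o for b in blocks for o in b)
    pairs = Counter(p for b in blocks for p in combinations(sorted(b), 2))
    all_pairs_present = len(pairs) == len(objects) * (len(objects) - 1) // 2
    return (len(sizes) == 1
            and len(set(occurrences.values())) == 1
            and all_pairs_present
            and len(set(pairs.values())) == 1)

# 7 objects in 7 blocks of 3: each object appears 3 times and each
# pair of objects co-occurs exactly once.
design = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6),
          (2, 5, 7), (3, 4, 7), (3, 5, 6)]
```

Verifying these properties before fielding a survey guards against designs in which some objects or pairs are over- or under-represented.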

Policy relevance : Yemeke et al. [ 52 ] compared the Uganda national budget resource allocations across 16 sectors to citizen preferences for such allocations. A particular emphasis was placed on understanding the health care sector as a funding priority. Instrument development : The sixteen survey factors represented each of the sixteen sector allocations within the Uganda national government’s budget. Results from a pre-test of the survey to assess respondent understanding were used to improve the instrument. Population : The survey was administered in-person to the head of household or spouse, of at least 18 years of age, in both rural and urban areas in the Mukono district in central Uganda. Administration : The survey was translated into Luganda and displayed accompanying pictorial representations (such as photographs or graphics) of the sectors. Descriptions of the sectors and their functions were also read aloud. Perspective and time scale : Respondents were asked to select the most and least important sectors for resource allocation for their community (present) in each choice task, offering societal-perspective prioritization. Design : A main effects orthogonal design was used to generate 16 choice tasks, each with 4 sectors (factors). Analysis : Count analysis: relative mean best-worst scores were calculated for each sector. Scores were transformed to a positive scale, anchored at zero, to calculate percentage preference relative to estimated cumulative sums. The preferred percentages were compared to the actual percentages allocated in the national budget. Assuming sequential BWS, the authors used McFadden’s conditional logistic regression with effects coding to regress a single dichotomous choice variable on all sectors to assess ranking of preferred sectors. Heterogeneity : They examined heterogeneity in the results across two strata (urban/rural). 
Participation : There were 432 respondents across two settings (217 urban respondents, 215 rural respondents). Results : The health sector was the highest ranked sector by a significant margin amongst both rural and urban respondents. This result was consistent in both the relative best-minus-worst score method and the regression analysis. This highlighted a clear disparity between citizen preferences and the national budget allocation, where the health sector was ranked sixth.
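The rescaling step in the count analysis above can be sketched as follows: mean BW scores (which may be negative) are anchored at zero by subtracting the minimum, and each object's share of the cumulative sum of rescaled scores is reported as a percentage preference. The scores below are hypothetical, not the study's data:

```python
# Hypothetical mean best-minus-worst scores per sector (may be negative).
mean_bw = {"health": 2.4, "education": 1.1, "roads": -0.9, "defence": -2.6}

# Anchor the scale at zero, then express each sector's share of the
# cumulative sum of rescaled scores as a percentage preference.
floor = min(mean_bw.values())
rescaled = {s: v - floor for s, v in mean_bw.items()}
total = sum(rescaled.values())
pct = {s: 100 * v / total for s, v in rescaled.items()}
```

The resulting percentages can then be compared directly against actual budget shares, as Yemeke et al. did.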

Policy relevance : Nyarko et al. [ 48 ] used BWS to quantify the antimicrobial dispensing practices of medicine sales outlet staff. Understanding these practices can help to improve patient safety and care quality, as well as to serve as a guide for decision-making in the pharmaceutical sector. Instrument development : The initial list of survey items was identified through informant interviews with medicine sales experts and an extensive literature review. The list of items was condensed into eight objects through focus group discussions with medicine sales outlet staff. Population : The survey was conducted in-person through interviewer-questionnaire administration with medicine sales outlet staff over a two-month period. Staff were eligible for the study if they had dispensed antimicrobials within the past year. Administration : Demographic information was collected at the start of the questionnaire, followed by questions regarding the staff’s prescribing and dispensing practices of antimicrobial medications. Perspective and time scale : Participants were asked to indicate which object regarding antimicrobial dispensing practices concerned them the most and least. Design : A BIBD was used, generating 8 tasks, each with 7 objects. Analysis : Assuming the respondent chose the items they most liked and disliked, a maximum difference model with effects coding was used to determine parameter estimates. The relative importance of each object was determined based on the parameter estimates, allowing the objects to be ranked by level of importance. Heterogeneity : Heterogeneity was examined by comparing antimicrobial dispensing practices with their associated objects. Participation : 200 staff participated. Results : The antimicrobial dispensing practice that concerned respondents most was the need to follow the drug act and avoid dispensing antimicrobials without a prescription. 
Dispensing antibiotics to poor patients who may not be able to afford medical bills was not a concern for respondents. Overall, the study suggests that staff are careful when dispensing antimicrobials.
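Effects coding, used in the regression analyses of all three case studies, represents a categorical object so that the reference level is coded −1 in every column and the estimated coefficients sum to zero. A generic sketch (not code from any included study):

```python
def effects_code(levels, reference):
    """Build effects-coded rows: one column per non-reference level,
    1 when that level is present, 0 otherwise, and -1 throughout for
    the reference level (so estimated coefficients sum to zero)."""
    columns = [lvl for lvl in levels if lvl != reference]
    rows = {}
    for lvl in levels:
        if lvl == reference:
            rows[lvl] = [-1] * len(columns)
        else:
            rows[lvl] = [1 if col == lvl else 0 for col in columns]
    return columns, rows
```

Unlike dummy (omitted-variable) coding, this yields a coefficient for every object relative to the mean, which is why many BWS analyses prefer it for ranking objects.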

Discussion

Our study identified 35 studies from across Africa, with the majority published in the past four years. BWS has most commonly been used in agriculture and health, although its broad applicability is demonstrated through its use in fields including business, environment, and transportation. Published studies from eastern, western, southern, and northern Africa include a broad range of sample sizes, design choices, and analytical approaches. Consistent with other BWS reviews [ 25 ], the majority of studies are both of high quality and of high policy relevance. Notably, two articles classified as ‘multi-country’ included participants from at least one African country in their sample. However, we considered them ‘near misses’ and excluded them because neither disaggregated its data by country, so we could not ensure that the review data came from Africa. Both had few participants from African countries relative to the overall sample.

The application of BWS for prioritization in the African context is an emerging practice, as demonstrated by our finding that its use has increased dramatically over time. The quality of studies, as measured by PREFS, has remained consistently high over time [ 24 ]. As the method continues to be applied, existing guidelines could help ensure that researchers conduct high-quality studies and publish high-quality papers about them [ 38 , 39 , 75 ]. This includes the increased use of instrument development methods, especially formal qualitative work, pretesting with cognitive interviewing, and piloting [ 76 ]. Importantly, it also requires attentiveness to the accuracy of conceptual translation, which is often missed when studies working across multiple language groups focus exclusively on linguistic translation [ 77 ].

The use of BWS also draws attention to the notion of prioritization itself. Our findings highlight the relevance of the method to policy making; two-thirds of the included studies received a policy relevance score of seven or more. Clearly expressed priorities may allow policy makers to shift away from informal decision-making heuristics to more formal, principled decision-making practices. Certainly, the more recently observed integration of BWS into deliberative processes [ 78 ] such as the modified policy Delphi [ 79 ] and deliberative democracy [ 80 ] exemplifies other ways that priority and preference elicitation can help inform group deliberation to achieve consensus [ 81 , 82 ]. That said, questions remain about aggregating individual priorities in group decision making [ 36 ], a topic that others have grappled with, including in health state valuation [ 83 ]. Finally, it is important to note that BWS is only one method to assess priorities; others include deliberation [ 84 ], simple rating or ranking approaches alone or as part of a Delphi approach [ 85 ], pick n of m tasks [ 86 ], and stated preference methods such as willingness to pay [ 87 ], DCEs, or conjoint analyses [ 88 ].

With only 34% of included studies in health, there is an opportunity for greater application in the health field. The range of health-specific topics to which BWS was applied in this study demonstrates versatility across health areas. Various types of preference elicitation have proven successful in health-specific areas [ 25 , 89 ]. Additionally, as health systems are conceptualized more broadly to include social determinants of health such as transportation options, food systems, and the environment, the other topical applications demonstrate direct relevance to health systems and decision-making. Similarly, its use in business may be applicable to business-based approaches to health such as social marketing strategies for behavior change. While patient-centered care may involve accommodating the preferences of individual patients (e.g., choosing a community-adherence group over fast-track appointments among HIV differentiated service delivery options), a systematic understanding of trends in patient preferences at a broader level can inform efficient health system decision-making (e.g., the creation of differentiated service delivery options for HIV). The two multi-country studies that we excluded for lack of geographic disaggregation nonetheless demonstrate successful implementation of multi-country BWS.

Our study is subject to publication bias, as the systematic review searched published literature. The review targeted object case BWS (also known as case 1, MaxDiff, and object scaling). Further research into other types of BWS would contribute to a more comprehensive understanding of BWS in Africa. While we introduce BWS, we do not offer guidance on methods or analysis. Multiple other peer-reviewed papers, including some included in this review [ 42 ], and texts offer clear instruction on BWS implementation to aid researchers wishing to apply this method [ 36 ]. Further, we do not include assessments of BWS participation from the participant perspective, as literature on this topic is very limited [ 90 ]. The field would benefit from frameworks for participant assessment of BWS participation to better incorporate this into BWS findings, as have been developed for instrument development [ 76 ].

Conclusions

We need effective tools to measure preferences and priorities, including tools that suit those whose input is more traditionally sought in health decision making (e.g., providers) and those whose voice is critical, but often unheard (e.g., patients, consumers, and community members). BWS is one of those tools. BWS offers a versatile alternative to DCEs and other better-known methods of measuring preferences. Researchers can successfully employ BWS across a range of sample sizes, analysis approaches, and programs. Growing use of the method across the African continent demonstrates its feasibility and utility, recommending it for consideration among researchers, program implementers, policy makers, and funders when conducting preference research to influence policy and improve health systems. Further research can help to recommend specific applications, including work to understand context-specific implications of the strengths and limitations of the method alongside cognitive burden and population-specific recommendations.

Data availability

All study data are available upon request from the authors for the review registered on PROSPERO (CRD42020209745).

Abbreviations

BIBD: Balanced incomplete block design

BWS: Best-worst scaling

DCE: Discrete choice experiments

DTaP: Diphtheria, tetanus, and pertussis

PREFS: Purpose, responses, explanations, findings

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

Sheikh K, Gilson L, Agyepong IA, Hanson K, Ssengooba F, Bennett S. Building the field of health policy and systems research: framing the questions. PLoS Med. 2011;8(8):e1001073.


Waitzberg R, Quentin W, Webb E, Glied S. The structure and financing of Health Care systems affected how providers coped with COVID-19. Milbank Q. 2021;99(2):542.

Maeda M, Muraki Y, Kosaka T, Yamada T, Aoki Y, Kaku M, et al. Impact of health policy on structural requisites for antimicrobial stewardship: a nationwide survey conducted in Japanese hospitals after enforcing the revised reimbursement system for antimicrobial stewardship programs. J Infect Chemother. 2021;27(1):1–6.


Lipsky M. Street-level bureaucracy: dilemmas of the individual in public service. Russell Sage Foundation; 2010.

Mwamba C, Mukamba N, Sharma A, Lumbo K, Foloko M, Nyirenda H, et al. Provider discretionary power practices to support implementation of patient-centered HIV care in Lusaka, Zambia. Front Health Serv. 2022;2:918874.

Bylund S, Målqvist M, Peter N, van Herzig S. Negotiating social norms, the legacy of vertical health initiatives and contradicting health policies: a qualitative study of health professionals’ perceptions and attitudes of providing adolescent sexual and reproductive health care in Arusha and Kilimanjaro region, Tanzania. Global Health Action. 2020;13(1):1775992.

Akobeng AK. Principles of evidence based medicine. Arch Dis Child. 2005;90(8):837–40.


Petticrew M, Whitehead M, Macintyre SJ, Graham H, Egan M. Evidence for public health policy on inequalities: 1: the reality according to policymakers. J Epidemiol Community Health. 2004;58(10):811–6.

Gagliardi AR, Brouwers MC. Do guidelines offer implementation advice to target users? A systematic review of guideline applicability. BMJ open. 2015;5(2):e007047.

Cairney P, Oliver K. Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy? Health Res Policy Syst. 2017;15:1–11.


Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLoS ONE. 2011;6(7):e21704.

Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924–6.

World Health Organization. WHO handbook for guideline development. 2nd ed. Geneva: World Health Organization; 2014.


Neumann I, Brignardello-Petersen R, Wiercioch W, Carrasco-Labra A, Cuello C, Akl E, et al. The GRADE evidence-to-decision framework: a report of its testing and application in 15 international guideline panels. Implement Sci. 2015;11:1–8.

De Man J, Roy WM, Sarkar N, Waweru E, Leys M, Van Olmen J, et al. Patient-centered care and people-centered health systems in sub-saharan Africa: why so little of something so badly needed? Int J Person Centered Med. 2016;6(3):162.

Scholl I, Zill JM, Härter M, Dirmaier J. An integrative model of patient-centeredness–a systematic review and concept analysis. PLoS ONE. 2014;9(9):e107828.

Armstrong MJ, Bloom JA. Patient involvement in guidelines is poor five years after institute of medicine standards: review of guideline methodologies. Res Involv Engagem. 2017;3:1–11.

Erismann S, Pesantes MA, Beran D, Leuenberger A, Farnham A, Berger Gonzalez de White M, et al. How to bring research evidence into policy? Synthesizing strategies of five research projects in low-and middle-income countries. Health Res Policy Syst. 2021;19:1–13.

McHugh N, Baker R, Bambra C. Policy actors’ perceptions of public participation to tackle health inequalities in Scotland: a paradox? Int J Equity Health. 2023;22(1):57.

Kuipers SJ, Cramm JM, Nieboer AP. The importance of patient-centered care and co-creation of care for satisfaction with care and physical and social well-being of patients with multi-morbidity in the primary care setting. BMC Health Serv Res. 2019;19:1–9.

Soekhai V, Whichello C, Levitan B, Veldwijk J, Pinto CA, Donkers B, et al. Methods for exploring and eliciting patient preferences in the medical product lifecycle: a literature review. Drug Discovery Today. 2019;24(7):1324–31.

Meara A, Crossnohere NL, Bridges JF. Methods for measuring patient preferences: an update and future directions. Curr Opin Rheumatol. 2019;31(2):125–31.

Bridges JFW, Albert W, Segal J, Bandeen-Roche K, Bone LR, Purnell T. Stated-preference methods. Center for Health Services and Outcome Research - The Johns Hopkins Bloomberg School of Public Health; 2014.

Brown L, Lee TH, De Allegri M, Rao K, Bridges JFP. Applying stated-preference methods to improve health systems in sub-saharan Africa: a systematic review. Expert Rev PharmacoEcon Outcomes Res. 2017;17(5):441–58.

Hollin IL, Paskett J, Schuster ALR, Crossnohere NL, Bridges JFP. Best-worst scaling and the prioritization of objects in Health: a systematic review. PharmacoEconomics. 2022;40(9):883–99.

Karuga R, Kok M, Luitjens M, Mbindyo P, Broerse JE, Dieleman M. Participation in primary health care through community-level health committees in Sub-saharan Africa: a qualitative synthesis. BMC Public Health. 2022;22(1):359.

Laterra A, Callahan T, Msiska T, Woelk G, Chowdhary P, Gullo S, et al. Bringing women’s voices to PMTCT CARE: adapting CARE’s community score Card© to engage women living with HIV to build quality health systems in Malawi. BMC Health Serv Res. 2020;20:1–14.

Rosenberg NE, Obiezu-Umeh C, Gbaja-Biamila T, Tahlil KM, Nwaozuru U, Oladele D, et al. Strategies for enhancing uptake of HIV self-testing among Nigerian youths: a descriptive analysis of the 4YouthByYouth crowdsourcing contest. BMJ Innovations. 2021;7(3):590.

Cheung KL, Wijnen BFM, Hollin IL, Janssen EM, Bridges JFP, Evers SMAA, et al. Using best-worst scaling to investigate preferences in health care. PharmacoEconomics. 2016;34(12):1195–209.


Schuster ALR, Crossnohere NL, Campoamor NB, Hollin IL, Bridges JFP. The rise of best-worst scaling for prioritization: a transdisciplinary literature review. J Choice Modelling. 2024;50:100466.

DiSantostefano RL, Smith IP, Falahee M, Jiménez-Moreno AC, Oliveri S, Veldwijk J, et al. Research priorities to increase confidence in and acceptance of health preference research: what questions should be prioritized now? The Patient - Patient-Centered Outcomes Research; 2023.

Mühlbacher AC, Kaczynski A, Zweifel P, Johnson FR. Experimental measurement of preferences in health and healthcare using best-worst scaling: an overview. Health Econ Rev. 2016;6(1):2.

Potoglou D, Burge P, Flynn T, Netten A, Malley J, Forder J, et al. Best–worst scaling vs. discrete choice experiments: an empirical comparison using social care data. Soc Sci Med. 2011;72(10):1717–27.

Flynn TN, Louviere JJ, Peters TJ, Coast J. Best–worst scaling: what it can do for health care research and how to do it. J Health Econ. 2007;26(1):171–89.

Aizaki H, Fogarty J. R packages and tutorial for case 1 best–worst scaling. J Choice Modelling. 2023;46:100394.

Louviere JJ, Flynn TN, Marley AAJ. Best-worst scaling: theory, methods and applications. Cambridge: Cambridge University Press; 2015.


Joy SM, Little E, Maruthur NM, Purnell TS, Bridges JFP. Patient preferences for the treatment of type 2 diabetes: a scoping review. PharmacoEconomics. 2013;31(10):877–92.

Bridges JFP, de Bekker-Grob EW, Hauber B, Heidenreich S, Janssen E, Bast A, et al. A Roadmap for increasing the usefulness and impact of patient-preference studies in decision making in Health: a good practices Report of an ISPOR Task Force. Value Health. 2023;26(2):153–62.

Bridges JFP, Hauber AB, Marshall D, Lloyd A, Prosser LA, Regier DA, et al. Conjoint Analysis Applications in Health—a Checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health. 2011;14(4):403–13.

Honda A, Krucien N, Ryan M, Diouf ISN, Salla M, Nagai M, et al. For more than money: willingness of health professionals to stay in remote Senegal. Hum Resour Health. 2019;17(1):28.

Pentz C, Filter M, editors. From Generation Y to Generation Wine? A Best-Worst scaling study of wine attribute importance. Proceedings of the 52nd International Academic Conferences; 2019; Barcelona: International Institute of Social and Economic Sciences.

Teffo M, Earl A, Zuidgeest M. Understanding public transport needs in Cape Town’s informal settlements: a best-worst-scaling approach. J South Afr Institution Civil Eng. 2019;61(2):39–50.

Amadou Z. Agropastoralists’ Climate Change Adaptation Strategy Modeling: Software and Coding Method Accuracies for Best-Worst Scaling Data. African Handbook of Climate Change Adaptation; 2020. pp. 1–10.

Amadou Z. Which Sustainable Development Goals and Eco-challenges Matter Most to Niger’s Farmers and Herdsmen? A Best Worst Scaling Approach. Agricultural Research & Technology Open Access Journal; 2020.

Jin S, Mansaray B, Jin X, Li H. Farmers’ preferences for attributes of rice varieties in Sierra Leone. Food Secur. 2020;12(5):1185–97.

Kim H-Y, Hanrahan CF, Dowdy DW, Martinson NA, Golub JE, Bridges JF. Priorities among HIV-positive individuals for tuberculosis preventive therapies. Int J Tuberculosis Lung Disease. 2020;24(4):396–402.

Nyabinwa P, Kashongwe OB, Hirwa CD, Bebe BO. Perception of farmers about endometritis prevention and control measures for zero-grazed dairy cows on smallholder farms in Rwanda. BMC Vet Res. 2020;16(1):175.

Nyarko E, Akoto FM, Doku-Amponsah K. Perceived antimicrobial dispensing practices in medicine outlets in Ghana: a maximum difference experiment design. PLoS ONE. 2023;18(7):e0288519.

Okpiaifo G, Durand-Morat A, West GH, Nalley LL, Nayga RM, Wailes EJ. Consumers’ preferences for sustainable rice practices in Nigeria. Global Food Secur. 2020;24.

Ola O, Menapace L. Revisiting constraints to smallholder participation in high-value markets: a best‐worst scaling approach. Agric Econ. 2020;51(4):595–608.

Pentz C, Forrester A. The importance of wine attributes in an emerging wine-producing country. South Afr J Bus Manage. 2020;51(1).

Yemeke TT, Kiracho EE, Mutebi A, Apolot RR, Ssebagereka A, Evans DR, et al. Health versus other sectors: multisectoral resource allocation preferences in Mukono district, Uganda. PLoS ONE. 2020;15(7):e0235250.

Lewis-Brown E, Beatty H, Davis K, Rabearisoa A, Ramiaramanana J, Mascia MB et al. The importance of future generations and conflict management in conservation. Conserv Sci Pract. 2021;3(9).

Muunda E, Mtimet N, Schneider F, Wanyoike F, Dominguez-Salas P, Alonso S. Could the new dairy policy affect milk allocation to infants in Kenya? A best-worst scaling approach. Food Policy. 2021;101:102043.

Seymour ZA, Cloete E, McCurdy M, Olson M, Hughes J. Understanding values of sanitation users: examining preferences and behaviors for sanitation systems. J Water Sanitation Hygiene Dev. 2021;11(2):195–207.

van Niekerk K, Dada S, Tönsing K. Perspectives of rehabilitation professionals on assistive technology provision to young children in South Africa: a national survey. Disabil Rehabilitation: Assist Technol. 2023;18(5):588–95.

Gertz AM, Soffi ASM, Mompe A, Sickboy O, Gaines AN, Ryan R et al. Developing an Assessment of Contraceptive preferences in Botswana: piloting a Novel Approach using best-worst scaling of attributes. Front Global Women’s Health. 2022;3.

Maruyama Y, Ujiie K, Ahmed C, Diagne M, Irie M. Farmers’ preferences of the agricultural inputs for rice farming in Senegal River Basin, Mauritania: a best-worst scaling approach. J Arid Land Stud. 2022;32(S):61–5.

Ahoudou I, Sogbohossou DE, Hotegni NVF, Adjé CO, Komlan FA, Moumouni-Moussa I, et al. Farmers’ selection criteria for sweet potato varieties in Benin: an application of best-worst scaling. Exp Agric. 2023;59:e25.

Filter M, Pentz CD. Dealcoholised wine: exploring the purchasing considerations of South African generation Y consumers. Br Food J. 2023;125(13):205–19.

Walaza S, Onderwater P, Zuidgeest M, editors. Best-worst scaling approach to measure public transport user quality perceptions and preferences in Cape Town. Southern African Transport Conference; 2023.

Nyarko E, Arku D, Duah G. Best-worst scaling in studying the impact of the coronavirus pandemic on health professionals in Ghana. Model Assist Stat Appl. 2023;18(3):227–36.

Amadou Z. Which agricultural innovations matter most to Niger’s farmers? Count-based and best-worst scaling approaches. Russian J Agricultural Socio-Economic Sci. 2023;140(8):3–12.

Abubakar M, Amadou Z, Daniel K. Best-worst scaling approach in predicting seed attribute preferences among resource poor farmers in Northern Nigeria. Int J Humanit Social Sci. 2014;2(9):304–10.

Amadou Z, Baky AD. Consumers’ preferences for quality and safety attributes of milk products in Niger: a best-worst scaling approach. J Agric Sci Technol. 2015;5(9):635–42.

Irlam JH, Zuidgeest M. Barriers to cycling mobility in a low-income community in Cape Town: a best-worst scaling approach. Case Stud Transp Policy. 2018;6(4):815–23.

Kim HY, Dowdy DW, Martinson NA, Golub E, Bridges J, Hanrahan JF. Maternal priorities for preventive therapy among HIV-positive pregnant women before and after delivery in South Africa: a best–worst scaling survey. J Int AIDS Soc. 2018;21(7):e25143.

Lagerkvist CJ, Kokko S, Karanja N. Health in perspective: framing motivational factors for personal sanitation in urban slums in Nairobi, Kenya, using anchored best–worst scaling. J Water Sanitation Hygiene Dev. 2014;4(1):108–19.

Lagerkvist CJ, Okello J, Karanja N. Anchored vs. relative best–worst scaling and latent class vs. hierarchical bayesian analysis of best–worst choice data: investigating the importance of food quality attributes in a developing country. Food Qual Prefer. 2012;25(1):29–40.

Lategan BW, Pentz CD, du Preez R. Importance of wine attributes: a South African generation Y perspective. Br Food J. 2017;119(7):1536–46.

Mansaray B, Jin S, Yuan R, Li H, editors. Farmers Preferences for Attributes of Seed Rice in Sierra Leone: A Best-Worst Scaling Approach. International Conference of Agricultural Economists; 2018.

Ozawa S, Wonodi C, Babalola O, Ismail T, Bridges J. Using best-worst scaling to rank factors affecting vaccination demand in northern Nigeria. Vaccine. 2017;35(47):6429–37.

Torbica A, De Allegri M, Belemsaga D, Medina-Lara A, Ridde V. What criteria guide national entrepreneurs’ policy decisions on user fee removal for maternal health care services? Use of a best–worst scaling choice experiment in West Africa. J Health Serv Res Policy. 2014;19(4):208–15.

van Niekerk K, Dada S, Tönsing K. Perspectives of rehabilitation professionals on assistive technology provision to young children in South Africa: a national survey. Disability and Rehabilitation: Assistive Technology; 2021.

Hollin IL, Craig BM, Coast J, Beusterien K, Vass C, DiSantostefano R, et al. Reporting formative qualitative research to support the development of quantitative preference study protocols and corresponding Survey instruments: guidelines for authors and reviewers. Patient - Patient-Centered Outcomes Res. 2020;13(1):121–36.

Janssen EM, Segal JB, Bridges JFP. A Framework for Instrument Development of a choice experiment: an application to type 2 diabetes. Patient - Patient-Centered Outcomes Res. 2016;9(5):465–79.

Herdman M, Fox-Rushby J, Badia X. ‘Equivalence’and the translation and adaptation of health-related quality of life questionnaires. Qual Life Res. 1997;6.

Oortwijn W, Husereau D, Abelson J, Barasa E, Bayani DD, Santos VC, et al. Designing and implementing deliberative processes for health technology assessment: a good practices report of a joint HTAi/ISPOR task force. Int J Technol Assess Health Care. 2022;38(1):e37.

Majumder MA, Blank ML, Geary J, Bollinger JM, Guerrini CJ, Robinson JO, et al. Challenges to building a gene variant commons to assess hereditary cancer risk: results of a modified policy Delphi panel deliberation. J Personalized Med. 2021;11(7):646.

Boyd JL, Sugarman J. Toward responsible public engagement in neuroethics. AJOB Neurosci. 2022;13(2):103–6.

Manera KE, Tong A, Craig JC, Shen J, Jesudason S, Cho Y, et al. An international Delphi survey helped develop consensus-based core outcome domains for trials in peritoneal dialysis. Kidney Int. 2019;96(3):699–710.

Van Schoubroeck S, Springael J, Van Dael M, Malina R, Van Passel S. Sustainability indicators for biobased chemicals: a Delphi study using Multi-criteria decision analysis. Resour Conserv Recycl. 2019;144:198–208.

Akunne AF, Bridges JF, Sanon M, Sauerborn R. Comparison of individual and group valuation of health state scenarios across communities in West Africa. Appl Health Econ Health Policy. 2006;5:261–8.

Raj M, Ryan K, Nong P, Calhoun K, Trinidad MG, De Vries R, et al. Public deliberation process on patient perspectives on health information sharing: evaluative descriptive study. JMIR cancer. 2022;8(3):e37793.

Voehler D, Neumann PJ, Ollendorf DA. Patient and caregiver views on measures of the value of health interventions. Patient Prefer Adherence. 2022:3383–92.

Kinter ET, Schmeding A, Rudolph I, dosReis S, Bridges JFP. Identifying patient-relevant endpoints among individuals with schizophrenia: an application of patient-centered health technology assessment. Int J Technol Assess Health Care. 2009;25(1):35–41.

Heinzen RR, Bridges JF. Comparison of four contingent valuation methods to estimate the economic value of a pneumococcal vaccine in Bangladesh. Int J Technol Assess Health Care. 2008;24(4):481–7.

Bridges JFP, Selck FW, Gray GE, McIntyre JA, Martinson NA. Condom avoidance and determinants of demand for male circumcision in Johannesburg, South Africa. Health Policy Plann. 2011;26(4):298–306.

Beckham SW, Crossnohere NL, Gross M, Bridges JFP. Eliciting preferences for HIV Prevention technologies: a systematic review. Patient - Patient-Centered Outcomes Res. 2021;14(2):151–74.

Rogers HJ, Marshman Z, Rodd H, Rowen D. Discrete choice experiments or best-worst scaling? A qualitative study to determine the suitability of preference elicitation tasks in research with children and young people. J Patient Rep Outcomes. 2021;5:1–11.


Acknowledgements

Not applicable.

Funding

John F.P. Bridges holds an Innovation in Regulatory Science Award from the Burroughs Wellcome Fund. Laura K. Beres’ contributions were supported by National Institute of Mental Health award 1K01MH130244-01A1. The contents included here are the responsibility of the authors and do not represent the official views of the National Institute of Mental Health.

Author information

Authors and affiliations

Department of International Health, Johns Hopkins Bloomberg School of Public Health, 615 N Wolfe Street, Office 5032, Baltimore, MD, 21205, USA

Laura K. Beres

Department of Biomedical Informatics, The Ohio State University College of Medicine, 220 Lincoln Tower, 1800 Cannon Drive, Columbus, OH, 43210, USA

Nicola B. Campoamor, Anne Schuster & John F. P. Bridges

Center for the Advancement of Team Science, Analytics, and Systems Thinking in Health Services and Implementation Science Research (CATALYST), The Ohio State University, 700 Ackerman Road, Columbus, OH, 43202, USA

Rachael Hawthorn

Department of Global Health, University of Washington, UW Box #351620, Seattle, WA, 98195, USA

Melissa L. Mugambi

Centre for Infectious Disease Research in Zambia, Stand 378A / 15, Main Street, P.O. Box 34681, Ibex, Lusaka, Zambia

Musunge Mulabe, Natlie Vhlakis & Michael Kabongo


Contributions

LKB, JFPB: Conceptualization; LKB, RH, AS: Formal analysis; LKB, JFPB: Funding acquisition; LKB, RH, AS, JFPB: Wrote original draft; MLM, MM, NK, MK, NC: Review and editing.

Corresponding author

Correspondence to John F. P. Bridges.

Ethics declarations

Ethics approval and consent to participate

The study did not include human subjects research. The review was registered on PROSPERO (CRD42020209745).

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Beres, L.K., Campoamor, N.B., Hawthorn, R. et al. Using best-worst scaling to inform policy decisions in Africa: a literature review. BMC Public Health 24, 2607 (2024). https://doi.org/10.1186/s12889-024-20068-w


Received : 10 February 2024

Accepted : 12 September 2024

Published : 27 September 2024

DOI : https://doi.org/10.1186/s12889-024-20068-w


Keywords
  • Measuring priorities
  • Stakeholder engagement



    Background Stakeholder engagement in policy decision-making is critical to inform required trade-offs, especially in low-and-middle income settings, such as many African countries. Discrete-choice experiments are now commonly used to engage stakeholders in policy decisions, but other methods such as best-worst scaling (BWS), a theory-driven prioritization technique, could be equally important ...