Literature Review vs Critical Appraisal


Difference between a Literature Review and a Critical Review

By Charlesworth Author Services, 08 October 2021

As you read research papers, you may notice that there are two very different kinds of review of prior studies. Sometimes this section of a paper is called a literature review; at other times it is referred to as a critical review or a critical context. The preferred term often varies by field. Although both sections review prior and existing studies, this article aims to clarify the differences between the two.

Literature review

A literature review is a summary of prior or existing studies that are related to your own research paper. A literature review can be part of a research paper or can form a paper in itself. In the former case, the literature review serves as the basis upon which your own current study is designed and built. In the latter case, it synthesises prior studies to highlight future research agendas or to propose a framework.

Writing a literature review

In a literature review, you should discuss the arguments and findings in prior studies and then build on these studies as you develop your own research. You can also highlight the connections between existing and prior literature to demonstrate how the current study you are presenting can advance knowledge in the field.

When performing a literature review, aim to organise your discussion around a specific aspect of the literature, such as topic, time period, methodology/design or findings. By doing so, you can establish an effective way to present the relevant literature and demonstrate the connection between prior studies and your research.

Do note that a literature review does not include a presentation or discussion of any results or findings – this should come at a later point in the paper or study. You should also not impose your subjective viewpoints or opinions on the literature you discuss. 

Critical review

A critical review is another popular way of reviewing prior and existing studies. It can cover and discuss the main ideas or arguments in a book or an article, or it can review a specific concept, theme, theoretical perspective or key construct found in the existing literature.

However, the key feature that distinguishes a critical review from a literature review is that the former is more than just a summary of different topics or methodologies. It offers a reflection on and critique of the concept in question, and authors use it to contextualise their own research within the existing literature and to present their opinions, perspectives and approaches.

Given that a critical review is not just a summary of prior literature, it is generally not considered acceptable to follow the same strategy as for a literature review. Instead, aim to organise and structure your critical review in a way that would enable you to discuss the key concepts, assert your perspectives and locate your arguments and research within the existing body of work. 

Structuring a critical review

A critical review generally begins with an introduction to the concepts you would like to discuss. Depending on how broad the topics are, this can be a brief overview or it can set up a more complex framework. The discussion through the rest of the review then addresses your chosen themes or topics in more depth.

Writing a critical review

The discussion within a critical review will not only present and summarise themes but also critically engage with the varying arguments, writings and perspectives within those themes. One important thing to note is that, as in a literature review, you should keep your personal opinions, likes and dislikes out of a review. Whether you personally agree with a study or argument – and whether you like it or not – is immaterial. Instead, focus on the effectiveness and relevance of the arguments, considering such elements as the evidence provided, the interpretations and analysis of the data, whether a study may be biased in any way, what further questions or problems it raises, and what outstanding gaps and issues need to be addressed.

In conclusion

Although a review of previous and existing literature can be performed and presented in different ways, in essence any literature or critical review requires a solid understanding of the most prominent work in the field as it relates to your own study. Such an understanding is crucial for you to build upon and synthesise the existing knowledge, and to create and contribute new knowledge that advances the field.

Systematic Reviews

Types of Literature Reviews

What Makes a Systematic Review Different from Other Types of Reviews?


Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91–108. doi:10.1111/j.1471-1842.2009.00848.x

Each review type is listed below with its description and its characteristic approach to search, appraisal, synthesis and analysis (the four columns of the original table):

Critical review
  • Description: Aims to demonstrate that the writer has extensively researched the literature and critically evaluated its quality. Goes beyond mere description to include a degree of analysis and conceptual innovation. Typically results in a hypothesis or model.
  • Search: Seeks to identify the most significant items in the field.
  • Appraisal: No formal quality assessment; attempts to evaluate according to contribution.
  • Synthesis: Typically narrative, perhaps conceptual or chronological.
  • Analysis: Significant component: seeks to identify conceptual contribution to embody existing or derive new theory.

Literature review
  • Description: Generic term: published materials that provide an examination of recent or current literature. Can cover a wide range of subjects at various levels of completeness and comprehensiveness. May include research findings.
  • Search: May or may not include comprehensive searching.
  • Appraisal: May or may not include quality assessment.
  • Synthesis: Typically narrative.
  • Analysis: May be chronological, conceptual, thematic, etc.

Mapping review/systematic map
  • Description: Maps out and categorizes existing literature, from which to commission further reviews and/or primary research by identifying gaps in the research literature.
  • Search: Completeness of searching determined by time/scope constraints.
  • Appraisal: No formal quality assessment.
  • Synthesis: May be graphical and tabular.
  • Analysis: Characterizes quantity and quality of literature, perhaps by study design and other key features; may identify the need for primary or secondary research.

Meta-analysis
  • Description: Technique that statistically combines the results of quantitative studies to provide a more precise estimate of the combined effect.
  • Search: Aims for exhaustive, comprehensive searching. May use a funnel plot to assess completeness.
  • Appraisal: Quality assessment may determine inclusion/exclusion and/or inform sensitivity analyses.
  • Synthesis: Graphical and tabular with narrative commentary.
  • Analysis: Numerical analysis of measures of effect, assuming absence of heterogeneity.

Mixed studies review/mixed methods review
  • Description: Refers to any combination of methods where one significant component is a literature review (usually systematic). Within a review context, it refers to a combination of review approaches, for example combining quantitative with qualitative research, or outcome with process studies.
  • Search: Requires either a very sensitive search to retrieve all studies or separately conceived quantitative and qualitative strategies.
  • Appraisal: Requires either a generic appraisal instrument or separate appraisal processes with corresponding checklists.
  • Synthesis: Typically both components are presented as narrative and in tables; may also employ graphical means of integrating quantitative and qualitative studies.
  • Analysis: May characterize both literatures and look for correlations between characteristics, or use gap analysis to identify aspects absent in one literature but present in the other.

Overview
  • Description: Generic term: summary of the [medical] literature that attempts to survey the literature and describe its characteristics.
  • Search: May or may not include comprehensive searching (depends on whether it is a systematic overview).
  • Appraisal: May or may not include quality assessment (depends on whether it is a systematic overview).
  • Synthesis: Depends on whether systematic or not; typically narrative but may include tabular features.
  • Analysis: May be chronological, conceptual, thematic, etc.

Qualitative systematic review/qualitative evidence synthesis
  • Description: Method for integrating or comparing the findings from qualitative studies. Looks for ‘themes’ or ‘constructs’ that lie in or across individual qualitative studies.
  • Search: May employ selective or purposive sampling.
  • Appraisal: Quality assessment typically used to mediate messages, not for inclusion/exclusion.
  • Synthesis: Qualitative, narrative synthesis.
  • Analysis: Thematic analysis; may include conceptual models.

Rapid review
  • Description: Assessment of what is already known about a policy or practice issue, using systematic review methods to search for and critically appraise existing research.
  • Search: Completeness of searching determined by time constraints.
  • Appraisal: Time-limited formal quality assessment.
  • Synthesis: Typically narrative and tabular.
  • Analysis: Quantities of literature and overall quality/direction of effect of literature.

Scoping review
  • Description: Preliminary assessment of the potential size and scope of available research literature. Aims to identify the nature and extent of research evidence (usually including ongoing research).
  • Search: Completeness of searching determined by time/scope constraints; may include research in progress.
  • Appraisal: No formal quality assessment.
  • Synthesis: Typically tabular with some narrative commentary.
  • Analysis: Characterizes quantity and quality of literature, perhaps by study design and other key features; attempts to specify a viable review.

State-of-the-art review
  • Description: Tends to address more current matters, in contrast to combined retrospective and current approaches. May offer new perspectives.
  • Search: Aims for comprehensive searching of current literature.
  • Appraisal: No formal quality assessment.
  • Synthesis: Typically narrative, may have tabular accompaniment.
  • Analysis: Current state of knowledge and priorities for future investigation and research.

Systematic review
  • Description: Seeks to systematically search for, appraise and synthesize research evidence, often adhering to guidelines on the conduct of a review.
  • Search: Aims for exhaustive, comprehensive searching.
  • Appraisal: Quality assessment may determine inclusion/exclusion.
  • Synthesis: Typically narrative with tabular accompaniment.
  • Analysis: What is known and recommendations for practice; what remains unknown, uncertainty around findings, and recommendations for future research.

Systematic search and review
  • Description: Combines the strengths of a critical review with a comprehensive search process. Typically addresses broad questions to produce a ‘best evidence synthesis’.
  • Search: Aims for exhaustive, comprehensive searching.
  • Appraisal: May or may not include quality assessment.
  • Synthesis: Minimal narrative; tabular summary of studies.
  • Analysis: What is known and recommendations for practice; limitations.

Systematized review
  • Description: Attempts to include elements of the systematic review process while stopping short of a full systematic review. Typically conducted as a postgraduate student assignment.
  • Search: May or may not include comprehensive searching.
  • Appraisal: May or may not include quality assessment.
  • Synthesis: Typically narrative with tabular accompaniment.
  • Analysis: What is known; uncertainty around findings; limitations of methodology.

Umbrella review
  • Description: Specifically refers to a review compiling evidence from multiple reviews into one accessible and usable document. Focuses on a broad condition or problem for which there are competing interventions, and highlights reviews that address those interventions and their results.
  • Search: Identification of component reviews, but no search for primary studies.
  • Appraisal: Quality assessment of studies within component reviews and/or of the reviews themselves.
  • Synthesis: Graphical and tabular with narrative commentary.
  • Analysis: What is known and recommendations for practice; what remains unknown and recommendations for future research.


Systematic Reviews & Evidence Synthesis Methods

Critical Appraisal


Some reviews require a critical appraisal of each study that makes it through the screening process. This involves a risk of bias assessment and/or a quality assessment. The goal of these reviews is not just to find all of the studies, but to determine their methodological rigor and, therefore, their credibility.

"Critical appraisal is the balanced assessment of a piece of research, looking for its strengths and weaknesses and them coming to a balanced judgement about its trustworthiness and its suitability for use in a particular context." 1

It's important to consider the impact that poorly designed studies could have on your findings and to rule out inaccurate or biased work.

Selection of a valid critical appraisal tool, testing the tool with several of the selected studies, and involving two or more reviewers in the appraisal are good practices to follow.
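
The practice of double review lends itself to a simple consistency check. Below is a minimal Python sketch of how two reviewers' domain-level judgments might be recorded and compared so that disagreements can be flagged for consensus discussion; the domain names and ratings are hypothetical and not taken from any particular appraisal tool.

```python
# Hypothetical sketch: compare two reviewers' risk-of-bias judgments
# and flag domains that need a consensus discussion.

RATINGS = {"low", "some concerns", "high"}

def flag_disagreements(reviewer_a: dict, reviewer_b: dict) -> list:
    """Return the domains where the two reviewers' ratings differ."""
    disagreements = []
    for domain, rating_a in reviewer_a.items():
        rating_b = reviewer_b.get(domain)
        assert rating_a in RATINGS and rating_b in RATINGS
        if rating_a != rating_b:
            disagreements.append((domain, rating_a, rating_b))
    return disagreements

# Illustrative judgments for one study (invented for demonstration)
reviewer_a = {"randomization": "low", "missing data": "some concerns", "outcome measurement": "low"}
reviewer_b = {"randomization": "low", "missing data": "high", "outcome measurement": "low"}

for domain, a, b in flag_disagreements(reviewer_a, reviewer_b):
    print(f"Resolve by consensus: {domain} (reviewer A: {a}, reviewer B: {b})")
```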

1. Purssell E, McCrae N. How to Perform a Systematic Literature Review: A Guide for Healthcare Researchers, Practitioners and Students. 1st ed. Springer; 2020.

Evaluation Tools

  • The Appraisal of Guidelines for Research & Evaluation Instrument (AGREE II) The Appraisal of Guidelines for Research & Evaluation Instrument (AGREE II) was developed to address the issue of variability in the quality of practice guidelines.
  • Centre for Evidence-Based Medicine (CEBM). Critical Appraisal Tools "contains useful tools and downloads for the critical appraisal of different types of medical evidence. Example appraisal sheets are provided together with several helpful examples."
  • Critical Appraisal Skills Programme (CASP) Checklists Critical Appraisal checklists for many different study types
  • Critical Review Form for Qualitative Studies Version 2, developed out of McMaster University
  • Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS) Downes MJ, Brennan ML, Williams HC, et al. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open 2016;6:e011458. doi:10.1136/bmjopen-2016-011458
  • Downs & Black Checklist for Assessing Studies Downs, S. H., & Black, N. (1998). The Feasibility of Creating a Checklist for the Assessment of the Methodological Quality Both of Randomised and Non-Randomised Studies of Health Care Interventions. Journal of Epidemiology and Community Health (1979-), 52(6), 377–384.
  • GRADE The Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group "has developed a common, sensible and transparent approach to grading quality (or certainty) of evidence and strength of recommendations."
  • Grade Handbook Full handbook on the GRADE method for grading quality of evidence.
  • MAGIC (Making GRADE the Irresistible Choice) Clear, succinct guidance on how to use GRADE.
  • Joanna Briggs Institute. Critical Appraisal Tools "JBI’s critical appraisal tools assist in assessing the trustworthiness, relevance and results of published papers." Includes checklists for 13 types of articles.
  • Latitudes Network This is a searchable library of validity assessment tools for use in evidence syntheses. This website also provides access to training on the process of validity assessment.
  • Mixed Methods Appraisal Tool A tool that can be used to appraise a mix of studies that are included in a systematic review - qualitative research, RCTs, non-randomized studies, quantitative studies, mixed methods studies.
  • RoB 2 Tool Higgins JPT, Sterne JAC, Savović J, Page MJ, Hróbjartsson A, Boutron I, Reeves B, Eldridge S. A revised tool for assessing risk of bias in randomized trials In: Chandler J, McKenzie J, Boutron I, Welch V (editors). Cochrane Methods. Cochrane Database of Systematic Reviews 2016, Issue 10 (Suppl 1). dx.doi.org/10.1002/14651858.CD201601.
  • ROBINS-I Risk of Bias for non-randomized (observational) studies or cohorts of interventions Sterne J A, Hernán M A, Reeves B C, Savović J, Berkman N D, Viswanathan M et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions BMJ 2016; 355 :i4919 doi:10.1136/bmj.i4919
  • Scottish Intercollegiate Guidelines Network. Critical Appraisal Notes and Checklists "Methodological assessment of studies selected as potential sources of evidence is based on a number of criteria that focus on those aspects of the study design that research has shown to have a significant effect on the risk of bias in the results reported and conclusions drawn. These criteria differ between study types, and a range of checklists is used to bring a degree of consistency to the assessment process."
  • The TREND Statement (CDC) Des Jarlais DC, Lyles C, Crepaz N, and the TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. Am J Public Health. 2004;94:361-366.
  • Assembling the Pieces of a Systematic Review, Chapter 8: Evaluating: Study Selection and Critical Appraisal.
  • How to Perform a Systematic Literature Review, Chapter: Critical Appraisal: Assessing the Quality of Studies.

Other library guides

  • Duke University Medical Center Library. Systematic Reviews: Assess for Quality and Bias
  • UNC Health Sciences Library. Systematic Reviews: Assess Quality of Included Studies


Best Practice for Literature Searching


What is critical appraisal?

We critically appraise information constantly, formally or informally, to determine if something is going to be valuable for our purpose and whether we trust the content it provides.

In the context of a literature search, critical appraisal is the process of systematically evaluating and assessing the research you have found in order to determine its quality and validity. It is essential to evidence-based practice.

More formally, critical appraisal is a systematic evaluation of research papers in order to answer the following questions (a scripted sketch of this checklist follows the list):

  • Does this study address a clearly focused question?
  • Did the study use valid methods to address this question?
  • Are there factors, based on the study type, that might have confounded its results?
  • Are the valid results of this study important?
  • What are the confines of what can be concluded from the study?
  • Are these valid, important, though possibly limited, results applicable to my own research?
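
For illustration only, here is a minimal Python sketch that encodes the six questions above as a reusable checklist and summarises one appraiser's recorded answers; the answers are invented, and real appraisal would use a validated, design-specific checklist such as those linked elsewhere in this guide.

```python
# Minimal sketch: the six generic appraisal questions above as a checklist.
QUESTIONS = [
    "Does this study address a clearly focused question?",
    "Did the study use valid methods to address this question?",
    "Are there factors, based on the study type, that might have confounded its results?",
    "Are the valid results of this study important?",
    "What are the confines of what can be concluded from the study?",
    "Are these valid, important, though possibly limited, results applicable to my own research?",
]

def summarise(answers: list) -> str:
    """Pair each question with its recorded answer and tally the 'unclear' ones."""
    lines = [f"- {q} -> {a}" for q, a in zip(QUESTIONS, answers)]
    unclear = sum(a == "unclear" for a in answers)
    lines.append(f"{unclear} item(s) unclear: re-read the paper or consult a second appraiser.")
    return "\n".join(lines)

# Invented answers for one hypothetical paper
print(summarise(["yes", "yes", "unclear", "yes", "limited to one site", "yes"]))
```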

What is quality and how do you assess it?

In research we commissioned in 2018, researchers told us that they define ‘high quality evidence’ by factors such as:

  • Publication in a journal they consider reputable or with a high Impact Factor.
  • The peer review process, coordinated by publishers and carried out by other researchers.
  • Research institutions and authors who undertake quality research, and with whom they are familiar.

In other words, researchers use their own experience and expertise to assess quality.

However, students and early career researchers are unlikely to have built up that level of experience, and no matter how experienced a researcher is, there are certain times (for instance, when conducting a systematic review) when they will need to take a very close look at the validity of research articles.

There are checklists available to help with critical appraisal. The checklists outline the key questions to ask for a specific study design. Examples can be found in the Critical Appraisal section of this guide, and in the Further Resources section.

You may also find it beneficial to discuss issues such as quality and reputation with:

  • Your principal investigator (PI)
  • Your supervisor or other senior colleagues
  • Journal clubs. These are sometimes held by faculty or within organisations to encourage researchers to work together to discover and critically appraise information.
  • Topic-specific working groups

The more you practice critical appraisal, the quicker and more confident you will become at it.


How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions

What is the purpose of a literature review?

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Examples of literature reviews

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines.

Step 1 – Search for relevant literature

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use boolean operators to help narrow down your search.
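
As a small illustration, and assuming the keyword groups listed above, the Python sketch below joins synonyms with OR and distinct concepts with AND to build a single boolean search string. Exact syntax varies by database, so treat this as a starting point rather than a database-specific query.

```python
# Build a boolean search string from keyword groups:
# synonyms are ORed together, and the concept groups are ANDed.
keyword_groups = [
    ["social media", "Facebook", "Instagram", "TikTok"],
    ["body image", "self-perception", "self-esteem"],
    ["Generation Z", "teenagers", "adolescents", "youth"],
]

def build_query(groups):
    """Quote multi-word phrases, OR the synonyms, AND the groups."""
    def term(t):
        return f'"{t}"' if " " in t else t
    return " AND ".join("(" + " OR ".join(term(t) for t in g) + ")" for g in groups)

print(build_query(keyword_groups))
# ("social media" OR Facebook OR Instagram OR TikTok) AND ("body image" OR ...) AND ...
```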

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

Step 2 – Evaluate and select sources

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.
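
To make the note-taking concrete, here is a minimal sketch, using only the Python standard library, of an annotated-bibliography entry that keeps the full citation together with a short summary and evaluation. The fields and the sample entry are illustrative, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AnnotatedEntry:
    citation: str    # full citation, in your chosen style
    summary: str     # what the source says
    evaluation: str  # how it relates to your research question

    def render(self) -> str:
        return f"{self.citation}\n  Summary: {self.summary}\n  Evaluation: {self.evaluation}"

# Illustrative entry (all details invented for demonstration)
entry = AnnotatedEntry(
    citation="Doe, J. (2020). Social media and self-esteem. Journal of Examples, 1(1), 1-10.",
    summary="Survey of adolescents linking heavy Instagram use to lower self-esteem.",
    evaluation="Relevant to the visual-platform gap; small sample limits generalisability.",
)
print(entry.render())
```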


Step 3 – Identify themes, debates, and gaps

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

For example:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

Step 4 – Outline your literature review’s structure

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Step 5 – Write your literature review

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting. Not a language expert? Check out Scribbr’s professional proofreading services!

Free lecture slides

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.


Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/dissertation/literature-review/



A guide to critical appraisal of evidence

Fineout-Overholt, Ellen PhD, RN, FNAP, FAAN

Ellen Fineout-Overholt is the Mary Coulter Dowdy Distinguished Professor of Nursing at the University of Texas at Tyler School of Nursing, Tyler, Tex.

The author has disclosed no financial relationships related to this article.

Critical appraisal is the assessment of research studies' worth to clinical practice. Critical appraisal—the heart of evidence-based practice—involves four phases: rapid critical appraisal, evaluation, synthesis, and recommendation. This article reviews each phase and provides examples, tips, and caveats to help evidence appraisers successfully determine what is known about a clinical issue. Patient outcomes are improved when clinicians apply a body of evidence to daily practice.

How do nurses assess the quality of clinical research? This article outlines a stepwise approach to critical appraisal of research studies' worth to clinical practice: rapid critical appraisal, evaluation, synthesis, and recommendation. When critical care nurses apply a body of valid, reliable, and applicable evidence to daily practice, patient outcomes are improved.


Critical care nurses can best explain the reasoning for their clinical actions when they understand the worth of the research supporting their practices. In critical appraisal, clinicians assess the worth of research studies to clinical practice. Given that achieving improved patient outcomes is the reason patients enter the healthcare system, nurses must be confident their care techniques will reliably achieve best outcomes.

Nurses must verify that the information supporting their clinical care is valid, reliable, and applicable. Validity of research refers to the quality of the research methods used, or how good a job researchers did conducting a study. Reliability of research means similar outcomes can be achieved when the care techniques of a study are replicated by clinicians. Applicability of research means the study was conducted in a sample similar to the patients to whom the findings will be applied. These three criteria determine a study's worth in clinical practice.

Appraising the worth of research requires a standardized approach. This approach applies to both quantitative research (research that deals with counting things and comparing those counts) and qualitative research (research that describes experiences and perceptions). The word critique has a negative connotation. In the past, some clinicians were taught that studies with flaws should be discarded. Today, all valid and reliable research is considered informative to what we understand as best practice. Therefore, the author developed a critical appraisal methodology that enables clinicians to determine quickly which evidence is worth keeping and which must be discarded because of poor validity, reliability, or applicability.

Evidence-based practice process

The evidence-based practice (EBP) process is a seven-step problem-solving approach that begins with data gathering (see Seven steps to EBP). During daily practice, clinicians gather data supporting inquiry into a particular clinical issue (Step 0). The description is then framed as an answerable question (Step 1) using the PICOT question format (Population of interest; Issue of interest or intervention; Comparison to the intervention; desired Outcome; and Time for the outcome to be achieved). 1 Consistently using the PICOT format helps ensure that all elements of the clinical issue are covered. For example, a PICOT question might read: in hospitalized preterm infants (P), how does music therapy (I), compared with routine care (C), affect oxygen saturation (O) during the NICU stay (T)? Next, clinicians conduct a systematic search to gather data answering the PICOT question (Step 2). Using the PICOT framework, clinicians can systematically search multiple databases to find available studies to help determine the best practice to achieve the desired outcome for their patients. When the systematic search is completed, the work of critical appraisal begins (Step 3). The known group of valid and reliable studies that answers the PICOT question is called the body of evidence and is the foundation for best practice implementation (Step 4). Next, clinicians evaluate integration of best evidence with clinical expertise and patient preferences and values to determine if the outcomes in the studies are realized in practice (Step 5). Because healthcare is a community of practice, it is important that experiences with evidence implementation be shared, whether the outcome is what was expected or not. This enables critical care nurses concerned with similar care issues to better understand what has been successful and what has not (Step 6).

Critical appraisal of evidence

The first phase of critical appraisal, rapid critical appraisal, begins with determining which studies will be kept in the body of evidence. All valid, reliable, and applicable studies on the topic should be included. This is accomplished using design-specific checklists with key markers of good research. When clinicians determine a study is one they want to keep (a “keeper” study) and that it belongs in the body of evidence, they move on to phase 2, evaluation. 2

In the evaluation phase, the keeper studies are put together in a table so that they can be compared as a body of evidence, rather than individual studies. This phase of critical appraisal helps clinicians identify what is already known about a clinical issue. In the third phase, synthesis, certain data that provide a snapshot of a particular aspect of the clinical issue are pulled out of the evaluation table to showcase what is known. These snapshots of information underpin clinicians' decision-making and lead to phase 4, recommendation. A recommendation is a specific statement based on the body of evidence indicating what should be done—best practice. Critical appraisal is not complete without a specific recommendation. Each of the phases is explained in more detail below.

Phase 1: Rapid critical appraisal. Rapid critical appraisal involves using two tools that help clinicians determine if a research study is worthy of keeping in the body of evidence. The first tool, the General Appraisal Overview for All Studies (GAO), covers the basics of all research studies (see Elements of the General Appraisal Overview for All Studies). Sometimes, clinicians find gaps in knowledge about certain elements of research studies (for example, sampling or statistics) and need to review some content. Conducting an internet search for resources that explain how to read a research paper, such as an instructional video or step-by-step guide, can be helpful. Finding basic definitions of research methods often helps resolve identified gaps.

To accomplish the GAO, it is best to begin by finding out why the study was conducted and how it answers the PICOT question (for example, does it provide information critical care nurses want to know from the literature?). If the study purpose helps answer the PICOT question, then the type of study design is evaluated. The study design is compared with the hierarchy of evidence for the type of PICOT question. The higher the design falls within the hierarchy or levels of evidence, the more confidence nurses can have in its findings, if the study was conducted well. 3,4 Next, find out what the researchers wanted to learn from their study. These are called the research questions or hypotheses. Research questions are just what they imply; insufficient information from theories or the literature is available to guide an educated guess, so a question is asked. Hypotheses are reasonable expectations guided by understanding from theory and other research that predict what will be found when the research is conducted. The research questions or hypotheses provide the purpose of the study.

Next, the sample size is evaluated. Expectations of sample size are present for every study design. As an example, consider as a rule that quantitative study designs operate best when there is a sample size large enough to establish that relationships do not exist by chance. In general, the more participants in a study, the more confidence in the findings. Qualitative designs operate best with fewer people in the sample because these designs represent a deeper dive into the understanding or experience of each person in the study. 5 It is always important to describe the sample, as clinicians need to know if the study sample resembles their patients. It is equally important to identify the major variables in the study and how they are defined because this helps clinicians best understand what the study is about.

The final step in the GAO is to consider the analyses that answer the study research questions or confirm the study hypothesis. This is another opportunity for clinicians to learn, as learning about statistics in healthcare education has traditionally focused on conducting statistical tests as opposed to interpreting statistical tests. Understanding what the statistics indicate about the study findings is an imperative of critical appraisal of quantitative evidence.

The second tool is one of a variety of rapid critical appraisal checklists that speak to the validity, reliability, and applicability of specific study designs, which are available at varying locations (see Critical appraisal resources). When choosing a checklist to implement with a group of critical care nurses, it is important to verify that the checklist is complete and simple to use. Be sure to check that the checklist has answers to three key questions. The first question is: Are the results of the study valid? Related subquestions should help nurses discern if certain markers of good research design are present within the study. For example, identifying that study participants were randomly assigned to study groups is an essential marker of good research for a randomized controlled trial. Checking these essential markers helps clinicians quickly review a study to check off these important requirements. Clinical judgment is required when the study lacks any of the identified quality markers. Clinicians must discern whether the absence of any of the essential markers negates the usefulness of the study findings. 6-9


The second question is: What are the study results? This is answered by reviewing whether the study found what it was expecting to and if those findings were meaningful to clinical practice. Basic knowledge of how to interpret statistics is important for understanding quantitative studies, and basic knowledge of qualitative analysis greatly facilitates understanding those results. 6-9

The third question is: Are the results applicable to my patients? Answering this question involves consideration of the feasibility of implementing the study findings into the clinicians' environment as well as any contraindication within the clinicians' patient populations. Consider issues such as organizational politics, financial feasibility, and patient preferences. 6-9

When these questions have been answered, clinicians must decide whether to keep the particular study in the body of evidence. Once the final group of keeper studies is identified, clinicians are ready to move into the evaluation phase of critical appraisal. 6-9

Phase 2: Evaluation. The goal of evaluation is to determine how studies within the body of evidence agree or disagree by identifying common patterns of information across studies. For example, an evaluator may compare whether the same intervention is used or if the outcomes are measured in the same way across all studies. A useful tool to help clinicians accomplish this is an evaluation table. This table serves two purposes: first, it enables clinicians to extract data from the studies and place the information in one table for easy comparison with other studies; and second, it eliminates the need for further searching through piles of periodicals for the information. (See Bonus Content: Evaluation table headings.) Although the information for each of the columns may not be what clinicians consider as part of their daily work, the information is important for them to understand about the body of evidence so that they can explain the patterns of agreement or disagreement they identify across studies. Further, the in-depth understanding of the body of evidence from the evaluation table helps with discussing the relevant clinical issue to facilitate best practice. Their discussion comes from a place of knowledge and experience, which affords the most confidence. The patterns and in-depth understanding are what lead to the synthesis phase of critical appraisal.

The key to a successful evaluation table is simplicity. Entering data into the table in a simple, consistent manner offers more opportunity for comparing studies. 6-9 For example, using abbreviations rather than complete sentences in all columns except the final one allows for ease of comparison. Consider the dependent variable of depression defined as “feelings of severe despondency and dejection” in one study and as “feeling sad and lonely” in another study. 10 Because these are two different definitions, they need to be treated as different dependent variables. Clinicians must use their clinical judgment to discern that these different dependent variables require different names and abbreviations, and how this affects comparison across studies.


Sample and theoretical or conceptual underpinnings are important to understanding how studies compare. Similar samples and settings across studies increase agreement. Several studies with the same conceptual framework increase the likelihood of common independent variables and dependent variables. The findings of a study are dependent on the analyses conducted. That is why an analysis column is dedicated to recording the kind of analysis used (for example, the name of the statistical analyses for quantitative studies). Only statistics that help answer the clinical question belong in this column. The findings column must have a result for each of the analyses listed, recorded as the actual results, not in words. For example, if a clinician lists a t-test as a statistic in the analysis column, then a t-value should reflect whether the groups are different, along with the probability (P-value or confidence interval) that reflects statistical significance. The explanation for these results goes in the last column, which describes the worth of the research to practice. This column is much more flexible and contains other information such as the level of evidence, the study's strengths and limitations, any caveats about the methodology, or other aspects of the study that would be helpful to its use in practice. The final piece of information in this column is a recommendation for how this study would be used in practice. Each of the studies in the body of evidence that addresses the clinical question is placed in one evaluation table to facilitate the ease of comparing across the studies. This comparison sets the stage for synthesis.
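
As an illustration of the kind of simple, consistent data entry described here, the Python sketch below stores each keeper study as one row of an evaluation table and prints the rows side by side for comparison. The column names follow the headings discussed above, while the study entries themselves are invented.

```python
# Minimal sketch of an evaluation table: one dict per "keeper" study,
# abbreviated entries everywhere except the worth-to-practice column.
COLUMNS = ["study", "design", "sample", "variables", "analysis", "findings", "worth to practice"]

evaluation_table = [
    {"study": "Study A (2019)", "design": "RCT", "sample": "n=40 NICU", "variables": "MT -> SaO2",
     "analysis": "t-test", "findings": "t=2.5, P=.02", "worth to practice": "Level II; small sample"},
    {"study": "Study B (2021)", "design": "quasi-exp", "sample": "n=25 NICU", "variables": "MT -> SaO2",
     "analysis": "t-test", "findings": "t=1.1, P=.28", "worth to practice": "Level III; no effect shown"},
]

for row in evaluation_table:
    print(" | ".join(str(row[c]) for c in COLUMNS))
```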

Phase 3: Synthesis. In the synthesis phase, clinicians pull out key information from the evaluation table to produce a snapshot of the body of evidence. A table also is used here to feature what is known and help all those viewing the synthesis table come to the same conclusion. A hypothetical example table included here demonstrates that a music therapy intervention is effective in improving the outcome of oxygen saturation (SaO2) in six of the eight studies in the body of evidence that evaluated that outcome (see Sample synthesis table: Impact on outcomes). Simply using arrows to indicate effect offers readers a collective view of the agreement across studies that prompts action. Action may be to change practice, affirm current practice, or conduct research to strengthen the body of evidence by collaborating with nurse scientists.

When synthesizing evidence, there are at least two recommended synthesis tables: a level-of-evidence table and, for quantitative questions such as therapy, an impact-on-outcomes table (or a relevant-themes table for “meaning” questions about human experience). (See Bonus Content: Level of evidence for intervention studies: Synthesis of type.) The sample synthesis table also demonstrates that a final column labeled synthesis indicates agreement across the studies. Of the three outcomes, the most reliable for clinicians to see with music therapy is SaO2, with positive results in six out of eight studies. The second most reliable outcome would be reducing increased respiratory rate (RR). Parental engagement has the least support as a reliable outcome, with only two of five studies showing positive results. Synthesis tables make the recommendation clear to all those who are involved in caring for that patient population. Although the two synthesis tables mentioned are a great start, the evidence may require more synthesis tables to adequately explain what is known. These tables are the foundation that supports clinically meaningful recommendations.
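
To show how such a snapshot can be assembled, here is a minimal Python sketch that tallies per-outcome results across a body of evidence and prints an arrow-style synthesis row for each outcome. The individual study results are invented so as to mirror the counts described above (SaO2 positive in six of eight studies, parental engagement in two of five).

```python
# Minimal sketch of a synthesis table: tally which studies showed a
# positive result for each outcome, then summarise agreement.
# True = positive result, False = no effect; a study is absent if it
# did not measure the outcome. For RR, a positive result is a reduction.
results = {
    "SaO2":                {1: True, 2: True, 3: False, 4: True, 5: True, 6: True, 7: False, 8: True},
    "Respiratory rate":    {1: True, 2: True, 3: True, 4: False, 5: True, 6: True, 7: False},
    "Parental engagement": {1: True, 2: False, 3: False, 4: True, 5: False},
}

for outcome, by_study in results.items():
    positive = sum(by_study.values())
    arrows = " ".join("↑" if effect else "–" for effect in by_study.values())
    print(f"{outcome:20s} {arrows:16s} synthesis: {positive}/{len(by_study)} studies positive")
```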

Phase 4: Recommendation. Recommendations are definitive statements based on what is known from the body of evidence. For example, with an intervention question, clinicians should be able to discern from the evidence if they will reliably get the desired outcome when they deliver the intervention as it was delivered in the studies. In the sample synthesis table, the recommendation would be to implement the music therapy intervention across all settings with the population, and measure SaO2 and RR, with the expectation that both would be optimally improved with the intervention. When the synthesis demonstrates that studies consistently verify an outcome occurs as a result of an intervention, yet that intervention is not currently practiced, care is not best practice. Therefore, a firm recommendation to deliver the intervention and measure the appropriate outcomes must be made, which concludes critical appraisal of the evidence.

A recommendation that is off limits is conducting more research, as this is not the focus of clinicians' critical appraisal. In the case of insufficient evidence to make a recommendation for practice change, the recommendation would be to continue current practice and monitor outcomes and processes until there are more reliable studies to add to the body of evidence. Researchers who use the critical appraisal process may indeed identify gaps in knowledge, research methods, or analyses, and may then recommend studies that would fill those gaps. In this way, clinicians and nurse scientists work together to build relevant, efficient bodies of evidence that guide clinical practice.

Evidence into action

Critical appraisal helps clinicians understand the literature so they can implement it. Critical care nurses have a professional and ethical responsibility to make sure their care is based on a solid foundation of available evidence that is carefully appraised using the phases outlined here. Critical appraisal allows for decision-making based on evidence that demonstrates reliable outcomes. Any other approach to the literature is likely haphazard and may lead to misguided care and unreliable outcomes. 11 Evidence translated into practice should have the desired outcomes and their measurement defined from the body of evidence. It is also imperative that all critical care nurses carefully monitor care delivery outcomes to establish that best outcomes are sustained. With the EBP paradigm as the basis for decision-making and the EBP process as the basis for addressing clinical issues, critical care nurses can improve patient, provider, and system outcomes by providing best care.

Seven steps to EBP

Step 0–A spirit of inquiry to notice internal data that indicate an opportunity for positive change.

Step 1–Ask a clinical question using the PICOT question format.

Step 2–Conduct a systematic search to find out what is already known about a clinical issue.

Step 3–Conduct a critical appraisal (rapid critical appraisal, evaluation, synthesis, and recommendation).

Step 4–Implement best practices by blending external evidence with clinician expertise and patient preferences and values.

Step 5–Evaluate evidence implementation to see if study outcomes happened in practice and if the implementation went well.

Step 6–Share project results, good or bad, with others in healthcare.

Adapted from: Steps of the evidence-based practice (EBP) process leading to high-quality healthcare and best patient outcomes. © Melnyk & Fineout-Overholt, 2017. Used with permission.
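As a worked illustration of Step 1, a PICOT-style question for the music therapy example discussed earlier might read as follows (the population and timeframe are assumed here for illustration, not taken from the underlying studies): "In hospitalized infants (P), how does music therapy (I), compared with usual care without music (C), affect oxygen saturation (SaO 2 ) and respiratory rate (O) during and immediately after the intervention (T)?"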

Critical appraisal resources

  • The Joanna Briggs Institute http://joannabriggs.org/research/critical-appraisal-tools.html
  • Critical Appraisal Skills Programme (CASP) www.casp-uk.net/casp-tools-checklists
  • Center for Evidence-Based Medicine www.cebm.net/critical-appraisal
  • Melnyk BM, Fineout-Overholt E. Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice . 3rd ed. Philadelphia, PA: Wolters Kluwer; 2015.

A full set of critical appraisal checklists is available in the appendices.

Bonus content!

This article includes supplementary online-exclusive material. Visit the online version of this article at www.nursingcriticalcare.com to access this content.

Keywords: critical appraisal; decision-making; evaluation of research; evidence-based practice; synthesis



Literature Review: The What, Why and How-to Guide: Evaluating Sources & Literature Reviews


Evaluating Literature Reviews and Sources

  • Tips for Evaluating Sources (Print vs. Internet Sources): An excellent page that guides you on what to ask to determine whether your source is reliable. See also the related topics in that guide: Evaluating Bibliographic Citations and Evaluation During Reading.

Criteria to evaluate sources:

  • Authority: Who is the author? What are the author's credentials and areas of expertise? Are they affiliated with a university?
  • Usefulness: How is this source related to your topic? How current and how relevant is it to your topic?
  • Reliability: Does the information come from a reliable, trusted source, such as an academic journal?
  • Critically Analyzing Information Sources: Critical Appraisal and Analysis (Cornell University Library) Ten things to look for when you evaluate an information source.

Reading Critically

A summary of points for reading academic texts critically:

  • Who is the author? What is his/her standing in the field?
  • What is the author’s purpose? To offer advice, make practical suggestions, solve a specific problem, critique or clarify?
  • Note the experts in the field: are there specific names/labs that are frequently cited?
  • Pay attention to methodology: Is it sound? What testing procedures, subjects, and materials were used?
  • Note conflicting theories, methodologies, and results. Are there any assumptions being made by most/some researchers?
  • Theories: have they evolved over time?
  • Evaluate and synthesize the findings and conclusions. How does this study contribute to your project?
  • How to Read Academic Texts Critically: An excellent document on how best to read academic articles and other texts critically.
  • How to Read an Academic Article: An excellent paper that teaches you how to read an academic article and how to decide whether it is something to set aside or something to read deeply. It also offers good advice on organizing your literature for the literature review, or simply for class reading.
  • URL: https://guides.library.ucsb.edu/litreview


Critical Appraisal


What is Critical Appraisal?

Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, value and relevance in a particular context. It is an essential skill for evidence-based practice as it allows people to find and use research evidence reliably and efficiently.

Key steps in critical appraisal:

1. Thoroughly understanding the research, including its aims, methodology, results and conclusions, while being aware of any limitations or potential bias.

2. Using a framework or checklist to provide structure and ensure all key points are considered. This allows you to record your reasoning behind decisions based on the research (see the sketch at the end of this section).

3. Identifying the research methods, such as study design, sample size, and data collection and analysis techniques, to assess validity and reliability.

4. Checking the results and conclusions to ensure they are justified by the data and not unduly influenced by bias.

5. Determining the relevance and applicability of the research findings to your specific context or question.

Critical appraisal skills are important as they enable you to systematically and objectively assess published papers, regardless of where they are published or who wrote them. It is crucial to avoid being misled by poor quality research and ensure that any findings used as evidence can reliably improve practice.
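As a concrete illustration of step 2 above, the sketch below shows one way to record structured judgements and the reasoning behind them while appraising a study. It is written in Python; the checklist questions and field names are hypothetical and are not drawn from any specific published tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AppraisalItem:
    question: str   # a checklist question, e.g. adapted from CASP
    judgement: str  # "yes", "no", or "unclear"
    reasoning: str  # why you reached that judgement

@dataclass
class Appraisal:
    study: str
    items: List[AppraisalItem] = field(default_factory=list)

    def add(self, question: str, judgement: str, reasoning: str) -> None:
        self.items.append(AppraisalItem(question, judgement, reasoning))

    def unclear(self) -> List[str]:
        """Questions that need a closer read of the paper."""
        return [i.question for i in self.items if i.judgement == "unclear"]

# Hypothetical usage
appraisal = Appraisal(study="Example trial (hypothetical)")
appraisal.add("Was allocation randomised?", "yes", "Computer-generated sequence reported.")
appraisal.add("Were outcome assessors blinded?", "unclear", "Blinding not described.")
print(appraisal.unclear())  # ['Were outcome assessors blinded?']
```

Whatever form the record takes, the point is the same as in step 2: the framework forces you to consider every key point and leaves an auditable trail of your reasoning.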

What are Critical Appraisal Tools?

Critical appraisal tools are instruments or checklists used to assess the methodological quality, validity, and relevance of published research studies. They provide a structured framework to evaluate various aspects of a study, such as the study design, sampling methods, data collection, statistical analysis, ethical considerations, and applicability of the results.

Key Points About Critical Appraisal Tools

They aim to assess the trustworthiness, relevance, and results of published papers by examining different components of the research process.

The content and criteria assessed by these tools can vary significantly, as there is a lack of consensus on the essential items for critical appraisal.

Many tools are study design-specific, evaluating different aspects for randomized controlled trials, observational studies, qualitative research, systematic reviews, and other study types.

Common elements appraised include sampling methods, internal validity, control of confounding factors, ethical conduct, statistical analysis, and generalizability of results.

Some tools provide an overall quality rating (e.g. high, medium, low) based on the individual item assessments.

The empirical basis for the construction and validation of many critical appraisal tools is often lacking, with limited evidence of their reliability and validity.

In summary, critical appraisal tools are structured instruments that aim to evaluate the methodological rigor and quality of research studies. They assess various aspects of the research process, but their content and criteria can vary widely due to the lack of consensus on essential items and empirical validation.

  • URL: https://library.lsbu.ac.uk/literaturesearching

Critical Appraisal: Assessing the Quality of Studies


  • Edward Purssell, ORCID: orcid.org/0000-0003-3748-0864
  • Niall McCrae, ORCID: orcid.org/0000-0001-9776-7694


There is great variation in the type and quality of research evidence. Having completed your search and assembled your studies, the next step is to critically appraise the studies to ascertain their quality. Ultimately you will be making a judgement about the overall evidence, but that comes later. You will see throughout this chapter that we make a clear differentiation between the individual studies and what we call the body of evidence , which is all of the studies and anything else that we use to answer the question or to make a recommendation. This chapter deals with only the first of these—the individual studies. Critical appraisal, like everything else in systematic literature reviewing, is a scientific exercise that requires individual judgement, and we describe some tools to help you.



Author information

Authors and Affiliations

School of Health Sciences, City, University of London, London, UK

Edward Purssell

Florence Nightingale Faculty of Nursing, Midwifery & Palliative Care, King’s College London, London, UK

Niall McCrae


Corresponding author

Correspondence to Edward Purssell .


Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Purssell, E., McCrae, N. (2020). Critical Appraisal: Assessing the Quality of Studies. In: How to Perform a Systematic Literature Review. Springer, Cham. https://doi.org/10.1007/978-3-030-49672-2_6


DOI: https://doi.org/10.1007/978-3-030-49672-2_6

Published: 05 August 2020

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-49671-5

Online ISBN: 978-3-030-49672-2





Dissecting the literature: the importance of critical appraisal

08 Dec 2017

Kirsty Morrison

This post was updated in 2023.

Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context.

Amanda Burls, What is Critical Appraisal?


Why is critical appraisal needed?

Literature searches using databases like Medline or EMBASE often return an overwhelming volume of results, which can vary in quality. Similarly, those who browse medical literature for the purposes of CPD or in response to a clinical query will know that there are vast amounts of content available. Critical appraisal helps to reduce this burden by allowing you to focus on articles that are relevant to the research question and that can reliably support or refute its claims with high-quality evidence, or to identify high-level research relevant to your practice.


Critical appraisal allows us to:

  • reduce information overload by eliminating irrelevant or weak studies
  • identify the most relevant papers
  • distinguish evidence from opinion, assumptions, misreporting, and belief
  • assess the validity of the study
  • assess the usefulness and clinical applicability of the study
  • recognise any potential for bias.

Critical appraisal helps to separate what is significant from what is not. One way we use critical appraisal in the Library is to prioritise the most clinically relevant content for our Current Awareness Updates .

How to critically appraise a paper

There are some general rules to help you, including a range of checklists highlighted at the end of this blog. Some key questions to consider when critically appraising a paper:

  • Is the study question relevant to my field?
  • Does the study add anything new to the evidence in my field?
  • What type of research question is being asked? A well-developed research question usually identifies three components: the group or population of patients, the studied parameter (e.g. a therapy or clinical intervention) and outcomes of interest.
  • Was the study design appropriate for the research question? You can learn more about different study types and the hierarchy of evidence here .
  • Did the methodology address important potential sources of bias? Bias can be attributed to chance (e.g. random error) or to the study methods (systematic bias).
  • Was the study performed according to the original protocol? Deviations from the planned protocol can affect the validity or relevance of a study, e.g. a decrease in the studied population over the course of a randomised controlled trial .
  • Does the study test a stated hypothesis? Is there a clear statement of what the investigators expect the study to find, which can be tested and then confirmed or refuted?
  • Were the statistical analyses performed correctly? The approach to dealing with missing data, and the statistical techniques that have been applied should be specified. Original data should be presented clearly so that readers can check the statistical accuracy of the paper.
  • Do the data justify the conclusions? Watch out for definite conclusions based on statistically non-significant results, findings from a small sample generalised to a wider population, and statistically significant associations misinterpreted to imply cause and effect (see the sketch after this section).
  • Are there any conflicts of interest? Who has funded the study and can we trust their objectivity? Do the authors have any potential conflicts of interest, and have these been declared?

And an important consideration for surgeons:

  • Will the results help me manage my patients?

At the end of the appraisal process you should have a better appreciation of how strong the evidence is, and ultimately whether or not you should apply it to your patients.
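As a brief illustration of the sample-size point in the checklist above, the sketch below uses a normal-approximation 95% confidence interval for a proportion to show why the same observed rate is far less trustworthy when it comes from a small sample. The figures are invented for illustration.

```python
from math import sqrt

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# The same 60% observed response rate from two very different sample sizes.
for successes, n in [(6, 10), (60, 100)]:
    low, high = proportion_ci(successes, n)
    print(f"n={n}: observed 60%, 95% CI {low:.2f}-{high:.2f}")
# n=10 gives roughly 0.30-0.90; n=100 gives roughly 0.50-0.70.
```

A conclusion generalised from the n=10 result would rest on an interval too wide to support it, which is exactly the pitfall the checklist asks you to spot.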

Further resources:

  • How to Read a Paper by Trisha Greenhalgh
  • The Doctor’s Guide to Critical Appraisal by Narinder Kaur Gosall
  • CASP checklists
  • CEBM Critical Appraisal Tools
  • Critical Appraisal: a checklist
  • Critical Appraisal of a Journal Article (PDF)
  • Introduction to...Critical appraisal of literature
  • Reporting guidelines for the main study types

Kirsty Morrison, Information Specialist



Making sense of the literature: an introduction to critical appraisal for the primary care practitioner

  • Kishan Patel
  • Meera Pajpani

British Dental Journal volume 229, pages 551–555 (2020)


With an abundance of published literature, it is essential that primary care practitioners have the basic skills to determine which studies should impact their clinical decisions and which ones should not. This ability comes from having the skills required to critically appraise the literature. Critical appraisal is a fundamental part of being able to practice evidence-based dentistry, which is a General Dental Council requirement. This article will aim to provide practitioners with the key concepts of critical appraisal and introduce simple methods which can be adopted to aid in the reading of a research paper from a critical viewpoint.

Introduces practitioners to reading a study from a critical viewpoint.

Encourages practitioners to practice evidence-based dentistry.

Helps practitioners formulate a research question to identify key items published in the literature to impact their patient care.

Points practitioners to objective tools to aid their critical appraisal.




Author information

Authors and Affiliations

Associate Dentist, UK

  • Kishan Patel

Speciality Doctor in OMFS, King’s College Hospital NHS Foundation Trust, London, UK

Meera Pajpani


Corresponding author

Correspondence to Kishan Patel .


About this article


Patel, K., Pajpani, M. Making sense of the literature: an introduction to critical appraisal for the primary care practitioner. Br Dent J 229 , 551–555 (2020). https://doi.org/10.1038/s41415-020-2225-z


Received: 27 May 2020

Accepted: 15 June 2020

Published: 23 October 2020

Issue Date: October 2020

DOI: https://doi.org/10.1038/s41415-020-2225-z



Evidence Synthesis Guide: Risk of Bias by Study Design


Risk of Bias of Individual Studies


“Assessment of risk of bias is a key step that informs many other steps and decisions made in conducting systematic reviews. It plays an important role in the final assessment of the strength of the evidence.” 1

Risk of Bias by Study Design (featured tools)

The featured tools below are grouped by study design: systematic reviews; randomized controlled trials; non-RCTs or observational studies; diagnostic accuracy studies; animal studies; qualitative research; and general tool repositories.
  • AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews): The original AMSTAR was developed to assess the risk of bias in systematic reviews that included only randomized controlled trials. AMSTAR 2, published in 2017, allows researchers to identify high-quality systematic reviews, including those based on non-randomised studies of healthcare interventions.
  • ROBIS (Risk of Bias in Systematic Reviews): A tool designed specifically to assess the risk of bias in systematic reviews. The tool is completed in three phases: (1) assess relevance (optional), (2) identify concerns with the review process, and (3) judge risk of bias in the review. Signalling questions are included to help assess specific concerns about potential biases with the review.
  • BMJ Framework for Assessing Systematic Reviews: A checklist used to evaluate the quality of a systematic review.
  • CASP (Critical Appraisal Skills Programme) Checklist for Systematic Reviews: Not a scoring system, but a method of appraising systematic reviews by considering: 1. Are the results of the study valid? 2. What are the results? 3. Will the results help locally?
  • CEBM (Centre for Evidence-Based Medicine) Systematic Reviews Critical Appraisal Sheet: The CEBM's critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.
  • JBI Critical Appraisal Tools, Checklist for Systematic Reviews: JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which the study has addressed the possibility of bias in its design, conduct, and analysis.
  • NHLBI (National Heart, Lung, and Blood Institute) Study Quality Assessment of Systematic Reviews and Meta-Analyses: The NHLBI's quality assessment tools were designed to assist reviewers in focusing on concepts that are key for critical appraisal of the internal validity of a study.
  • RoB 2 (revised tool to assess Risk of Bias in randomized trials): Provides a framework for assessing the risk of bias in a single estimate of an intervention effect reported from a randomized trial, rather than the entire trial.
  • CASP Randomised Controlled Trials Checklist: Considers various aspects of an RCT that require critical appraisal: 1. Is the basic study design valid for a randomized controlled trial? 2. Was the study methodologically sound? 3. What are the results? 4. Will the results help locally?
  • CONSORT (Consolidated Standards of Reporting Trials) Statement: A checklist of 25 items used to determine the quality of randomized controlled trials. Critical appraisal of the quality of clinical trials is possible only if the design, conduct, and analysis of RCTs are thoroughly and accurately described in the report.
  • NHLBI Study Quality Assessment of Controlled Intervention Studies: The NHLBI quality assessment tool for controlled intervention studies, from the same series described above.
  • JBI Critical Appraisal Tools, Checklist for Randomized Controlled Trials: The JBI checklist for assessing the methodological quality of randomized controlled trials.
  • ROBINS-I (Risk Of Bias in Non-randomized Studies – of Interventions): A tool for evaluating risk of bias in estimates of the comparative effectiveness of interventions from studies that did not use randomization to allocate units to comparison groups.
  • NOS (Newcastle-Ottawa Scale): Used primarily to evaluate and appraise case-control or cohort studies.
  • AXIS (Appraisal tool for Cross-Sectional Studies): Cross-sectional studies are frequently used as an evidence base for diagnostic testing, risk factors for disease, and prevalence studies. The AXIS tool focuses mainly on the presented study methods and results.
  • NHLBI Study Quality Assessment Tools for Non-Randomized Studies: Includes the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies, the Quality Assessment of Case-Control Studies, the Quality Assessment Tool for Before-After (Pre-Post) Studies With No Control Group, and the Quality Assessment Tool for Case Series Studies.
  • Case Series Studies Quality Appraisal Checklist: Developed by the Institute of Health Economics (Canada), the checklist comprises 20 questions to assess the robustness of the evidence of uncontrolled case series studies.
  • Methodological Quality and Synthesis of Case Series and Case Reports: In this paper, Dr. Murad and colleagues present a framework for appraisal, synthesis, and application of evidence derived from case reports and case series.
  • MINORS (Methodological Index for Non-Randomized Studies): A 12-item instrument developed for evaluating the quality of observational or non-randomized studies. It may be of particular interest to researchers who would like to critically appraise surgical studies.
  • JBI Critical Appraisal Tools for Non-Randomized Trials: Checklists for analytical cross-sectional studies, case-control studies, case reports, case series, and cohort studies.
  • QUADAS-2 (a revised tool for the Quality Assessment of Diagnostic Accuracy Studies): Designed to assess the quality of primary diagnostic accuracy studies; it consists of four key domains covering patient selection, index test, reference standard, and the flow of patients through the study and timing of the index tests and reference standard.
  • JBI Critical Appraisal Tools, Checklist for Diagnostic Test Accuracy Studies: The JBI checklist for assessing the methodological quality of diagnostic test accuracy studies.
  • STARD 2015 (Standards for the Reporting of Diagnostic Accuracy Studies): The authors of the standards note that essential elements of diagnostic accuracy study methods are often poorly described and sometimes completely omitted, making both critical appraisal and replication difficult, if not impossible. STARD 2015 was developed to help improve completeness and transparency in the reporting of diagnostic accuracy studies.
  • CASP Diagnostic Study Checklist: Considers various aspects of diagnostic test studies: 1. Are the results of the study valid? 2. What were the results? 3. Will the results help locally?
  • CEBM Diagnostic Critical Appraisal Sheet: The CEBM critical appraisal sheet for diagnostic studies.
  • SYRCLE's RoB (SYstematic Review Center for Laboratory animal Experimentation's Risk of Bias): Implementation of SYRCLE's RoB tool will facilitate and improve critical appraisal of evidence from animal studies. This may enhance the efficiency of translating animal research into clinical practice and increase awareness of the need to improve the methodological quality of animal studies.
  • ARRIVE 2.0 (Animal Research: Reporting of In Vivo Experiments): A checklist of information to include in a manuscript to ensure that publications on in vivo animal studies contain enough information to add to the knowledge base.
  • Critical Appraisal of Studies Using Laboratory Animal Models: This article provides an approach to critically appraising papers based on the results of laboratory animal experiments, and discusses various bias domains in the literature that critical appraisal can identify.
  • CEBM Critical Appraisal of Qualitative Studies Sheet: The CEBM critical appraisal sheet for qualitative studies.
  • CASP Qualitative Studies Checklist: Considers various aspects of qualitative research studies: 1. Are the results of the study valid? 2. What were the results? 3. Will the results help locally?
  • Quality Assessment and Risk of Bias Tool Repository: Created by librarians at Duke University, this extensive listing contains over 100 commonly used risk of bias tools that may be sorted by study type.
  • Latitudes Network: A library of risk of bias tools for use in evidence syntheses that provides selection help and training videos.

References & Recommended Reading

1. Viswanathan M, Patnode CD, Berkman ND, Bass EB, Chang S, Hartling L, et al. Recommendations for assessing the risk of bias in systematic reviews of health-care interventions. Journal of Clinical Epidemiology. 2018;97:26-34.

2. Kolaski K, Logan LR, Ioannidis JP. Guidance to best tools and practices for systematic reviews. British Journal of Pharmacology. 2024;181(1):180-210.

3. Fowkes FG, Fulton PM. Critical appraisal of published research: introductory guidelines. BMJ (Clinical Research Ed). 1991;302(6785):1136-1140.

4. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ (Clinical Research Ed). 2017;358:j4008.

5. Whiting P, Savovic J, Higgins JPT, et al. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. Journal of Clinical Epidemiology. 2016;69:225-234.

6. Sterne JAC, Savovic J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ (Clinical Research Ed). 2019;366:l4898.

7. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology. 2010;63(8):e1-37.

8. Sterne JA, Hernan MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ (Clinical Research Ed). 2016;355:i4919.

9. Downes MJ, Brennan ML, Williams HC, Dean RS. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open. 2016;6(12):e011458.

10. Guo B, Moga C, Harstall C, Schopflocher D. A principal component analysis is conducted for a case series quality appraisal checklist. Journal of Clinical Epidemiology. 2016;69:199-207.e192.

11. Murad MH, Sultan S, Haffar S, Bazerbachi F. Methodological quality and synthesis of case series and case reports. BMJ Evidence-Based Medicine. 2018;23(2):60-63.

12. Slim K, Nini E, Forestier D, Kwiatkowski F, Panis Y, Chipponi J. Methodological index for non-randomized studies (MINORS): development and validation of a new instrument. ANZ Journal of Surgery. 2003;73(9):712-716.

13. Whiting PF, Rutjes AWS, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine. 2011;155(8):529-536.

14. Bossuyt PM, Reitsma JB, Bruns DE, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ (Clinical Research Ed). 2015;351:h5527.

15. Hooijmans CR, Rovers MM, de Vries RBM, Leenaars M, Ritskes-Hoitinga M, Langendam MW. SYRCLE's risk of bias tool for animal studies. BMC Medical Research Methodology. 2014;14:43.

16. Percie du Sert N, Ahluwalia A, Alam S, et al. Reporting animal research: Explanation and elaboration for the ARRIVE guidelines 2.0. PLoS Biology. 2020;18(7):e3000411.

17. O'Connor AM, Sargeant JM. Critical appraisal of studies using laboratory animal models. ILAR Journal. 2014;55(3):405-417.

  • URL: https://libraryguides.mayo.edu/systematicreviewprocess


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Eldredge J. Evidence Based Practice: A Decision-Making Guide for Health Information Professionals [Internet]. Albuquerque (NM): University of New Mexico Health Sciences Library and Informatics Center; 2024.


Critical Appraisal


The goal of researchers should be to create accurate and unbiased representations of selected parts of reality. The goal of HIPs should be to critically examine how well these researchers achieve those representations of reality. Simultaneously, HIPs should gauge how relevant these studies are to answering their own EBP questions. In short, HIPs need to be skeptical consumers of the evidence produced by researchers.

This skepticism needs to be governed by critical thinking while recognizing that there are no perfect research studies . The expression, “The search for perfectionism is the enemy of progress,” 1 certainly applies when critically appraising evidence. HIPs should engage in “Proportional Skepticism,” meaning that finding minor flaws in a specific study does not automatically disqualify it from consideration in the EBP process. Research studies also vary in quality of implementation, regardless of whether they are systematic reviews or randomized controlled trials (RCTs). Proportional Skepticism acknowledges that some study designs are better than others at accurately representing parts of reality and answering different types of EBP questions. 2

This chapter provides tips for readers in their roles as consumers of evidence, particularly evidence produced by research studies. First, this chapter explores the various forms of bias that can cloud the representation of reality in research studies. It then presents a brief overview of common pitfalls in research studies. Many of these biases and pitfalls can easily be identified by how they might manifest in various forms of evidence, including local evidence, regional or national comparative data, and the grey literature. The emphasis in this chapter, however, is on finding significant weaknesses in research-generated evidence, as these can be more challenging to detect than in the other forms of evidence. Next, the chapter further outlines the characteristics of various research studies that might produce relevant evidence. Tables summarizing the major advantages and disadvantages of specific research designs appear throughout the chapter. Finally, critical appraisal sheets serve as appendices to the chapter. These sheets provide key questions to consider when evaluating evidence produced by the most common EBP research study designs.

4.1 Forms of Bias

Researchers and their research studies can be susceptible to many forms of bias, including implicit, structural, and systemic biases. These biases are deeply embedded in society and can inadvertently appear within research studies. 3 , 4 Bias in research studies can be defined as the “error in collecting or analyzing data that systematically over- or underestimates what the researcher is interested in studying.” 5 In simpler terms, bias results from an error in the design or conduct of a study. 6 As George Orwell reminds us, “…we are all capable of believing things we know to be untrue….” 7 Most experienced researchers are vigilant to avoid these types of biases, but biases can unintentionally permeate study designs or protocols despite the researcher’s active vigilance. If researchers recognize these biases, they can mitigate them during their analyses or, at the very least, account for them in the limitations section of their research studies.

Expectancy Effect

For at least 60 years, behavioral scientists have observed a consistent phenomenon: when a researcher anticipates a particular response from someone, the likelihood of that person responding in the expected manner significantly increases. 8 This phenomenon, known as the Expectancy Effect, can be observed across research studies, ranging from interviews to experiments. In everyday life, we can observe the Expectancy Effect in action, such as when instructors selectively call on specific students in a classroom 9 , 10 or when the President or Press Secretary at a White House press conference chooses certain reporters while ignoring others. In both instances, individuals encouraged to interact with either the teacher or those conducting the press conference are more likely to seek future interactions. Robert Rosenthal, the psychologist most closely associated with uncovering and examining instances of the Expectancy Effect, has extensively documented these self-fulfilling prophecies in various everyday situations and within controlled research contexts. 11 , 12 , 13 , 14 It is important to recognize that HIP research studies have the potential to be influenced by the Expectancy Effect.

Hawthorne Effect

Participants in a research study, when aware of being observed by researchers, tend to behave differently than they would in other circumstances. 15 , 16 This phenomenon, known as the Hawthorne Effect, was initially discovered in an obscure management study conducted at the Hawthorne Electric Plant in Chicago. 17 , 18

The Hawthorne Effect might be seen in HIP participant-observer research studies where researchers monitor interactions such as help calls, chat sessions, and visits to a reference desk. In such studies, the employees responding to these requests might display heightened levels of patience, friendliness, or accommodation due to their awareness of being observed, which aligns with the Hawthorne Effect. It is important to note that the Hawthorne Effect is not inevitable, 19 and there are ways for researchers to mitigate its impact. 20

History Effects

Historical events have the power to significantly shape research results. A compelling illustration of this can be found in the context of the COVID-19 pandemic, which left a profound mark on society. The impact of this crisis was so significant that a research study on attitudes toward infectious respiratory disease conducted in 2018 would likely yield different results compared to an identical study conducted in 2021, when society was cautiously emerging from the COVID-19 pandemic. Another instance demonstrating the impact of historical events is the banking crisis and recession of 2008-2009. Notably, two separate research studies on the most important research questions facing HIPs, conducted in 2008 and 2011, respectively, for the Medical Library Association Research Agenda, had unexpected differences in results. 21 The 2011 study identified HIPs as far more concerned about issues of economic and job security than the 2008 study. 22 The authors of the 2011 study specifically attributed the increased apprehension regarding financial insecurity to these historical events related to the economy. Additionally, historical events also can affect research results over the course of a long-term study. 23

Maturation Effects

During the course of longitudinal research studies, participants might experience changes in their attitudes or their knowledge. These changes, known as Maturation Effects, can sometimes be mistaken as outcomes resulting from specific events or interventions within the research study. 24 For instance, this might happen when a HIP provides instruction to first-year students aimed at fostering a positive attitude toward conducting literature searches. Following a second session on literature searching for second-year students, an attitudinal survey might indicate a higher opinion of this newly-learned skill among medical students. At this point, one might consider whether this attitude change resulted from the two instructional sessions or if there were other factors at play, such as a separate required course on research methods or other experiences of the students between the two searching sessions. In such a case, one would have to consider attributing the change to the Maturation Effect. 25

Misclassification

Misclassification can occur at multiple junctures in a research study, including when enrolling the participants, collecting data from participants, measuring exposures or interventions, or recording outcomes. 26 Minor misclassifications can introduce outsized distortions in any one of these junctures due to the multiple instances involved. The simple task of defining a research population can introduce the risk of misclassification, even among conscientious HIP researchers. 27

Novelty Bias

HIPs work with emerging information technologies far more than any other health sciences profession. Part of this work involves assessing the performance of these new information technologies as well as working on making adaptations to these technologies. Given the frequency that HIPs work with these new technologies, there remains the potential for novelty bias to arise, which refers to the initial fascination and perhaps even an enthusiasm towards innovations during their early phases of introduction and initial use. 28 This bias has been observed in publications from various health professions, spanning a wide range of innovations, such as dentistry tooth implants 29 or presentation software. 30

Many HIPs engage in partnerships with corporate information technology firms or other external organizations to pilot new platforms. These partnerships often result in reports on these technologies, typically case reports or new product reviews. Given the nature of these relationships, it is important for all authors to provide clear conflict of interest statements. For many HIPs, however, novelty bias is still an occupational risk due to their close involvement with information technologies. To mitigate this bias, HIPs can implement two strategies: increasing the number of participants in studies and continuing to try (often unsuccessfully) to replicate any initial rosy reports. 31

Recall Bias

Recall Bias poses a risk to study designs such as surveys and interviews, which heavily rely on the participants’ ability to recollect past events accurately. The wording of surveys and questions posed by interviewers can inadvertently direct participants’ attention to certain memories, thereby distorting the information provided to researchers. 32

Scalability Bias

Scalability Bias is the failure to consider whether a study carried out in one specific context still applies when transferred to another context. Shadish et al 33 identify two forms: Narrow-to-Broad Bias and Broad-to-Narrow Bias.

Narrow-to-Broad Bias applies findings in one setting and suggests that these findings apply to many other settings. For example, a researcher might attempt to depict the attitudes of all students on a large campus based on the interview of a single student or by surveying only five to 15 students who belong to a student interest group. Broad-to-Narrow Bias makes the inverse mistake by assuming that what generally applies to a large population should apply to an individual or a subset of that population. In this case, a researcher might conduct a survey on a campus to gauge attitudes toward a subject and assume that the general findings apply to every individual student. Readers familiar with classical training in logic or rhetoric will recognize these two biases as the Fallacy of Composition and the Fallacy of Hasty Generalization, respectively. 34

Selection Bias

Selection Bias happens when information or data collected in a research study does not accurately or fully represent the population of interest. It emerges when a sample distorts the realities of a larger population. For example, if a survey or a series of interviews with users only include friends of the researcher, it would be susceptible to Selection Bias, as it fails to encompass the broader range of attitudes present in the entire population. Recruitment into a study might occur only through media that are followed by a subset of the larger demographic profile needed. Engagement with an online patient portal, originally designed to mitigate Selection Bias in a particular study, unexpectedly gave rise to racial disparities instead. 35

Selection Bias can originate from within the study population itself, leading to potential distortions in the findings. For example, only those who feel strongly, either negatively or positively, towards a technology might volunteer to offer opinions on it. Selection Bias also might occur when an interviewer, for instance, either encourages interviewees or discourages interviewees from speaking on a subject. In all these cases, someone exerts control over the focus of the study that then misrepresents the actual experiences of the population. Rubin 36 reminds us that systemic and structural power structures in society exert control over what perspectives are heard in a research study.

While there are many other types of bias, the descriptions explained thus far should equip the vigilant consumer of research evidence with the ability to detect potential weaknesses across a wide range of HIP research articles.

4.2 Other Research Pitfalls

Causation

A cause is “an antecedent event, condition, or characteristic” that precedes an outcome. A sufficient cause provides the prerequisites for an outcome to occur, while a necessary cause must exist before the outcome can occur. 37 These definitions rely on the event, condition, or characteristic preceding the outcome temporally. At the same time, the cause and its outcome must comply with biological and physical laws. There must also be a plausible strength of the association, and the link between the putative cause and the outcome must be replicable across varied instances. 38 , 39 In the past century, philosophers and physicists have examined the concept of causality exhaustively, 40 while in the health sciences the concept has been articulated over the past 70 years. 41 HIPs should keep these guidelines in mind when critically appraising any claims that an identified factor “caused” a specific outcome.

Confounding

Confounding is the inaccurate linkage of a possible cause to an identified outcome when another concurrent event, condition, or characteristic actually caused that outcome. One instance of confounding might be an advertised noontime training session on a new information platform that also features a highly desirable lunch buffet. The event planners might mistakenly assume that the high attendance rate stemmed from the perceived need for the training, when the primary motivation actually was the lunch buffet. In this case, the lunch served as a confounder.
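
A minimal sketch, with invented attendance figures, can make the buffet confounder concrete: comparing training-with-buffet attendance against training alone would seem to credit the training, while the buffet-alone figure exposes the real driver.

```python
# Hypothetical attendance counts for the noontime-training example above.
attendance = {
    # (training offered, buffet offered): number of attendees
    (True, True): 80,    # advertised training with the lunch buffet
    (True, False): 25,   # the same training offered without the buffet
    (False, True): 70,   # buffet alone, no training at all
}

# Looking only at the first two rows would credit the training with +55
# attendees; the buffet-alone row reveals the buffet as the confounder.
print("Training + buffet:", attendance[(True, True)])
print("Training alone:   ", attendance[(True, False)])
print("Buffet alone:     ", attendance[(False, True)])
```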

Alongside the biases, confounding presents the most significant alternative explanation to contend with when trying to determine causation. 42 One recurring EBP question among HIPs and academic librarians is whether student engagement with HIPs and the use of information resources lead to student success and higher graduation rates. A research team investigated this issue and found that student engagement with HIPs and information resources did indeed predict student success. During the process, the team identified and eliminated potential confounders that might otherwise explain student success, such as high school grade point average, standardized exam scores, or socioeconomic status. 43 It turns out that even artificial intelligence can be susceptible to confounding, although it can “learn” to overcome those confounders. 44 Identifying and controlling for potential confounders can even resolve seemingly intractable questions. 45 RCTs are considered far superior to other study designs in controlling for known or unknown confounders, so they are often considered the highest form of evidence for a single intervention study. 46

Study Population

When considering a research study as evidence for making an EBP decision, it is crucial to evaluate whether the study population closely and credibly resembles your own user population. Each study design has specific features that can help answer this question. There are some general questions to consider that will sharpen one’s critical appraisal skills.

One thing to consider is whether the study sample accurately represents the population from which it was drawn. Is the sample large enough to represent the larger population? Were there issues with how the researchers publicized the study or recruited participants? 47 Selection Bias, mentioned earlier in this chapter, might contribute to the misrepresentation of a population if the researchers improperly included or excluded potential participants. It is also important to evaluate the response rate: was it so low that it could introduce nonresponse bias? Furthermore, consider whether the specific incentives the researchers offered for enrollment attracted nonrepresentative participants.
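
One common, rough check on whether a sample is large enough is the margin-of-error calculation for a proportion. The sketch below, using hypothetical values, shows the standard formula; sample size is only one of the considerations named above.

```python
# Margin of error for an estimated proportion at a 95% confidence level.
# The sample size and proportion here are hypothetical placeholders.
import math

n = 100          # hypothetical sample size
p = 0.5          # most conservative assumed proportion
z = 1.96         # z-score corresponding to 95% confidence

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error for n={n}: ±{margin_of_error:.1%}")  # about ±9.8%
```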

HIPs should carefully analyze how research study populations align with their own user populations. In other words, what are the essential relevant traits that a research population might share or not share with a user population?

Validity

Validity refers to the use of an appropriate study design with measurements suitable for studying the subject. 48 , 49 It also applies to the appropriateness of the conclusions drawn from the research results. 50 Researchers devote considerable energy to examining the validity of their own studies as well as those conducted by others, so a full treatment of validity resides outside the scope of this guide for consumers of research evidence.

Two brief examples might convey the concept of validity. In the first example, instructors conducting a training program on a new electronic health record system might claim success based on the number of providers they train. A more valid study, however, would include clear learning objectives that lead to demonstrable skills, which can be assessed after the training. Researchers could further extend the validity by querying trainees about their satisfaction or by evaluating trainees’ skills two weeks later to gauge retention of the training. As a second example, a research study on a new platform might increase its validity by not merely reporting the number of visits to the platform. Instead, the study could gauge the level of user engagement through factors such as downloads, time spent on the platform, or the diversity of users.

  • 4.3 Study Designs

Study designs, often referred to as “research methods” in journal articles or presentations, serve as the means to test researchers’ hypotheses. Study designs can be compared to tools such as hammers, screwdrivers, and saws, or to utensils used in a kitchen. Both analogies emphasize the importance of using the appropriate tool or utensil for the task at hand. One would not use a hammer when a saw would be a better choice, nor would one use a spatula to ladle a cup of soup from a pot. Similarly, researchers need to take care to use the study designs best suited to their research questions. Likewise, HIPs engaged in EBP should recognize the suitability of study designs for answering their EBP questions. The following section reviews the most common study designs 51 employed to answer EBP questions, which will be discussed further in the subsequent chapter on decision making.

Types of EBP Questions

There are three major types of EBP questions that repeatedly emerge from participants in continuing education courses: Exploration, Prediction, and Intervention. Coincidentally, these major types of questions also appear in research agendas for our profession. 52 For each type of question, certain study designs are better suited than others to addressing the aforementioned issues of validity and to controlling biases.

Exploration Questions

Exploration questions are frequently concerned with understanding the reasons behind certain phenomena and often begin with “Why.” For example, a central exploration question could be, “Why do health care providers seek health information?” Paradoxically, exploration “Why” research studies often do not ask participants direct “why” questions because that approach tends to produce unproductive participant responses. 53 Other exploration questions might include:

  • What are the specific Point-of-Care information needs of our front-line providers?
  • Why do some potential users choose never to use the journals, books, and other electronic resources that we provide?
  • Do our providers find the alerts that automatically populate their electronic health records useful?

Prediction Questions

Prediction questions aim to forecast future needs based on past patterns, and HIPs frequently pose such inquiries. These questions attempt to draw a causal connection between events, conditions, or characteristics in the present with outcomes in the future. Examples of prediction questions might include:

  • To what extent do students retain their EBP question formulation and searching skills after two years?
  • Do hospitals that employ HIPs produce better patient outcomes, as indicated by measures such as length of stay, mortality rates, or infection rates?
  • Which archived committee meeting minutes within my organization are likely to be utilized within the next 30 years?

Intervention Questions

Intervention questions aim to distinguish between different potential courses of action to determine their effectiveness in achieving specific desirable outcomes. Examples of intervention questions might include:

  • Does providing training on EBP question formulation and searching lead to an increase in information-seeking behavior among public health providers in rural practices?
  • Which instructional approach yields better performance among medical students on standardized national licensure exams: didactic lecture or active learning with application exercises?
  • Which Point-of-Care tool, DynaMed or UpToDate, generates more answers to patient care questions and higher provider satisfaction?

Case Reports

Case reports ( Table 1 ) are prevalent in the HIP literature and are often referred to interchangeably as “Case Studies.” 54 They are records of a single program, project, or experience, 55 with a particular focus on new developments or programs. These reports provide rich details on a single instance in a narrative format that is easy to understand. Contrary to the misconception that they are easy to assemble, case reports done correctly can be far more challenging to develop than expected. 56 The most popular case reports revolve around innovation, broadly defined as “an idea, practice, or object” perceived to be new. 57 A HIP innovation might be a new information technology, management initiative, or outreach program. Due to the newness of an innovation, a case report might be the only available evidence, and in some rare instances, the only usable evidence. For this reason, case reports hold the potential to point to new directions or emerging trends in the profession.

While case reports can serve an educational purpose and are generally interesting to read, they are not without controversy. Some researchers do not consider case reports as legitimate sources of research evidence, 58 and practitioners often approach them with skepticism. There are many opportunities for authors to unintentionally introduce biases into case reports. Skeptical practitioners criticize the unrealistic and overly positive accounts of innovations found in some case reports.

Case reports focus solely on a single instance of an innovation or noteworthy experience. 59 As a pragmatic matter, it can be difficult to justify adopting a program based on a case report carried out in one specific and different context. To illustrate the statistical challenges of using case reports, consider a hypothetical scenario: HIPs at 100 institutions might attempt to implement a highly publicized new information technology. HIPs at 96 of those institutions experience frustration at the poor performance of the new technology, and many eventually abandon it. Meanwhile, HIPs at four institutions present highly positive case reports on their experiences with the new technology at an annual conference, and these reports appear in the literature six months later. While these four case reports do not even approach the minimum standard for statistical significance, they become the only evidence base available for the new technology, gaining prominence solely through “Survivor Bias”: they alone persisted when most efforts to implement the new technology had failed. 60 , 61
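
The arithmetic of this hypothetical scenario is easy to verify. A minimal sketch, using only the figures from the scenario above:

```python
# Survivor Bias in the hypothetical 100-institution scenario above.
attempts = 100          # institutions that tried the new technology
successes = 4           # institutions with positive experiences
published = successes   # only the positive experiences reach the literature

true_success_rate = successes / attempts        # 4 of 100 -> 4%
apparent_success_rate = published / published   # 4 of 4 published -> 100%

print(f"True success rate:                  {true_success_rate:.0%}")
print(f"Success rate in the published base: {apparent_success_rate:.0%}")
```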

Defenders of case reports argue that some of these issues can be mitigated when the reports include more rigorous elements. 62 , 63 For instance, a featured program in a case report might detail how carefully the authors evaluated the program using multiple meaningful measurements. 64 Another case report might document a well-conducted survey that provides plausible results in order to gauge peoples’ opinions. Practitioners tend to view case reports more favorably when they provide negative aspects of the program or innovation, framing them as “lessons learned.” Multiple authors representing different perspectives or institutions 65 seem to garner greater credibility. Full transparency, where authors make foundational documents and data available to readers, further bolsters potential applicability. Finally, a thorough literature review of other research studies, providing context and perhaps even external support for the case report’s results, can further increase credibility. 66 All of these elements increase the likelihood that the featured case report experience could potentially be transferred to another institution.

Case reports gain greater credibility, statistical probability, and transferability when combined with other case reports on the same innovation or similar experience, forming a related, although separate, study design known as a case series. Similar to case reports, case series gain greater credibility when their observations across cases are accompanied by literature reviews. 67

Interviews

Interviews ( Table 2 ) are another common HIP research design. Interviews aim to understand the thoughts, preferences, or feelings of others. They take different forms, including in-person or remote settings and structured or unstructured question formats. They can involve one interviewer and one interviewee, or a small team of interviewers who interview a group of participants, often referred to as a focus group. 68 , 69 , 70 While interviews technically fall under the category of surveys, they are discussed separately here due to their popularity in the HIP literature and the unique role of the interviewer in mediating and responding to interviewees.

Interviews can be highly exploratory, allowing researchers to discover unrecognized patterns or sentiments regardless of format. They might uniquely be able to answer “why?” research questions or probe interviewees’ motivations. 71 Interviews can sometimes lead to associations that can be further tested using other study designs. For instance, a set of interviews with non-hospital-affiliated practitioners about their information needs 72 can lead to an RCT comparing preferences for two different Point-of-Care tools. 73

Interviews have the potential to introduce various forms of bias. To minimize bias, researchers can employ strategies such as recruiting a representative sample of participants, using a neutral party to conduct the interviews, following a standardized protocol to ensure participants are interviewed equitably, and avoiding leading questions. The flexibility for interviewers to mediate and respond offers the strength to discover new insights. At the same time, this flexibility carries the risk of introducing bias if interviewers inject their own agendas into the interaction. Considering these potential biases, interviews are ranked below descriptive surveys in the Levels of Evidence discussed later in this chapter. To address concerns about bias, researchers should thoroughly document and analyze all de-identified interview data in a transparent manner, allowing practitioners reviewing these studies to detect and account for any potential biases.

Descriptive Surveys

Surveys are an integral part of our society. Article 1, Section 2 of the United States Constitution requires a census every ten years, in which everyone living in the United States reports demographic and other information. Governments have been taking censuses since ancient times in Babylon, Egypt, China, and India. 74 , 75 While censuses might offer highly accurate portrayals, they are time-consuming, complex, and expensive endeavors.

Most descriptive surveys involve polling a sample of the population, making sample surveys less time-consuming and less expensive than censuses. Surveys involving samples, however, are still complex. Surveys can be defined as a method for collecting information about a population of people. 76 They aim “to describe, compare, or explain individual and societal knowledge, feelings, values, preferences, and behaviors.” 77 Surveys are completed by respondents without a live intermediary administering them. For participants, surveys are almost always confidential and are often anonymous. There are three basic types of surveys: descriptive, change metric, and consensus.

Descriptive surveys ( Table 3 ) elicit respondents’ thoughts, emotions, or experiences regarding a subject or situation. Cross-sectional studies are one type of descriptive survey. 78 , 79 Change metric surveys are part of a cohort or experimental study in which at least one survey takes place prior to an exposure or intervention. Later in the study, the same participants are surveyed again to assess whether, and to what extent, change occurred. Sometimes change metric surveys gauge the differences between user expectations and actual user experiences, an approach known as Gap Analysis. Change metric surveys resemble descriptive surveys and are discussed with the relevant study designs later in this chapter. The aim of consensus surveys is to facilitate agreement among groups regarding collective preferences or goals, even in situations where initial consensus might seem elusive. Consensus survey techniques for decision making in EBP will be discussed in the next chapter.

Descriptive surveys are likely the research study design most utilized by HIPs. 80 Paradoxically, the familiarity of surveys to HIP researchers and to society at large is one of their greatest weaknesses: because surveys seem familiar, their numerous pitfalls and limitations are easy to underestimate. 81 , 82 , 83 Even when large-scale public opinion surveys are conducted by experts, discrepancies often exist between survey results and actual population behavior, as evidenced by repeated erroneous election predictions by veteran pollsters. 84

Beyond the inherent limitations of survey designs, there are multiple points where researchers can unintentionally introduce bias or succumb to other pitfalls. Problems can arise at the outset when researchers design a survey without conducting an adequate literature review of previous research on the subject. The survey instrument itself might contain confusing or misleading questions, including leading questions that elicit a “correct” answer rather than a truthful response. 85 , 86 For example, a question about weekly alcohol consumption might face validity issues due to social stigma. The recruitment process and the characteristics of participants can also introduce Selection Bias. 87 The way the survey is introduced, or the medium through which participants interact with it, might underrepresent some demographic groups based on age, gender, class, or ethnicity. It is also important to consider the representativeness of the sample in relation to the target population: is the sample large enough? 88 Misinterpreting survey results, particularly answers to open-ended questions, can also distort a study’s findings. Regardless of how straightforward surveys might appear to participants or to the casual observer, they are oftentimes complex endeavors. 89 It is no wonder that the classic Survey Kit consists of 10 volumes to explain all the details that need attention for a more successful survey. 90

Cohort Studies

Cohort studies ( Table 4 ) are one of several observational study designs that focus on observing possible causes (referred to as “exposures”) and their potential outcomes within a specific population. In cohort studies, investigators collect observations, usually in the form of data, without directly interjecting themselves into the situation. Cohort members are identified without the need for the explicit enrollment typically required in other designs. Figure 1 depicts the elements of a defined population, the exposure of interest, and the hypothesized outcome(s) in cohort studies. Cohort studies are fairly popular HIP research designs, although they are rarely labeled as such in the research literature. Cohort studies can be either prospective or retrospective. Retrospective cohort studies are conducted after the exposure has already occurred to some members of a cohort. These studies focus on examining past exposures and their impact on outcomes. 91

Figure 1. Cohort Study Design. Copyright Jonathan Eldredge. © 2023.

Many HIPs conducting studies on resource usage employ the retrospective cohort design. These studies link resource usage patterns to past exposures that might explain the observed patterns. That exposure might be a feature in the curriculum that requires learners to use the resource, or it could be an institutional expectation for employees to complete an online training module, which affects the volume of traffic on the module. On the other hand, prospective cohort studies begin in the present by identifying a cohort within the larger population and then observing the exposure of interest and whether it leads to an identified outcome. The researchers collect specific data as the cohort study progresses, including whether cohort members have been exposed and, if so, the duration or intensity of the exposure. These varied levels of exposure might be likened to different drug dosages.

Prospective cohort studies are generally regarded as less prone to bias or confounding than retrospective studies because researchers intentionally collect all measures throughout the study period. In contrast, retrospective studies depend on data collected in the past for other purposes, and those pre-existing data sets might be missing elements necessary for the retrospective study. For example, usage or traffic data on an online resource might have been originally collected by an institution to monitor the maintenance, increase, or decrease in the number of licensed simultaneous users; these data were gathered for administrative purposes rather than for the primary research objectives of the study. Similarly, in a retrospective cohort study investigating the impact of providing tablets (the exposure) to overcome barriers in using a portal (the outcome), an inventory system initially created to track the distribution of tablets might have been repurposed for a different objective, such as video conferencing. 92

Cohort studies regularly use change metric surveys, as discussed above. Prospective cohort studies are better at monitoring cohort members over the study duration, while retrospective cohort studies do not always have clearly identified cohort membership due to participant attrition that went unrecorded in the outcomes. For this reason, plus the potentially higher integrity of intentionally collected data, prospective cohort studies tend to be considered a higher form of evidence. Still, the increased use of electronic inventories and the collection of greater amounts of data sometimes mean that a data set created for one purpose can be repurposed for a retrospective cohort study.

Quasi-Experiment

In contrast to observational studies like cohort studies, in which researchers simply observe exposures and then measure outcomes, quasi-experiments ( Table 5 ) involve the active participation of researchers in an intervention, an intentional exposure. In quasi-experiments, researchers deliberately intervene and engage all members of a group of participants. 93 These interventions can take the form of training programs or work requirements that involve participants interacting with a new electronic resource, for example. Quasi-experiments are often employed by instructors who pre-test a group of learners, provide them with training on a specific skill or subject, and then conduct a post-test to measure the learners’ improved level of comprehension. 94 In this scenario, there is usually no explicit comparison with another untrained group of learners. The researchers’ active involvement tends to reduce some forms of bias and other pitfalls. Confounding nevertheless remains one looming potential weakness in quasi-experiments, since a third unknown factor, a confounder, might be associated with both the training and the outcome yet go unrecognized by the researchers. 95 Quasi-experiments also do not use randomization, the mechanism that can eliminate most confounders.

Table 5. Quasi-Experiments

Randomized Controlled Trials (RCTs)

RCTs ( Table 6 ) are highly effective in resolving a choice between two seemingly reasonable courses of action. RCTs have helped answer some seemingly unresolvable HIP decisions in the past, including:

Table 6. Randomized Controlled Trials (RCTs)

  • Do embedded clinical librarians OR the availability of a reference service improve physician information-seeking behavior? 96 , 97
  • Does weeding OR not weeding a collection lead to higher usage in a physical book collection? 98
  • Does training in Evidence Based Public Health skills OR the lack of this training lead to an increase in public health practitioners’ formulated questions? 99

Paradoxically, RCTs are relatively uncommon in the HIP evidence base despite their powerful potential to resolve challenging decisions. 100 One explanation might be that, in the minds of the public and many health professionals, RCTs are often associated with pharmaceutical treatments. Researchers, however, have used RCTs far more broadly to resolve questions about devices, lifestyle modifications, or counseling in the realm of health care. Some HIP researchers believe that RCTs are too complex to implement, and some even consider RCTs to be unethical. These misapprehensions should be resolved by a closer reading about RCTs in this section and in the referenced sources.

Some HIPs, in their roles as consumers of research evidence, consider RCTs too difficult to interpret. This misconception might stem from the HIPs’ past involvement in systematic review teams that have evaluated pharmaceutical RCTs. Typically, these teams use risk-of-bias tools, which might appear overly complex to many HIPs. The most commonly used risk-of-bias tool 101 for critically appraising pharmaceutical RCTs appears to be published by Cochrane, particularly its checklist for identifying sources of bias. 102 Those HIPs involved with evaluating pharmaceutical RCTs should familiarize themselves with this resource. This section of Chapter 4 focuses instead on aspects of RCTs that HIPs might encounter in the HIP evidence base.

A few basic concepts and explanations of RCT protocols should alleviate any reluctance to use RCTs in the EBP Process. The first concept, equipoise, means that researchers undertook the RCT because they were genuinely uncertain about which of the two courses of action would lead to the more desired outcomes. Equipoise has practical and ethical dimensions. If prior studies had demonstrated a definitively superior course of action between the two choices, researchers would not invest their time and effort in pursuing an already answered question unless they needed to replicate the study. From an ethical perspective, why would researchers subject control group participants to a clearly inferior choice? 103 , 104 The background or introduction section of an article about an RCT should establish equipoise by drawing on evidence from past studies.

Figure 2 illustrates that an RCT begins with a representative sample of the larger population. Consumers of RCTs should bear in mind that the number of participants in a study depends on more than a “magic number”; it also depends on the availability of eligible participants and on the size of the differences that must be detected at statistical significance. 105 , 106 , 107 , 108 Editors, peer reviewers, and statistical consultants at peer-reviewed journals play a key role in screening manuscripts for major statistical problems.
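
For readers curious how researchers arrive at a participant count, the sketch below uses the statsmodels Python library to solve a conventional power calculation. The effect size, alpha, and power values are illustrative defaults, not drawn from any study discussed here.

```python
# Estimating the sample size needed per group for a two-arm trial,
# using a standard power analysis for an independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # a "medium" standardized difference between groups
    alpha=0.05,       # acceptable probability of a false-positive finding
    power=0.8,        # desired probability of detecting a true effect
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```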

Figure 2. Randomized Controlled Trial.

Recruiting a representative sample can be a challenge for researchers due to the various communication channels used by different people. Consumers of RCTs should be aware of these issues since they affect the applicability of any RCT to one’s local setting. Several studies have documented age, gender, socioeconomic, and ethnic underrepresentation in RCTs. 109 , 110 , 111 , 112 , 113 One approach to addressing this issue is to tailor the incentives for participation to what appeals to different underrepresented groups. 114 Close collaboration with underrepresented groups through community outreach also can help increase participation. Many RCTs include a table that records the demographic representation of participants in the study, along with the demographic composition of those who dropped out. HIPs evaluating an RCT can scrutinize this table to assess how closely the study’s population aligns with their own user population. RCTs oftentimes screen out potential participants who are unlikely to adhere to the study protocol or who are likely to drop out. Participants who will be unavailable during key study dates might also be removed. HIP researchers might want to exclude potential participants who have prior knowledge of a platform under study or who might be repeating an academic course where they were previously exposed to the same content. These preliminary screening measures cannot anticipate all eventualities, which is why some articles include a CONSORT diagram to provide a comprehensive overview of participant flow through the study. 115

RCTs often control for most biases and confounding through randomization . Imagine you’re in the tenth car in a right-hand lane approaching a traffic signal at an intersection, and no one ahead of you uses their turn signal. You want to take a right turn immediately after the upcoming intersection. In this situation, you don’t know which cars will actually turn right and which ones will proceed straight. If you want to stay in the right-hand lane without turning right, you can’t predict who will slow you down by taking a right or who will continue at a faster pace straight ahead to your own turnoff. This scenario is similar to randomization in RCTs because, just like in the traffic situation, you don’t know in advance which participants will be assigned to each course of action. Randomization ensures that each participant has an equal chance of being assigned to either group regardless of the allocation of others before them, effectively eliminating the influence of bias, confounding, or any other unknown factors that could impact the study’s outcomes.
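
As a minimal sketch of the mechanics, simple randomization for a two-arm trial can be as plain as shuffling a participant list. The participant labels below are hypothetical placeholders; real trials typically rely on dedicated allocation software plus concealment procedures.

```python
# Simple randomization for a two-arm trial: shuffle, then split.
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(participants)            # random order, like the unpredictable traffic
midpoint = len(participants) // 2
intervention = participants[:midpoint]  # receives the new experience
control = participants[midpoint:]       # receives the usual experience

print("Intervention group:", intervention)
print("Control group:     ", control)
```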

Contamination poses a threat to the effectiveness of the randomization. It occurs when members of the intervention or control groups interact and collaborate, thereby inadvertently altering the intended effects of the study. RCTs normally have an intervention group that receives a new experience, which will possibly lead to more desired outcomes. The intervention can involve accessing new technology, piloting a new teaching style, receiving specialized training content, or other deliberate actions by the researchers. On the other hand, control group members typically receive the established technology, experience the usual teaching techniques, receive standard training content, or have the usual set of experiences.

Contamination might arise when members of the intervention and control groups exchange information about their respective experiences, interfering with the researchers’ deliberate efforts to treat the two groups differently. For example, in an academic setting, contamination might happen if the intervention group receives new training while the control group receives traditional training, and members of the two groups share what they have learned. When their knowledge or skills are tested at the end of the study, the assessment might not accurately reflect their comparative progress, since both groups have been exposed to each other’s training. A Delphi Study generated a list of common sources of contamination in RCTs, including participants’ physical proximity, frequent interaction, and the high desirability of the intervention. 116 Information technology can assist researchers in avoiding, or at least countering, contamination by allowing them to interact with study participants virtually rather than in a shared physical environment. Electronic health records can similarly be employed in studies while minimizing contamination. 117

RCTs often employ concealment (“blinding”) techniques to ensure that participants, researchers, statisticians, and other analysts are unaware of which participants are enrolled in either the intervention or control groups. Concealment prevents participants from deviating from the protocols. Concealment also reduces the likelihood that the researchers, statisticians, or analysts interject their views into the study protocol, leading to unintended effects or biases. 118

Systematic Reviews

Systematic reviews ( Table 7 ) strive to offer a transparent and replicable synthesis of the best evidence to answer a narrowly focused question. They often involve exhaustive searches of the peer-reviewed literature. While many HIPs have participated in teams conducting systematic reviews, these efforts primarily serve health professions outside of HIP subject areas. 119 Systematic reviews can be time-consuming and labor-intensive endeavors. They rely on a number of the same critical appraisal skills covered in this chapter and its appendices to evaluate multiple studies.

Systematic reviews can include evidence produced by any study design except other reviews. Producers of systematic reviews often exclude study designs more prone to biases or confounding when a sufficient number of studies with fewer similar limitations are available. Systematic reviews are popular despite being relatively limited in number. If well-conducted, they can bypass the first three steps of the EBP Process and position the practitioner well to make an informed decision. The narrow scope of systematic reviews, however, does limit their applicability to a broad range of decisions.

Nearly all HIPs have used the findings of systematic reviews for their own EBP questions. 120 Since much of the HIP evidence base exists outside of the peer-reviewed literature, systematic reviews on HIP subjects can include grey literature, such as papers or posters presented at conferences or white papers from organizations. The MEDLINE database has a filter for selecting systematic reviews as an article type when searching the peer-reviewed literature. Unfortunately, this filter sometimes mistakenly includes meta-analyses and narrative review articles, likely due to confusion among indexers regarding the differences between these article types. It is important to note that meta-analysis is not even a study design; it is a statistical method used to aggregate data sets from more than one study. Meta-analyses can accompany comparative studies or systematic reviews, but some people equate them solely with systematic reviews.

Narrative reviews, on the other hand, are an article type that provides a broad overview of a topic and often lacks the more rigorous features of a systematic review. Scoping reviews have increased in popularity in recent years but have a descriptive purpose that contrasts with systematic reviews. Sutton et al 121 have published an impressive inventory of the many types of reviews that might be confused with systematic reviews. The authors of systematic reviews themselves might contribute to the confusion by mislabeling these studies. The Library, Information Science, and Technology Abstracts database does not offer a filter for systematic reviews, so a keyword approach should be used when searching, followed by a manual screening of the resulting references.

Systematic reviews offer the potential to avoid many of the biases and pitfalls described at the beginning of this chapter. In actuality, they can fall short of this potential in ways ranging from minor to monumental. The question being addressed needs to be narrowly focused to make the subsequent process manageable, which might disqualify some systematic reviews from application to HIPs’ actual EBP questions. The literature search might not be comprehensive, whether due to limited sources searched or inadequately executed searches, leading to the possibility of missing important evidence. The searches might not be documented well enough to be reproduced by other researchers. The inclusion and exclusion criteria for identified studies might not align with the needs of HIPs. The critical appraisal in some systematic reviews might exclude reviewed studies for trivial deficiencies or include studies with major flaws. The recommendations of some systematic reviews, therefore, might not be supported by the identified best available evidence.

Levels of Evidence

The Levels of Evidence, also known as “Hierarchies of Evidence,” are valuable sources of guidance in EBP. They serve as a reminder to busy practitioners that study designs at lower levels have difficulty avoiding, controlling, or compensating for the many forms of bias or confounding that can affect research studies. Study designs at higher levels tend to be better at controlling biases; higher-level designs like RCTs, for example, can effectively control confounding. Table 8 organizes study designs according to EBP question type and arrays them into their approximate levels. In Table 8 , the “Intervention Question” column recognizes that a properly conducted systematic review incorporating multiple studies is generally more desirable for making an EBP decision than a case report, because the latter relies on findings from a single instance and is vulnerable to many forms of bias and confounding. A systematic review is ranked even higher than an RCT because it combines all available evidence from multiple studies and subjects that evidence to critical appraisal, leading to a recommendation for making a decision.

Table 8. Levels of Evidence: An Approximate Hierarchy Linked to Question Type
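
As a rough illustration only, the ranking for Intervention questions described above can be encoded as an ordered list. The middle levels below are assumptions pieced together from this chapter's discussion, not a reproduction of Table 8.

```python
# An assumed, approximate hierarchy for Intervention questions,
# from highest (index 0) to lowest level of evidence.
INTERVENTION_LEVELS = [
    "systematic review",        # synthesizes and critically appraises all studies
    "randomized controlled trial",
    "quasi-experiment",
    "prospective cohort study",
    "retrospective cohort study",
    "case series",
    "case report",              # single instance; most vulnerable to bias
]

def higher_evidence(design_a: str, design_b: str) -> str:
    """Return whichever design sits higher in this approximate hierarchy."""
    rank = {design: i for i, design in enumerate(INTERVENTION_LEVELS)}
    return design_a if rank[design_a] < rank[design_b] else design_b

print(higher_evidence("case report", "randomized controlled trial"))
# -> randomized controlled trial
```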

There are several important caveats to consider when using the Levels of Evidence. As noted earlier in this chapter, no perfect research study exists, and even studies at higher levels of evidence can have weaknesses. Hypothetically, an RCT could be so poorly executed that a well-conducted case report on the same topic could outshine it. While this is possible, it is highly unlikely given the superiority of the RCT design for controlling confounding and biases. Sometimes a case report might be slightly more relevant than an RCT in answering an Intervention-type EBP question. For these reasons, one cannot abandon one’s critical thinking skills, even while accepting the Levels of Evidence.

The Levels of Evidence have been widely endorsed by HIP leaders for many years. 122 , 123 They undergo occasional adjustments, but their general organizing principles of controlling biases and other pitfalls remain intact. Admittedly, two of my otherwise respected colleagues have misinterpreted aspects of the early Levels of Evidence and made that the basis of their criticism. 124 , 125 A fair reading of the evolution of the Levels of Evidence over the years 126 , 127 , 128 , 129 should convince most readers that, when coupled with critical thinking, the underlying principles of the Levels of Evidence continue to provide HIPs with sound guidance.

Critical Appraisal Sheets

The critical appraisal sheets appended to this chapter are intended to serve as a guide for HIPs as they engage in critical appraisal of their potential evidence. The development of these sheets represents the culmination of over 20 years of effort. They draw upon my doctoral training in research methods, my extensive experience conducting research using various study designs, and insights from multiple authorities. While it is impossible to credit all sources that have influenced the development of these sheets over the years, I have cited the readily recognized ones at the end of this sentence. 130 , 131 , 132 , 133 , 134 , 135 , 136 , 137 , 138 , 139 , 140 , 141

  • Critical Appraisal Worksheets

Appendix 1: Case Reports

Instructions: Answer the following questions to critically appraise this piece of evidence.

Appendix 2: Interviews

Appendix 3: Descriptive Surveys

Instructions: Answer the following questions to critically appraise this piece of evidence.

Appendix 4: Cohort Studies

Appendix 5: Quasi-Experiments

Appendix 6: Randomized Controlled Trials

Appendix 7: Systematic Reviews

This is an open access publication. Except where otherwise noted, this work is distributed under the terms of a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0 DEED), a copy of which is available at https://creativecommons.org/licenses/by-nc-sa/4.0/ .

