
5 Strengths and 5 Limitations of Qualitative Research

Lauren Christiansen

Insight into qualitative research.

Anyone who reviews a bunch of numbers knows how impersonal that feels. What do numbers really reveal about a person's beliefs, motives, and thoughts? While it's critical to collect statistical information to identify business trends and inefficiencies, stats don't always tell the full story. Why does the customer like this product more than the other one? What motivates them to post this particular hashtag on social media? How do employees actually feel about the new supply chain process? To answer more personal questions that delve into the human experience, businesses often employ a qualitative research process.

10 Key Strengths and Limitations of Qualitative Research

Qualitative research helps entrepreneurs and established companies understand the many factors that drive consumer behavior. Because most organizations collect and analyze quantitative data, they don't always know exactly how a target market feels and what it wants. Observing a small sample of consumers in a comfortable environment, asking questions, and letting them speak gives researchers that missing perspective. Research methodology varies by industry and type of business need, and many companies employ mixed methods to extract the insights they require to improve decision-making. While both quantitative and qualitative methods are effective, both have limitations: quantitative research is expensive, time-consuming, and offers a limited understanding of consumer needs, while qualitative research generates less verifiable information because qualitative data is based on experience. Businesses should use a combination of both methods to overcome the limitations of each.

Strengths of Qualitative Research


  • Captures New Beliefs - Qualitative research methods capture evolving beliefs within a market. This may include who buys a product or service, or how employees feel about their employers.
  • Fewer Limitations - Qualitative studies are less stringent than quantitative ones. Out-of-the-box answers to questions, opinions, and beliefs are included in data collection and data analysis.
  • More Versatile - Qualitative research gives researchers more room to maneuver. They can adjust questions, adapt to changing circumstances, or change the environment to optimize results.
  • Greater Speculation - Researchers can speculate more about which answers to drill down into and how to approach them. They can use instinct and subjective experience to identify and extract good data.
  • More Targeted - This research process can target any area of the business or any concern it may have. Researchers can concentrate on specific target markets to collect valuable information, which takes less time and requires fewer resources than quantitative studies.

Limitations of Qualitative Research


  • Sample Sizes - Businesses need to find a big enough group of participants to ensure results are accurate. A sample size of 15 people is not enough to show a reliable picture of how consumers view a product. If it is not possible to find a large enough sample size, the data collected may be insufficient.
  • Bias - For internal qualitative studies, employees may be biased. For example, workers may give a popular answer that colleagues agree with rather than a true opinion. This can negatively influence the outcome of the study.
  • Self-Selection Bias - Businesses that call on volunteers to answer questions worry that the people who respond are not reflective of the greater group. It is better if the company selects individuals at random for research studies, particularly if they are employees. However, this changes the process from qualitative to quantitative methods.
  • Artificial - It isn't typical to observe consumers in stores, gather a focus group together, or ask employees about their experiences at work. This artificiality may impact the findings, as it is outside the norm of regular behavior and interactions.
  • Question Quality - It's hard to know whether a researcher's questions are good ones, because question design is subjective. Researchers need to ask how and why individuals feel the way they do to receive the most accurate answers.

Key Takeaways on Strengths and Limitations of Qualitative Research

  • Qualitative research helps entrepreneurs and small businesses understand what drives human behavior. It is also used to see how employees feel about workflows and tasks.
  • Companies can extract insights from qualitative research to optimize decision-making and improve products or services.
  • Qualitative research captures new beliefs, has fewer limitations, is more versatile, and is more targeted. It also allows researchers to speculate and insert themselves more into the research study.
  • Qualitative research has many limitations which include possible small sample sizes, potential bias in answers, self-selection bias, and potentially poor questions from researchers. It also can be artificial because it isn't typical to observe participants in focus groups, ask them questions at work, or invite them to partake in this type of research method.


What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari. Revised on June 22, 2023.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research, which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.

Examples of qualitative research questions include:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?


Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography, action research, phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Qualitative research approaches

Approach | What does it involve?
Grounded theory | Researchers collect rich data on a topic of interest and develop theories.
Ethnography | Researchers immerse themselves in groups or organizations to understand their cultures.
Action research | Researchers and participants collaboratively link theory to practice to drive social change.
Phenomenological research | Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences.
Narrative research | Researchers examine how stories are told to understand how participants perceive and make sense of their experiences.

Note that qualitative research is at risk of certain research biases, including the Hawthorne effect, observer bias, recall bias, and social desirability bias. While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Each of the research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews:  personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys: distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.
For example, in a study of a company's culture, you might combine several methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.
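The mechanics of steps 3-5 can be sketched in a few lines of Python. Everything here is an invented illustration (the codes, keywords, and excerpts are all hypothetical), and in practice codes are usually assigned by a human reader in dedicated QDA software or a spreadsheet rather than by keyword matching, but the roll-up from coded data to candidate themes looks roughly like this:

```python
# A minimal sketch of steps 3-5: apply a (hypothetical) code system to
# interview excerpts, then tally code frequencies to surface themes.
from collections import Counter

# Step 3: a coding system -- codes and trigger keywords are invented examples
CODES = {
    "workload": ["busy", "overtime", "deadline"],
    "recognition": ["praise", "thanked", "ignored"],
    "flexibility": ["remote", "schedule", "hours"],
}

# Step 1 output: prepared data, e.g. transcribed interview excerpts (invented)
excerpts = [
    "I work overtime most weeks and the deadlines never stop.",
    "My manager thanked me publicly, which meant a lot.",
    "Being able to set my own hours keeps me here.",
]

def assign_codes(text):
    """Step 4: tag an excerpt with every code whose keywords appear in it."""
    lowered = text.lower()
    return [code for code, words in CODES.items()
            if any(w in lowered for w in words)]

coded = {excerpt: assign_codes(excerpt) for excerpt in excerpts}

# Step 5: count how often each code occurs as a starting point for
# linking codes into overarching themes.
theme_counts = Counter(code for codes in coded.values() for code in codes)
print(theme_counts.most_common())
```

The counts only flag candidates; deciding which codes cohere into a theme remains an interpretive step.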

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Qualitative data analysis

Approach | When to use | Example
Content analysis | To describe and categorize common words, phrases, and ideas in qualitative data. | A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps.
Thematic analysis | To identify and interpret patterns and themes in qualitative data. | A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity.
Textual analysis | To examine the content, structure, and design of texts. | A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade.
Discourse analysis | To study communication and how language is used to achieve effects in specific contexts. | A political scientist could use discourse analysis to study how politicians generate trust in election campaigns.
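As a rough illustration of the content-analysis approach, word and phrase frequencies are often the first pass over a corpus. The sketch below uses invented data (the therapeutic-app descriptions and the stop-word list are made up for the example):

```python
# A toy content-analysis pass: count frequent words in (invented)
# therapeutic-app descriptions, after dropping common stop words.
import re
from collections import Counter

descriptions = [
    "Calm your mind with guided breathing and gentle daily reminders.",
    "Track your mood daily and build a gentle self-care routine.",
    "Guided meditations to calm anxiety and improve sleep.",
]

# A tiny illustrative stop-word list; real analyses use much larger ones
STOP_WORDS = {"your", "with", "and", "a", "to"}

def word_frequencies(texts):
    """Lowercase, tokenize on letter runs, drop stop words, and count."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in STOP_WORDS]
    return Counter(words)

freq = word_frequencies(descriptions)
print(freq.most_common(5))
```

A real content analysis would go further (stemming, phrase extraction, categorizing terms), but simple counts like these show how the language of a corpus starts to become describable.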

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.

Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated. The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population.

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research:

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis, but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.


Bhandari, P. (2023, June 22). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved June 7, 2024, from https://www.scribbr.com/methodology/qualitative-research/



Issues of validity and reliability in qualitative research

Volume 18, Issue 2

  • Helen Noble 1 ,
  • Joanna Smith 2
  • 1 School of Nursing and Midwifery, Queen's University Belfast, Belfast, UK
  • 2 School of Human and Health Sciences, University of Huddersfield, Huddersfield, UK
  • Correspondence to Dr Helen Noble, School of Nursing and Midwifery, Queen's University Belfast, Medical Biology Centre, 97 Lisburn Rd, Belfast BT9 7BL, UK; helen.noble{at}qub.ac.uk

https://doi.org/10.1136/eb-2015-102054


Evaluating the quality of research is essential if findings are to be utilised in practice and incorporated into care delivery. In a previous article we explored ‘bias’ across research designs and outlined strategies to minimise bias. 1 The aim of this article is to further outline rigour, or the integrity in which a study is conducted, and ensure the credibility of findings in relation to qualitative research. Concepts such as reliability, validity and generalisability typically associated with quantitative research and alternative terminology will be compared in relation to their application to qualitative research. In addition, some of the strategies adopted by qualitative researchers to enhance the credibility of their research are outlined.

Are the terms reliability and validity relevant to ensuring credibility in qualitative research?

Although the tests and measures used to establish the validity and reliability of quantitative research cannot be applied to qualitative research, there are ongoing debates about whether terms such as validity, reliability and generalisability are appropriate to evaluate qualitative research. 2–4 In the broadest context these terms are applicable, with validity referring to the integrity and application of the methods undertaken and the precision with which the findings accurately reflect the data, while reliability describes consistency within the employed analytical procedures. 4 However, if qualitative methods are inherently different from quantitative methods in terms of philosophical positions and purpose, then alternative frameworks for establishing rigour are appropriate. 3 Lincoln and Guba 5 offer alternative criteria for demonstrating rigour within qualitative research, namely truth value, consistency, neutrality and applicability. Table 1 outlines the differences in terminology and criteria used to evaluate qualitative research.

Table 1 Terminology and criteria used to evaluate the credibility of research findings

What strategies can qualitative researchers adopt to ensure the credibility of the study findings?

Unlike quantitative researchers, who apply statistical methods for establishing validity and reliability of research findings, qualitative researchers aim to design and incorporate methodological strategies to ensure the ‘trustworthiness’ of the findings. Such strategies include:

Accounting for personal biases which may have influenced findings; 6

Acknowledging biases in sampling and ongoing critical reflection of methods to ensure sufficient depth and relevance of data collection and analysis; 3

Meticulous record keeping, demonstrating a clear decision trail and ensuring interpretations of data are consistent and transparent; 3 , 4

Establishing a comparison case/seeking out similarities and differences across accounts to ensure different perspectives are represented; 6 , 7

Including rich and thick verbatim descriptions of participants’ accounts to support findings; 7

Demonstrating clarity in terms of thought processes during data analysis and subsequent interpretations 3 ;

Engaging with other researchers to reduce research bias; 3

Respondent validation: includes inviting participants to comment on the interview transcript and whether the final themes and concepts created adequately reflect the phenomena being investigated; 4

Data triangulation, 3 , 4 whereby different methods and perspectives help produce a more comprehensive set of findings. 8 , 9

Table 2 provides some specific examples of how some of these strategies were utilised to ensure rigour in a study that explored the impact of being a family carer to patients with stage 5 chronic kidney disease managed without dialysis. 10

Table 2 Strategies for enhancing the credibility of qualitative research

In summary, it is imperative that all qualitative researchers incorporate strategies to enhance the credibility of a study during research design and implementation. Although there is no universally accepted terminology and criteria used to evaluate qualitative research, we have briefly outlined some of the strategies that can enhance the credibility of study findings.


Twitter Follow Joanna Smith at @josmith175 and Helen Noble at @helnoble

Competing interests None.


Chapter 21: Qualitative evidence

Jane Noyes, Andrew Booth, Margaret Cargo, Kate Flemming, Angela Harden, Janet Harris, Ruth Garside, Karin Hannes, Tomás Pantoja, James Thomas

Key Points:

  • A qualitative evidence synthesis (commonly referred to as QES) can add value by providing decision makers with additional evidence to improve understanding of intervention complexity, contextual variations, implementation, and stakeholder preferences and experiences.
  • A qualitative evidence synthesis can be undertaken and integrated with a corresponding intervention review, or undertaken using a mixed-method design that integrates a qualitative evidence synthesis with an intervention review in a single protocol.
  • Methods for qualitative evidence synthesis are complex and continue to develop. Authors should always consult current methods guidance at methods.cochrane.org/qi.

This chapter should be cited as: Noyes J, Booth A, Cargo M, Flemming K, Harden A, Harris J, Garside R, Hannes K, Pantoja T, Thomas J. Chapter 21: Qualitative evidence. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

21.1 Introduction

The potential contribution of qualitative evidence to decision making is well-established (Glenton et al 2016, Booth 2017, Carroll 2017). A synthesis of qualitative evidence can inform understanding of how interventions work by:

  • increasing understanding of a phenomenon of interest (e.g. women’s conceptualization of what good antenatal care looks like);
  • identifying associations between the broader environment within which people live and the interventions that are implemented;
  • increasing understanding of the values and attitudes toward, and experiences of, health conditions and interventions by those who implement or receive them; and
  • providing a detailed understanding of the complexity of interventions and implementation, and their impacts and effects on different subgroups of people and the influence of individual and contextual characteristics within different contexts.

The aim of this chapter is to provide authors (who already have experience of undertaking qualitative research and qualitative evidence synthesis) with additional guidance on undertaking a qualitative evidence synthesis that is subsequently integrated with an intervention review. This chapter draws upon guidance presented in a series of six papers published in the Journal of Clinical Epidemiology (Cargo et al 2018, Flemming et al 2018, Harden et al 2018, Harris et al 2018, Noyes et al 2018a, Noyes et al 2018b) and from a further World Health Organization series of papers published in BMJ Global Health, which extend guidance to qualitative evidence syntheses conducted within a complex intervention and health systems and decision making context (Booth et al 2019a, Booth et al 2019b, Flemming et al 2019, Noyes et al 2019, Petticrew et al 2019). The qualitative evidence synthesis and integration methods described in this chapter supplement Chapter 17 on methods for addressing intervention complexity. Authors undertaking qualitative evidence syntheses should consult these papers and chapters for more detailed guidance.

21.2 Designs for synthesizing and integrating qualitative evidence with intervention reviews

There are two main designs for synthesizing qualitative evidence with evidence of the effects of interventions:

  • Sequential reviews: where one or more existing intervention review(s) has been published on a similar topic, it is possible to do a sequential qualitative evidence synthesis and then integrate its findings with those of the intervention review to create a mixed-method review. For example, Lewin and colleagues (Lewin et al 2010) and Glenton and colleagues (Glenton et al 2013) undertook sequential reviews of lay health worker programmes using separate protocols and then integrated the findings.
  • Convergent mixed-methods review: where no pre-existing intervention review exists, it is possible to do a full convergent ‘mixed-methods’ review where the trials and qualitative evidence are synthesized separately, creating opportunities for them to ‘speak’ to each other during development, and then integrated within a third synthesis. For example, Hurley and colleagues (Hurley et al 2018) undertook an intervention review and a qualitative evidence synthesis following a single protocol.

It is increasingly common for sequential and convergent reviews to be conducted by some or all of the same authors; if not, it is critical that authors working on the qualitative evidence synthesis and intervention review work closely together to identify and create sufficient points of integration to enable a third synthesis that integrates the two reviews, or the conduct of a mixed-method review (Noyes et al 2018a) (see Figure 21.2.a ). This consideration also applies where an intervention review has already been published and there is no prior relationship with the qualitative evidence synthesis authors. We recommend that at least one joint author works across both reviews to facilitate development of the qualitative evidence synthesis protocol, conduct of the synthesis, and subsequent integration of the qualitative evidence synthesis with the intervention review within a mixed-methods review.

Figure 21.2.a Considering context and points of contextual integration with the intervention review or within a mixed-method review


21.3 Defining qualitative evidence and studies

We use the term ‘qualitative evidence synthesis’ to acknowledge that other types of qualitative evidence (or data) can potentially enrich a synthesis, such as narrative data derived from qualitative components of mixed-method studies or free text from questionnaire surveys. We would not, however, consider a questionnaire survey to be a qualitative study, and qualitative data from questionnaires should not usually be privileged over relevant evidence from qualitative studies. When thinking about qualitative evidence, specific terminology is used to describe the level of conceptual and contextual detail. Qualitative evidence that includes higher or lower levels of conceptual detail is described as ‘rich’ or ‘poor’. The associated terms ‘thick’ or ‘thin’ are best used to refer to higher or lower levels of contextual detail. Review authors can potentially develop a stronger synthesis using rich and thick qualitative evidence but, in reality, they will identify diverse conceptually rich and poor and contextually thick and thin studies. Developing a clear picture of the type and conceptual richness of available qualitative evidence strongly influences the choice of methodology and subsequent methods. We recommend that authors undertake scoping searches to determine the type and richness of available qualitative evidence before selecting their methodology and methods.

A qualitative study is a research study that uses a qualitative method of data collection and analysis. Review authors should include the studies that enable them to answer their review question. When selecting qualitative studies in a review about intervention effects, two types of qualitative study are available: those that collect data from the same participants as the included trials, known as ‘trial siblings’; and those that address relevant issues about the intervention, but as separate items of research – not connected to any included trials. Both can provide useful information, with trial sibling studies obviously closer in terms of their precise contexts to the included trials (Moore et al 2015), and non-sibling studies possibly contributing perspectives not present in the trials (Noyes et al 2016b).

21.4 Planning a qualitative evidence synthesis linked to an intervention review

The Cochrane Qualitative and Implementation Methods Group (QIMG) website provides links to practical guidance and key steps for authors who are considering a qualitative evidence synthesis (methods.cochrane.org/qi). The RETREAT framework outlines seven key considerations that review authors should systematically work through when planning a review (Booth et al 2016, Booth et al 2018) (Box 21.4.a). Flemming and colleagues (Flemming et al 2019) further explain how to factor in such considerations when undertaking a qualitative evidence synthesis within a complex intervention and decision-making context, when complexity is an important consideration.

Box 21.4.a RETREAT considerations when selecting an appropriate method for qualitative synthesis

Review question – first, consider the complexity of the review question. Which elements contribute most to complexity (e.g. the condition, the intervention or the context)? Which elements should be prioritized as the focal point for attention? (Squires et al 2013, Kelly et al 2017).

Epistemology – consider the philosophical foundations of the primary studies. Would it be appropriate to favour a method such as thematic synthesis that is less reliant on epistemological considerations? (Barnett-Page and Thomas 2009).

Time frame – consider what type of qualitative evidence synthesis will be feasible and manageable within the time frame available (Booth et al 2016).

Resources – consider whether the ambition of the review matches the available resources. Will the extent of the scope and the sampling approach of the review need to be limited? (Benoot et al 2016, Booth et al 2016).

Expertise – consider access to expertise, both within the review team and among a wider group of advisors. Does the available expertise match the qualitative evidence synthesis approach chosen? (Booth et al 2016).

Audience and purpose – consider the intended audience and purpose of the review. Does the approach to question formulation, the scope of the review and the intended outputs meet their needs? (Booth et al 2016).

Type of data – consider the type of data present in typical studies for inclusion. To what extent are candidate studies conceptually rich and contextually thick in their detail?

21.5 Question development

The review question is critical to development of the qualitative evidence synthesis (Harris et al 2018). Question development affords a key point for integration with the intervention review. Complementary guidance supports novel thinking about question development, application of question development frameworks and the types of questions to be addressed by a synthesis of qualitative evidence (Cargo et al 2018, Harris et al 2018, Noyes et al 2018a, Booth et al 2019b, Flemming et al 2019).

Research questions for quantitative reviews are often mapped using structures such as PICO. Some qualitative reviews adopt this structure, or use an adapted variation (e.g. SPICE (Setting, Perspective, Intervention or Phenomenon of Interest, Comparison, Evaluation) or SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type)) (Cooke et al 2012). Booth and colleagues (Booth et al 2019b) propose an extended question framework (PerSPecTIF), describing both the wider context and the immediate setting, that is particularly suited to qualitative evidence synthesis and complex intervention reviews (see Table 21.5.a).

Detailed attention to the question and specification of context at an early stage is critical to many aspects of qualitative synthesis (see Petticrew et al (2019) and Booth et al (2019a) for a more detailed discussion). By specifying the context, a review team can identify opportunities for integration with the intervention review, or opportunities for maximizing use and interpretation of evidence as a mixed-method review progresses (see Figure 21.2.a). This specification also informs both the interpretation of the observed effects and assessment of the strength of the evidence available in addressing the review question (Noyes et al 2019). Subsequent application of GRADE-CERQual (Lewin et al 2015, Lewin et al 2018), an approach to assessing confidence in synthesized qualitative findings, requires further specification of context in the review question.

Table 21.5.a PerSPecTIF question formulation framework for qualitative evidence syntheses (Booth et al 2019b). Reproduced with permission of BMJ Publishing Group

Perspective: From the perspective of a pregnant woman

Setting: In the setting of rural communities

Phenomenon of interest/Problem: How does facility-based care

Environment: Within an environment of poor transport infrastructure and distantly located facilities

Comparison (optional): Compare with traditional birth attendants at home

Time/Timing: Up to and including delivery

Findings: In relation to the woman’s perceptions and experiences?

21.6 Questions exploring intervention implementation

Additional guidance is available on formulation of questions to understand and assess intervention implementation (Cargo et al 2018). A strong understanding of how an intervention is thought to work, and how it should be implemented in practice, enables critical consideration of whether any observed lack of effect might be due to a poorly conceptualized intervention (i.e. theory failure) or to poor intervention implementation (i.e. implementation failure). Heterogeneity needs to be considered for both the underlying theory and the ways in which the intervention was implemented. An a priori scoping review (Levac et al 2010), concept analysis (Walker and Avant 2005), critical review (Grant and Booth 2009) or textual narrative synthesis (Barnett-Page and Thomas 2009) can be undertaken to classify interventions and/or to identify the programme theory, logic model or implementation measures and processes. The intervention Complexity Assessment Tool for Systematic Reviews (iCAT_SR) (Lewin et al 2017) may be helpful in classifying complexity in interventions and developing associated questions.

An existing intervention model or framework may be used within a new topic or context. The ‘best-fit framework’ approach to synthesis (Carroll et al 2013) can be used to establish the degree to which the source context (from where the framework was derived) resembles the new target context (see Figure 21.2.a). In the absence of an explicit programme theory and detail of how implementation relates to outcomes, an a priori realist review, meta-ethnography or meta-interpretive review can be undertaken (Booth et al 2016). For example, Downe and colleagues (Downe et al 2016) undertook an initial meta-ethnography review to develop an understanding of the outcomes of importance to women receiving antenatal care.

However, these additional activities are very resource-intensive and are only recommended when the review team has sufficient resources to supplement the planned qualitative evidence syntheses with an additional explanatory review. Where resources are less plentiful, a review team could engage with key stakeholders to articulate and develop programme theory (Kelly et al 2017, De Buck et al 2018).

21.6.1 Using logic models and theories to support question development

Review authors can develop a more comprehensive representation of question features through use of logic models, programme theories, theories of change, templates and pathways (Anderson et al 2011, Kneale et al 2015, Noyes et al 2016a) (see also Chapter 17, Section 17.2.1 and Chapter 2, Section 2.5.1). These different forms of social theory can be used to visualize and map the research question, its context, components, influential factors and possible outcomes (Noyes et al 2016a, Rehfuess et al 2018).

21.6.2 Stakeholder engagement

Finally, review authors need to engage stakeholders, including consumers affected by the health issue and interventions, or likely users of the review from clinical or policy contexts. From the preparatory stage, this consultation can ensure that the review scope and question are appropriate and that the resulting products address the implementation concerns of decision makers (Kelly et al 2017, Harris et al 2018).

21.7 Searching for qualitative evidence

In comparison with identification of quantitative studies (see also Chapter 4), procedures for retrieval of qualitative research remain relatively under-developed. Particular challenges in retrieval are associated with non-informative titles and abstracts, diffuse terminology, poor indexing and the overwhelming prevalence of quantitative studies within data sources (Booth et al 2016).

Principal considerations when planning a search for qualitative studies, and the evidence that underpins them, have been characterized using a 7S framework: from Sampling and Sources, through Structured questions, Search procedures, Strategies and filters, and Supplementary strategies, to Standards for Reporting (Booth et al 2016).

A key decision, aligned to the purpose of the qualitative evidence synthesis, is whether to use the comprehensive, exhaustive approaches that characterize quantitative searches, or to use purposive sampling that is more sensitive to the qualitative paradigm (Suri 2011). The latter, used when the intent is to generate an interpretative understanding (for example, when generating theory), draws upon a versatile toolkit that includes theoretical sampling, maximum variation sampling and intensity sampling. Sources of qualitative evidence are more likely to include book chapters, theses and grey literature reports than standard quantitative study reports, so a search strategy should place extra emphasis on these sources. Local databases may be particularly valuable given the criticality of context (Stansfield et al 2012).

Another key decision is whether to use study filters or simply to conduct a topic-based search in which qualitative studies are identified at the study selection stage. Search filters for qualitative studies lack the specificity of their quantitative counterparts. Nevertheless, filters may facilitate efficient retrieval by study type (e.g. qualitative (Rogers et al 2018) or mixed methods (El Sherif et al 2016)) or by perspective (e.g. patient preferences (Selva et al 2017)), particularly where the quantitative literature is overwhelmingly large and thus increases the number needed to retrieve. Poor indexing of qualitative studies makes citation searching (forward and backward) and the Related Articles features of electronic databases particularly useful (Cooper et al 2017). Further guidance on searching for qualitative evidence is available (Booth et al 2016, Noyes et al 2018a). The CLUSTER method has been proposed as a specific named method for tracking down associated or sibling reports (Booth et al 2013). The BeHEMoTh approach has been developed for identifying explicit use of theory (Booth and Carroll 2015).
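As an illustration of how a topic search can be combined with a study-type filter, the sketch below assembles a database-style query string. The filter terms are common examples only, not a validated qualitative search filter, and the topic terms are invented:

```python
# Illustrative sketch only: combining a topic block with a simple,
# UNVALIDATED qualitative study-type filter. Terms are examples, not a
# recommended or validated search strategy.
topic_block = "(antenatal care OR prenatal care)"

# Hypothetical free-text terms often associated with qualitative research
qualitative_terms = [
    "qualitative", "interview", "focus group", "ethnograph*",
    "grounded theory", "thematic analysis",
]

# Quote multi-word phrases so they are searched as phrases
formatted = [f'"{t}"' if " " in t else t for t in qualitative_terms]
filter_block = "(" + " OR ".join(formatted) + ")"

# Final strategy: topic AND study-type filter
query = f"{topic_block} AND {filter_block}"
print(query)
```

In practice the filter block would be adapted to each database's syntax and evaluated before use, as the chapter notes that such filters lack specificity.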

21.7.1 Searching for process evaluations and implementation evidence

Four potential approaches are available to identify process evaluations.

  • Identify studies at the point of study selection rather than through tailored search strategies. This involves conducting a sensitive topic search without any study design filter (Harden et al 1999), and identifying all study designs of interest during the screening process. This approach can be feasible when a review question involves multiple publication types (e.g. randomized trial, qualitative research and economic evaluations), which then do not require separate searches.  
  • Restrict included process evaluations to those conducted within randomized trials, which can be identified using standard search filters (see Chapter 4, Section 4.4.7). This method relies on reports of process evaluations also describing the surrounding randomized trial in enough detail to be identified by the search filter.  
  • Use unevaluated filter terms (such as ‘process evaluation’, ‘program(me) evaluation’, ‘feasibility study’, ‘implementation’ or ‘proof of concept’) to retrieve process evaluations or implementation data. Approaches using strings of terms associated with the study type or purpose are considered experimental, and there is a need to develop and test such filters. Such filters may be derived from the study type (process evaluation), the data type (process data) or the application (implementation) (Robbins et al 2011).  
  • Minimize reliance on topic-based searching and rely instead on citation-based approaches to identify linked reports, published or unpublished, of a particular study (Booth et al 2013), which may provide implementation or process data (Bonell et al 2013).

More detailed guidance is provided by Cargo and colleagues (Cargo et al 2018).

21.8 Assessing methodological strengths and limitations of qualitative studies

Assessment of the methodological strengths and limitations of qualitative research remains contested within the primary qualitative research community (Garside 2014). However, within systematic reviews and evidence syntheses it is considered essential, even when studies are not to be excluded on the basis of quality (Carroll et al 2013). One review found almost 100 appraisal tools for assessing primary qualitative studies (Munthe-Kaas et al 2019). Limitations included a focus on reporting rather than conduct and the presence of items that are separate from, or tangential to, consideration of study quality (e.g. ethical approval).

Authors should distinguish between assessment of study quality and assessment of risk of bias by focusing on assessment of methodological strengths and limitations as a marker of study rigour (what we term a ‘risk to rigour’ approach (Noyes et al 2019)). In the absence of a definitive risk to rigour tool, we recommend that review authors select from published, commonly used and validated tools that focus on the assessment of the methodological strengths and limitations of qualitative studies (see Box 21.8.a). Pragmatically, we consider a ‘validated’ tool to be one that has been subjected to evaluation. Issues such as inter-rater reliability are afforded less importance, given that identification of complementary or conflicting perspectives on risk to rigour is considered more useful than achievement of consensus per se (Noyes et al 2019).

The CASP tool for qualitative research (as one example) maps onto the domains in Box 21.8.a (CASP 2013). Tools not meeting the criterion of focusing on assessment of methodological strengths and limitations include those that integrate assessment of the quality of reporting (such as scoring of the title and abstract) into the overall assessment. As with other risk of bias assessment tools, we strongly recommend against applying scores to domains or calculating total quality scores. We encourage review authors to discuss the studies, their assessments of ‘risk to rigour’ for each paper, and how each study’s methodological limitations may affect review findings (Noyes et al 2019). We further advise that qualitative ‘sensitivity analysis’, exploring the robustness of the synthesis and its vulnerability to methodologically limited studies, be routinely applied regardless of the review authors’ overall confidence in synthesized findings (Carroll et al 2013). Evidence suggests that qualitative sensitivity analysis is equally advisable for mixed methods studies from which the qualitative component is extracted (Verhage and Boels 2017).

Box 21.8.a Example domains that provide an assessment of methodological strengths and limitations to determine study rigour

  • Clear aims and research question
  • Congruence between the research aims/question and research design/method(s)
  • Rigour of case and/or participant identification, sampling and data collection to address the question
  • Appropriate application of the method
  • Richness/conceptual depth of findings
  • Exploration of deviant cases and alternative explanations
  • Reflexivity of the researchers*

*Reflexivity encourages qualitative researchers and reviewers to consider the actual and potential impacts of the researcher on the context, research participants and the interpretation and reporting of data and findings (Newton et al 2012). Being reflexive entails making conflicts of interest transparent, discussing the impact of the reviewers and their decisions on the review process and findings and making transparent any issues discussed and subsequent decisions.

Adapted from Noyes et al (2019) and Alvesson and Sköldberg (2009)

21.8.1 Additional assessment of methodological strengths and limitations of process evaluation and intervention implementation evidence

Few assessment tools explicitly address rigour in process evaluation or implementation evidence. For qualitative primary studies, the 8-item process evaluation tool developed by the EPPI-Centre (Rees et al 2009, Shepherd et al 2010) can be used to supplement tools selected to assess methodological strengths and limitations and risks to rigour in primary qualitative studies. One of these items, a question on usefulness (framed as ‘how well the intervention processes were described and whether or not the process data could illuminate why or how the interventions worked or did not work’ ) offers a mechanism for exploring process mechanisms (Cargo et al 2018).

21.9 Selecting studies to synthesize

Decisions about inclusion or exclusion of studies can be more complex in qualitative evidence syntheses than in reviews of trials that aim to include all relevant studies. Decisions on whether to include all studies or to select a sample of studies depend on a range of general and review-specific criteria that Noyes and colleagues (Noyes et al 2019) outline in detail. The number of qualitative studies selected needs to be consistent with a manageable synthesis, and the contexts of the included studies should enable integration with the trials in the effectiveness analysis (see Figure 21.2.a). The guiding principle is transparency in the reporting of all decisions and their rationale.

21.10 Selecting a qualitative evidence synthesis and data extraction method

Authors will typically find that they cannot select an appropriate synthesis method until the pool of available qualitative evidence has been thoroughly scoped. Flexible options concerning choice of method may need to be articulated in the protocol.

The INTEGRATE-HTA guidance on selecting methodology and methods for qualitative evidence synthesis and health technology assessment offers a useful starting point when selecting a method of synthesis (Booth et al 2016, Booth et al 2018). Some methods are designed primarily to develop findings at a descriptive level and thus feed directly into lines of action for policy and practice. Others hold the capacity to develop new theory (e.g. meta-ethnography and theory-building approaches to thematic synthesis). Noyes and colleagues (Noyes et al 2019) and Flemming and colleagues (Flemming et al 2019) elaborate on key issues for consideration when selecting a method that is particularly suited to a Cochrane Review and decision-making context (see Table 21.10.a). Three qualitative evidence synthesis methods (thematic synthesis, framework synthesis and meta-ethnography) are recommended to produce syntheses that can subsequently be integrated with an intervention review or analysis.

Table 21.10.a Recommended methods for undertaking a qualitative evidence synthesis for subsequent integration with an intervention review, or as part of a mixed-method review (adapted from an original source developed by convenors (Flemming et al 2019, Noyes et al 2019))

Thematic synthesis (Thomas and Harden 2008)

Strengths: Most accessible form of synthesis. Clear approach; can be used with ‘thin’ data to produce descriptive themes, and with ‘thicker’ data to develop descriptive themes into more in-depth analytic themes. Themes are then integrated within the quantitative synthesis.

Limitations: May be limited in interpretive ‘power’ and risks over-simplistic use, thus not truly informing decision making such as guidelines. Complex synthesis process that requires an experienced team. Theoretical findings may combine empirical evidence, expert opinion and conjecture to form hypotheses. More work is needed on how GRADE-CERQual, an approach to assess confidence in synthesized qualitative findings (see Section ), can be applied to theoretical findings. May lack clarity on how higher-level findings translate into actionable points.

Framework synthesis (Oliver et al 2008, Dixon-Woods 2011); best-fit framework synthesis (Carroll et al 2011)

Strengths: Works well within reviews of complex interventions by accommodating complexity within the framework, including representation of theory. The framework provides a clear mechanism for integration of qualitative and quantitative evidence in an aggregative way (see Noyes et al 2018a). Works well where there is broad agreement about the nature of interventions and their desired impacts.

Limitations: Requires identification, selection and justification of a framework. A framework may be revealed as inappropriate only once extraction/synthesis is under way. Risk of simplistically forcing data into a framework for expedience.

Meta-ethnography (Noblit and Hare 1988)

Strengths: Primarily an interpretive synthesis method, leading to the creation of descriptive as well as new higher-order constructs. Descriptive and theoretical findings can help inform decision making such as guidelines. Explicit reporting standards have been developed.

Limitations: Complex methodology and synthesis process that requires a highly experienced team. Can take more time and resources than other methodologies. Theoretical findings may combine empirical evidence, expert opinion and conjecture to form hypotheses. May not satisfy requirements for an audit trail (although new reporting guidelines will help overcome this (France et al 2019)). More work is needed to determine how CERQual can be applied to theoretical findings. May be unclear how higher-level findings translate into actionable points.

21.11 Data extraction

Qualitative findings may take the form of quotations from participants, subthemes and themes identified by the study’s authors, explanations, hypotheses or new theory, or observational excerpts and author interpretations of these data (Sandelowski and Barroso 2002). Findings may be presented as a narrative, or summarized and displayed as tables, infographics or logic models and potentially located in any part of the paper (Noyes et al 2019).

Methods for qualitative data extraction vary according to the synthesis method selected. Data extraction is not sequential and linear; often, it involves moving backwards and forwards between review stages. Review teams will need regular meetings to discuss and further interrogate the evidence and thereby achieve a shared understanding. It may be helpful to draw on a key stakeholder group to help in interpreting the evidence and in formulating key findings. Additional approaches (such as subgroup analysis) can be used to explore evidence from specific contexts further.

Irrespective of the review type and choice of synthesis method, we consider it best practice to extract detailed contextual and methodological information on each study and to report this information in a table of ‘Characteristics of included studies’ (see Table 21.11.a). The Template for Intervention Description and Replication (TIDieR) checklist (Hoffmann et al 2014) and the iCAT_SR tool (Lewin et al 2017) may help with specifying key information for extraction. Review authors must ensure that they preserve the context of the primary study data during the extraction and synthesis process to prevent misinterpretation of primary studies (Noyes et al 2019).

Table 21.11.a Contextual and methodological information for inclusion within a table of ‘Characteristics of included studies’. From Noyes et al (2019). Reproduced with permission of BMJ Publishing Group

Context and participants

Important elements of study context, relevant to addressing the review question and locating the context of the primary study; for example, the study setting, population characteristics, participants and participant characteristics, the intervention delivered (if appropriate), etc.

Study design and methods used

Methodological design and approach taken by the study; methods for identifying the sample recruitment; the specific data collection and analysis methods utilized; and any theoretical models used to interpret or contextualize the findings.

Noyes and colleagues (Noyes et al 2019) provide additional guidance and examples of the various methods of data extraction. It is usual for review authors to select one method. In summary, extraction methods can be grouped as follows.

  • Using a bespoke universal, standardized or adapted data extraction template. Review authors can develop their own review-specific data extraction template, or select a generic data extraction template by study type (e.g. templates developed by the National Institute for Health and Clinical Excellence (National Institute for Health Care Excellence 2012)).
  • Using an a priori theory or predetermined framework to extract data. Framework synthesis, and its subvariant the ‘best fit’ framework approach, involves extracting data from primary studies against an a priori framework in order to better understand a phenomenon of interest (Carroll et al 2011, Carroll et al 2013). For example, Glenton and colleagues (Glenton et al 2013) extracted data against a modified SURE Framework (2011) to synthesize factors affecting the implementation of lay health worker interventions. The SURE framework enumerates possible factors that may influence the implementation of health system interventions (SURE (Supporting the Use of Research Evidence) Collaboration 2011, Glenton et al 2013). Use of the ‘PROGRESS’ framework (place of residence, race/ethnicity/culture/language, occupation, gender/sex, religion, education, socioeconomic status, and social capital) also helps to ensure that data extraction maintains an explicit equity focus (O'Neill et al 2014). A logic model can also be used as a framework for data extraction.
  • Using a software program to code original studies inductively. A wide range of software products has been developed by systematic review organizations (such as EPPI-Reviewer (Thomas et al 2010)). Most software for the analysis of primary qualitative data, such as NVivo (www.qsrinternational.com/nvivo/home) and others, can be used to code studies in a systematic review (Houghton et al 2017). For example, one method of data extraction and thematic synthesis involves coding the original studies in a software program to build inductive descriptive themes and a theoretical explanation of phenomena of interest (Thomas and Harden 2008). Thomas and Harden (2008) provide a worked example demonstrating how coding of included primary studies developed a new understanding of children’s choices and motivations for eating fruit and vegetables.
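The coding-and-grouping step that underpins this kind of inductive extraction can be sketched in miniature. The studies and codes below are invented for illustration; real reviews would use dedicated software as described above:

```python
# Illustrative sketch only: a hypothetical tally of free codes applied to
# findings extracted from included studies, mimicking the first step of
# thematic synthesis (codes recurring across studies become candidate
# descriptive themes). Study names and codes are invented.
from collections import defaultdict

coded_findings = {
    "Study A": ["taste of vegetables", "parental influence", "peer pressure"],
    "Study B": ["taste of vegetables", "availability at home"],
    "Study C": ["peer pressure", "parental influence", "taste of vegetables"],
}

# Record which studies contribute to each code
code_to_studies = defaultdict(set)
for study, codes in coded_findings.items():
    for code in codes:
        code_to_studies[code].add(study)

# Codes recurring across more than one study are theme candidates
for code, studies in sorted(code_to_studies.items()):
    if len(studies) > 1:
        print(f"{code}: {len(studies)} studies")
```

This only captures the mechanical grouping; the interpretive work of developing analytic themes from these descriptive groupings remains a human task.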

21.12 Assessing the confidence in qualitative synthesized findings

The GRADE system has long featured in assessing the certainty of quantitative findings, and application of its qualitative counterpart, GRADE-CERQual, is recommended for Cochrane qualitative evidence syntheses (Lewin et al 2015). CERQual has four components (relevance, methodological limitations, adequacy and coherence), which are used to formulate an overall assessment of confidence in a synthesized qualitative finding. Guidance on its components and reporting requirements has been published in a series in Implementation Science (Lewin et al 2018).

21.13 Methods for integrating the qualitative evidence synthesis with an intervention review

A range of methods and tools is available for data integration or mixed-method synthesis (Harden et al 2018, Noyes et al 2019). As noted at the beginning of this chapter, review authors can integrate a qualitative evidence synthesis with an existing intervention review published on a similar topic (sequential approach), or conduct a new intervention review and qualitative evidence syntheses in parallel before integration (convergent approach). Irrespective of whether the qualitative synthesis is sequential or convergent to the intervention review, we recommend that qualitative and quantitative evidence be synthesized separately using appropriate methods before integration (Harden et al 2018). The scope for integration can be more limited with a pre-existing intervention review unless review authors have access to the data underlying the intervention review report.

Harden and colleagues and Noyes and colleagues outline the following methods and tools for integration with an intervention review (Harden et al 2018, Noyes et al 2019):

  • Juxtaposing findings in a matrix. Juxtaposition is driven by the findings from the qualitative evidence synthesis (e.g. intervention components related to the acceptability or feasibility of the interventions), which form one side of the matrix. Findings on intervention effects (e.g. improves outcome, no difference in outcome, uncertain effects) form the other side. Quantitative studies are grouped according to findings on intervention effects and the presence or absence of features specified by the hypotheses generated from the qualitative synthesis (Candy et al 2011). Observed patterns in the matrix are used to explain differences in the findings of the quantitative studies and to identify gaps in research (van Grootel et al 2017) (see, for example, Ames et al 2017, Munabi-Babigumira et al 2017, Hurley et al 2018).
  • Analysing programme theory. Theories articulating how interventions are expected to work are analysed. Findings from quantitative studies testing the effects of interventions, and from qualitative and process evaluation evidence, are used together to examine how the theories work in practice (Greenhalgh et al 2007). The value of different theories is assessed, or new/revised theory is developed. Factors that enhance or reduce intervention effectiveness are also identified.
  • Using logic models or other types of conceptual framework. A logic model (Glenton et al 2013) or other type of conceptual framework, which represents the processes by which an intervention produces change, provides a common scaffold for integrating findings across different types of evidence (Booth and Carroll 2015). Frameworks can be specified a priori from the literature or through stakeholder engagement, or newly developed during the review. Findings from quantitative studies testing the effects of interventions and those from qualitative evidence are used to develop and/or further refine the model.
  • Testing hypotheses derived from syntheses of qualitative evidence. Quantitative studies are grouped according to the presence or absence of the proposition specified by the hypotheses to be tested, and subgroup analysis is used to explore differential findings on the effects of interventions (Thomas et al 2004).
  • Qualitative comparative analysis (QCA). Findings from a qualitative synthesis are used to identify the range of features that are important for successful interventions, and the mechanisms through which these features operate. A QCA then tests whether or not the features are associated with effective interventions (Kahwati et al 2016). The analysis unpicks multiple potential pathways to effectiveness, accommodating scenarios where the same intervention feature is associated with both effective and less effective interventions depending on context. QCA offers potential for use in integration but, unlike the other methods and tools presented here, does not yet have sufficient methodological guidance available. However, exemplar reviews using QCA are available (Thomas et al 2014, Harris et al 2015, Kahwati et al 2016).
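The logic of the first method above, juxtaposing findings in a matrix, can be sketched as a simple data structure. The trials, intervention features and effect labels below are entirely hypothetical:

```python
# Illustrative sketch only: a matrix juxtaposing features hypothesized by a
# qualitative synthesis (rows) against trial effect findings (columns).
# All trials, features and effect labels are invented for demonstration.

# Features the qualitative synthesis suggested matter for acceptability
qualitative_features = ["reminder before appointment", "flexible scheduling"]

# Each trial: which hypothesized features its intervention included,
# and its finding on intervention effects
trials = [
    {"id": "Trial 1", "features": {"reminder before appointment"},
     "effect": "improves outcome"},
    {"id": "Trial 2", "features": set(), "effect": "no difference"},
    {"id": "Trial 3", "features": {"reminder before appointment",
                                   "flexible scheduling"},
     "effect": "improves outcome"},
]

# Build the matrix: feature -> effect finding -> contributing trials
matrix = {f: {} for f in qualitative_features}
for trial in trials:
    for feature in qualitative_features:
        if feature in trial["features"]:
            matrix[feature].setdefault(trial["effect"], []).append(trial["id"])

for feature, cells in matrix.items():
    print(feature, cells)
```

Patterns in such a matrix (e.g. a feature appearing only among effective trials) are what the review team would then interrogate, whether by eye, subgroup analysis or QCA.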

Review authors can use the above methods in combination (e.g. patterns observed through juxtaposing findings within a matrix can be tested using subgroup analysis or QCA). Analysing programme theory, using logic models and conducting QCA require members of the review team with specific skills in these methods. Subgroup analysis and QCA are not suitable when only limited evidence is available (Harden et al 2018, Noyes et al 2019). (See also Chapter 17 on intervention complexity.)

21.14 Reporting the protocol and qualitative evidence synthesis

Reporting standards and tools designed for intervention reviews (such as Cochrane’s MECIR standards (http://methods.cochrane.org/mecir) or the PRISMA Statement (Liberati et al 2009)) may not be appropriate for qualitative evidence syntheses or an integrated mixed-method review. Additional guidance on how to choose, adapt or create a hybrid reporting tool is provided as a 5-point ‘decision flowchart’ (Figure 21.14.a) (Flemming et al 2018). Review authors should consider whether a specific set of reporting guidance is available (e.g. eMERGe for meta-ethnographies (France et al 2015)), whether generic guidance (e.g. ENTREQ (Tong et al 2012)) is suitable, or whether additional checklists or tools are appropriate for reporting a specific aspect of the review.

Figure 21.14.a Decision flowchart for choice of reporting approach for syntheses of qualitative, implementation or process evaluation evidence (Flemming et al 2018). Reproduced with permission of Elsevier


21.15 Chapter information

Authors: Jane Noyes, Andrew Booth, Margaret Cargo, Kate Flemming, Angela Harden, Janet Harris, Ruth Garside, Karin Hannes, Tomás Pantoja, James Thomas

Acknowledgements: This chapter replaces Chapter 20 in the first edition of this Handbook (2008) and subsequent Version 5.2. We would like to thank the previous Chapter 20 authors Jennie Popay and Alan Pearson. Elements of this chapter draw on previous supplemental guidance produced by the Cochrane Qualitative and Implementation Methods Group Convenors, to which Simon Lewin contributed.

Funding: JT is supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Barts Health NHS Trust. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

21.16 References

Ames HM, Glenton C, Lewin S. Parents' and informal caregivers' views and experiences of communication about routine childhood vaccination: a synthesis of qualitative evidence. Cochrane Database of Systematic Reviews 2017; 2 : CD011787.

Anderson LM, Petticrew M, Rehfuess E, Armstrong R, Ueffing E, Baker P, Francis D, Tugwell P. Using logic models to capture complexity in systematic reviews. Research Synthesis Methods 2011; 2 : 33-42.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Medical Research Methodology 2009; 9 : 59.

Benoot C, Hannes K, Bilsen J. The use of purposeful sampling in a qualitative evidence synthesis: a worked example on sexual adjustment to a cancer trajectory. BMC Medical Research Methodology 2016; 16 : 21.

Bonell C, Jamal F, Harden A, Wells H, Parry W, Fletcher A, Petticrew M, Thomas J, Whitehead M, Campbell R, Murphy S, Moore L. Public Health Research. Systematic review of the effects of schools and school environment interventions on health: evidence mapping and synthesis . Southampton (UK): NIHR Journals Library; 2013.

Booth A, Harris J, Croot E, Springett J, Campbell F, Wilkins E. Towards a methodology for cluster searching to provide conceptual and contextual "richness" for systematic reviews of complex interventions: case study (CLUSTER). BMC Medical Research Methodology 2013; 13 : 118.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of 'best fit' framework synthesis for studies of improvement in healthcare. BMJ Quality and Safety 2015; 24 : 700-708.

Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, Mozygemba K, Refolo P, Sacchini D, Tummers M, Rehfuess E. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessment for complex interventions 2016. https://www.integrate-hta.eu/wp-content/uploads/2016/02/Guidance-on-choosing-qualitative-evidence-synthesis-methods-for-use-in-HTA-of-complex-interventions.pdf

Booth A. Qualitative evidence synthesis. In: Facey K, editor. Patient involvement in Health Technology Assessment . Singapore: Springer; 2017. p. 187-199.

Booth A, Noyes J, Flemming K, Gehardus A, Wahlster P, Jan van der Wilt G, Mozygemba K, Refolo P, Sacchini D, Tummers M, Rehfuess E. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches. Journal of Clinical Epidemiology 2018; 99 : 41-52.

Booth A, Moore G, Flemming K, Garside R, Rollins N, Tuncalp Ö, Noyes J. Taking account of context in systematic reviews and guidelines considering a complexity perspective. BMJ Global Health 2019a; 4 : e000840.

Booth A, Noyes J, Flemming K, Moore G, Tuncalp Ö, Shakibazadeh E. Formulating questions to address the acceptability and feasibility of complex interventions in qualitative evidence synthesis. BMJ Global Health 2019b; 4 : e001107.

Candy B, King M, Jones L, Oliver S. Using qualitative synthesis to explore heterogeneity of complex interventions. BMC Medical Research Methodology 2011; 11 : 124.

Cargo M, Harris J, Pantoja T, Booth A, Harden A, Hannes K, Thomas J, Flemming K, Garside R, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 4: methods for assessing evidence on intervention implementation. Journal of Clinical Epidemiology 2018; 97 : 59-69.

Carroll C, Booth A, Cooper K. A worked example of "best fit" framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Medical Research Methodology 2011; 11 : 29.

Carroll C, Booth A, Leaviss J, Rick J. "Best fit" framework synthesis: refining the method. BMC Medical Research Methodology 2013; 13 : 37.

Carroll C. Qualitative evidence synthesis to improve implementation of clinical guidelines. BMJ 2017; 356 : j80.

CASP. Making sense of evidence: 10 questions to help you make sense of qualitative research: Public Health Resource Unit, England; 2013. http://media.wix.com/ugd/dded87_29c5b002d99342f788c6ac670e49f274.pdf .

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qualitative Health Research 2012; 22 : 1435-1443.

Cooper C, Booth A, Britten N, Garside R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review. Systematic Reviews 2017; 6 : 234.

De Buck E, Hannes K, Cargo M, Van Remoortel H, Vande Veegaete A, Mosler HJ, Govender T, Vandekerckhove P, Young T. Engagement of stakeholders in the development of a Theory of Change for handwashing and sanitation behaviour change. International Journal of Environmental Research and Public Health 2018; 28 : 8-22.

Dixon-Woods M. Using framework-based synthesis for conducting reviews of qualitative studies. BMC Medicine 2011; 9 : 39.

Downe S, Finlayson K, Tuncalp, Metin Gulmezoglu A. What matters to women: a systematic scoping review to identify the processes and outcomes of antenatal care provision that are important to healthy pregnant women. BJOG: An International Journal of Obstetrics and Gynaecology 2016; 123 : 529-539.

El Sherif R, Pluye P, Gore G, Granikov V, Hong QN. Performance of a mixed filter to identify relevant studies for mixed studies reviews. Journal of the Medical Library Association 2016; 104 : 47-51.

Flemming K, Booth A, Hannes K, Cargo M, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 6: reporting guidelines for qualitative, implementation, and process evaluation evidence syntheses. Journal of Clinical Epidemiology 2018; 97 : 79-85.

Flemming K, Booth A, Garside R, Tuncalp O, Noyes J. Qualitative evidence synthesis for complex interventions and guideline development: clarification of the purpose, designs and relevant methods. BMJ Global Health 2019; 4 : e000882.

France EF, Ring N, Noyes J, Maxwell M, Jepson R, Duncan E, Turley R, Jones D, Uny I. Protocol-developing meta-ethnography reporting guidelines (eMERGe). BMC Medical Research Methodology 2015; 15 : 103.

France EF, Cunningham M, Ring N, Uny I, Duncan EAS, Jepson RG, Maxwell M, Roberts RJ, Turley RL, Booth A, Britten N, Flemming K, Gallagher I, Garside R, Hannes K, Lewin S, Noblit G, Pope C, Thomas J, Vanstone M, Higginbottom GMA, Noyes J. Improving reporting of meta-ethnography: the eMERGe reporting guidance. BMC Medical Research Methodology 2019; 19 : 25.

Garside R. Should we appraise the quality of qualitative research reports for systematic reviews, and if so, how? Innovation: The European Journal of Social Science Research 2014; 27 : 67-79.

Glenton C, Colvin CJ, Carlsen B, Swartz A, Lewin S, Noyes J, Rashidian A. Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis. Cochrane Database of Systematic Reviews 2013; 10 : CD010414.

Glenton C, Lewin S, Norris S. Chapter 15: Using evidence from qualitative research to develop WHO guidelines. In: Norris S, editor. World Health Organization Handbook for Guideline Development . 2nd. ed. Geneva: WHO; 2016.

Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal 2009; 26 : 91-108.

Greenhalgh T, Kristjansson E, Robinson V. Realist review to understand the efficacy of school feeding programmes. BMJ 2007; 335 : 858.

Harden A, Oakley A, Weston R. A review of the effectiveness and appropriateness of peer-delivered health promotion for young people. London: Institute of Education, University of London; 1999.

Harden A, Thomas J, Cargo M, Harris J, Pantoja T, Flemming K, Booth A, Garside R, Hannes K, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 5: methods for integrating qualitative and implementation evidence within intervention effectiveness reviews. Journal of Clinical Epidemiology 2018; 97 : 70-78.

Harris JL, Booth A, Cargo M, Hannes K, Harden A, Flemming K, Garside R, Pantoja T, Thomas J, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis. Journal of Clinical Epidemiology 2018; 97 : 39-48.

Harris KM, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self management interventions for asthma in children and adolescents: a mixed methods systematic review (Protocol). Cochrane Database of Systematic Reviews 2015; 4 : CD011651.

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, Altman DG, Barbour V, Macdonald H, Johnston M, Lamb SE, Dixon-Woods M, McCulloch P, Wyatt JC, Chan AW, Michie S. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014; 348 : g1687.

Houghton C, Murphy K, Meehan B, Thomas J, Brooker D, Casey D. From screening to synthesis: using nvivo to enhance transparency in qualitative evidence synthesis. Journal of Clinical Nursing 2017; 26 : 873-881.

Hurley M, Dickson K, Hallett R, Grant R, Hauari H, Walsh N, Stansfield C, Oliver S. Exercise interventions and patient beliefs for people with hip, knee or hip and knee osteoarthritis: a mixed methods review. Cochrane Database of Systematic Reviews 2018; 4 : CD010842.

Kahwati L, Jacobs S, Kane H, Lewis M, Viswanathan M, Golin CE. Using qualitative comparative analysis in a systematic review of a complex intervention. Systematic Reviews 2016; 5 : 82.

Kelly MP, Noyes J, Kane RL, Chang C, Uhl S, Robinson KA, Springs S, Butler ME, Guise JM. AHRQ series on complex intervention systematic reviews-paper 2: defining complexity, formulating scope, and questions. Journal of Clinical Epidemiology 2017; 90 : 11-18.

Kneale D, Thomas J, Harris K. Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews. PloS One 2015; 10 : e0142187.

Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implementation Science 2010; 5 : 69.

Lewin S, Munabi-Babigumira S, Glenton C, Daniels K, Bosch-Capblanch X, van Wyk BE, Odgaard-Jensen J, Johansen M, Aja GN, Zwarenstein M, Scheel IB. Lay health workers in primary and community health care for maternal and child health and the management of infectious diseases. Cochrane Database of Systematic Reviews 2010; 3 : CD004015.

Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gulmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Medicine 2015; 12 : e1001895.

Lewin S, Hendry M, Chandler J, Oxman AD, Michie S, Shepperd S, Reeves BC, Tugwell P, Hannes K, Rehfuess EA, Welch V, McKenzie JE, Burford B, Petkovic J, Anderson LM, Harris J, Noyes J. Assessing the complexity of interventions within systematic reviews: development, content and use of a new tool (iCAT_SR). BMC Medical Research Methodology 2017; 17 : 76.

Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, Bohren MA, Tuncalp O, Colvin CJ, Garside R, Carlsen B, Langlois EV, Noyes J. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implementation Science 2018; 13 : 2.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 2009; 339 : b2700.

Moore G, Audrey S, Barker M, Bond L, Bonell C, Harderman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015; 350 : h1258.

Munabi-Babigumira S, Glenton C, Lewin S, Fretheim A, Nabudere H. Factors that influence the provision of intrapartum and postnatal care by skilled birth attendants in low- and middle-income countries: a qualitative evidence synthesis. Cochrane Database of Systematic Reviews 2017; 11 : CD011558.

Munthe-Kaas H, Glenton C, Booth A, Noyes J, Lewin S. Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool. BMC Medical Research Methodology 2019; 19 : 113.

National Institute for Health Care Excellence. NICE Process and Methods Guides. Methods for the Development of NICE Public Health Guidance . London: National Institute for Health and Care Excellence (NICE); 2012.

Newton BJ, Rothlingova Z, Gutteridge R, LeMarchand K, Raphael JH. No room for reflexivity? Critical reflections following a systematic review of qualitative research. Journal of Health Psychology 2012; 17 : 866-885.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies . Newbury Park: Sage Publications, Inc; 1988.

Noyes J, Hendry M, Booth A, Chandler J, Lewin S, Glenton C, Garside R. Current use was established and Cochrane guidance on selection of social theories for systematic reviews of complex interventions was developed. Journal of Clinical Epidemiology 2016a; 75 : 78-92.

Noyes J, Hendry M, Lewin S, Glenton C, Chandler J, Rashidian A. Qualitative "trial-sibling" studies and "unrelated" qualitative studies contributed to complex intervention reviews. Journal of Clinical Epidemiology 2016b; 74 : 133-143.

Noyes J, Booth A, Flemming K, Garside R, Harden A, Lewin S, Pantoja T, Hannes K, Cargo M, Thomas J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings. Journal of Clinical Epidemiology 2018a; 97 : 49-58.

Noyes J, Booth A, Cargo M, Flemming K, Garside R, Hannes K, Harden A, Harris J, Lewin S, Pantoja T, Thomas J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 1: introduction. Journal of Clinical Epidemiology 2018b; 97 : 35-38.

Noyes J, Booth A, Moore G, Flemming K, Tuncalp O, Shakibazadeh E. Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods. BMJ Global Health 2019; 4 (Suppl 1) : e000893.

O'Neill J, Tabish H, Welch V, Petticrew M, Pottie K, Clarke M, Evans T, Pardo Pardo J, Waters E, White H, Tugwell P. Applying an equity lens to interventions: using PROGRESS ensures consideration of socially stratifying factors to illuminate inequities in health. Journal of Clinical Epidemiology 2014; 67 : 56-64.

Oliver S, Rees R, Clarke-Jones L, Milne R, Oakley A, Gabbay J, Stein K, Buchanan P, Gyte G. A multidimensional conceptual framework for analysing public involvement in health services research. Health Expectations 2008; 11 : 72-84.

Petticrew M, Knai C, Thomas J, Rehfuess E, Noyes J, Gerhardus A, Grimshaw J, Rutter H. Implications of a complexity perspective for systematic reviews and guideline development in health decision making. BMJ Global Health 2019; 4 (Suppl 1) : e000899.

Rees R, Oliver K, Woodman J, Thomas J. Children's views about obesity, body size, shape and weight. A systematic review. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2009.

Rehfuess EA, Booth A, Brereton L, Burns J, Gerhardus A, Mozygemba K, Oortwijn W, Pfadenhauer LM, Tummers M, van der Wilt GJ, Rohwer A. Towards a taxonomy of logic models in systematic reviews and health technology assessments: A priori, staged, and iterative approaches. Research Synthesis Methods 2018; 9 : 13-24.

Robbins SCC, Ward K, Skinner SR. School-based vaccination: a systematic review of process evaluations. Vaccine 2011; 29 : 9588-9599.

Rogers M, Bethel A, Abbott R. Locating qualitative studies in dementia on MEDLINE, EMBASE, CINAHL, and PsycINFO: a comparison of search strategies. Research Synthesis Methods 2018; 9 : 579-586.

Sandelowski M, Barroso J. Finding the findings in qualitative studies. Journal of Nursing Scholarship 2002; 34 : 213-219.

Selva A, Sola I, Zhang Y, Pardo-Hernandez H, Haynes RB, Martinez Garcia L, Navarro T, Schünemann H, Alonso-Coello P. Development and use of a content search strategy for retrieving studies on patients' views and preferences. Health and Quality of Life Outcomes 2017; 15 : 126.

Shepherd J, Kavanagh J, Picot J, Cooper K, Harden A, Barnett-Page E, Jones J, Clegg A, Hartwell D, Frampton GK, Price A. The effectiveness and cost-effectiveness of behavioural interventions for the prevention of sexually transmitted infections in young people aged 13-19: a systematic review and economic evaluation. Health Technology Assessment 2010; 14 : 1-206, iii-iv.

Squires JE, Valentine JC, Grimshaw JM. Systematic reviews of complex interventions: framing the review question. Journal of Clinical Epidemiology 2013; 66 : 1215-1222.

Stansfield C, Kavanagh J, Rees R, Gomersall A, Thomas J. The selection of search sources influences the findings of a systematic review of people's views: a case study in public health. BMC Medical Research Methodology 2012; 12 : 55.

SURE (Supporting the Use of Research Evidence) Collaboration. SURE Guides for Preparing and Using Evidence-based Policy Briefs: 5 Identifying and Addressing Barriers to Implementing the Policy Options. Version 2.1, updated November 2011.  https://epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/SURE-Guides-v2.1/Collectedfiles/sure_guides.html

Suri H. Purposeful sampling in qualitative research synthesis. Qualitative Research Journal 2011; 11 : 63-75.

Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J. Integrating qualitative research with trials in systematic reviews. BMJ 2004; 328 : 1010-1012.

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology 2008; 8 : 45.

Thomas J, Brunton J, Graziosi S. EPPI-Reviewer 4.0: software for research synthesis [Software]. EPPI-Centre Software. Social Science Research Unit, Institute of Education, University of London UK; 2010. https://eppi.ioe.ac.uk/CMS/Default.aspx?alias=eppi.ioe.ac.uk/cms/er4& .

Thomas J, O'Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Systematic Reviews 2014; 3 : 67.

Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Medical Research Methodology 2012; 12 : 181.

van Grootel L, van Wesel F, O'Mara-Eves A, Thomas J, Hox J, Boeije H. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: broadening the matrix approach. Research Synthesis Methods 2017; 8 : 303-311.

Verhage A, Boels D. Critical appraisal of mixed methods research studies in a systematic scoping review on plural policing: assessing the impact of excluding inadequately reported studies by means of a sensitivity analysis. Quality & Quantity 2017; 51 : 1449-1468.

Walker LO, Avant KC. Strategies for theory construction in nursing . Upper Saddle River (NJ): Pearson Prentice Hall; 2005.

Criteria for Good Qualitative Research: A Comprehensive Review

  • Regular Article
  • Open access
  • Published: 18 September 2021
  • Volume 31, pages 679–689 (2022)

  • Drishti Yadav, ORCID: orcid.org/0000-0002-2974-0323

This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features in qualitative research that can permit the readers to pass judgment on its quality and to commend it as good research when objectively and adequately utilized. Overall, this review underlines the crux of qualitative research and accentuates the necessity to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the aftereffect of socio-institutional procedures and existing paradigmatic conducts. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor anticipated. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with applicable norms and decisive factors to evaluate qualitative research from within its theoretical and methodological framework of origin.

Introduction

“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010, p. 837)

What counts as good qualitative research is highly debatable. Qualitative research encompasses numerous methods, established on diverse philosophical perspectives. Bryman et al. (2008, p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well‐known and widely agreed, this is not the case for qualitative research.” Hence, the question “how to evaluate the quality of qualitative research” has been continuously debated. These debates on the assessment of qualitative research have taken place across many areas of science and technology. Examples include various areas of psychology: general psychology (Madill et al., 2000), counseling psychology (Morrow, 2005), and clinical psychology (Barker & Pistrang, 2005); and other disciplines of the social sciences: social policy (Bryman et al., 2008), health research (Sparkes, 2001), business and management research (Johnson et al., 2006), information systems (Klein & Myers, 1999), and environmental studies (Reid & Gough, 2000). In the literature, these debates are animated by the impression that the blanket application of criteria for good qualitative research developed around the positivist paradigm is improper. Such debates rest on the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000; Schwandt, 1996). This methodological diversity has led to the formulation of different sets of criteria applicable to qualitative research.

Among qualitative researchers, the dilemma of governing the measures used to assess the quality of research is not a new phenomenon, especially when the virtuous triad of objectivity, reliability, and validity (Spencer et al., 2004) is not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008; Lather, 2004). Indeed, Howe (2004) claims that the prevailing paradigm in educational research is scientifically based experimental research. Hypotheses and conjectures about the preeminence of quantitative research can weaken the worth and usefulness of qualitative research by neglecting the importance of matching the research paradigm, the epistemological stance of the researcher, and the choice of methodology to the purpose at hand. Researchers have been reprimanded concerning this in “paradigmatic controversies, contradictions, and emerging confluences” (Lincoln & Guba, 2000).

In general, qualitative research comes from a very different paradigmatic stance and intrinsically demands distinctive criteria both for evaluating good research and for the varieties of research contribution that can be made. This review presents a series of evaluative criteria for qualitative researchers, arguing that the choice of criteria needs to be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). It aims to assist researchers in identifying some of the indispensable features, or markers, of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research from diverse paradigmatic stances. Unlike existing reviews, this review also suggests some critical directions for improving the quality of qualitative research across different epistemological and ontological perspectives. It is further intended to provide guidelines for accelerating future developments and dialogues among qualitative researchers in the context of assessing qualitative research.

The rest of this review article is structured as follows. Sect. Methods describes the method followed for performing this review. Sect. Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies, followed by a summary of strategies to improve the quality of qualitative research in Sect. Improving Quality: Strategies. Sect. How to Assess the Quality of the Research Findings? provides details on how to assess the quality of research findings, after which some quality checklists (as tools to evaluate quality) are discussed in Sect. Quality Checklists: Tools for Assessing the Quality. The review ends with concluding remarks, together with some prospects for enhancing the quality and usefulness of qualitative research in the social and techno-scientific research community, presented in Sect. Conclusions, Future Directions and Outlook.

Methods

For this review, a comprehensive literature search was performed across several databases using generic search terms such as "qualitative research" and "criteria". The following databases were chosen for the literature search based on the high number of results: IEEE Xplore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using the Boolean connectives OR/AND) were adopted: qualitative research, criteria, quality, assessment, and validity. Synonyms for these keywords were collected and arranged in a logical structure (see Table 1). All publications in journals and conference proceedings from 1950 to 2021 were considered for the search. Articles extracted from the references of the papers identified in the electronic search were also included. Because a large number of publications on qualitative research were retrieved during the initial screening, an inclusion criterion focusing on criteria for good qualitative research was incorporated into the search string.
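
A Boolean search string of this kind can be assembled mechanically from keyword groups, as the short Python sketch below illustrates; the synonym lists here are illustrative placeholders, not the actual entries of Table 1.

```python
# Illustrative keyword groups: synonyms are OR-ed within a group,
# and the groups are AND-ed together, mirroring the strategy above.
keyword_groups = [
    ["qualitative research", "qualitative study", "qualitative method"],
    ["criteria", "standards", "guidelines"],
    ["quality", "rigor", "trustworthiness"],
]

def build_query(groups):
    """Join synonyms with OR inside parentheses, then AND the clauses."""
    clauses = ["(" + " OR ".join(f'"{term}"' for term in group) + ")"
               for group in groups]
    return " AND ".join(clauses)

query = build_query(keyword_groups)
```

Keeping the groups as data rather than hand-writing the string makes it easy to rerun the same strategy across databases with slightly different syntax.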

From the selected databases, the search retrieved a total of 765 publications. Duplicate records were then removed. After that, the remaining 426 publications were screened for relevance based on title and abstract, using the inclusion and exclusion criteria shown in Table 2: publications focusing on evaluation criteria for good qualitative research were included, whereas works that delivered only theoretical concepts on qualitative research were excluded. Based on this screening and eligibility assessment, 45 research articles that offered explicit criteria for evaluating the quality of qualitative research were identified as relevant to this review.
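
The reported counts can be reconciled arithmetically; note that the duplicate and exclusion counts below are derived from the stated totals rather than reported directly in the text, and the exclusion figure lumps together title/abstract screening and eligibility assessment.

```python
# Screening flow, derived from the counts reported above.
retrieved = 765             # total records from all databases
after_deduplication = 426   # records remaining once duplicates were removed
included = 45               # articles meeting the eligibility criteria

duplicates_removed = retrieved - after_deduplication
excluded_after_dedup = after_deduplication - included
```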

Figure 1 illustrates the complete review process as a PRISMA flow diagram. PRISMA (“preferred reporting items for systematic reviews and meta-analyses”) is employed in systematic reviews to improve the quality of reporting.

figure 1

PRISMA flow diagram illustrating the search and inclusion process. N represents the number of records

Criteria for Evaluating Qualitative Studies

Fundamental Criteria: General Research Quality

Various researchers have put forward criteria for evaluating qualitative research, summarized in Table 3. The criteria outlined in Table 4 likewise capture the various approaches to evaluating and assessing the quality of qualitative work. The entries in Table 4 are based on Tracy’s “Eight big‐tent criteria for excellent qualitative research” (Tracy, 2010). Tracy argues that high-quality qualitative work should formulate criteria focusing on the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and on the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles for assessing the quality of a qualitative study (Mays & Pope, 2020). Nassaji (2020) argues that good qualitative research should be robust, well informed, and thoroughly documented.

Qualitative Research: Interpretive Paradigms

All qualitative researchers follow highly abstract principles that bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net that encompasses the researcher’s epistemological, ontological, and methodological premises is referred to as a paradigm, or an interpretive structure: a “basic set of beliefs that guides action” (Guba, 1990). Four major interpretive paradigms structure qualitative research: positivist and postpositivist; constructivist-interpretive; critical (Marxist, emancipatory); and feminist-poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, quantitative conceptualizations of reliability and validity have proven incompatible with the evaluation of qualitative research (Horsburgh, 2003). In addition, a series of questions has been put forward in the literature to assist a reviewer (proficient in qualitative methods) in the meticulous assessment and endorsement of qualitative research (Morse, 2003). Hammersley (2007) also suggests that guiding principles for qualitative research are advantageous, but that methodological pluralism should not simply be assumed to hold for all qualitative approaches. Seale (1999) likewise points out the significance of methodological cognizance in research studies.

Table 5 reflects that criteria for assessing the quality of qualitative research are the product of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, researchers must be reflexive about the criteria they use in the various roles they play within their research community.

Improving Quality: Strategies

Another critical question is: how can qualitative researchers ensure that the abovementioned quality criteria are met? Lincoln and Guba (1986) delineated several strategies to strengthen each criterion of trustworthiness, and other researchers (Merriam & Tisdell, 2016; Shenton, 2004) have presented similar strategies. A brief description of these strategies is given in Table 6.

It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021). In general, the guiding principle pertaining to generalizability concerns inducing and comprehending knowledge so as to synthesize the interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to ascertain generalizability in qualitative research.

Figure 2 reflects the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and the application of results to future thinking, study, and practice (Johnson et al., 2020). The synergy and interrelationship of these components underscores their role at different stages of a qualitative research study.

Figure 2. Essential elements of a conceptual framework

In a nutshell, to assess the rationale of a study, its conceptual framework, and its research question(s), quality criteria must take account of the following: a lucid context for the problem statement in the introduction; well-articulated research problems and questions; a precise conceptual framework; a distinct research purpose; and clear presentation and investigation of the paradigms. Attending to these criteria would enhance the quality of qualitative research.

How to Assess the Quality of the Research Findings?

The inclusion of quotes or similar research data enhances confirmability in the write-up of the findings. Expressions such as "80% of all respondents agreed that" or "only one of the interviewees mentioned that" may also be used to quantify qualitative findings (Stenfors et al., 2020). On the other hand, a persuasive argument for why such quantification may not strengthen the research has also been made (Monrouxe & Rees, 2020). Further, the Discussion and Conclusion sections of an article also prove robust markers of high-quality qualitative research, as elucidated in Table 8.
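Such quantified summaries amount to counting how many coded transcripts carry a given thematic code. As a minimal illustrative sketch (the participants and codes here are invented for this example):

```python
from collections import Counter

# Hypothetical coded interview data: each participant's transcript is
# represented by the set of thematic codes assigned during analysis.
coded_transcripts = {
    "P01": {"workload", "autonomy"},
    "P02": {"workload", "recognition"},
    "P03": {"workload"},
    "P04": {"autonomy"},
    "P05": {"workload", "autonomy", "recognition"},
}

# Count how many participants' transcripts carry each code.
code_counts = Counter()
for codes in coded_transcripts.values():
    code_counts.update(codes)

n = len(coded_transcripts)
for code, count in sorted(code_counts.items()):
    print(f"{count}/{n} ({100 * count / n:.0f}%) of respondents mentioned '{code}'")
```

With these invented data, four of five transcripts carry the "workload" code, which would be reported as "80% of all respondents mentioned workload".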

Quality Checklists: Tools for Assessing the Quality

Numerous checklists are available to aid the assessment of the quality of qualitative research. However, if used uncritically and without regard for the research context, these checklists may be counterproductive. Such lists and guiding principles may assist in pinpointing the markers of high-quality qualitative research; however, considering the enormous variation in authors' theoretical and philosophical contexts, I would emphasize that heavy reliance on such checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below:

The most commonly used framework is the Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007). Some journals recommend that authors follow it when submitting articles.

The Standards for Reporting Qualitative Research (SRQR) checklist was created particularly for medical education (O'Brien et al., 2014).

Tracy (2010) and the Critical Appraisal Skills Programme (CASP, 2021) also offer criteria for qualitative research that are relevant across methods and approaches.

Further, researchers have outlined additional hallmarks of high-quality qualitative research. For instance, the "Road Trip Checklist" (Epp & Otnes, 2021) provides a quick reference to specific questions addressing different elements of high-quality qualitative research.

Conclusions, Future Directions, and Outlook

This work presents a broad review of the criteria for good qualitative research. In addition, it offers an exploratory analysis of the essential elements of qualitative research that, when utilized objectively and adequately, enable readers of qualitative work to judge it as good research. Some of the essential markers that indicate high-quality qualitative research have been highlighted; I scope them narrowly to achieving rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal, one-size-fits-all guideline for evaluating the quality of qualitative research does not exist; in other words, there is no set of guidelines common to all qualitative researchers. At the same time, it reinforces that each qualitative approach should be treated uniquely on account of its distinctive features and the different epistemological and disciplinary positions it reflects. Because the worth of qualitative research is sensitive to its specific context and paradigmatic stance, researchers should analyze for themselves which approaches can, and must, be tailored to suit the distinct characteristics of the phenomenon under investigation. Although this article does not claim to offer a magic bullet or a one-stop solution for dilemmas about how, why, or whether to evaluate the "goodness" of qualitative research, it provides a platform to assist researchers in improving their qualitative studies: an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to consider when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being.
By bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, this review strives to equip researchers and reviewers alike to make well-informed judgments about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of the research to its objectives, questions, and design; from its theoretical foundations to the approaches used for collecting and analyzing data and deriving inferences) generally enhances the quality of a qualitative study.

Prospects: A Road Ahead for Qualitative Research

Qualitative research is irrefutably a vibrant and evolving discipline in which different epistemological and disciplinary positions have their own characteristics and importance. Not surprisingly, owing to its evolving and varied features, no consensus on quality evaluation has been reached to date. Researchers have voiced various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021; McGinley et al., 2021). The following are some prospects and recommendations put forward towards the maturation of qualitative research and its quality evaluation:

In general, most manuscript and grant reviewers are not qualitative experts, so they are likely to prefer a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to apply the same approaches and standards to all qualitative research. Future work therefore needs to focus on educating researchers and reviewers about the criteria for evaluating qualitative research from within the appropriate theoretical and methodological context.

There is an urgent need to revisit and extend the critical assessment of some well-known and widely accepted tools (including checklists such as COREQ and SRQR) to interrogate their applicability to different aspects of qualitative research, along with their epistemological ramifications.

Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.

Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.

It is pivotal to highlight the expressions and considerations of qualitative researchers and to bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political arenas.

Frequent debates on the use of evaluative criteria are required to address some as-yet-unresolved issues (including the applicability of a single set of criteria in multi-disciplinary contexts). Such debates would not only benefit qualitative researchers themselves, but primarily assist in augmenting the well-being and vitality of the entire discipline.

To conclude, I speculate that these criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialogue and debate about the criteria for excellent qualitative research and the underpinnings of the discipline more broadly, and thereby help improve the quality of qualitative studies. Further, I anticipate that this review will assist researchers in reflecting on the quality of their own research, substantiate their research designs, and help reviewers assess qualitative research for journals. On a final note, I pinpoint the need to formulate a framework (encompassing the prerequisites of a qualitative study) through the cohesive efforts of qualitative researchers from different disciplines and with different theoretic-paradigmatic origins. I believe that tailoring such a framework of guiding principles would pave the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open science debate. Dialogue on this issue across different approaches is crucial for the future prospects of socio-techno-educational research.

Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.


Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.

Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.

CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.

Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.


Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.

Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445

Guba, E. G. (Ed.). (1990). The paradigm dialog. Sage Publications.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.

Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.

Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.

Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.

Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.

Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.

Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.

Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15

McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.

Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation. San Francisco, US.

Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26 (1), 87–97.

Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54 (3), 186–187.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52 (2), 250.

Morse, J. M. (2003). A review committee’s guide for evaluating qualitative proposals. Qualitative Health Research, 13 (6), 833–851.

Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24 (4), 427–431.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89 (9), 1245–1251.

O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19 , 1609406919899220.

Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6 (1), 59–91.

Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International . https://doi.org/10.1080/13678868.2010.501959

Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43 (1), 9–25.

Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2 (1), 58–72.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5 (4), 465–478.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 (2), 63–75.

Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11 (4), 538–552.

Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence.

Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17 (6), 596–599.

Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33 (2), 163–179.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19 (6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16 (10), 837–851.


Open access funding provided by TU Wien (TUW).

Author information

Authors and Affiliations

Faculty of Informatics, Technische Universität Wien, 1040, Vienna, Austria

Drishti Yadav


Corresponding author

Correspondence to Drishti Yadav.

Ethics declarations

Conflict of Interest

The author declares no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res 31 , 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0


Accepted : 28 August 2021

Published : 18 September 2021

Issue Date : December 2022



Keywords: Qualitative research, Evaluative criteria


Qualitative research: its value and applicability

BJPsych Bulletin (The Psychiatrist), Volume 37, Issue 6

Published online by Cambridge University Press: 02 January 2018

Qualitative research has a rich tradition in the study of human social behaviour and cultures. Its general aim is to develop concepts which help us to understand social phenomena in, wherever possible, natural rather than experimental settings, to gain an understanding of the experiences, perceptions and/or behaviours of individuals, and the meanings attached to them. The effective application of qualitative methods to other disciplines, including clinical, health service and education research, has a rapidly expanding and robust evidence base. Qualitative approaches have particular potential in psychiatry research, singularly and in combination with quantitative methods. This article outlines the nature and potential application of qualitative research as well as attempting to counter a number of misconceptions.

Qualitative research has a rich tradition in the social sciences. Since the late 19th century, researchers interested in studying the social behaviour and cultures of humankind have perceived limitations in trying to explain the phenomena they encounter in purely quantifiable, measurable terms. Anthropology, in its social and cultural forms, was one of the foremost disciplines in developing what would later be termed a qualitative approach, founded as it was on ethnographic studies which sought an understanding of the culture of people from other societies, often hitherto unknown and far removed in geography (Bernard). Early researchers would spend extended periods of time living in societies, observing, noting and photographing the minutiae of daily life, with the most committed often learning the language of the peoples they observed, in the hope of gaining greater acceptance by them and a more detailed understanding of the cultural norms at play. All academic disciplines concerned with human and social behaviour, including anthropology, sociology and psychology, now make extensive use of qualitative research methods whose systematic application was first developed by these colonial-era social scientists.

Their methods, involving observation, participation and discussion of the individuals and groups being studied, as well as reading related textual and visual media and artefacts, form the bedrock of all qualitative social scientific inquiry. The general aim of qualitative research is thus to develop concepts which help us to understand social phenomena in, wherever possible, natural rather than experimental settings, to gain an understanding of the experiences, perceptions and/or behaviours of those studied, and the meanings attached to them (Bryman). Researchers interested in finding out why people behave the way they do, how people are affected by events, how attitudes and opinions are formed, and how and why cultures and practices have developed in the way they have, might well consider qualitative methods to answer their questions.

It is fair to say that clinical and health-related research is still dominated by quantitative methods, of which the randomised controlled trial, focused on hypothesis-testing through experiment controlled by randomisation, is perhaps the quintessential method. Qualitative approaches may seem obscure to the uninitiated when directly compared with the experimental, quantitative methods used in clinical research. There is increasing recognition among researchers in these fields, however, that qualitative methods such as observation, in-depth interviews, focus groups, consensus methods, case studies and the interpretation of texts can be more effective than quantitative approaches in exploring complex phenomena and as such are valuable additions to the methodological armoury available to them (Denzin & Lincoln).

What questions are best answered using qualitative research?

In considering what kind of research questions are best answered using a qualitative approach, it is important to remember that, first and foremost, unlike quantitative research, inquiry conducted in the qualitative tradition seeks to answer the question ‘What?’ as opposed to ‘How often?’. Qualitative methods are designed to reveal what is going on by describing and interpreting phenomena; they do not attempt to measure how often an event or association occurs. Research conducted using qualitative methods is normally done with an intent to preserve the inherent complexities of human behaviour as opposed to assuming a reductive view of the subject in order to count and measure the occurrence of phenomena. Qualitative research normally takes an inductive approach, moving from observation to hypothesis rather than hypothesis-testing or deduction, although the latter is perfectly possible.

When conducting research in this tradition, the researcher should, if possible, avoid separating the stages of study design, data collection and analysis, but instead weave backwards and forwards between the raw data and the process of conceptualisation, thereby making sense of the data throughout the period of data collection. Although there are inevitable tensions among methodologists concerned with qualitative practice, there is broad consensus that a priori categories and concepts reflecting a researcher's own preconceptions should not be imposed on the process of data collection and analysis. The emphasis should be on capturing and interpreting research participants' true perceptions and/or behaviours.

Using combined approaches

The polarity between qualitative and quantitative research has been largely assuaged, to the benefit of all disciplines, which now recognise the value, and compatibility, of both approaches. Indeed, there can be particular value in using quantitative methods in combination with qualitative methods (Barbour). In the exploratory stages of a research project, qualitative methodology can be used to clarify or refine the research question, to aid conceptualisation and to generate a hypothesis. It can also help to identify the correct variables to be measured, as researchers have been known to measure before they fully understand the underlying issues pertaining to a study and, as a consequence, may not always target the most appropriate factors. Qualitative work can be valuable in the interpretation, qualification or illumination of quantitative research findings. This is particularly helpful when focusing on anomalous results, as these test the main hypothesis formulated. Qualitative methods can also be used in combination with quantitative methods to triangulate findings and support the validation process, for example where three or more methods are used and the results compared for similarity (e.g. a survey, interviews and a period of observation in situ).

Countering some misconceptions

‘There is little value in qualitative research findings because we cannot generalise from them’

Generalisability refers to the extent to which an account can be applied to people, times and settings other than those actually studied. A common criticism of qualitative research is that the results of a study are rarely, if ever, generalisable to a larger population because the sample groups are small and the participants are not chosen randomly. Such criticism fails to recognise the distinctiveness of qualitative research where sampling is concerned. In quantitative research, the intent is to secure a large random sample that is representative of the general population, with the purpose of eliminating individual variations, focusing on generalisations and thereby allowing for statistical inference of results that are applicable across an entire population. In qualitative research, by contrast, the aim is to begin to understand similar situations or people, rather than to be representative of the target population. Qualitative research is rarely based on random samples, so the kinds of inference to wider populations made on the basis of surveys cannot be made in qualitative analysis.

Qualitative researchers utilise purposive sampling, whereby research participants are selected deliberately to test a particular theoretical premise. The purpose of sampling here is not to identify a random subgroup of the general population from which statistically significant results can be extrapolated, but rather to identify, in a systematic way, individuals who possess relevant characteristics for the question being considered (Strauss & Corbin). The researchers must instead ensure that any reference to people and settings beyond those in the study is justified, which is normally achieved by defining, in detail, the type of settings and people to whom the explanation or theory applies, based on the identification of similar settings and people in the study. The intent is to permit a detailed examination of the phenomenon, resulting in a text-rich interpretation that can deepen our understanding and produce a plausible explanation of the phenomenon under study. The results are not intended to be statistically generalisable, although any theory they generate might well be.

‘Qualitative research cannot really claim reliability or validity’

In quantitative research, reliability is the extent to which different observers, or the same observers on different occasions, make the same observations or collect the same data about the same object of study. The changing nature of the social phenomena scrutinised by qualitative researchers inevitably makes this kind of reliability problematic in their work. A number of alternative concepts to reliability have, however, been developed by qualitative methodologists, known collectively as forms of trustworthiness (Guba).

One way to demonstrate trustworthiness is to present detailed evidence in the form of quotations from interviews and field notes, along with thick textual descriptions of episodes, events and settings. To be trustworthy, qualitative analysis should also be auditable, making it possible to retrace the steps leading to a certain interpretation or theory to check that no alternatives were left unexamined and that no researcher biases had any avoidable influence on the results. Usually, this involves the recording of information about who did what with the data and in what order so that the origin of interpretations can be retraced.

In general, within the research traditions of the natural sciences, findings are validated by their repeated replication, and if a second investigator cannot replicate the findings when they repeat the experiment, the original results are questioned. If no one else can replicate the original results, they are rejected as fatally flawed and therefore invalid. Natural scientists have developed a broad spectrum of procedures and study designs to ensure that experiments are dependable and that replication is possible. In the social sciences, particularly when using qualitative research methods, replication is rarely possible given that, when observed or questioned again, respondents will almost never say or do precisely the same things. Whether results have been successfully replicated is always a matter of interpretation. There are, however, procedures that, if followed, can significantly reduce the possibility of producing analyses that are partial or biased (Altheide & Johnson).

Triangulation is one way of doing this. It means combining multiple views, approaches or methods in an investigation to obtain a more accurate interpretation of the phenomena, creating an analysis of greater depth and richness. Because analysing qualitative data normally involves some form of coding, in which data are broken down into units of analysis, constant comparison can also be used: interpretations and codes are checked for consistency and accuracy by repeatedly comparing each one with others of a similar sort, and with those applied in other cases and settings. This is in effect a form of interrater reliability. Multiple researchers or teams take part in the coding process, making it possible to compare how they have coded the same passages, identify areas of agreement and disagreement, and reach consensus about a code's definition, improving consistency and rigour. It is also good practice in qualitative analysis to look constantly for outliers: results that are out of line with your main findings, or that directly contradict what your explanations would predict. Re-examining the data to account for an atypical finding produces a modified and more complex theory and explanation.
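
The interrater check described above can be made concrete. The following minimal Python sketch (the thematic codes and passages are invented for illustration, and the choice of Cohen's kappa as the agreement statistic is an assumption, not part of the original text) computes chance-corrected agreement between two coders who have labelled the same six interview passages:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' labels for the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of passages given the same code by both coders.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Two coders apply thematic codes to the same six interview passages.
coder_a = ["stigma", "access", "stigma", "cost", "stigma", "access"]
coder_b = ["stigma", "access", "cost",   "cost", "stigma", "access"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # → 0.75
```

A kappa near 1 indicates strong agreement; passages where the coders diverge are natural candidates for the consensus discussion that refines a code's definition.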

Qualitative research has been established for many decades in the social sciences and encompasses a valuable set of methodological tools for data collection, analysis and interpretation. The effective application of these tools to other disciplines, including clinical, health service and education research, has a rapidly expanding and robust evidence base. The use of qualitative approaches to research in psychiatry has particular potential, both on their own and in combination with quantitative methods (Crabb and Chur-Hansen, reference 8). When devising research questions in the specialty, careful thought should always be given to the most appropriate methodology, and consideration given to the great depth and richness of empirical evidence which a robust qualitative approach is able to provide.


Steven J. Agius, Volume 37, Issue 6. DOI: https://doi.org/10.1192/pb.bp.113.042770


  • USC Libraries
  • Research Guides

Organizing Your Social Sciences Research Paper

  • Qualitative Methods

The word qualitative implies an emphasis on the qualities of entities and on processes and meanings that are not experimentally examined or measured [if measured at all] in terms of quantity, amount, intensity, or frequency. Qualitative researchers stress the socially constructed nature of reality, the intimate relationship between the researcher and what is studied, and the situational constraints that shape inquiry. Such researchers emphasize the value-laden nature of inquiry. They seek answers to questions that stress how social experience is created and given meaning. In contrast, quantitative studies emphasize the measurement and analysis of causal relationships between variables, not processes. Qualitative forms of inquiry are considered by many social and behavioral scientists to be as much a perspective on how to approach investigating a research problem as it is a method.

Denzin, Norman K. and Yvonna S. Lincoln. “Introduction: The Discipline and Practice of Qualitative Research.” In The Sage Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. 3rd edition. (Thousand Oaks, CA: Sage, 2005), p. 10.

Characteristics of Qualitative Research

Below are the three key elements that define a qualitative research study and the applied forms each take in the investigation of a research problem.

  • Naturalistic -- refers to studying real-world situations as they unfold naturally; non-manipulative and non-controlling; the researcher is open to whatever emerges [i.e., there is a lack of predetermined constraints on findings].
  • Emergent -- acceptance of adapting inquiry as understanding deepens and/or situations change; the researcher avoids rigid designs that eliminate responding to opportunities to pursue new paths of discovery as they emerge.
  • Purposeful -- cases for study [e.g., people, organizations, communities, cultures, events, critical incidences] are selected because they are “information rich” and illuminative. That is, they offer useful manifestations of the phenomenon of interest; sampling is aimed at insight about the phenomenon, not empirical generalization derived from a sample and applied to a population.

The Collection of Data

  • Data -- observations yield a detailed, "thick description" [in-depth understanding]; interviews capture direct quotations about people’s personal perspectives and lived experiences; often derived from carefully conducted case studies and review of material culture.
  • Personal experience and engagement -- researcher has direct contact with and gets close to the people, situation, and phenomenon under investigation; the researcher’s personal experiences and insights are an important part of the inquiry and critical to understanding the phenomenon.
  • Empathic neutrality -- an empathic stance in working with study respondents seeks vicarious understanding without judgment [neutrality] by showing openness, sensitivity, respect, awareness, and responsiveness; in observation, it means being fully present [mindfulness].
  • Dynamic systems -- there is attention to process; assumes change is ongoing, whether the focus is on an individual, an organization, a community, or an entire culture, therefore, the researcher is mindful of and attentive to system and situational dynamics.

The Analysis

  • Unique case orientation -- assumes that each case is special and unique; the first level of analysis is being true to, respecting, and capturing the details of the individual cases being studied; cross-case analysis follows from and depends upon the quality of individual case studies.
  • Inductive analysis -- immersion in the details and specifics of the data to discover important patterns, themes, and inter-relationships; begins by exploring, then confirming findings, guided by analytical principles rather than rules.
  • Holistic perspective -- the whole phenomenon under study is understood as a complex system that is more than the sum of its parts; the focus is on complex interdependencies and system dynamics that cannot be reduced in any meaningful way to linear, cause and effect relationships and/or a few discrete variables.
  • Context sensitive -- places findings in a social, historical, and temporal context; researcher is careful about [even dubious of] the possibility or meaningfulness of generalizations across time and space; emphasizes careful comparative case study analysis and extrapolating patterns for possible transferability and adaptation in new settings.
  • Voice, perspective, and reflexivity -- the qualitative methodologist owns and is reflective about her or his own voice and perspective; a credible voice conveys authenticity and trustworthiness; complete objectivity being impossible and pure subjectivity undermining credibility, the researcher's focus reflects a balance between understanding and depicting the world authentically in all its complexity and of being self-analytical, politically aware, and reflexive in consciousness.

Berg, Bruce Lawrence. Qualitative Research Methods for the Social Sciences. 8th edition. Boston, MA: Allyn and Bacon, 2012; Denzin, Norman K. and Yvonna S. Lincoln. Handbook of Qualitative Research. 2nd edition. Thousand Oaks, CA: Sage, 2000; Marshall, Catherine and Gretchen B. Rossman. Designing Qualitative Research. 2nd edition. Thousand Oaks, CA: Sage Publications, 1995; Merriam, Sharan B. Qualitative Research: A Guide to Design and Implementation. San Francisco, CA: Jossey-Bass, 2009.

Basic Research Design for Qualitative Studies

Unlike positivist or experimental research that utilizes a linear and one-directional sequence of design steps, there is considerable variation in how a qualitative research study is organized. In general, qualitative researchers attempt to describe and interpret human behavior based primarily on the words of selected individuals [a.k.a., “informants” or “respondents”] and/or through the interpretation of their material culture or occupied space. A reflexive process underpins every stage of a qualitative study to ensure that researcher biases, presuppositions, and interpretations are clearly evident, thus ensuring that the reader is better able to interpret the overall validity of the research. According to Maxwell (2009), there are five components in qualitative research designs, not necessarily ordered or sequential. How they are presented depends upon the research philosophy and theoretical framework of the study, the methods chosen, and the general assumptions underpinning the study.

  • Goals -- Describe the central research problem being addressed, but avoid describing any anticipated outcomes. Questions to ask yourself: Why is your study worth doing? What issues do you want to clarify, and what practices and policies do you want it to influence? Why do you want to conduct this study, and why should the reader care about the results?
  • Conceptual Framework -- Questions to ask yourself: What do you think is going on with the issues, settings, or people you plan to study? What theories, beliefs, and prior research findings will guide or inform your research, and what literature, preliminary studies, and personal experiences will you draw upon for understanding the people or issues you are studying? In your review of the literature, report not only the results of other studies but also the methods they used. If appropriate, describe why earlier studies using quantitative methods were inadequate in addressing the research problem.
  • Research Questions -- Usually a research problem frames your qualitative study and influences your decision about what methods to use, but qualitative designs generally lack an accompanying hypothesis or set of assumptions because the findings are emergent and unpredictable. In this context, more specific research questions are generally the result of an interactive design process rather than the starting point for that process. Questions to ask yourself: What do you specifically want to learn or understand by conducting this study? What do you not know about the things you are studying that you want to learn? What questions will your research attempt to answer, and how are these questions related to one another?
  • Methods -- Structured approaches to applying a method or methods to your study help to ensure that there is comparability of data across sources and researchers; thus, they can be useful in answering questions that deal with differences between phenomena and the explanation for these differences [variance questions]. An unstructured approach allows the researcher to focus on the particular phenomena studied, facilitating an understanding of the processes that led to specific outcomes but trading generalizability and comparability for internal validity and contextual and evaluative understanding. Questions to ask yourself: What will you actually do in conducting this study? What approaches and techniques will you use to collect and analyze your data, and how do these constitute an integrated strategy?
  • Validity -- In contrast to quantitative studies, where the goal is to design, in advance, “controls” such as formal comparisons, sampling strategies, or statistical manipulations to address anticipated and unanticipated threats to validity, qualitative researchers must attempt to rule out most threats to validity after the research has begun, relying on evidence collected during the research process itself to argue that any alternative explanations for a phenomenon are implausible. Questions to ask yourself: How might your results and conclusions be wrong? What are the plausible alternative interpretations and validity threats, and how will you deal with them? How can the data that you have, or that you could potentially collect, support or challenge your ideas about what’s going on? Why should we believe your results?
  • Conclusion -- Although Maxwell does not mention a conclusion as one of the components of a qualitative research design, you should formally conclude your study. Briefly reiterate the goals of your study and the ways in which your research addressed them. Discuss the benefits of your study and how stakeholders can use your results. Also, note the limitations of your study and, if appropriate, place them in the context of areas in need of further research.

Chenail, Ronald J. Introduction to Qualitative Research Design. Nova Southeastern University; Heath, A. W. The Proposal in Qualitative Research. The Qualitative Report 3 (March 1997); Marshall, Catherine and Gretchen B. Rossman. Designing Qualitative Research . 3rd edition. Thousand Oaks, CA: Sage, 1999; Maxwell, Joseph A. "Designing a Qualitative Study." In The SAGE Handbook of Applied Social Research Methods . Leonard Bickman and Debra J. Rog, eds. 2nd ed. (Thousand Oaks, CA: Sage, 2009), p. 214-253; Qualitative Research Methods. Writing@CSU. Colorado State University; Yin, Robert K. Qualitative Research from Start to Finish . 2nd edition. New York: Guilford, 2015.

Strengths of Using Qualitative Methods

The advantage of using qualitative methods is that they generate rich, detailed data that leave the participants' perspectives intact and provide multiple contexts for understanding the phenomenon under study. In this way, qualitative research can be used to vividly demonstrate phenomena or to conduct cross-case comparisons and analysis of individuals or groups.

Among the specific strengths of using qualitative methods to study social science research problems is the ability to:

  • Obtain a more realistic view of the lived world that cannot be captured in numerical data and statistical analysis;
  • Provide the researcher with the perspective of the participants of the study through immersion in a culture or situation and as a result of direct interaction with them;
  • Allow the researcher to describe existing phenomena and current situations;
  • Develop flexible ways to perform data collection, subsequent analysis, and interpretation of collected information;
  • Yield results that can be helpful in pioneering new ways of understanding;
  • Respond to changes that occur while conducting the study [e.g., extended fieldwork or observation] and offer the flexibility to shift the focus of the research as a result;
  • Provide a holistic view of the phenomena under investigation;
  • Respond to local situations, conditions, and needs of participants;
  • Interact with the research subjects in their own language and on their own terms; and,
  • Create a descriptive capability based on primary and unstructured data.

Anderson, Claire. “Presenting and Evaluating Qualitative Research.” American Journal of Pharmaceutical Education 74 (2010): 1-7; Denzin, Norman K. and Yvonna S. Lincoln. Handbook of Qualitative Research. 2nd edition. Thousand Oaks, CA: Sage, 2000; Merriam, Sharan B. Qualitative Research: A Guide to Design and Implementation. San Francisco, CA: Jossey-Bass, 2009.

Limitations of Using Qualitative Methods

Most of the limitations of qualitative research techniques also reflect their inherent strengths. For example, small sample sizes help you investigate research problems in a comprehensive and in-depth manner; however, they also undermine opportunities to draw useful generalizations from, or to make broad policy recommendations based upon, the findings. Additionally, as the primary instrument of investigation, qualitative researchers are often embedded in the cultures and experiences of others. That embeddedness, however, increases the opportunity for bias generated from conscious or unconscious assumptions about the study setting to enter into how data are gathered, interpreted, and reported.

Some specific limitations associated with using qualitative methods to study research problems in the social sciences include the following:

  • Drifting away from the original objectives of the study in response to the changing nature of the context under which the research is conducted;
  • Arriving at different conclusions based on the same information depending on the personal characteristics of the researcher;
  • Replication of a study is very difficult;
  • Research using human subjects increases the chance of ethical dilemmas that undermine the overall validity of the study;
  • An inability to investigate causality between different research phenomena;
  • Difficulty in explaining differences in the quality and quantity of information obtained from different respondents and arriving at different, non-consistent conclusions;
  • Data gathering and analysis is often time consuming and/or expensive;
  • Requires a high level of experience from the researcher to obtain the targeted information from the respondent;
  • May lack consistency and reliability because the researcher can employ different probing techniques and the respondent can choose to tell some particular stories and ignore others; and,
  • Generation of a significant amount of data that cannot be randomized into manageable parts for analysis.

Research Tip

Human Subject Research and Institutional Review Board Approval

Almost every socio-behavioral study requires you to submit your proposed research plan to an Institutional Review Board. The role of the Board is to evaluate your research proposal and determine whether it will be conducted ethically and in accordance with the regulations, institutional policies, and Code of Ethics set forth by the university. The purpose of the review is to protect the rights and welfare of individuals participating in your study. The review is intended to ensure equitable selection of respondents, that you have met the requirements for obtaining informed consent, that there is clear assessment and minimization of risks to participants and to the university [read: no lawsuits!], and that privacy and confidentiality are maintained throughout the research process and beyond. Go to the USC IRB website for detailed information and templates of forms you need to submit before you can proceed. If you are unsure whether your study is subject to IRB review, consult with your professor or academic advisor.

Chenail, Ronald J. Introduction to Qualitative Research Design. Nova Southeastern University; Labaree, Robert V. "Working Successfully with Your Institutional Review Board: Practical Advice for Academic Librarians." College and Research Libraries News 71 (April 2010): 190-193.

Another Research Tip

Finding Examples of How to Apply Different Types of Research Methods

SAGE Publications is a major publisher of studies about how to design and conduct research in the social and behavioral sciences. Their SAGE Research Methods Online and Cases database includes content from books, articles, encyclopedias, handbooks, and videos covering social science research design and methods, including the complete Little Green Book Series of Quantitative Applications in the Social Sciences and the Little Blue Book Series of Qualitative Research techniques. The database also includes case studies outlining the research methods used in real research projects. It is an excellent source for finding definitions of key terms; descriptions of research design and practice; techniques of data gathering, analysis, and reporting; and information about theories of research [e.g., grounded theory]. The database covers qualitative, quantitative, and mixed methods approaches to conducting research.

SAGE Research Methods Online and Cases

  • Last Updated: May 30, 2024 9:38 AM
  • URL: https://libguides.usc.edu/writingguide

How to Write Limitations of the Study (with examples)

This blog emphasizes the importance of recognizing and effectively writing about limitations in research. It discusses the types of limitations, their significance, and provides guidelines for writing about them, highlighting their role in advancing scholarly research.

Updated on August 24, 2023


No matter how well thought out, every research endeavor encounters challenges. There is simply no way to predict all possible variances throughout the process.

These uncharted boundaries and abrupt constraints are known as limitations in research. Identifying and acknowledging limitations is crucial for conducting rigorous studies. Limitations provide context and shed light on gaps in the prevailing inquiry and literature.

This article explores the importance of recognizing limitations and discusses how to write them effectively. By interpreting limitations in research and considering prevalent examples, we aim to reframe the perception from shameful mistakes to respectable revelations.

What are limitations in research?

In the clearest terms, research limitations are the practical or theoretical shortcomings of a study that are often outside of the researcher’s control. While these weaknesses limit the generalizability of a study’s conclusions, they also present a foundation for future research.

Sometimes limitations arise from tangible circumstances like time and funding constraints, or equipment and participant availability. Other times the rationale is more obscure and buried within the research design. Common types of limitations and their ramifications include:

  • Theoretical: limits the scope, depth, or applicability of a study.
  • Methodological: limits the quality, quantity, or diversity of the data.
  • Empirical: limits the representativeness, validity, or reliability of the data.
  • Analytical: limits the accuracy, completeness, or significance of the findings.
  • Ethical: limits the access, consent, or confidentiality of the data.

Regardless of how, when, or why they arise, limitations are a natural part of the research process and should never be ignored. Like all other aspects of a study, they serve a vital purpose of their own.

Why is identifying limitations important?

Whether to seek acceptance or avoid struggle, humans often instinctively hide flaws and mistakes. Merging this thought process into research by attempting to hide limitations, however, is a bad idea. It has the potential to negate the validity of outcomes and damage the reputation of scholars.

By identifying and addressing limitations throughout a project, researchers strengthen their arguments and curtail the chance of peer censure based on overlooked mistakes. Pointing out these flaws shows an understanding of variable limits and a scrupulous research process.

Showing awareness of and taking responsibility for a project’s boundaries and challenges validates the integrity and transparency of a researcher. It further demonstrates that the researchers understand the applicable literature and have thoroughly evaluated their chosen research methods.

Presenting limitations also benefits the readers by providing context for research findings. It guides them to interpret the project’s conclusions only within the scope of very specific conditions. By allowing for an appropriate generalization of the findings that is accurately confined by research boundaries and is not too broad, limitations boost a study’s credibility.

Limitations are true assets to the research process. They highlight opportunities for future research. When researchers identify the limitations of their particular approach to a study question, they enable precise transferability and improve chances for reproducibility. 

Simply stating a project’s limitations is not adequate for spurring further research, though. To spark the interest of other researchers, these acknowledgements must come with thorough explanations regarding how the limitations affected the current study and how they can potentially be overcome with amended methods.

How to write limitations

Typically, the information about a study’s limitations is situated either at the beginning of the discussion section to provide context for readers or at the conclusion of the discussion section to acknowledge the need for further research. However, it varies depending upon the target journal or publication guidelines. 

Don’t hide your limitations

It is also important to not bury a limitation in the body of the paper unless it has a unique connection to a topic in that section. If so, it needs to be reiterated with the other limitations or at the conclusion of the discussion section. Wherever it is included in the manuscript, ensure that the limitations section is prominently positioned and clearly introduced.

While maintaining transparency by disclosing limitations means taking a comprehensive approach, it is not necessary to discuss everything that could have potentially gone wrong during the research study. If the issue was never part of what the study committed to investigate in the introduction, it is unnecessary to consider it a limitation of the research. Consider the term ‘limitations’ fully and ask, “Did it significantly change or limit the possible outcomes?” Then, qualify the occurrence as either a limitation to include in the current manuscript or as an idea to note for other projects.

Writing limitations

Once the limitations are concretely identified and it is decided where they will be included in the paper, researchers are ready for the writing task. Including only what is pertinent, keeping explanations detailed but concise, and employing the following guidelines is key for crafting valuable limitations:

1) Identify and describe the limitations : Clearly introduce the limitation by classifying its form and specifying its origin. For example:

  • An unintentional bias encountered during data collection
  • An intentional use of unplanned post-hoc data analysis

2) Explain the implications : Describe how the limitation potentially influences the study’s findings and how the validity and generalizability are subsequently impacted. Provide examples and evidence to support claims of the limitations’ effects without making excuses or exaggerating their impact. Overall, be transparent and objective in presenting the limitations, without undermining the significance of the research. 

3) Provide alternative approaches for future studies : Offer specific suggestions for potential improvements or avenues for further investigation. Demonstrate a proactive approach by encouraging future research that addresses the identified gaps and, therefore, expands the knowledge base.

Whether presenting limitations as an individual section within the manuscript or as a subtopic in the discussion area, authors should use clear headings and straightforward language to facilitate readability. There is no need to complicate limitations with jargon, computations, or complex datasets.

Examples of common limitations

Limitations are generally grouped into two categories: methodology and research process.

Methodology limitations

Methodology may include limitations due to:

  • Sample size
  • Lack of available or reliable data
  • Lack of prior research studies on the topic
  • Measure used to collect the data
  • Self-reported data

[Example figure: a methodology limitation. The researcher addresses how the large sample size requires a reassessment of the measures used to collect and analyze the data.]
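
Sample-size limitations can be made concrete with a quick calculation. The following illustrative Python sketch (the sample sizes are arbitrary, and the formula is the standard normal approximation for a 95% confidence interval on a proportion, not something taken from the blog itself) shows why findings from a small sample support only cautious generalization:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from n responses.

    Uses the normal approximation; p = 0.5 gives the worst (widest) case.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (25, 100, 400):
    print(f"n={n:>3}: ±{margin_of_error(n):.1%}")
# n= 25: ±19.6%
# n=100: ±9.8%
# n=400: ±4.9%
```

Quadrupling the sample roughly halves the margin of error, which is one quantitative way to explain the impact of a sample-size limitation to readers.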

Research process limitations

Limitations during the research process may arise from:

  • Access to information
  • Longitudinal effects
  • Cultural and other biases
  • Language fluency
  • Time constraints

Research process limitations example

The author is pointing out that the model’s estimates are based on potentially biased observational studies.

Final thoughts

Successfully proving theories and touting great achievements are only two very narrow goals of scholarly research. The true passion and greatest efforts of researchers come more in the form of confronting assumptions and exploring the obscure.

In many ways, recognizing and sharing the limitations of a research study both allows for and encourages this type of discovery that continuously pushes research forward. By using limitations to provide a transparent account of the project's boundaries and to contextualize the findings, researchers pave the way for even more robust and impactful research in the future.

Charla Viera, MS



Qualitative vs. Quantitative: Key Differences in Research Types


Let's say you want to learn how a group will vote in an election. You face a classic decision of gathering qualitative vs. quantitative data.

With one method, you can ask voters open-ended questions that encourage them to share how they feel, what issues matter to them and the reasons they will vote in a specific way. With the other, you can ask closed-ended questions, giving respondents a list of options. You will then turn that information into statistics.

Neither method is more right than the other, but they serve different purposes. Learn more about the key differences between qualitative and quantitative research and how you can use them.

What Is Qualitative Research?


Qualitative research aims to explore and understand the depth, context and nuances of human experiences, behaviors and phenomena. This methodological approach emphasizes gathering rich, nonnumerical information through methods such as interviews, focus groups, observations and content analysis.

In qualitative research, the emphasis is on uncovering patterns and meanings within a specific social or cultural context. Researchers delve into the subjective aspects of human behavior, opinions and emotions.

This approach is particularly valuable for exploring complex and multifaceted issues, providing a deeper understanding of the intricacies involved.

Common qualitative research methods include open-ended interviews, where participants can express their thoughts freely, and thematic analysis, which involves identifying recurring themes in the data.
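As a rough illustration only (a keyword count is no substitute for careful interpretive coding), a first pass at spotting recurring themes in open-ended responses can be sketched in Python; the responses and the theme codebook below are invented for the example:

```python
from collections import Counter

# Hypothetical open-ended interview responses (invented for illustration).
responses = [
    "I liked the humor in the ad, it felt honest",
    "The ad was funny and did not take itself too seriously",
    "Honest pricing matters more to me than jokes",
]

# A simple keyword-to-theme codebook. Real thematic analysis is
# iterative and interpretive, not a mechanical keyword match.
codebook = {
    "humor": ["humor", "funny", "jokes"],
    "honesty": ["honest", "honesty"],
}

# Count how many responses touch each theme at least once.
theme_counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in codebook.items():
        if any(word in lowered for word in keywords):
            theme_counts[theme] += 1

print(theme_counts)
```

In practice a researcher would read each transcript, refine the codebook as new themes emerge, and only then summarize frequencies.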

Examples of How to Use Qualitative Research

The flexibility of qualitative research allows researchers to adapt their methods based on emerging insights, fostering a more organic and holistic exploration of the research topic. This is a widely used method in social sciences, psychology and market research.

Here are just a few ways you can use qualitative research.

  • To understand the people who make up a community : If you want to learn more about a community, you can talk to them or observe them to learn more about their customs, norms and values.
  • To examine people's experiences within the healthcare system : While you can certainly look at statistics to gauge if someone feels positively or negatively about their healthcare experiences, you may not gain a deep understanding of why they feel that way. For example, if a nurse went above and beyond for a patient, they might say they are content with the care they received. But if medical professional after medical professional dismissed a person over several years, they will have more negative comments.
  • To explore the effectiveness of your marketing campaign : Marketing is a field that typically collects statistical data, but it can also benefit from qualitative research. For example, if you have a successful campaign, you can interview people to learn what resonated with them and why. If you learn they liked the humor because it shows you don't take yourself too seriously, you can try to replicate that feeling in future campaigns.

Types of Qualitative Data Collection

Qualitative data captures the qualities, characteristics or attributes of a subject. It can take various forms, including:

  • Audio data : Recordings of interviews, discussions or any other auditory information. This can be useful when dealing with events from the past. Setting up a recording device also allows a researcher to stay in the moment without having to jot down notes.
  • Observational data : With this type of qualitative data analysis, you can record behavior, events or interactions.
  • Textual data : Use verbal or written information gathered through interviews, open-ended surveys or focus groups to learn more about a topic.
  • Visual data : You can learn new information through images, photographs, videos or other visual materials.
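To see how these forms might be catalogued side by side in practice, here is a minimal data-structure sketch; the field names and sample records are invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical record for one piece of qualitative evidence.
@dataclass
class QualitativeRecord:
    kind: str      # "audio", "observational", "textual", or "visual"
    source: str    # e.g., participant ID or archive reference
    content: str   # transcript text, field note, or file path
    tags: list = field(default_factory=list)  # themes assigned during coding

records = [
    QualitativeRecord("textual", "p1", "Interview transcript ...", ["trust"]),
    QualitativeRecord("audio", "p2", "session02.wav"),
]

# Filter the evidence by form, e.g., to review all textual material.
textual = [r for r in records if r.kind == "textual"]
print(len(textual))
```

Keeping the form, source, and assigned themes together makes it easier to corroborate findings across data types later.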

What Is Quantitative Research?

Quantitative research is a systematic empirical investigation that involves the collection and analysis of numerical data. This approach seeks to understand, explain or predict phenomena by gathering quantifiable information and applying statistical methods for analysis.

Unlike qualitative research, which focuses on nonnumerical, descriptive data, quantitative research data involves measurements, counts and statistical techniques to draw objective conclusions.
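For instance, a minimal Python sketch (using made-up closed-ended survey scores) shows the kind of numerical summary quantitative work produces:

```python
import statistics

# Hypothetical satisfaction ratings on a 1-5 scale (invented data).
scores = [4, 5, 3, 4, 2, 5, 4, 3]

mean = statistics.mean(scores)      # central tendency
median = statistics.median(scores)  # middle value
stdev = statistics.stdev(scores)    # spread (sample standard deviation)

print(f"mean={mean:.2f} median={median} stdev={stdev:.2f}")
```

The same pipeline scales from eight responses to thousands, which is precisely what makes quantitative summaries generalizable when the sample is representative.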

Examples of How to Use Quantitative Research

Quantitative research focuses on statistical analysis. Here are a few ways you can employ quantitative research methods.

  • Studying the employment rates of a city : Through this research you can gauge whether any patterns exist over a given time period.
  • Seeing how air pollution has affected a neighborhood : If the creation of a highway led to more air pollution in a neighborhood, you can collect data to learn about the health impacts on the area's residents. For example, you can see what percentage of people developed respiratory issues after moving to the neighborhood.

Types of Quantitative Data

Quantitative data refers to numerical information you can measure and count. Here are a few statistics you can use.

  • Heights, yards, volume and more : You can use different measurements to gain insight on different types of research, such as learning the average distance workers are willing to travel for work or figuring out the average height of a ballerina.
  • Temperature : Measure in either degrees Celsius or Fahrenheit. Or, if you're looking for the coldest place in the universe, you may measure in kelvins.
  • Sales figures : With this information, you can look at a store's performance over time, compare one company to another or learn what the average amount of sales is in a specific industry.

Qualitative vs. Quantitative Research: 3 Key Differences

Quantitative and qualitative research methods are both valid and useful ways to collect data. Here are a few ways that they differ.

  • Data collection method : Quantitative research uses standardized instruments, such as surveys, experiments or structured observations, to gather numerical data. Qualitative research uses open-ended methods like interviews, focus groups or content analysis.
  • Nature of data : Quantitative research involves numerical data that you can measure and analyze statistically, whereas qualitative research involves exploring the depth and richness of experiences through nonnumerical, descriptive data.
  • Sampling : Quantitative research involves larger sample sizes to ensure statistical validity and generalizability of findings to a population. With qualitative research, it's better to work with a smaller sample size to gain in-depth insights into specific contexts or experiences.

Benefits of Combining Qualitative and Quantitative Research

You can simultaneously study qualitative and quantitative data. This method, known as mixed methods research, offers several benefits, including:

  • A comprehensive understanding : Integration of qualitative and quantitative data provides a more comprehensive understanding of the research problem. Qualitative data helps explain the context and nuances, while quantitative data offers statistical generalizability.
  • Contextualization : Qualitative data helps contextualize quantitative findings by providing explanations into the why and how behind statistical patterns. This deeper understanding contributes to more informed interpretations of quantitative results.
  • Triangulation : Triangulation involves using multiple methods to validate or corroborate findings. Combining qualitative and quantitative data allows researchers to cross-verify results, enhancing the overall validity and reliability of the study.
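As a toy sketch of mixed methods analysis (all participant IDs, ratings, and themes invented), numeric ratings and coded interview themes can be joined per participant, so each statistic carries its qualitative context:

```python
import statistics

# Quantitative: satisfaction ratings per participant (hypothetical).
ratings = {"p1": 2, "p2": 5, "p3": 4, "p4": 1}

# Qualitative: themes coded from interviews with the same people (hypothetical).
themes = {
    "p1": ["long wait times"],
    "p2": ["friendly staff"],
    "p3": ["friendly staff", "clear billing"],
    "p4": ["long wait times", "dismissed concerns"],
}

# Triangulate: compare the themes raised by low scorers vs. high scorers.
low = [t for p, r in ratings.items() if r <= 2 for t in themes[p]]
high = [t for p, r in ratings.items() if r >= 4 for t in themes[p]]

print("mean rating:", statistics.mean(ratings.values()))
print("themes among low scorers:", sorted(set(low)))
print("themes among high scorers:", sorted(set(high)))
```

The number alone says satisfaction averaged 3 of 5; the linked themes explain why the low scorers were dissatisfied, which is the contextualization benefit described above.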

This article was created in conjunction with AI technology, then fact-checked and edited by a HowStuffWorks editor.


Case Study Research Method in Psychology

Saul Mcleod, PhD, Editor-in-Chief for Simply Psychology

Olivia Guy-Evans, MSc, Associate Editor for Simply Psychology

Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews).

The case study research method originated in clinical medicine (the case history, i.e., the patient’s personal history). In psychology, case studies are often confined to the study of a particular individual.

The information is mainly biographical and relates to events in the individual’s past (i.e., retrospective), as well as to significant events that are currently occurring in his or her everyday life.

The case study is not a research method in itself; rather, researchers select methods of data collection and analysis that will generate material suitable for case studies.

Freud (1909a, 1909b) conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Because such investigations involve diagnosing and treating real patients, the case study method should only be used by a psychologist, therapist, or psychiatrist, i.e., someone with a professional qualification.

There is an ethical issue of competence. Only someone qualified to diagnose and treat a person can conduct a formal case study relating to atypical (i.e., abnormal) behavior or atypical development.


Famous Case Studies

  • Anna O – One of the most famous case studies, documenting psychoanalyst Josef Breuer’s treatment of “Anna O” (real name Bertha Pappenheim) for hysteria in the late 1800s using early psychoanalytic theory.
  • Little Hans – A child psychoanalysis case study published by Sigmund Freud in 1909 analyzing his five-year-old patient Herbert Graf’s house phobia as related to the Oedipus complex.
  • Bruce/Brenda – Gender identity case of the boy (Bruce) whose botched circumcision led psychologist John Money to advise gender reassignment and raise him as a girl (Brenda) in the 1960s.
  • Genie Wiley – Linguistics/psychological development case of the victim of extreme isolation abuse who was studied in 1970s California for effects of early language deprivation on acquiring speech later in life.
  • Phineas Gage – One of the most famous neuropsychology case studies analyzes personality changes in railroad worker Phineas Gage after an 1848 brain injury involving a tamping iron piercing his skull.

Clinical Case Studies

  • Studying the effectiveness of psychotherapy approaches with an individual patient
  • Assessing and treating mental illnesses like depression, anxiety disorders, PTSD
  • Neuropsychological cases investigating brain injuries or disorders

Child Psychology Case Studies

  • Studying psychological development from birth through adolescence
  • Cases of learning disabilities, autism spectrum disorders, ADHD
  • Effects of trauma, abuse, deprivation on development

Types of Case Studies

  • Explanatory case studies : Used to explore causation in order to find underlying principles. Helpful for doing qualitative analysis to explain presumed causal links.
  • Exploratory case studies : Used to explore situations where an intervention being evaluated has no clear set of outcomes. It helps define questions and hypotheses for future research.
  • Descriptive case studies : Describe an intervention or phenomenon and the real-life context in which it occurred. It is helpful for illustrating certain topics within an evaluation.
  • Multiple-case studies : Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.
  • Intrinsic : Used to gain a better understanding of a particular case. Helpful for capturing the complexity of a single case.
  • Collective : Used to explore a general phenomenon using multiple case studies. Helpful for jointly studying a group of cases in order to inquire into the phenomenon.

Where Do You Find Data for a Case Study?

There are several places to find data for a case study. The key is to gather data from multiple sources to get a complete picture of the case and corroborate facts or findings through triangulation of evidence. Most of this information is likely qualitative (i.e., verbal description rather than measurement), but the psychologist might also collect numerical data.

1. Primary sources

  • Interviews – Interviewing key people related to the case to get their perspectives and insights. The interview is an extremely effective procedure for obtaining information about an individual, and it may be used to collect comments from the person’s friends, parents, employer, workmates, and others who have a good knowledge of the person, as well as to obtain facts from the person him or herself.
  • Observations – Observing behaviors, interactions, processes, etc., related to the case as they unfold in real-time.
  • Documents & Records – Reviewing private documents, diaries, public records, correspondence, meeting minutes, etc., relevant to the case.

2. Secondary sources

  • News/Media – News coverage of events related to the case study.
  • Academic articles – Journal articles, dissertations etc. that discuss the case.
  • Government reports – Official data and records related to the case context.
  • Books/films – Books, documentaries or films discussing the case.

3. Archival records

Searching historical archives, museum collections and databases to find relevant documents, visual/audio records related to the case history and context.

Public archives like newspapers, organizational records, photographic collections could all include potentially relevant pieces of information to shed light on attitudes, cultural perspectives, common practices and historical contexts related to psychology.

4. Organizational records

Organizational records offer the advantage of often having large datasets collected over time that can reveal or confirm psychological insights.

Of course, privacy and ethical concerns regarding confidential data must be navigated carefully.

However, with proper protocols, organizational records can provide invaluable context and empirical depth to qualitative case studies exploring the intersection of psychology and organizations.

  • Organizational/industrial psychology research : Organizational records like employee surveys, turnover/retention data, policies, incident reports etc. may provide insight into topics like job satisfaction, workplace culture and dynamics, leadership issues, employee behaviors etc.
  • Clinical psychology : Therapists/hospitals may grant access to anonymized medical records to study aspects like assessments, diagnoses, treatment plans etc. This could shed light on clinical practices.
  • School psychology : Studies could utilize anonymized student records like test scores, grades, disciplinary issues, and counseling referrals to study child development, learning barriers, effectiveness of support programs, and more.

How do I Write a Case Study in Psychology?

Follow specified case study guidelines provided by a journal or your psychology tutor. General components of clinical case studies include: background, symptoms, assessments, diagnosis, treatment, and outcomes. Interpreting the information means the researcher decides what to include or leave out. A good case study should always clarify which information is the factual description and which is an inference or the researcher’s opinion.

1. Introduction

  • Provide background on the case context and why it is of interest, presenting background information like demographics, relevant history, and presenting problem.
  • Compare briefly to similar published cases if applicable. Clearly state the focus/importance of the case.

2. Case Presentation

  • Describe the presenting problem in detail, including symptoms, duration, and impact on daily life.
  • Include client demographics like age and gender, information about social relationships, and mental health history.
  • Describe all physical, emotional, and/or sensory symptoms reported by the client.
  • Use patient quotes to describe the initial complaint verbatim. Follow with full-sentence summaries of relevant history details gathered, including key components that led to a working diagnosis.
  • Summarize clinical exam results, namely orthopedic/neurological tests, imaging, lab tests, etc. Note actual results rather than subjective conclusions. Provide images if clearly reproducible/anonymized.
  • Clearly state the working diagnosis or clinical impression before transitioning to management.

3. Management and Outcome

  • Indicate the total duration of care and number of treatments given over what timeframe. Use specific names/descriptions for any therapies/interventions applied.
  • Present the results of the intervention, including any quantitative or qualitative data collected.
  • For outcomes, utilize visual analog scales for pain, medication usage logs, etc., if possible. Include patient self-reports of improvement/worsening of symptoms. Note the reason for discharge/end of care.

4. Discussion

  • Analyze the case, exploring contributing factors, limitations of the study, and connections to existing research.
  • Analyze the effectiveness of the intervention, considering factors like participant adherence, limitations of the study, and potential alternative explanations for the results.
  • Identify any questions raised in the case analysis and relate insights to established theories and current research if applicable. Avoid definitive claims about physiological explanations.
  • Offer clinical implications, and suggest future research directions.

5. Additional Items

  • Thank specific assistants for writing support only. No patient acknowledgments.
  • References should directly support any key claims or quotes included.
  • Use tables/figures/images only if substantially informative. Include permissions and legends/explanatory notes.
Strengths

  • Provides detailed (rich qualitative) information.
  • Provides insight for further research.
  • Permits investigation of otherwise impractical (or unethical) situations.

Case studies allow a researcher to investigate a topic in far more detail than might be possible if they were trying to deal with a large number of research participants (nomothetic approach) with the aim of ‘averaging’.

Because of their in-depth, multi-sided approach, case studies often shed light on aspects of human thinking and behavior that would be unethical or impractical to study in other ways.

Research that only looks into the measurable aspects of human behavior is not likely to give us insights into the subjective dimension of experience, which is important to psychoanalytic and humanistic psychologists.

Case studies are often used in exploratory research. They can help us generate new ideas (that might be tested by other methods). They are an important way of illustrating theories and can help show how different aspects of a person’s life are related to each other.

The method is, therefore, important for psychologists who adopt a holistic point of view (i.e., humanistic psychologists ).

Limitations

  • Lacking scientific rigor and providing little basis for generalization of results to the wider population.
  • Researchers’ own subjective feelings may influence the case study (researcher bias).
  • Difficult to replicate.
  • Time-consuming and expensive.
  • The sheer volume of data, relative to the time and resources available, can limit the depth of analysis that is possible.

Because a case study deals with only one person/event/group, we can never be sure if the case study investigated is representative of the wider body of “similar” instances. This means the conclusions drawn from a particular case may not be transferable to other settings.

Because case studies are based on the analysis of qualitative (i.e., descriptive) data, a lot depends on the psychologist's interpretation of the information she has acquired.

This means that there is a lot of scope for observer bias, and it could be that the subjective opinions of the psychologist intrude in the assessment of what the data means.

For example, Freud has been criticized for producing case studies in which the information was sometimes distorted to fit particular behavioral theories (e.g., Little Hans ).

This is also true of Money’s interpretation of the Bruce/Brenda case study (Diamond, 1997) when he ignored evidence that went against his theory.

Breuer, J., & Freud, S. (1895).  Studies on hysteria . Standard Edition 2: London.

Curtiss, S. (1981). Genie: The case of a modern wild child .

Diamond, M., & Sigmundson, K. (1997). Sex reassignment at birth: Long-term review and clinical implications. Archives of Pediatrics & Adolescent Medicine, 151(3), 298-304.

Freud, S. (1909a). Analysis of a phobia of a five-year-old boy. In The Pelican Freud Library (1977), Vol. 8, Case Histories 1, pages 169-306.

Freud, S. (1909b). Bemerkungen über einen Fall von Zwangsneurose (Der “Rattenmann”). Jb. psychoanal. psychopathol. Forsch ., I, p. 357-421; GW, VII, p. 379-463; Notes upon a case of obsessional neurosis, SE , 10: 151-318.

Harlow J. M. (1848). Passage of an iron rod through the head.  Boston Medical and Surgical Journal, 39 , 389–393.

Harlow, J. M. (1868).  Recovery from the Passage of an Iron Bar through the Head .  Publications of the Massachusetts Medical Society. 2  (3), 327-347.

Money, J., & Ehrhardt, A. A. (1972).  Man & Woman, Boy & Girl : The Differentiation and Dimorphism of Gender Identity from Conception to Maturity. Baltimore, Maryland: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Further Information

  • Case Study Approach
  • Case Study Method
  • Enhancing the Quality of Case Studies in Health Services Research
  • “We do things together” A case study of “couplehood” in dementia
  • Using mixed methods for evaluating an integrative approach to cancer care: a case study



A qualitative exploration of the challenges providers experience during peripartum management of patients with a body mass index ≥ 50 kg/m2 and recommendations for improvement

  • Kominiarek, Michelle A.
  • Lyleroehr, Madison
  • Torres, Jissell

Background: The objective of this research was to conduct a qualitative study among a diverse group of providers to identify their clinical needs, barriers, and adverse safety events in the peripartum care of people with a body mass index (BMI) ≥ 50 kg/m2.

Methods: Obstetricians, anesthesiologists, certified nurse midwives, nurse practitioners, and nurses were invited to participate in focus group discussions if they were employed at the hospital for >6 months. Key concepts in the focus group guide included: (1) discussion of challenging situations, (2) current peripartum management approaches, (3) patient and family knowledge and counseling, and (4) design and implementation of a guideline (e.g., checklist or toolkit) for peripartum care. The audiotaped focus groups were transcribed verbatim, uploaded to a qualitative analysis software program, and analyzed using inductive and constant comparative approaches. Emerging themes were summarized along with representative quotes.

Results: Five focus groups of 27 providers were completed in 2023. The themes included staffing (level of experience, nursing-patient ratios, safety concerns), equipment (limitations of transfer mats, need for larger sizes, location for blood pressure cuff, patient embarrassment), titrating oxytocin (lack of guidelines, range of uses), monitoring fetal heart rate and contractions, patient positioning, and communication (lack of patient feedback, need for bias training, need for interdisciplinary relationships). Providers gave examples of items to include in a "BMI cart" and suggestions for a guideline, including designated rooms for patients with a BMI ≥ 50 kg/m2, defined nursing ratios and oxytocin titration plans, postpartum incentive spirometry, and touch points with providers (nursing, physicians) at every shift change.

Conclusions: Providers discussed a range of challenges and described how current approaches to care may negatively affect the peripartum experience and pose threats to safety for patients with a BMI ≥ 50 kg/m2 and their providers. We gathered information on improving equipment and communication among providers.

Perspect Med Educ, v.8(4); 2019 Aug

Limited by our limitations

Paula T. Ross

Medical School, University of Michigan, Ann Arbor, MI USA

Nikki L. Bibler Zaidi

Study limitations represent weaknesses within a research design that may influence outcomes and conclusions of the research. Researchers have an obligation to the academic community to present complete and honest limitations of a presented study. Too often, authors use generic descriptions to describe study limitations. Including redundant or irrelevant limitations is an ineffective use of the already limited word count. A meaningful presentation of study limitations should describe the potential limitation, explain the implication of the limitation, provide possible alternative approaches, and describe steps taken to mitigate the limitation. This includes placing research findings within their proper context to ensure readers do not overemphasize or minimize findings. A more complete presentation will enrich the readers’ understanding of the study’s limitations and support future investigation.

Introduction

Regardless of the format scholarship assumes, from qualitative research to clinical trials, all studies have limitations. Limitations represent weaknesses within the study that may influence outcomes and conclusions of the research. The goal of presenting limitations is to provide meaningful information to the reader; however, too often, limitations in medical education articles are overlooked or reduced to simplistic and minimally relevant themes (e.g., single institution study, use of self-reported data, or small sample size) [ 1 ]. This issue is prominent in other fields of inquiry in medicine as well. For example, despite the clinical implications, medical studies often fail to discuss how limitations could have affected the study findings and interpretations [ 2 ]. Further, observational research often fails to remind readers of the fundamental limitation inherent in the study design, which is the inability to attribute causation [ 3 ]. By reporting generic limitations or omitting them altogether, researchers miss opportunities to fully communicate the relevance of their work, illustrate how their work advances a larger field under study, and suggest potential areas for further investigation.

Goals of presenting limitations

Medical education scholarship should provide empirical evidence that deepens our knowledge and understanding of education [ 4 , 5 ], informs educational practice and process, [ 6 , 7 ] and serves as a forum for educating other researchers [ 8 ]. Providing study limitations is indeed an important part of this scholarly process. Without them, research consumers are pressed to fully grasp the potential exclusion areas or other biases that may affect the results and conclusions provided [ 9 ]. Study limitations should leave the reader thinking about opportunities to engage in prospective improvements [ 9 – 11 ] by presenting gaps in the current research and extant literature, thereby cultivating other researchers’ curiosity and interest in expanding the line of scholarly inquiry [ 9 ].

Presenting study limitations is also an ethical element of scientific inquiry [ 12 ]. It ensures transparency of both the research and the researchers [ 10 , 13 , 14 ], as well as provides transferability [ 15 ] and reproducibility of methods. Presenting limitations also supports proper interpretation and validity of the findings [ 16 ]. A study’s limitations should place research findings within their proper context to ensure readers are fully able to discern the credibility of a study’s conclusion, and can generalize findings appropriately [ 16 ].

Why some authors may fail to present limitations

As Price and Murnan [ 8 ] note, there may be overriding reasons why researchers do not sufficiently report the limitations of their study. For example, authors may not fully understand the importance and implications of their study’s limitations or assume that not discussing them may increase the likelihood of publication. Word limits imposed by journals may also prevent authors from providing thorough descriptions of their study’s limitations [ 17 ]. Still another possible reason for excluding limitations is a diffusion of responsibility in which some authors may incorrectly assume that the journal editor is responsible for identifying limitations. Regardless of reason or intent, researchers have an obligation to the academic community to present complete and honest study limitations.

A guide to presenting limitations

The presentation of limitations should describe the potential limitations, explain the implication of the limitations, provide possible alternative approaches, and describe steps taken to mitigate the limitations. Too often, authors only list the potential limitations, without including these other important elements.

Describe the limitations

When describing limitations, authors should identify the limitation type to clearly introduce the limitation and specify the origin of the limitation. This helps to ensure readers are able to interpret and generalize findings appropriately. Here we outline various limitation types that can occur at different stages of the research process.

Study design

Some study limitations originate from conscious choices made by the researcher (also known as delimitations) to narrow the scope of the study [ 1 , 8 , 18 ]. For example, the researcher may have designed the study for a particular age group, sex, race, ethnicity, geographically defined region, or some other attribute that would limit to whom the findings can be generalized. Such delimitations involve conscious exclusionary and inclusionary decisions made during the development of the study plan, which may represent a systematic bias intentionally introduced into the study design or instrument by the researcher [ 8 ]. The clear description and delineation of delimitations and limitations will assist editors and reviewers in understanding any methodological issues.

Data collection

Study limitations can also be introduced during data collection. An unintentional consequence of human subjects research is the potential of the researcher to influence how participants respond to their questions. Even when appropriate methods for sampling have been employed, some studies remain limited by the use of data collected only from participants who decided to enrol in the study (self-selection bias) [ 11 , 19 ]. In some cases, participants may provide biased input by responding to questions they believe are favourable to the researcher rather than their authentic response (social desirability bias) [ 20 – 22 ]. Participants may influence the data collected by changing their behaviour when they are knowingly being observed (Hawthorne effect) [ 23 ]. Researchers—in their role as an observer—may also bias the data they collect by allowing a first impression of the participant to be influenced by a single characteristic or impression of another characteristic either unfavourably (horns effect) or favourably (halo effect) [ 24 ].

Data analysis

Study limitations may arise as a consequence of the type of statistical analysis performed. Some studies may not follow the basic tenets of inferential statistical analyses when they use convenience sampling (i.e. non-probability sampling) rather than employing probability sampling from a target population [ 19 ]. Another limitation that can arise during statistical analyses occurs when studies employ unplanned post-hoc data analyses that were not specified before the initial analysis [ 25 ]. Unplanned post-hoc analysis may lead to statistical relationships that suggest associations but are no more than coincidental findings [ 23 ]. Therefore, when unplanned post-hoc analyses are conducted, this should be clearly stated to allow the reader to make proper interpretation and conclusions—especially when only a subset of the original sample is investigated [ 23 ].
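The multiplicity problem behind unplanned post-hoc analyses can be made concrete with a short simulation (an illustrative Python sketch, not drawn from any cited study; the number of comparisons and the significance threshold are assumed for illustration): when many subgroup comparisons are run even though no true effect exists, the chance that at least one reaches p < 0.05 by coincidence grows rapidly.

```python
import random

random.seed(0)

ALPHA = 0.05      # nominal per-test false-positive rate (assumed)
N_TESTS = 20      # number of unplanned post-hoc comparisons (assumed)
N_SIMS = 10_000   # simulated studies, all under the null hypothesis

# Under the null, a well-calibrated test produces p-values uniform on
# [0, 1]. For each simulated study, draw N_TESTS such p-values and check
# whether at least one falls below ALPHA (a "coincidental finding").
false_positive_studies = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(N_SIMS)
)

familywise_rate = false_positive_studies / N_SIMS
analytic_rate = 1 - (1 - ALPHA) ** N_TESTS  # ≈ 0.64 for 20 tests

print(f"Simulated familywise error rate: {familywise_rate:.2f}")
print(f"Analytic familywise error rate:  {analytic_rate:.2f}")
```

With 20 unplanned comparisons, roughly two out of three studies would report at least one "significant" association purely by chance, which is why disclosing post-hoc analyses matters for proper interpretation.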

Study results

The limitations of any research study will be rooted in the validity of its results—specifically threats to internal or external validity [ 8 ]. Internal validity refers to the reliability or accuracy of the study results [ 26 ], while external validity pertains to the generalizability of results from the study’s sample to the larger, target population [ 8 ].

Examples of threats to internal validity include: effects of events external to the study (history), changes in participants due to time instead of the studied effect (maturation), systematic reduction in participants related to a feature of the study (attrition), changes in participant responses due to repeatedly measuring participants (testing effect), modifications to the instrument (instrumentality) and selecting participants based on extreme scores that will regress towards the mean in repeat tests (regression to the mean) [ 27 ].
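Regression to the mean, in particular, lends itself to a brief demonstration (a hypothetical Python sketch; the population mean, ability variance, and noise variance are all assumed numbers chosen for illustration): participants selected for extreme scores on a noisy first measurement will, on average, score closer to the population mean when retested, even with no intervention at all.

```python
import random
import statistics

random.seed(1)

# Each participant's observed score = stable ability + independent noise.
# Selecting on an extreme first measurement selects partly on noise, so
# the same participants regress toward the population mean on retest.
N = 100_000
abilities = [random.gauss(50, 10) for _ in range(N)]       # assumed mean 50
test1 = [a + random.gauss(0, 10) for a in abilities]        # noisy test 1
test2 = [a + random.gauss(0, 10) for a in abilities]        # noisy retest

# Select the top 5% of scorers on test 1, then examine their retest scores.
cutoff = sorted(test1)[int(0.95 * N)]
selected = [i for i in range(N) if test1[i] >= cutoff]

mean1 = statistics.mean(test1[i] for i in selected)
mean2 = statistics.mean(test2[i] for i in selected)

print(f"Selected group, test 1 mean: {mean1:.1f}")
print(f"Selected group, retest mean: {mean2:.1f}")  # falls back toward 50
```

A study that enrolled only extreme scorers could easily mistake this purely statistical fall-back for a real effect, which is why the threat belongs in a limitations discussion.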

Threats to external validity include factors that might inhibit generalizability of results from the study’s sample to the larger, target population [ 8 , 27 ]. External validity is challenged when results from a study cannot be generalized to its larger population or to similar populations in terms of the context, setting, participants and time [ 18 ]. Therefore, limitations should be made transparent in the results to inform research consumers of any known or potentially hidden biases that may have affected the study and prevent generalization beyond the study parameters.

Explain the implication(s) of each limitation

Authors should include the potential impact of the limitations (e.g., likelihood, magnitude) [ 13 ] as well as address specific validity implications of the results and subsequent conclusions [ 16 , 28 ]. For example, self-reported data may lead to inaccuracies (e.g. due to social desirability bias) which threatens internal validity [ 19 ]. Even a researcher’s inappropriate attribution to a characteristic or outcome (e.g., stereotyping) can overemphasize (either positively or negatively) unrelated characteristics or outcomes (halo or horns effect) and impact the internal validity [ 24 ]. Participants’ awareness that they are part of a research study can also influence outcomes (Hawthorne effect) and limit external validity of findings [ 23 ]. External validity may also be threatened should the respondents’ propensity for participation be correlated with the substantive topic of study, as data will be biased and not represent the population of interest (self-selection bias) [ 29 ]. Having this explanation helps readers interpret the results and generalize the applicability of the results for their own setting.

Provide potential alternative approaches and explanations

Often, researchers use other studies’ limitations as the first step in formulating new research questions and shaping the next phase of research. Therefore, it is important for readers to understand why potential alternative approaches (e.g. approaches taken by others exploring similar topics) were not taken. In addition to alternative approaches, authors can also present alternative explanations for their own study’s findings [ 13 ]. This information is valuable coming from the researcher because of the direct, relevant experience and insight gained as they conducted the study. The presentation of alternative approaches represents a major contribution to the scholarly community.

Describe steps taken to minimize each limitation

No research design is perfect and free from explicit and implicit biases; however various methods can be employed to minimize the impact of study limitations. Some suggested steps to mitigate or minimize the limitations mentioned above include using neutral questions, randomized response technique, force choice items, or self-administered questionnaires to reduce respondents’ discomfort when answering sensitive questions (social desirability bias) [ 21 ]; using unobtrusive data collection measures (e.g., use of secondary data) that do not require the researcher to be present (Hawthorne effect) [ 11 , 30 ]; using standardized rubrics and objective assessment forms with clearly defined scoring instructions to minimize researcher bias, or making rater adjustments to assessment scores to account for rater tendencies (halo or horns effect) [ 24 ]; or using existing data or control groups (self-selection bias) [ 11 , 30 ]. When appropriate, researchers should provide sufficient evidence that demonstrates the steps taken to mitigate limitations as part of their study design [ 13 ].
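As an illustration of one such mitigation, the forced-response variant of the randomized response technique can be sketched in a few lines (a simplified, hypothetical Python example: the fair-coin design and the 20% true prevalence are assumptions for demonstration, not values from the cited literature). Because the researcher never observes the coin flip, no individual "yes" is incriminating, yet the population prevalence remains estimable.

```python
import random

random.seed(2)

TRUE_PREVALENCE = 0.20  # hypothetical true rate of the sensitive behaviour
N = 50_000              # assumed number of respondents

def respond(has_trait: bool) -> bool:
    """Forced-response design: each respondent privately flips a fair coin.
    Heads -> answer the sensitive question truthfully.
    Tails -> say "yes" regardless of the truth."""
    if random.random() < 0.5:   # heads: truthful answer
        return has_trait
    return True                 # tails: forced "yes"

answers = [respond(random.random() < TRUE_PREVALENCE) for _ in range(N)]
p_yes = sum(answers) / N

# P(yes) = 0.5 * prevalence + 0.5, so invert to estimate the prevalence:
estimate = 2 * (p_yes - 0.5)
print(f"Estimated prevalence: {estimate:.3f}")
```

The design trades some statistical efficiency for respondent privacy, directly reducing the social desirability bias described above.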

In conclusion, authors may be limiting the impact of their research by neglecting limitations or by providing only abbreviated, generic ones. We present several examples of limitations to consider; however, this should not be considered an exhaustive list, nor should these examples be added to the growing list of generic and overused limitations. Instead, careful thought should go into presenting limitations after research has concluded and the major findings have been described. Limitations help focus the reader on key findings; therefore, it is important to address only the most salient limitations of the study [ 17 , 28 ] related to the specific research problem, not general limitations of most studies [ 1 ]. It is important not to minimize the limitations of study design or results. Rather, results, including their limitations, must help readers draw connections between current research and the extant literature.

The quality and rigor of our research is largely defined by our limitations [ 31 ]. In fact, one of the top reasons reviewers report recommending acceptance of medical education research manuscripts involves limitations—specifically how the study’s interpretation accounts for its limitations [ 32 ]. Therefore, it is not only best for authors to acknowledge their study’s limitations rather than to have them identified by an editor or reviewer, but proper framing and presentation of limitations can actually increase the likelihood of acceptance. Perhaps, these issues could be ameliorated if academic and research organizations adopted policies and/or expectations to guide authors in proper description of limitations.
