Loraine Busetto
1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany
Associated data.
Not applicable.
This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.
The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.
Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [ 2 ].
Some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.
While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based medicine paradigm, as seen in “research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)” [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCTs) is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others, and that choosing a “lesser” design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist and that, instead, the research design and methods should be chosen to fit the specific research question at hand – “questions before methods” [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as “a complex, multicomponent intervention – essentially a process of social change” susceptible to a range of different context factors including leadership or organisation history. According to him, “[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect” [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for “these specific applications, (...) are not compromises in learning how to improve; they are superior” [ 8 ].
Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.
Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how methods are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1 , this can involve several back-and-forth steps between data collection and analysis, where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well documented.
Iterative research process
While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].
The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].
Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.
Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].
Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterised by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews, the focus on the different (blocks of) questions may differ, and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer them, or out of concern about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format, as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].
Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.
As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.
Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig. 2 .
Possible combination of data collection methods
Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project
The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].
To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3 ). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common packages being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support, rather than replace, the analysis performed by the researcher(s) [ 14 ].
From data collection to data analysis
Attributions for icons: see Fig. 2 ; also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project
Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].
Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4 .
Three common mixed methods designs
In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the quantitative results. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be reduced. In the exploratory sequential design, the qualitative study is carried out first and its results help to inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings regarding the topics on which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.
A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.
Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].
While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. These include what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, as well as the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and the population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and the reader must therefore be made aware of these details [ 19 ].
The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “ purposive sampling” , in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).
Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].
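The collect-analyse cycle ending at saturation, as described above, can be sketched schematically as a loop. The batches of codes below are invented; in a real study, "collecting a batch" means conducting and coding a handful of interviews or observations, and the saturation judgement rests with the research team rather than a set comparison:

```python
# Schematic sketch of iterative sampling until saturation: collect a
# small batch of data, analyse it, and stop once a batch yields no
# relevant new information. Codes are invented for illustration.

def reached_saturation(known_codes, new_codes):
    """Saturation: the latest batch adds nothing beyond what is known."""
    return new_codes <= known_codes  # subset check: no new codes found

# Simulated coding results of three successive batches of five interviews
batches = [
    {"transport barrier", "staff shortage"},
    {"staff shortage", "outdated SOP"},
    {"transport barrier", "outdated SOP"},   # nothing new -> stop here
]

known_codes = set()
rounds = 0
for new_codes in batches:
    rounds += 1
    if reached_saturation(known_codes, new_codes):
        break
    known_codes |= new_codes  # expand the sample and continue

print(f"Saturation after {rounds} rounds; codes found: {sorted(known_codes)}")
```

The point of the sketch is the stopping rule: the sample size emerges from the data rather than being fixed a priori, which is why sample size alone is a poor quality indicator for qualitative studies.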
Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this are pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups which can also be piloted.
Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.
Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].
In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.
The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.
Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.
For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.
While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.
The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. Such scores can therefore be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but this is not a requirement. Relatedly, it is not relevant to the quality or “objectivity” of qualitative research to separate the roles of recruiting study participants and collecting and analysing the data. Experience even shows that it can be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher who conducted the interview will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional contextual information for the interpretation of the data, e.g. on whether something might have been meant as a joke [ 18 ].
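If an interrater-reliability score is reported anyway, Cohen's kappa is a common choice. The sketch below shows the standard calculation on a small set of invented codes from two hypothetical co-coders; the code labels and data are illustrative only, not from any real study.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: observed agreement between two coders, corrected for
    the agreement expected by chance given each coder's label frequencies."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (observed - expected) / (1 - expected)

# Invented codes assigned to ten interview segments by two co-coders
coder_1 = ["barrier", "barrier", "facilitator", "context", "barrier",
           "facilitator", "context", "barrier", "facilitator", "context"]
coder_2 = ["barrier", "facilitator", "facilitator", "context", "barrier",
           "facilitator", "context", "barrier", "barrier", "context"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.7
```

As the passage notes, such a number should be accompanied by a discussion of what the disagreements mean for the analysis, not reported in isolation.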
The mere fact that a study is qualitative rather than quantitative should not be used as an assessment criterion irrespective of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In that case, the same criterion should be applied to quantitative studies without a qualitative component.
The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.
Table 1. Take-away points

| Research problems suited to qualitative research | Most common data collection methods | Most common data analysis steps |
|---|---|---|
| Assessing complex multi-component interventions or systems (of change); what works for whom, when, how and why?; focussing on intervention improvement | Document study; observations (participant or non-participant); interviews (especially semi-structured); focus groups | Transcription of audio-recordings and field notes into transcripts and protocols; coding of protocols; using qualitative data management software |

| Mixed methods designs | Quality criteria | Criteria of limited use |
|---|---|---|
| Combinations of quantitative and/or qualitative methods, e.g. convergent parallel (quali and quanti in parallel); explanatory sequential (quanti followed by quali); exploratory sequential (quali followed by quanti) | Checklists; reflexivity; sampling strategies; piloting; co-coding; member checking; stakeholder involvement | Protocol adherence; sample size; randomization; interrater reliability, variability and other “objectivity checks”; not being quantitative research |
Abbreviations

| Abbreviation | Definition |
|---|---|
| EVT | Endovascular treatment |
| RCT | Randomised Controlled Trial |
| SOP | Standard Operating Procedure |
| SRQR | Standards for Reporting Qualitative Research |
LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final version.
No external funding.
Ethics approval and consent to participate, consent for publication, competing interests.
The authors declare no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Methodology
Published on June 19, 2020 by Pritha Bhandari . Revised on June 22, 2023.
Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.
Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.
Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.
Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.
Common approaches include grounded theory, ethnography , action research , phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.
| Approach | What does it involve? |
|---|---|
| Grounded theory | Researchers collect rich data on a topic of interest and develop theories. |
| Ethnography | Researchers immerse themselves in groups or organizations to understand their cultures. |
| Action research | Researchers and participants collaboratively link theory to practice to drive social change. |
| Phenomenological research | Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences. |
| Narrative research | Researchers examine how stories are told to understand how participants perceive and make sense of their experiences. |
Note that qualitative research is at risk for certain research biases including the Hawthorne effect , observer bias , recall bias , and social desirability bias . While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.
Each of these research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:
Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.
For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.
Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.
Most types of qualitative data analysis share the same five steps:
There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.
| Approach | When to use | Example |
|---|---|---|
| Content analysis | To describe and categorize common words, phrases, and ideas in qualitative data. | A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps. |
| Thematic analysis | To identify and interpret patterns and themes in qualitative data. | A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity. |
| Textual analysis | To examine the content, structure, and design of texts. | A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade. |
| Discourse analysis | To study communication and how language is used to achieve effects in specific contexts. | A political scientist could use discourse analysis to study how politicians generate trust in election campaigns. |
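As a minimal illustration of the first approach, a content analysis of app descriptions might begin with a simple term-frequency count. The descriptions below are invented for the example; a real content analysis would go well beyond raw counts.

```python
import re
from collections import Counter

# Invented descriptions of therapeutic apps, for illustration only
descriptions = [
    "Calm your mind with guided meditation and soothing sleep stories.",
    "Track your mood daily and build healthy habits with gentle reminders.",
    "Guided breathing exercises to calm anxiety wherever you are.",
]

STOPWORDS = {"and", "with", "your", "to", "the", "you", "are"}

def term_frequencies(texts):
    """Lowercase, tokenise, drop stopwords, and count the remaining terms."""
    counts = Counter()
    for text in texts:
        counts.update(t for t in re.findall(r"[a-z]+", text.lower())
                      if t not in STOPWORDS)
    return counts

freqs = term_frequencies(descriptions)
print(freqs.most_common(2))  # → [('calm', 2), ('guided', 2)]
```

Counts like these would then feed into categorisation and interpretation, which is where the qualitative work actually happens.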
Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

- The data collection and analysis process can be adapted as new ideas or patterns emerge; they are not rigidly decided beforehand.
- Data collection occurs in real-world contexts or in naturalistic ways.
- Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.
- Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.
Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

- The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.
- Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated. The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.
- Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population.
- Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.
There are five common approaches to qualitative research: grounded theory, ethnography, action research, phenomenological research, and narrative research.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are various approaches to qualitative data analysis , but they all share five steps in common:
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
Bhandari, P. (2023, June 22). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved June 13, 2024, from https://www.scribbr.com/methodology/qualitative-research/
Last updated: 5 February 2023. Reviewed by Cathy Heath.
Purposive sampling is often used in qualitative research, allowing the researcher to focus on specific areas of interest and gather in-depth data on those topics. In this article, we will explore the concept of purposive sampling in more detail and discuss the advantages and limitations of using this approach in research studies.
Purposive sampling is a technique used in qualitative research to select a specific group of individuals or units for analysis. Participants are chosen “on purpose,” not randomly. It is also known as judgmental sampling or selective sampling.
In purposive sampling, the researcher has a specific purpose or objective in mind when selecting the sample. Therefore, the sample is selected based on the characteristics or attributes that the researcher is interested in studying.
For example, suppose a researcher is interested in studying the experiences of individuals living with chronic pain. In that case, they might use purposive sampling to select a sample of individuals who have been diagnosed with chronic pain.
Purposive sampling is often used in qualitative research , as it allows the researcher to focus on specific areas of interest and gather in-depth data on those topics. It is also commonly used in small-scale studies with limited sample size.
Purposive sampling should be used when you have a clear idea of the specific attributes you're interested in studying and want to select a sample that accurately represents those characteristics.
Purposive sampling can be particularly useful in the following situations:

- When the population of interest is small
- When studying a specific subgroup within the population
- When studying a rare or unusual phenomenon
It's important to note that purposive sampling is not suitable for all research studies and should be used cautiously. As the sample is not selected randomly, the results of the study may not be generalizable to the larger population, and the researcher must consider the potential for bias in the sample selection.
There are several important principles of purposive sampling that you should consider when using this approach in your research studies:
Clearly defined purpose - The purpose of the study should be clearly defined, and the sample should be selected based on the characteristics or attributes that you're interested in studying.
Representative sample - The sample should be representative of the characteristics or attributes being studied.
Bias - Biases can come into play when anything other than random sampling is used, so be aware of any potential biases and take steps to minimize them.
Expertise - Having expertise in the topic being studied is an important part of sample selection. Without a solid understanding of the characteristics being selected, the population might not be as representative as it should be.
The steps to conducting a study using purposive sampling will vary depending on the topic and preferences of the researchers involved. The five steps of purposive sampling as a general framework are:

1. Define the purpose of the study
2. Identify the sample of individuals or units
3. Obtain informed consent from individuals
4. Collect the data using appropriate research methods
5. Analyze the data
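Steps 2 and 3 above can be sketched in code: given a hypothetical participant register, the sample is exactly the set of units matching the study's defined attributes. All fields and values here are invented for illustration.

```python
# Hypothetical participant register; in practice this might come from a
# clinic database or screening survey (all fields are invented).
register = [
    {"id": 1, "diagnosis": "chronic pain", "consented": True},
    {"id": 2, "diagnosis": "migraine", "consented": True},
    {"id": 3, "diagnosis": "chronic pain", "consented": False},
    {"id": 4, "diagnosis": "chronic pain", "consented": True},
]

def purposive_sample(population, **criteria):
    """Step 2: keep exactly those units matching the study's defined attributes."""
    return [p for p in population if all(p[k] == v for k, v in criteria.items())]

# Select participants diagnosed with chronic pain who gave informed consent
sample = purposive_sample(register, diagnosis="chronic pain", consented=True)
print([p["id"] for p in sample])  # → [1, 4]
```

The point is that selection is deterministic given the criteria: nothing about the draw is random, which is what distinguishes this from the probability methods discussed later.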
Researchers can use several different types of purposive sampling methods , depending on what they're interested in studying and the specific research question they are trying to answer. In the list below, we'll discuss the various types of purposive sampling methods and provide examples of when each method might be used in research.
Maximum variation sampling involves selecting a sample of individuals or units representing the maximum range of variation within the characteristics or attributes the researcher is interested in studying. This type of sampling is used to understand the widest possible diversity of experiences or viewpoints within the population.
Homogeneous sampling involves selecting what is often a more narrow sample of individuals or units that are similar or have the same characteristics or attributes. This type of sampling is used to study a specific subgroup within the population in depth.
Typical case sampling involves selecting a sample of individuals or units that are representative of the typical experiences or characteristics of the population. This type of sampling is used to understand the most common or average experiences or characteristics within the population.
Extreme case sampling involves selecting a sample of individuals or units that are considered extreme or unusual in the characteristics or attributes the researcher is interested in studying. This type of sampling is used to understand unusual or exceptional experiences or characteristics within the population; such cases are often viewed as outliers in the wider population.
Critical case sampling involves selecting a sample of individuals or units that are important or central to the research question or the population being studied. This type of sampling is used to understand key experiences or characteristics within the population.
Expert sampling involves selecting a sample of individuals or units that have specialized knowledge or expertise in the topic or issue being studied. This type of sampling is used to gather insights and understanding from experts in the field, which can be used to develop follow-up studies.
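Of the types above, maximum variation sampling can be approximated programmatically when the attribute of interest is measurable: order the units by the attribute and pick evenly spaced cases, ends included. The patient data below is invented, and a real study would also weigh qualitative judgment, not just one numeric attribute.

```python
def maximum_variation_sample(units, key, k):
    """Pick k units spread across the full range of `key`, ends included."""
    ordered = sorted(units, key=key)
    if k >= len(ordered):
        return ordered
    # Evenly spaced positions along the ordered list, from first to last
    positions = [round(i * (len(ordered) - 1) / (k - 1)) for i in range(k)]
    return [ordered[p] for p in positions]

# Invented patients, characterised by years lived with a condition
patients = [{"id": i, "years": y} for i, y in enumerate([1, 2, 3, 8, 15, 30, 42])]
spread = maximum_variation_sample(patients, key=lambda p: p["years"], k=3)
print([p["years"] for p in spread])  # → [1, 8, 42]
```

Homogeneous sampling is the mirror image: instead of spreading across the range, you would filter to a narrow band of the same attribute.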
Purposive sampling and convenience sampling are similar in that both involve the selection of a sample based on the researcher's judgment rather than using a random sampling method. However, there are some key differences between the two approaches.
In purposive sampling, the sample is selected based on the defined purpose of the study and is intended to be representative of the characteristics or attributes in which the researcher is interested.
Convenience sampling, on the other hand, involves selecting a sample of individuals or units that are readily available or easily accessible to the researcher. The sample is not selected based on any particular characteristics or attributes, but rather in terms of convenience for the researcher.
There are several advantages to using purposive sampling in research studies, including:
Representative sample - allows the researcher to select, relatively quickly, a sample highly representative of the characteristics or attributes they are interested in studying. This can be particularly useful when the population of interest is small or when the researcher is interested in studying a specific subgroup within the population.
In-depth data - often used in qualitative research, which allows the researcher to gather in-depth data on specific topics or issues. This can provide valuable insights and understanding of the research question.
Practicality - practical and efficient in comparison to other sampling methods, particularly in small-scale studies with limited sample sizes.
Flexibility - flexibility in the selection of the sample, which can be useful when the researcher is studying a rare or unusual phenomenon.
Cost - can be less expensive than other sampling methods, as it does not require a random selection process.
It's important to note that purposive sampling has limitations and should be used with caution. Some of the disadvantages of purposive sampling are listed below:
Limited generalizability - As the sample is not selected randomly, the study’s results may not be generalizable to the larger population. It also risks producing lop-sided research in which some subgroups are omitted or excluded.
Bias - Purposive sampling is subjective and relies on the researcher's judgment, which can introduce bias into the study. The researcher may unconsciously select individuals or units that fit their expectations or preconceived notions, which can affect the study’s validity. Participants can also manipulate the insights they give.
Sampling error - Sampling error, or the difference between the sample and the population, is more likely to occur in purposive sampling because the sample is not selected randomly. This can affect the reliability and accuracy of the study.
Limited sample size - Purposive sampling is often used in small-scale studies with limited sample sizes. This can affect the statistical power of the study and make it more difficult to detect significant differences or relationships.
Ethical considerations - The researcher must ensure that the study is conducted ethically and that the rights of the participants are protected. This may require obtaining informed consent from the individuals in the sample and safeguarding their privacy.
One of the main challenges to the use of purposive sampling in research studies is the limited generalizability of the findings. Because the sample is not selected randomly, it may not be representative of the broader population, and study results may not be applicable to other groups or populations. This can limit the usefulness and impact of the study, making it more challenging to draw conclusions about the larger population.
Each of the disadvantages listed in the previous section contributes to this problem. Researchers who wish to use purposive sampling need to be aware of the method’s weaknesses and actively take steps to avoid or mitigate them.
Purposive sampling is used in research studies when the researcher has a clear idea of the characteristics or attributes they are interested in studying and wants to select a sample that is representative of those characteristics. It is often used in qualitative research to gather in-depth data on specific topics or issues.
An example of purposive sampling might be a researcher studying the experiences of individuals living with chronic pain, and therefore selecting a sample of individuals who have been diagnosed with chronic pain.
Purposive sampling is often used in qualitative research, as it allows the researcher to gather in-depth data on specific topics or issues. It may also be used in small-scale studies with a limited sample size.
The sample size for purposive sampling will depend on the research question and the characteristics or attributes the researcher is interested in studying. Generally, a sample size of 30 individuals is often considered sufficient for qualitative research, although larger sample sizes of 100 or more may be needed in some cases.
Sampling methods, types & techniques.
Your comprehensive guide to the different sampling methods available to researchers – and how to know which is right for your research.
In survey research, sampling is the process of using a subset of a population to represent the whole population. To help illustrate this further, let’s look at data sampling methods with examples below.
Let’s say you wanted to do some research on everyone in North America. To ask every person would be almost impossible. Even if everyone said “yes”, carrying out a survey across different states, in different languages and timezones, and then collecting and processing all the results, would take a long time and be very costly.
Sampling allows large-scale research to be carried out with a more realistic cost and time-frame because it uses a smaller number of individuals in the population with representative characteristics to stand in for the whole.
However, when you decide to sample, you take on a new task. You have to decide who is part of your sample list and how to choose the people who will best represent the whole population. How you go about that is what the practice of sampling is all about.
Although the idea of sampling is easiest to understand when you think about a very large population, it makes sense to use sampling methods in research studies of all types and sizes. After all, if you can reduce the effort and cost of doing a study, why wouldn’t you? And because sampling allows you to research larger target populations using the same resources as you would smaller ones, it dramatically opens up the possibilities for research.
Sampling is a little like having gears on a car or bicycle. Instead of always turning a set of wheels of a specific size and being constrained by their physical properties, it allows you to translate your effort to the wheels via the different gears, so you’re effectively choosing bigger or smaller wheels depending on the terrain you’re on and how much work you’re able to do.
Sampling allows you to “gear” your research so you’re less limited by the constraints of cost, time, and complexity that come with different population sizes.
It allows us to do things like carry out exit polls during elections, map the spread and effects of epidemics across geographical areas, and carry out nationwide census research that provides a snapshot of society and culture.
Sampling strategies in research vary widely across different disciplines and research areas, and from study to study.
There are two major types of sampling methods: probability and non-probability sampling.
As we delve into these categories, it’s essential to understand the nuances and applications of each method to ensure that the chosen sampling strategy aligns with the research goals.
There’s a wide range of probability sampling methods to explore and consider. Here are some of the best-known options.
With simple random sampling , every element in the population has an equal chance of being selected as part of the sample. It’s something like picking a name out of a hat. Simple random sampling can be done by anonymising the population – e.g. by assigning each item or person in the population a number and then picking numbers at random.
Pros: Simple random sampling is easy to do and cheap. Designed to ensure that every member of the population has an equal chance of being selected, it reduces the risk of bias compared to non-random sampling.
Cons: It offers no control for the researcher and may lead to unrepresentative groupings being picked by chance.
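A minimal sketch of simple random sampling, assuming the population can be enumerated and numbered as the passage describes (the population and seed here are invented for reproducibility):

```python
import random

population = list(range(1, 1001))      # e.g. 1,000 numbered survey invitees

rng = random.Random(42)                # fixed seed so the draw is reproducible
sample = rng.sample(population, k=50)  # each member equally likely, no repeats

print(len(sample), len(set(sample)))   # 50 distinct members
```

`random.sample` draws without replacement, which matches the "name out of a hat" picture: once picked, a member cannot be picked again.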
With systematic sampling the random selection only applies to the first item chosen. A rule then applies so that every nth item or person after that is picked.
Best practice is to sort your list in a random way to ensure that selections won’t be accidentally clustered together. This is commonly achieved using a random number generator. If that’s not available you might order your list alphabetically by first name and then pick every fifth name to eliminate bias, for example.
Next, you need to decide your sampling interval – for example, if your sample will be 10% of your full list, your sampling interval is one in 10 – and pick a random start between one and 10 – for example three. This means you would start with person number three on your list and pick every tenth person.
Pros: Systematic sampling is efficient and straightforward, especially when dealing with populations that have a clear order. It ensures a uniform selection across the population.
Cons: There’s a potential risk of introducing bias if there’s an unrecognised pattern in the population that aligns with the sampling interval.
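The procedure described above – a random start within the first interval, then every nth member – can be sketched as follows (the list of people is invented):

```python
import random

def systematic_sample(ordered_population, interval, rng=random):
    """Random start within the first interval, then every `interval`-th member."""
    start = rng.randrange(interval)
    return ordered_population[start::interval]

people = [f"person_{i}" for i in range(1, 101)]  # hypothetical ordered list
sample = systematic_sample(people, interval=10, rng=random.Random(0))
print(len(sample))  # a 1-in-10 sample: 10 people, evenly spaced
```

Note that only the starting position is random; if the list has a hidden pattern with period 10, every draw would land on the same part of that pattern, which is exactly the bias warned about above.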
Stratified sampling involves random selection within predefined groups. It’s a useful method for researchers wanting to determine what aspects of a sample are highly correlated with what’s being measured. They can then decide how to subdivide (stratify) it in a way that makes sense for the research.
For example, you want to measure the height of students at a college where 80% of students are female and 20% are male. We know that gender is highly correlated with height, and if we took a simple random sample of 200 students (out of the 2,000 who attend the college), we could by chance get 200 females and not one male. This would bias our results and we would underestimate the height of students overall. Instead, we could stratify by gender and make sure that 20% of our sample (40 students) are male and 80% (160 students) are female.
Pros: Stratified sampling enhances the representation of all identified subgroups within a population, leading to more accurate results in heterogeneous populations.
Cons: This method requires accurate knowledge about the population’s stratification, and its design and execution can be more intricate than other methods.
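The 80/20 worked example above translates directly into code: group the population by stratum, then allocate the sample to each stratum in proportion to its population share. The student data below is synthetic.

```python
import random

def stratified_sample(population, strata_key, sample_size, rng=random):
    """Proportional allocation: sample each stratum at its population share."""
    strata = {}
    for p in population:
        strata.setdefault(p[strata_key], []).append(p)
    sample = []
    for members in strata.values():
        k = round(sample_size * len(members) / len(population))
        sample += rng.sample(members, k)
    return sample

# 2,000 students, 80% female and 20% male, as in the example above
students = ([{"id": i, "gender": "female"} for i in range(1600)]
            + [{"id": i, "gender": "male"} for i in range(1600, 2000)])
sample = stratified_sample(students, "gender", sample_size=200,
                           rng=random.Random(1))
print(len(sample))  # 200 students: 160 female, 40 male by construction
```

Unlike a simple random draw, the 160/40 split is guaranteed rather than merely expected, which is the whole point of stratifying.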
With cluster sampling, groups rather than individual units of the target population are selected at random for the sample. These might be pre-existing groups, such as people in certain zip codes or students belonging to an academic year.
Cluster sampling can be done by selecting the entire cluster, or in the case of two-stage cluster sampling, by randomly selecting the cluster itself, then selecting at random again within the cluster.
Pros: Cluster sampling is economically beneficial and logistically easier when dealing with vast and geographically dispersed populations.
Cons: Due to potential similarities within clusters, this method can introduce a greater sampling error compared to other methods.
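Two-stage cluster sampling as described above can be sketched like this, using invented clusters of students grouped by academic year: clusters are drawn at random first, then individuals within each chosen cluster.

```python
import random

# Invented clusters: students grouped by academic year
clusters = {
    "year_1": [f"y1_s{i}" for i in range(30)],
    "year_2": [f"y2_s{i}" for i in range(25)],
    "year_3": [f"y3_s{i}" for i in range(35)],
    "year_4": [f"y4_s{i}" for i in range(20)],
}

rng = random.Random(7)

# Stage 1: randomly choose which clusters enter the study
chosen = rng.sample(sorted(clusters), k=2)

# Stage 2: randomly choose individuals within each chosen cluster
sample = []
for name in chosen:
    sample += rng.sample(clusters[name], k=10)

print(len(chosen), len(sample))  # 2 clusters, 20 students in total
```

Single-stage cluster sampling would skip stage 2 and take every member of the chosen clusters.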
The non-probability sampling methodology doesn’t offer the same bias-removal benefits as probability sampling, but there are times when these types of sampling are chosen for expediency or simplicity. Here are some forms of non-probability sampling and how they work.
People or elements in a sample are selected on the basis of their accessibility and availability. If you are doing a research survey and you work at a university, for example, a convenience sample might consist of students or co-workers who happen to be on campus with open schedules who are willing to take your questionnaire .
This kind of sample can have value, especially if it’s done as an early or preliminary step, but significant bias will be introduced.
Pros: Convenience sampling is the most straightforward method, requiring minimal planning, making it quick to implement.
Cons: Due to its non-random nature, the method is highly susceptible to bias, and the results often generalise poorly beyond the sample itself.
Like the probability-based stratified sampling method, this approach aims to achieve a spread across the target population by specifying who should be recruited for a survey according to certain groups or criteria.
For example, your quota might include a certain number of males and a certain number of females. Alternatively, you might want your samples to be at a specific income level or in certain age brackets or ethnic groups.
Pros: Quota sampling ensures certain subgroups are adequately represented, making it great for when random sampling isn’t feasible but representation is necessary.
Cons: Selection within each quota is non-random, and researchers' discretion can skew representation; both factors increase the risk of bias.
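As a rough illustration (the quotas and the stream of respondents here are made up), quota sampling can be sketched like this: respondents are taken in the order they become available, not at random, until each quota is filled.

```python
# Illustrative sketch of quota sampling: take respondents in arrival
# order (non-random) until every quota is full.
def quota_sample(respondents, quotas, key):
    filled = {group: [] for group in quotas}
    for person in respondents:
        group = person[key]
        if group in filled and len(filled[group]) < quotas[group]:
            filled[group].append(person)
        if all(len(filled[g]) >= q for g, q in quotas.items()):
            break
    return filled

# Hypothetical stream of walk-in respondents: 70 women, then 30 men.
arrivals = [{"gender": "F"}] * 70 + [{"gender": "M"}] * 30
result = quota_sample(arrivals, quotas={"F": 8, "M": 2}, key="gender")
print({g: len(v) for g, v in result.items()})  # {'F': 8, 'M': 2}
```

Note how the first eight women who happen to show up fill the female quota: the quotas control the group proportions, but nothing about who within each group gets in, which is exactly where the bias creeps in.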
Participants for the sample are chosen consciously by researchers based on their knowledge and understanding of the research question at hand or their goals.
Also known as judgment sampling, this technique is unlikely to result in a representative sample, but it is a quick and fairly easy way to get a range of results or responses.
Pros: Purposive sampling targets specific criteria or characteristics, making it ideal for studies that require specialised participants or specific conditions.
Cons: It’s highly subjective and based on researchers’ judgment, which can introduce bias and limit how far the findings generalise.
With this approach, people recruited to be part of a sample are asked to invite those they know to take part, who are then asked to invite their friends and family and so on. The participation radiates through a community of connected individuals like a snowball rolling downhill.
Pros: Especially useful for hard-to-reach or secretive populations, snowball sampling is effective for certain niche studies.
Cons: The method can introduce bias due to the reliance on participant referrals, and the choice of initial seeds can significantly influence the final sample.
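The referral chain can be pictured as a breadth-first walk over a contact network. Here's a simplified sketch (the network and names are entirely hypothetical):

```python
from collections import deque

# Hypothetical referral network: who each participant is willing to refer.
contacts = {
    "seed_1": ["p2", "p3"],
    "p2": ["p4"],
    "p3": ["p4", "p5"],
    "p4": [],
    "p5": ["p6"],
    "p6": [],
}

def snowball_sample(contacts, seeds, max_size):
    """Recruit initial seeds, then follow referrals breadth-first."""
    sample, queue = set(seeds), deque(seeds)
    while queue and len(sample) < max_size:
        person = queue.popleft()
        for referral in contacts.get(person, []):
            if referral not in sample and len(sample) < max_size:
                sample.add(referral)
                queue.append(referral)
    return sample

print(sorted(snowball_sample(contacts, ["seed_1"], max_size=5)))
```

The sketch also makes the seed-dependence visible: anyone not reachable from the initial seeds (here, "p6" once the size cap is hit) simply never enters the sample.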
Choosing the right sampling method is a pivotal aspect of any research process, but it can be a stumbling block for many.
Here’s a structured approach to guide your decision.
If you aim to get a general sense of a larger group, simple random or stratified sampling could be your best bet. For focused insights or studying unique communities, snowball or purposive sampling might be more suitable.
The nature of the group you’re studying can guide your method. For a diverse group with different categories, stratified sampling can ensure all segments are covered. If they’re widely spread geographically, cluster sampling becomes useful. If they’re arranged in a certain sequence or order, systematic sampling might be effective.
Your available time, budget and ease of accessing participants matter. Convenience or quota sampling can be practical for quicker studies, but they come with some trade-offs. If reaching everyone in your desired group is challenging, snowball or purposive sampling can be more feasible.
Decide if you want your findings to represent a much broader group. For a wider representation, methods that include everyone fairly (like probability sampling) are a good option. For specialised insights into specific groups, non-probability sampling methods can be more suitable.
Before fully committing, discuss your chosen method with others in your field and consider a test run.
Using a sample is a kind of short-cut. If you could ask every single person in a population to take part in your study and have each of them reply, you’d have a highly accurate (and very labor-intensive) project on your hands.
But since that’s not realistic, sampling offers a “good-enough” solution that sacrifices some accuracy for the sake of practicality and ease. How much accuracy you lose out on depends on how well you control for sampling error, non-sampling error, and bias in your survey design. Our blog post helps you to steer clear of some of these issues.
Finding the best sample size for your target population is something you’ll need to do again and again, as it’s different for every study.
To make life easier, we’ve provided a sample size calculator. To use it, you need to know your:
If any of those terms are unfamiliar, have a look at our blog post on determining sample size for details of what they mean and how to find them.
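If you'd rather see the arithmetic, one widely used formula behind calculators like this is Cochran's formula with a finite-population correction. Here's an illustrative Python version (the 2,000-student population is just the example from earlier in this post):

```python
import math

def sample_size(population, confidence_z, margin_of_error, p=0.5):
    """Cochran's formula with a finite-population correction.

    confidence_z: z-score for your confidence level (1.96 for 95%).
    margin_of_error: acceptable error as a decimal (0.05 for +/-5%).
    p: expected proportion; 0.5 is the most conservative choice.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n)

# 2,000 students, 95% confidence, +/-5% margin of error:
print(sample_size(2000, 1.96, 0.05))  # 323
```

For very large populations the correction barely matters and the answer settles around 385 at these settings, which is why that number appears so often as a rule-of-thumb sample size.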
In the ever-evolving business landscape, relying on the most recent market research is paramount. Reflecting on 2022, brands and businesses can harness crucial insights to outmaneuver challenges and seize opportunities.
Equip yourself with this knowledge by exploring Qualtrics’ ‘2022 Market Research Global Trends’ report.