
Sampling Methods | Types, Techniques & Examples

Published on September 19, 2019 by Shona McCombes. Revised on June 22, 2023.

When you conduct research about a group of people, it’s rarely possible to collect data from every person in that group. Instead, you select a sample. The sample is the group of individuals who will actually participate in the research.

To draw valid conclusions from your results, you have to carefully decide how you will select a sample that is representative of the group as a whole. This is called a sampling method. There are two primary types of sampling methods that you can use in your research:

  • Probability sampling involves random selection, allowing you to make strong statistical inferences about the whole group.
  • Non-probability sampling involves non-random selection based on convenience or other criteria, allowing you to easily collect data.

You should clearly explain how you selected your sample in the methodology section of your paper or thesis, as well as how you approached minimizing research bias in your work.

Table of contents

  • Population vs. sample
  • Probability sampling methods
  • Non-probability sampling methods
  • Other interesting articles
  • Frequently asked questions about sampling

First, you need to understand the difference between a population and a sample, and identify the target population of your research.

  • The population is the entire group that you want to draw conclusions about.
  • The sample is the specific group of individuals that you will collect data from.

The population can be defined in terms of geographical location, age, income, or many other characteristics.

[Figure: Population vs. sample]

It is important to carefully define your target population according to the purpose and practicalities of your project.

If the population is very large, demographically mixed, and geographically dispersed, it might be difficult to gain access to a representative sample. A lack of a representative sample affects the validity of your results and can lead to several research biases, particularly sampling bias.

Sampling frame

The sampling frame is the actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).

Sample size

The number of individuals you should include in your sample depends on various factors, including the size and variability of the population and your research design. There are different sample size calculators and formulas depending on what you want to achieve with statistical analysis.


Probability sampling means that every member of the population has a chance of being selected. It is mainly used in quantitative research. If you want to produce results that are representative of the whole population, probability sampling techniques are the most valid choice.

There are four main types of probability sample.

[Figure: Probability sampling methods]

1. Simple random sampling

In a simple random sample, every member of the population has an equal chance of being selected. Your sampling frame should include the whole population.

To conduct this type of sampling, you can use tools like random number generators or other techniques that are based entirely on chance.
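As an illustration, drawing a simple random sample takes only a few lines of Python; the population list and sample size here are hypothetical:

```python
import random

# Hypothetical sampling frame: 500 numbered members of the population
population = list(range(1, 501))

# Draw 50 members; each has an equal chance of selection
random.seed(42)  # seeded only so this example is reproducible
sample = random.sample(population, k=50)

print(len(sample))       # 50
print(len(set(sample)))  # 50 — selection is without replacement
```

Because `random.sample` draws without replacement, no individual can appear in the sample twice.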

2. Systematic sampling

Systematic sampling is similar to simple random sampling, but it is usually slightly easier to conduct. Every member of the population is listed with a number, but instead of randomly generating numbers, individuals are chosen at regular intervals.

If you use this technique, it is important to make sure that there is no hidden pattern in the list that might skew the sample. For example, if the HR database groups employees by team, and team members are listed in order of seniority, there is a risk that your interval might skip over people in junior roles, resulting in a sample that is skewed towards senior employees.
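A minimal sketch of systematic sampling in Python, assuming a hypothetical numbered list of 500 people and a target sample of 50:

```python
import random

# Hypothetical numbered sampling frame of 500 people
population = list(range(1, 501))

sample_size = 50
interval = len(population) // sample_size  # every 10th person

# Start from a random point within the first interval, then step through
random.seed(1)
start = random.randrange(interval)
sample = population[start::interval]

print(interval)     # 10
print(len(sample))  # 50
```

Note that only the starting point is random; if the list itself has a hidden pattern at that interval, the sample will inherit it.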

3. Stratified sampling

Stratified sampling involves dividing the population into subpopulations that may differ in important ways. It allows you to draw more precise conclusions by ensuring that every subgroup is properly represented in the sample.

To use this sampling method, you divide the population into subgroups (called strata) based on the relevant characteristic (e.g., gender identity, age range, income bracket, job role).

Based on the overall proportions of the population, you calculate how many people should be sampled from each subgroup. Then you use random or systematic sampling to select a sample from each subgroup.
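These two steps, proportional allocation followed by random selection within each stratum, can be sketched in Python; the strata and their sizes are hypothetical:

```python
import random

# Hypothetical population grouped into strata by job role
strata = {
    "junior": list(range(600)),   # 60% of the population
    "senior": list(range(300)),   # 30%
    "manager": list(range(100)),  # 10%
}

total = sum(len(members) for members in strata.values())
sample_size = 100

random.seed(7)
sample = {}
for name, members in strata.items():
    # Allocate the sample in proportion to each stratum's share...
    n = round(sample_size * len(members) / total)
    # ...then draw randomly within the stratum
    sample[name] = random.sample(members, n)

print({name: len(s) for name, s in sample.items()})
# {'junior': 60, 'senior': 30, 'manager': 10}
```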

4. Cluster sampling

Cluster sampling also involves dividing the population into subgroups, but each subgroup should have similar characteristics to the whole population. Instead of sampling individuals from each subgroup, you randomly select entire subgroups.

If it is practically possible, you might include every individual from each sampled cluster. If the clusters themselves are large, you can also sample individuals from within each cluster using one of the techniques above. This is called multistage sampling.

This method is good for dealing with large and dispersed populations, but there is more risk of error in the sample, as there could be substantial differences between clusters. It’s difficult to guarantee that the sampled clusters are really representative of the whole population.
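A minimal sketch of multistage cluster sampling in Python; the clusters and sizes are hypothetical:

```python
import random

# Hypothetical population organised into 20 clusters (e.g. schools),
# each containing 30 individuals
clusters = {f"school_{i}": [f"s{i}_{j}" for j in range(30)]
            for i in range(20)}

random.seed(3)
# Stage 1: randomly select whole clusters
chosen = random.sample(list(clusters), k=5)

# Stage 2 (multistage): randomly sample individuals within each chosen cluster
sample = [person
          for name in chosen
          for person in random.sample(clusters[name], k=10)]

print(len(sample))  # 5 clusters x 10 people = 50
```

In single-stage cluster sampling you would instead keep every member of each chosen cluster and skip stage 2.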

In a non-probability sample, individuals are selected based on non-random criteria, and not every individual has a chance of being included.

This type of sample is easier and cheaper to access, but it has a higher risk of sampling bias. That means the inferences you can make about the population are weaker than with probability samples, and your conclusions may be more limited. If you use a non-probability sample, you should still aim to make it as representative of the population as possible.

Non-probability sampling techniques are often used in exploratory and qualitative research. In these types of research, the aim is not to test a hypothesis about a broad population, but to develop an initial understanding of a small or under-researched population.

[Figure: Non-probability sampling methods]

1. Convenience sampling

A convenience sample simply includes the individuals who happen to be most accessible to the researcher.

This is an easy and inexpensive way to gather initial data, but there is no way to tell if the sample is representative of the population, so it can’t produce generalizable results. Convenience samples are at risk for both sampling bias and selection bias.

2. Voluntary response sampling

Similar to a convenience sample, a voluntary response sample is mainly based on ease of access. Instead of the researcher choosing participants and directly contacting them, people volunteer themselves (e.g. by responding to a public online survey).

Voluntary response samples are always at least somewhat biased, as some people will inherently be more likely to volunteer than others, leading to self-selection bias.

3. Purposive sampling

This type of sampling, also known as judgement sampling, involves the researcher using their expertise to select a sample that is most useful to the purposes of the research.

It is often used in qualitative research, where the researcher wants to gain detailed knowledge about a specific phenomenon rather than make statistical inferences, or where the population is very small and specific. An effective purposive sample must have clear criteria and rationale for inclusion. Always make sure to describe your inclusion and exclusion criteria and beware of observer bias affecting your arguments.

4. Snowball sampling

If the population is hard to access, snowball sampling can be used to recruit participants via other participants. The number of people you have access to “snowballs” as you get in contact with more people. The downside is again representativeness: because recruitment relies on participants referring others, you have no way of knowing how representative your sample is. This can lead to sampling bias.

5. Quota sampling

Quota sampling relies on the non-random selection of a predetermined number or proportion of units. This is called a quota.

You first divide the population into mutually exclusive subgroups (called strata) and then recruit sample units until you reach your quota. These units share specific characteristics, determined by you prior to forming your strata. The aim of quota sampling is to control what or who makes up your sample.
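A minimal sketch of quota sampling in Python. Note that respondents are accepted in the order they happen to be reached (non-random), until each quota is filled; the respondent stream and quotas here are hypothetical:

```python
# Hypothetical stream of respondents in the order they happen to be reached
# (order of access, not random selection)
respondents = [
    {"id": 1, "gender": "f"}, {"id": 2, "gender": "f"},
    {"id": 3, "gender": "m"}, {"id": 4, "gender": "f"},
    {"id": 5, "gender": "m"}, {"id": 6, "gender": "m"},
    {"id": 7, "gender": "f"}, {"id": 8, "gender": "m"},
]

# Predetermined quota per mutually exclusive subgroup (stratum)
quotas = {"f": 2, "m": 2}

sample = []
counts = {group: 0 for group in quotas}
for person in respondents:
    group = person["gender"]
    if counts[group] < quotas[group]:  # accept until the quota is reached
        sample.append(person)
        counts[group] += 1
    if counts == quotas:  # stop once every quota is filled
        break

print([p["id"] for p in sample])  # [1, 2, 3, 5]
```

Respondent 4 is skipped because the “f” quota is already full; selection within each stratum remains non-random, which is what distinguishes quota sampling from stratified sampling.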


A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Samples are used to make inferences about populations. Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling, systematic sampling, stratified sampling, and cluster sampling.

In non-probability sampling, the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling, voluntary response sampling, purposive sampling, snowball sampling, and quota sampling.

In multistage sampling, or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others.

Cite this Scribbr article


McCombes, S. (2023, June 22). Sampling Methods | Types, Techniques & Examples. Scribbr. Retrieved August 29, 2024, from https://www.scribbr.com/methodology/sampling-methods/


3.4 Sampling Techniques in Quantitative Research

Target population

The target population comprises the people the researcher is interested in studying and generalising the findings to. 40 For example, if researchers are interested in vaccine-preventable diseases in children aged five years and younger in Australia, the target population is all children aged 0–5 years residing in Australia. The actual population is a subset of the target population from which the sample is drawn, e.g. children aged 0–5 years living in the capital cities in Australia. The sample is the people chosen for the study from the actual population (Figure 3.9). The sampling process involves choosing people, and it is distinct from the sample. 40 In quantitative research, the sample must accurately reflect the target population, be free from selection bias, and be large enough to validate or reject the study hypothesis with statistical confidence and minimise random error. 2

[Figure 3.9: Target population, actual population and sample]

Sampling techniques

Sampling in quantitative research is a critical component that involves selecting a representative subset of individuals or cases from a larger population, and it often employs sampling techniques based on probability theory. 41 The goal of sampling is to obtain a sample that is large enough and representative of the target population. Examples of probability sampling techniques include simple random sampling, stratified random sampling, systematic random sampling and cluster sampling (shown below). 2 The key feature of probability techniques is that they involve randomisation. There are two main characteristics of probability sampling: all individuals of a population are (theoretically) accessible to the researcher, and each person in the population has an equal chance of being chosen for the study sample. 41 While quantitative research often uses sampling techniques based on probability theory, some non-probability techniques may occasionally be utilised in healthcare research. 42 Non-probability sampling methods are commonly used in qualitative research. These include purposive, convenience, theoretical and snowball sampling, which are discussed in detail in Chapter 4.

Sample size calculation

In order to enable comparisons with some level of established statistical confidence, quantitative research needs an adequate sample size. 2 The sample size is the most crucial factor for reliability (reproducibility) in quantitative research. A study must be adequately powered, power being the likelihood of detecting a difference if one truly exists. 2 Small studies are more likely to be underpowered, and results from small samples are more prone to random error. 2 The formula for sample size calculation varies with the study design and the research hypothesis. 2 There are numerous formulae for sample size calculations, but such details are beyond the scope of this book; for further reading, please consult the biostatistics textbook by Hirsch RP, 2021. 43 However, we will introduce a simple formula for calculating the sample size for cross-sectional studies with prevalence as the outcome. 2

n = z² × p × (1 − p) / d²

z = the statistical confidence coefficient; z = 1.96 corresponds to 95% confidence and z = 1.645 to 90% confidence

p = Expected prevalence (of health condition of interest)

d = the intended precision; d = 0.1 means that the estimate falls within ±10 percentage points of the true prevalence at the chosen confidence level (e.g. for a prevalence of 40% (0.4), if d = 0.1 the estimate will fall between 30% and 50% (0.3 to 0.5)).

Example: A district medical officer seeks to estimate the proportion of children in the district receiving appropriate childhood vaccinations. Assuming a simple random sample of the community is to be selected, how many children must be studied if the resulting estimate is to fall within 10 percentage points of the true proportion with 95% confidence? It is expected that approximately 50% of the children receive vaccinations.


z = 1.96 (95% confidence)

d = 10% = 10/ 100 = 0.1 (estimate to fall within 10%)

p = 50% = 50/ 100 = 0.5

Now we can enter the values into the formula

n = (1.96² × 0.5 × 0.5) / 0.1² = 96.04

Given that people cannot be reported in decimal points, it is important to round up to the nearest whole number.
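The calculation above can be expressed as a short Python function (the function name is illustrative):

```python
import math

def sample_size(z: float, p: float, d: float) -> int:
    """Minimum sample size for estimating a prevalence p
    within +/- d at the confidence level implied by z."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)
    return math.ceil(n)  # people can't be fractional: round up

# The district medical officer's example: 95% confidence, p = 0.5, d = 0.1
print(sample_size(z=1.96, p=0.5, d=0.1))  # 97
```

Using p = 0.5 is the conservative choice when the true prevalence is unknown, since p(1 − p) is largest there and the formula then yields the biggest required sample.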

An Introduction to Research Methods for Undergraduate Health Profession Students Copyright © 2023 by Faith Alele and Bunmi Malau-Aduli is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.


Sampling Methods & Strategies 101

Everything you need to know (including examples)

By: Derek Jansen (MBA) | Expert Reviewed By: Kerryn Warren (PhD) | January 2023

If you’re new to research, sooner or later you’re bound to wander into the intimidating world of sampling methods and strategies. If you find yourself on this page, chances are you’re feeling a little overwhelmed or confused. Fear not – in this post we’ll unpack sampling in straightforward language, along with loads of examples.

Overview: Sampling Methods & Strategies

  • What is sampling in a research context?
  • The two overarching approaches
  • Simple random sampling
  • Stratified random sampling
  • Cluster sampling
  • Systematic sampling
  • Purposive sampling
  • Convenience sampling
  • Snowball sampling
  • How to choose the right sampling method

What (exactly) is sampling?

At the simplest level, sampling (within a research context) is the process of selecting a subset of participants from a larger group. For example, if your research involved assessing US consumers’ perceptions about a particular brand of laundry detergent, you wouldn’t be able to collect data from every single person that uses laundry detergent (good luck with that!) – but you could potentially collect data from a smaller subset of this group.

In technical terms, the larger group is referred to as the population, and the subset (the group you’ll actually engage with in your research) is called the sample. Put another way, you can look at the population as a full cake and the sample as a single slice of that cake. In an ideal world, you’d want your sample to be perfectly representative of the population, as that would allow you to generalise your findings to the entire population. In other words, you’d want to cut a perfect cross-sectional slice of cake, such that the slice reflects every layer of the cake in perfect proportion.

Achieving a truly representative sample is, unfortunately, a little trickier than slicing a cake, as there are many practical challenges and obstacles to achieving this in a real-world setting. Thankfully though, you don’t always need a perfectly representative sample – it all depends on the specific research aims of each study – so don’t stress yourself out about that just yet!

With the concept of sampling broadly defined, let’s look at the different approaches to sampling to get a better understanding of what it all looks like in practice.


The two overarching sampling approaches

At the highest level, there are two approaches to sampling: probability sampling and non-probability sampling. Within each of these, there are a variety of sampling methods, which we’ll explore a little later.

Probability sampling involves selecting participants (or any unit of interest) on a statistically random basis, which is why it’s also called “random sampling”. In other words, the selection of each individual participant is based on a pre-determined process (not the discretion of the researcher). As a result, this approach achieves a random sample.

Probability-based sampling methods are most commonly used in quantitative research, especially when it’s important to achieve a representative sample that allows the researcher to generalise their findings.

Non-probability sampling, on the other hand, refers to sampling methods in which the selection of participants is not statistically random. In other words, the selection of individual participants is based on the discretion and judgement of the researcher, rather than on a pre-determined process.

Non-probability sampling methods are commonly used in qualitative research, where the richness and depth of the data are more important than the generalisability of the findings.

If that all sounds a little too conceptual and fluffy, don’t worry. Let’s take a look at some actual sampling methods to make it more tangible.


Probability-based sampling methods

First, we’ll look at four common probability-based (random) sampling methods:

  • Simple random sampling
  • Stratified random sampling
  • Cluster sampling
  • Systematic sampling

Importantly, this is not a comprehensive list of all the probability sampling methods – these are just four of the most common ones. So, if you’re interested in adopting a probability-based sampling approach, be sure to explore all the options.

Simple random sampling involves selecting participants in a completely random fashion, where each participant has an equal chance of being selected. Basically, this sampling method is the equivalent of pulling names out of a hat, except that you can do it digitally. For example, if you had a list of 500 people, you could use a random number generator to draw a list of 50 numbers (each number reflecting a participant) and then use that dataset as your sample.

Thanks to its simplicity, simple random sampling is easy to implement, and as a consequence, is typically quite cheap and efficient. Given that the selection process is completely random, the results can be generalised fairly reliably. However, this also means it can hide the impact of large subgroups within the data, which can result in minority subgroups having little representation in the results – if any at all. To address this, one needs to take a slightly different approach, which we’ll look at next.

Stratified random sampling is similar to simple random sampling, but it kicks things up a notch. As the name suggests, stratified sampling involves selecting participants randomly, but from within certain pre-defined subgroups (i.e., strata) that share a common trait. For example, you might divide the population into strata based on gender, ethnicity, age range or level of education, and then select randomly from each group.

The benefit of this sampling method is that it gives you more control over the impact of large subgroups (strata) within the population. For example, if a population comprises 80% males and 20% females, you may want to “balance” this skew out by selecting a random sample from an equal number of males and females. This would, of course, reduce the representativeness of the sample, but it would allow you to identify differences between subgroups. So, depending on your research aims, the stratified approach could work well.


Next on the list is cluster sampling. As the name suggests, this sampling method involves sampling from naturally occurring, mutually exclusive clusters within a population – for example, area codes within a city or cities within a country. Once the clusters are defined, a set of clusters is randomly selected, and then a set of participants is randomly selected from each cluster.

Now, you’re probably wondering, “how is cluster sampling different from stratified random sampling?”. Well, let’s look at the previous example where each cluster reflects an area code in a given city.

With cluster sampling, you would collect data from clusters of participants in a handful of area codes (let’s say 5 neighbourhoods). Conversely, with stratified random sampling, you would need to collect data from all over the city (i.e., many more neighbourhoods). You’d still achieve the same sample size either way (let’s say 200 people, for example), but with stratified sampling, you’d need to do a lot more running around, as participants would be scattered across a vast geographic area. As a result, cluster sampling is often the more practical and economical option.

If that all sounds a little mind-bending, you can use the following general rule of thumb. If a population is relatively homogeneous, cluster sampling will often be adequate. Conversely, if a population is quite heterogeneous (i.e., diverse), stratified sampling will generally be more appropriate.

The last probability sampling method we’ll look at is systematic sampling. This method simply involves selecting participants at a set interval, starting from a random point.

For example, if you have a list of students that reflects the population of a university, you could systematically sample that population by selecting participants at an interval of 8. In other words, you would randomly select a starting point – let’s say student number 40 – followed by students 48, 56, 64, etc.

What’s important with systematic sampling is that the population list you select from needs to be randomly ordered. If there are underlying patterns in the list (for example, if the list is ordered by gender, IQ, age, etc.), this will result in a non-random sample, which would defeat the purpose of adopting this sampling method. Of course, you could safeguard against this by “shuffling” your population list using a random number generator or similar tool.


Non-probability-based sampling methods

Right, now that we’ve looked at a few probability-based sampling methods, let’s look at three non-probability methods:

  • Purposive sampling
  • Convenience sampling
  • Snowball sampling

Again, this is not an exhaustive list of all possible sampling methods, so be sure to explore further if you’re interested in adopting a non-probability sampling approach.

First up, we’ve got purposive sampling – also known as judgement, selective or subjective sampling. Again, the name provides some clues, as this method involves the researcher selecting participants using his or her own judgement, based on the purpose of the study (i.e., the research aims).

For example, suppose your research aims were to understand the perceptions of hyper-loyal customers of a particular retail store. In that case, you could use your judgement to engage with frequent shoppers, as well as rare or occasional shoppers, to understand what drives the two behavioural extremes.

Purposive sampling is often used in studies where the aim is to gather information from a small population (especially rare or hard-to-find populations), as it allows the researcher to target specific individuals who have unique knowledge or experience. Naturally, this sampling method is quite prone to researcher bias and judgement error, and it’s unlikely to produce generalisable results, so it’s best suited to studies where the aim is to go deep rather than broad.


Next up, we have convenience sampling. As the name suggests, with this method, participants are selected based on their availability or accessibility. In other words, the sample is selected based on how convenient it is for the researcher to access it, as opposed to using a defined and objective process.

Naturally, convenience sampling provides a quick and easy way to gather data, as the sample is selected based on the individuals who are readily available or willing to participate. This makes it an attractive option if you’re particularly tight on resources and/or time. However, as you’d expect, this sampling method is unlikely to produce a representative sample and will of course be vulnerable to researcher bias, so it’s important to approach it with caution.

Last but not least, we have the snowball sampling method. This method relies on referrals from initial participants to recruit additional participants. In other words, the initial subjects form the first (small) snowball, and each additional subject recruited through referral is added to the snowball, making it larger as it rolls along.

Snowball sampling is often used in research contexts where it’s difficult to identify and access a particular population – for example, people with a rare medical condition or members of an exclusive group. It can also be useful in cases where the research topic is sensitive or taboo and people are unlikely to open up unless they’re referred by someone they trust.

Simply put, snowball sampling is ideal for research that involves reaching hard-to-access populations. But keep in mind that, once again, it’s a sampling method that’s highly prone to researcher bias and is unlikely to produce a representative sample. So, make sure that it aligns with your research aims and questions before adopting this method.

How to choose a sampling method

Now that we’ve looked at a few popular sampling methods (both probability and non-probability based), the obvious question is, “how do I choose the right sampling method for my study?”. When selecting a sampling method for your research project, you’ll need to consider two important factors: your research aims and your resources.

As with all research design and methodology choices, your sampling approach needs to be guided by and aligned with your research aims, objectives and research questions – in other words, your golden thread. Specifically, you need to consider whether your research aims are primarily concerned with producing generalisable findings (in which case, you’ll likely opt for a probability-based sampling method) or with achieving rich, deep insights (in which case, a non-probability-based approach could be more practical). Typically, quantitative studies lean toward the former, while qualitative studies aim for the latter, so be sure to consider your broader methodology as well.

The second factor you need to consider is your resources and, more generally, the practical constraints at play. If, for example, you have easy, free access to a large sample at your workplace or university and a healthy budget to help you attract participants, that will open up multiple options in terms of sampling methods. Conversely, if you’re cash-strapped, short on time and don’t have unfettered access to your population of interest, you may be restricted to convenience or referral-based methods.

In short, be ready for trade-offs – you won’t always be able to utilise the “perfect” sampling method for your study, and that’s okay. Much like all the other methodological choices you’ll make as part of your study, you’ll often need to compromise and accept practical trade-offs when it comes to sampling. Don’t let this get you down though – as long as your sampling choice is well explained and justified, and the limitations of your approach are clearly articulated, you’ll be on the right track.

quantitative research sampling procedures

Let’s recap…

In this post, we’ve covered the basics of sampling within the context of a typical research project.

  • Sampling refers to the process of defining a subgroup (sample) from the larger group of interest (population).
  • The two overarching approaches to sampling are probability sampling (random) and non-probability sampling.
  • Common probability-based sampling methods include simple random sampling, stratified random sampling, cluster sampling and systematic sampling.
  • Common non-probability-based sampling methods include purposive sampling, convenience sampling and snowball sampling.
  • When choosing a sampling method, you need to consider your research aims, objectives and questions, as well as your resources and other practical constraints.

If you’d like to see an example of a sampling strategy in action, be sure to check out our research methodology chapter sample .

Last but not least, if you need hands-on help with your sampling (or any other aspect of your research), take a look at our 1-on-1 coaching service , where we guide you through each step of the research process, at your own pace.



What are sampling methods and how do you choose the best one?

Posted on 18th November 2020 by Mohamed Khalifa


This tutorial will introduce sampling methods and potential sampling errors to avoid when conducting medical research.

Introduction to sampling methods


It is important to understand why we sample the population. For example, studies are built to investigate the relationships between risk factors and disease, and we want to find out whether an observed association is true while keeping the risk of errors such as chance, bias, or confounding to a minimum.

However, it would not be feasible to experiment on the whole population; instead, we need to take a good sample and aim to reduce the risk of error through proper sampling technique.

What is a sampling frame?

A sampling frame is a record of the target population containing all participants of interest. In other words, it is a list from which we can extract a sample.

What makes a good sample?

A good sample should be a representative subset of the population we are interested in studying, ideally one in which each participant has an equal chance of being randomly selected into the study.

We could choose a sampling method based on whether we want to account for sampling bias; a random sampling method is often preferred over a non-random method for this reason. Random sampling examples include: simple, systematic, stratified, and cluster sampling. Non-random sampling methods are liable to bias, and common examples include: convenience, purposive, snowballing, and quota sampling. For the purposes of this blog we will be focusing on random sampling methods .

Simple random sampling

Example: We want to conduct an experimental trial in a small population such as employees in a company, or students in a college. We include everyone in a list and use a random number generator to select the participants.

Advantages: Generalisable results possible, random sampling, the sampling frame is the whole population, every participant has an equal probability of being selected

Disadvantages: Less precise than stratified method, less representative than the systematic method

[Figure: simple random sampling illustrated with stick figures]

Systematic sampling

Example: Every nth patient entering the out-patient clinic is selected and included in our sample.

Advantages: More feasible than simple or stratified methods, sampling frame is not always required

Disadvantages:  Generalisability may decrease if baseline characteristics repeat across every nth participant

[Figure: systematic sampling illustrated with stick figures]

Stratified sampling

Example: We have a big population (a city) and we want to ensure representativeness of all groups with a pre-determined characteristic such as age group, ethnic origin, or gender.

Advantages:  Inclusive of strata (subgroups), reliable and generalisable results

Disadvantages: Does not work well with multiple variables

[Figure: stratified sampling illustrated with stick figures]

Cluster sampling

Example: 10 schools have the same number of students across the county. We can randomly select 3 out of the 10 schools as our clusters.

Advantages: Readily doable with most budgets, does not require a sampling frame

Disadvantages: Results may not be reliable or generalisable

[Figure: cluster sampling illustrated with stick figures]
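As a rough sketch of the school example above (school names and student identifiers are made up, and the seed is fixed only to make the sketch reproducible), cluster sampling can be expressed in a few lines of Python:

```python
import random

# Hypothetical roster: 10 schools (clusters), each with the same number of students.
schools = {f"School {i}": [f"S{i}-student{j}" for j in range(1, 31)] for i in range(1, 11)}

# Step 1: randomly select 3 of the 10 clusters.
random.seed(42)  # fixed seed so the sketch is reproducible
chosen_schools = random.sample(list(schools), k=3)

# Step 2: every student in a chosen cluster enters the sample.
sample = [student for school in chosen_schools for student in schools[school]]

print(chosen_schools)   # three randomly chosen school names
print(len(sample))      # 3 clusters x 30 students = 90
```

Note that randomness enters only at the cluster level; whether the 3 chosen schools resemble the county as a whole is exactly the reliability concern mentioned above.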

How can you identify sampling errors?

Non-random selection increases the probability of sampling (selection) bias if the sample does not represent the population we want to study. We can avoid this by using random sampling and ensuring the representativeness of our sample with regard to sample size.

An inadequate sample size decreases the confidence in our results, as we may conclude there is no significant difference when actually there is. This type II error can result from having a small sample size, or from participants dropping out of the sample.

In medical research of disease, if we select people with certain diseases while strictly excluding participants with other co-morbidities, we run the risk of diagnostic purity bias where important sub-groups of the population are not represented.

Furthermore, measurement bias may occur during the recollection of risk factors by participants (recall bias) or during assessment of the outcome, where people who live longer are associated with treatment success when, in fact, people who died were not included in the sample or data analysis (survivorship bias).

By following the steps below we could choose the best sampling method for our study in an orderly fashion.

Research objectiveness

Firstly, a refined research question and goal would help us define our population of interest. If our calculated sample size is small then it would be easier to get a random sample. If, however, the sample size is large, then we should check if our budget and resources can handle a random sampling method.

Sampling frame availability

Secondly, we need to check whether a sampling frame is available (enabling simple random sampling) and, if not, whether we could make a list of our own (enabling stratified sampling). If neither option is possible, we could still use other random sampling methods, for instance, systematic or cluster sampling.

Study design

Moreover, we could consider the prevalence of the topic (exposure or outcome) in the population, and what would be the suitable study design. In addition, we should check whether our target population varies widely in its baseline characteristics. For example, a population with large ethnic subgroups could best be studied using a stratified sampling method.

Random sampling

Finally, the best sampling method is always the one that could best answer our research question while also allowing for others to make use of our results (generalisability of results). When we cannot afford a random sampling method, we can always choose from the non-random sampling methods.

To sum up, we now understand that choosing between random or non-random sampling methods is multifactorial. We might often be tempted to choose a convenience sample from the start, but that would not only decrease the precision of our results, it would also make us miss out on producing research that is more robust and reliable.

References (pdf)



Logo for UEN Digital Press with Pressbooks

Part I: Sampling, Data Collection, & Analysis in Quantitative Research

In this module, we will focus on how quantitative research collects and analyzes data, as well as methods for obtaining sample population.

  • Levels of Measurement
  • Reliability and Validity
  • Population and Samples
  • Common Data Collection Methods
  • Data Analysis
  • Statistical Significance versus Clinical Significance

Objectives:

  • Describe levels of measurement
  • Describe reliability and validity as applied to critical appraisal of research
  • Differentiate methods of obtaining samples for population generalizability
  • Describe common data collection methods in quantitative research
  • Describe various data analysis methods in quantitative research
  • Differentiate statistical significance versus clinical significance

Levels of measurement

Once researchers have collected their data (we will talk about data collection later in this module), they need methods to organize the data before they even start to think about statistical analyses. Statistical operations depend on a variable’s level of measurement. Think of this as sorting all of your bills into some kind of order before you pay them. With levels of measurement, we record variables precisely, in a way that helps organize them.

There are four levels of measurement:

Nominal:  The data can only be categorized

Ordinal:  The data can be categorized and ranked

Interval:   The data can be categorized, ranked, and evenly spaced

Ratio:   The data can be categorized, ranked, evenly spaced, and has a natural zero

Going from lowest to highest, the 4 levels of measurement are cumulative. This means that they each take on the properties of lower levels and add new properties.


  • A variable is nominal  if the values could be interchanged (e.g. 1 = male, 2 = female OR 1 = female, 2 = male).
  • A variable is ordinal  if there is a quantitative ordering of values AND if there are a small number of values (e.g. excellent, good, fair, poor).
  • A variable is usually considered interval  if it is measured with a composite scale or test.
  • A variable is ratio level if it makes sense to say that one value is twice as much as another (e.g. 100 mg is twice as much as 50 mg) (Polit & Beck, 2021).
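To see how the level of measurement constrains which statistics are meaningful, here is a small Python sketch with hypothetical data (the variables and values are illustrative, not from the module):

```python
from statistics import mode

# Hypothetical data at different levels of measurement.
nominal = ["male", "female", "female", "male", "female"]   # categories only
ordinal = ["good", "excellent", "fair", "good", "poor"]    # categories with an order
ratio   = [50, 100, 75, 50, 125]                           # mg doses; a true zero exists

# Nominal: only counting/mode is meaningful.
print(mode(nominal))             # "female"

# Ordinal: ranking is meaningful, so we can find the middle category.
order = ["poor", "fair", "good", "excellent"]
ranks = sorted(ordinal, key=order.index)
print(ranks[len(ranks) // 2])    # middle category: "good"

# Ratio: ratios are meaningful -- 100 mg really is twice 50 mg.
print(ratio[1] / ratio[0])       # 2.0
```

Averaging the nominal values, by contrast, would be meaningless, which is why the level of measurement must be settled before any statistics are run.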

Reliability and Validity as Applied to Critical Appraisal of Research

Reliability is the ability of a measure to consistently measure in the same way. Validity is the extent to which a measure actually measures what it is supposed to measure. Do we need both in research? Yes! If a variable is measured inaccurately, the data are useless. Let’s talk about why.

For example, let’s set out to measure blood glucose for our study. The validity  is how well the measure can determine the blood glucose. If we used a blood pressure cuff to measure blood glucose, this would not be a valid measure. If we used a blood glucose meter, it would be a more valid measure. It does not stop there, however. What about the meter itself? Has it been calibrated? Are the correct sticks for the meter available? Are they expired? Does the meter have fresh batteries? Are the patient’s hands clean?

Reliability  wants to know: Is the blood glucose meter measuring the same way, every time?

Validity is asking, “Does the meter measure what it is supposed to measure?”

  • Construct validity: Does the test measure the concept that it’s intended to measure?
  • Content validity: Is the test fully representative of what it aims to measure?
  • Face validity: Does the content of the test appear to be suitable to its aims?

Term: Reliability
Definition: Measures the ability of a measure to consistently measure the same way.
Importance: This is important for measures of a construct.
Application: For example, when measuring a patient’s blood pressure, the blood pressure cuff should consistently measure in the same way. So, when doing every-15-minute vital signs after surgery, the blood pressure cuff should measure consistently every 15 minutes.

Term: Validity
Definition: Measures the concept it is supposed to measure.
Importance: This is important to be able to measure the intended construct.
Application: For example, a measure of critical thinking should be an accurate measure of critical thinking and not expert practice. Another example: a measure of stress level should measure stress level, not pain level.

(Leibold, 2020)

Obtaining Samples for Population Generalizability

In quantitative research, a population is the entire group that the researcher wants to draw conclusions about.

A sample is the specific group that the researcher will actually collect data from. A sample is always a much smaller group of people than the total size of the population. For example, if we wanted to investigate heart failure, there would be no possible way to measure every single human with heart failure. Therefore, researchers will attempt to select a sample of that large population which would most likely reflect (AKA: be a representative sample) the larger population of those with heart failure. Remember, in quantitative research, the results should be generalizable to the population studied.


A researcher will specify population characteristics through eligibility criteria. This means that they consider which characteristics to include ( inclusion criteria ) and which characteristics to exclude ( exclusion criteria ).

For example, if we were studying chemotherapy in breast cancer subjects, we might specify:

  • Inclusion Criteria: Postmenopausal women between the ages of 45 and 75 who have been diagnosed with Stage II breast cancer.
  • Exclusion Criteria: Abnormal renal function tests, since we are studying a combination of drugs that may be nephrotoxic. Renal function tests are to be performed to evaluate renal function, and the threshold value that would disqualify a prospective subject is a serum creatinine above 1.9 mg/dl.
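Screening prospective subjects against criteria like these is mechanical enough to sketch in code. The candidate records below are hypothetical; the thresholds come from the example above:

```python
# Hypothetical candidate records for the breast cancer example above.
candidates = [
    {"id": 1, "age": 52, "postmenopausal": True,  "stage": "II",  "creatinine": 1.1},
    {"id": 2, "age": 41, "postmenopausal": False, "stage": "II",  "creatinine": 0.9},
    {"id": 3, "age": 68, "postmenopausal": True,  "stage": "II",  "creatinine": 2.3},
    {"id": 4, "age": 60, "postmenopausal": True,  "stage": "III", "creatinine": 1.0},
]

def eligible(c):
    """Apply the inclusion criteria, then the exclusion criterion."""
    included = 45 <= c["age"] <= 75 and c["postmenopausal"] and c["stage"] == "II"
    excluded = c["creatinine"] > 1.9   # nephrotoxicity-risk threshold from the protocol
    return included and not excluded

sample_pool = [c["id"] for c in candidates if eligible(c)]
print(sample_pool)   # [1] -- only candidate 1 meets every criterion
```

Candidate 2 fails the age/menopause inclusion criteria, candidate 3 is excluded on renal function, and candidate 4 has the wrong disease stage.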

Sampling Designs:

There are two broad classes of sampling in quantitative research: Probability and nonprobability sampling.

Probability sampling : As the name implies, probability sampling means that each eligible individual has a random chance (same probability) of being selected to participate in the study.

There are three types of probability sampling:

Simple random sampling :  Every eligible participant is randomly selected (e.g. drawing from a hat).

Stratified random sampling : Eligible population is first divided into two or more strata (categories) from which randomization occurs (e.g. pollution levels selected from restaurants, bars with ordinances of state laws, and bars with no ordinances).

Systematic sampling : Involves the selection of every nth eligible participant from a list (e.g. every 9th person).

Nonprobability sampling : In nonprobability sampling, eligible participants are selected using a subjective (non-random) method.

There are four types of nonprobability sampling:

Convenience sampling : Participants are selected for inclusion in the sample because they are the easiest for the researcher to access. This can be due to geographical proximity, availability at a given time, or willingness to participate in the research.

Quota sampling : Participants are from a very tailored sample that’s in proportion to some characteristic or trait of a population. For example, the researcher could divide a population by the state they live in, income or education level, or sex. The population is divided into groups (also called strata) and samples are taken from each group to meet a quota.

Consecutive sampling : A technique in which every subject meeting the inclusion criteria is selected until the required sample size is achieved. It resembles convenience sampling, with a slight variation: the researcher selects a sample or group of people, conducts research over a period, collects results, and then moves on to another sample.

Purposive sampling : A group of non-probability sampling techniques in which units are selected because they have characteristics that the researcher needs in their sample. In other words, units are selected “on purpose” in purposive sampling.
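To make the non-random character of these designs concrete, here is a sketch of quota sampling (the volunteer names, groups, and quotas are all hypothetical): participants are accepted in the order they show up, until each subgroup’s quota is filled.

```python
from collections import Counter

# Hypothetical stream of volunteers (non-random: whoever shows up first).
volunteers = [
    ("Ana", "female"), ("Ben", "male"), ("Cara", "female"), ("Dan", "male"),
    ("Eve", "female"), ("Finn", "male"), ("Gus", "male"), ("Hana", "female"),
]

quotas = {"female": 2, "male": 2}   # quota per subgroup

filled = Counter()
sample = []
for name, group in volunteers:
    if filled[group] < quotas[group]:   # accept until the group's quota is met
        sample.append(name)
        filled[group] += 1

print(sample)   # ['Ana', 'Ben', 'Cara', 'Dan']
```

Unlike stratified random sampling, nothing here is randomized: the sample simply mirrors arrival order, which is exactly where selection bias can creep in.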


Common Data Collection Methods in Quantitative Research

There are various methods that researchers use to collect data for their studies. For nurse researchers, existing records are an important data source, and researchers need to decide whether they will collect new data or use existing data. There is also a wealth of clinical data collected for non-research purposes that can help answer clinical questions.

Let’s look at some general data collection methods and data sources in quantitative research.

Existing data  could include medical records, school records, corporate diaries, letters, meeting minutes, and photographs. These are easy to obtain and do not require participation from those being studied.

Collecting new data:

Let’s go over a few methods by which researchers can collect new data. These usually require participation from those being studied.

Self-reports can be obtained via interviews or questionnaires . Closed-ended questions can be asked (“Within the past 6 months, were you ever a member of a fitness gym?” Yes/No), as can open-ended questions such as “Why did you decide to join a fitness gym?” Important to remember (this sometimes throws students off): conducting interviews and questionnaires does not mean a study is qualitative in nature! Do not let that throw you off when assessing whether a published article is quantitative or qualitative. The nature of the questions, however, may help to determine the type of research, as qualitative questions aim to capture an organic account of people’s experiences through open-ended questions.

Advantages of questionnaires (compared to interviews):

  • Questionnaires are less costly and are advantageous for geographically dispersed samples.
  • Questionnaires offer the possibility of anonymity, which may be crucial in obtaining information about certain opinions or traits.

Advantages of interviews (compared to questionnaires):

  • Higher response rates
  • Some people cannot fill out a questionnaire.
  • Opportunities to clarify questions or to determine comprehension
  • Opportunity to collect supplementary data through observation

Psychosocial scales are often utilized within questionnaires or interviews. These can help to obtain attitudes, perceptions, and psychological traits. 

Likert Scales :

  • Consist of several declarative statements ( items ) expressing viewpoints
  • Responses are on an agree/disagree continuum (usually five or seven response options).
  • Responses to items are summed to compute a total scale score.
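Summing item responses into a total scale score is simple to sketch; the responses below are hypothetical, and the reverse-scoring step is a common practice for negatively worded items rather than something stated above:

```python
# Hypothetical responses to a four-item Likert scale
# (1 = strongly disagree ... 5 = strongly agree).
responses = [4, 5, 2, 4]

# The total scale score is simply the sum of the item responses.
total = sum(responses)
print(total)   # 15

# If an item is negatively worded, it is usually reverse-scored first
# (on a 5-point scale: 1<->5, 2<->4). Say item 3 here is negatively worded:
reverse_scored = [r if i != 2 else 6 - r for i, r in enumerate(responses)]
print(sum(reverse_scored))   # 4 + 5 + (6 - 2) + 4 = 17
```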


Visual Analog Scale:

  • Used to measure subjective experiences (e.g., pain, nausea)
  • Measurements are on a straight line measuring 100 mm.
  • End points labeled as extreme limits of sensation


Observational methods involve watching people in a certain setting or place at a specific time and day. Essentially, researchers study the behavior of the individuals or the surroundings they are analyzing. This can be controlled, spontaneous, or participant-based research.

When a researcher utilizes a defined procedure for observing individuals or the environment, this is known as structured observation. When individuals are observed in their natural environment, this is known as naturalistic observation.  In participant observation, the researcher immerses himself or herself in the environment and becomes a member of the group being observed.

Biophysiologic Measures are defined as ‘those physiological and physical variables that require specialized technical instruments and equipment for their measurement’. Biophysiological measures are the most common instruments for collecting data in medical science studies. To collect valid and reliable data, it is critical to apply these measures appropriately.

  • In vivo  refers to when research or work is done with or within an entire, living organism. Examples can include studies in animal models or human clinical trials.
  • In vitro is used to describe work that’s performed outside of a living organism. This usually involves isolated tissues, organs, or cells.


What are Sampling Methods? Techniques, Types, and Examples

Every type of research includes samples from which inferences are drawn. The sample could be biological specimens or a subset of a specific group or population selected for analysis. The goal is often to draw conclusions about the entire population based on the characteristics observed in the sample. Now, the question comes to mind: how does one collect the samples? Answer: using sampling methods. Various sampling strategies are available to researchers to define and collect the samples that will form the basis of their research study.

In a study focusing on individuals experiencing anxiety, gathering data from the entire population is practically impossible due to the widespread prevalence of anxiety. Consequently, a sample is carefully selected: a subset of individuals meant to accurately represent the demographics of those experiencing anxiety (though in some cases it may fail to do so). The study’s outcomes hinge significantly on the chosen sample, emphasizing the critical importance of a thoughtful and precise selection process. The conclusions drawn about the broader population rely heavily on the selected sample’s characteristics and diversity.

Table of Contents

What is sampling?

Sampling involves the strategic selection of individuals or a subset from a population, aiming to derive statistical inferences and predict the characteristics of the entire population. It offers a pragmatic and practical approach to examining the features of the whole population, which would otherwise be difficult to achieve because studying the total population is expensive, time-consuming, and often impossible. Market researchers use various sampling methods to collect samples from a large population to acquire relevant insights. The best sampling strategy for research is determined by criteria such as the purpose of the study, available resources (time and money), and research hypothesis.

For example, if a pet food manufacturer wants to investigate the positive impact of a new cat food on feline growth, studying all the cats in the country is impractical. In such cases, employing an appropriate sampling technique from the extensive dataset allows the researcher to focus on a manageable subset. This enables the researcher to study the growth-promoting effects of the new pet food. This article will delve into the standard sampling methods and explore the situations in which each is most appropriately applied.


What are sampling methods or sampling techniques?

Sampling methods or sampling techniques in research are statistical methods for selecting a sample representative of the whole population in order to study the population’s characteristics. Sampling methods serve as invaluable tools for researchers, enabling the collection of meaningful data and facilitating analysis to identify the distinctive features of the population. Different sampling strategies can be used based on the characteristics of the population, the study purpose, and the available resources. Now that we understand why sampling methods are essential in research, we review the various sampling methods in the following sections.

Types of sampling methods  


Before we go into the specifics of each sampling method, it’s vital to understand terms like sample, sample frame, and sample space. In probability theory, the sample space comprises all possible outcomes of a random experiment, while the sample frame is the list or source guiding sample selection in statistical research. The  sample  represents the group of individuals participating in the study, forming the basis for the research findings. Selecting the correct sample is critical to ensuring the validity and reliability of any research; the sample should be representative of the population. 

The two most common sampling methods are: 

  • Probability sampling: A sampling method in which each unit or element in the population has an equal chance of being selected in the final sample. This is called random sampling, emphasizing the random and non-zero probability nature of selecting samples. Such a sampling technique ensures a more representative and unbiased sample, enabling robust inferences about the entire population. 
  • Non-probability sampling:  Another sampling method is non-probability sampling, which involves collecting data conveniently through a non-random selection based on predefined criteria. This offers a straightforward way to gather data, although the resulting sample may or may not accurately represent the entire population. 

  Irrespective of the research method you opt for, it is essential to explicitly state the chosen sampling technique in the methodology section of your research article. Now, we will explore the different characteristics of both sampling methods, along with various subtypes falling under these categories. 

What is probability sampling?  

The probability sampling method is based on the probability theory, which means that the sample selection criteria involve some random selection. The probability sampling method provides an equal opportunity for all elements or units within the entire sample space to be chosen. While it can be labor-intensive and expensive, the advantage lies in its ability to offer a more accurate representation of the population, thereby enhancing confidence in the inferences drawn in the research.   

Types of probability sampling  

Various probability sampling methods exist, such as simple random sampling, systematic sampling, stratified sampling, and clustered sampling. Here, we provide detailed discussions and illustrative examples for each of these sampling methods: 

Simple Random Sampling

  • Simple random sampling:  In simple random sampling, each individual has an equal probability of being chosen, and each selection is independent of the others. Because the choice is entirely based on chance, this is also known as the method of chance selection. In the simple random sampling method, the sample frame comprises the entire population. 

For example, a fitness sports brand is launching a new protein drink and aims to select 20 individuals from a 200-person fitness center to try it. Employing a simple random sampling approach, each of the 200 people is assigned a unique identifier. Twenty individuals are then chosen by generating random numbers between 1 and 200, either manually or through a computer program, and matching these numbers with individuals to create a randomly selected group of 20 people. This method minimizes sampling bias and ensures a representative subset of the entire population under study. 
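As an illustrative sketch (not part of the original article), the random-number selection just described takes only a few lines with Python’s standard library; the identifiers and seed are arbitrary choices:

```python
import random

# 200 gym members, each assigned a unique identifier 1..200.
population = list(range(1, 201))

random.seed(7)  # seeded only so the sketch is reproducible
sample = random.sample(population, k=20)  # each member has an equal chance of selection

print(len(sample))        # 20
print(len(set(sample)))   # 20 -- sampling is without replacement
```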

Systematic Random Sampling

  • Systematic sampling:  The systematic sampling approach involves selecting units or elements at regular intervals from an ordered list of the population. Because only the starting point is chosen at random, it is more convenient than simple random sampling. For a better understanding, consider the following example.  

For example, continuing the previous scenario, individuals at the fitness facility are arranged alphabetically. The manufacturer then initiates the process by randomly selecting a starting point from the first ten positions, say 8. Starting from the 8th position, every tenth person on the list is then chosen (e.g., 8, 18, 28, 38, and so forth) until a sample of 20 individuals is obtained.  
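A minimal sketch of this interval-based selection, assuming the same 200-person alphabetical list (positions stand in for people):

```python
# 200 members ordered alphabetically, positions 1..200.
population = list(range(1, 201))

start = 8      # random start drawn from the first ten positions
interval = 10  # sampling interval k = population size / sample size = 200 / 20

# Take every 10th person beginning at position 8: 8, 18, 28, ...
sample = population[start - 1 :: interval]

print(sample[:4])   # [8, 18, 28, 38]
print(len(sample))  # 20
```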

Stratified Sampling

  • Stratified sampling: Stratified sampling divides the population into subgroups (strata), and random samples are drawn from each stratum in proportion to its size in the population. Stratified sampling provides improved representation because each subgroup that differs in significant ways is included in the final sample. 

For example, expanding on the previous simple random sampling example, suppose the manufacturer aims for a more comprehensive representation of genders in the 200-person population, which consists of 90 males, 80 females, and 30 others. The manufacturer categorizes the population into three gender strata (male, female, and other). Within each stratum, random sampling is employed to select nine males, eight females, and three individuals from the other category, resulting in a well-rounded and representative sample of 20 individuals. 
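The proportional allocation described above can be sketched in Python; the stratum labels and member identifiers are hypothetical, and the seed is fixed only for reproducibility:

```python
import random

# Hypothetical gender strata in a 200-person population.
strata = {
    "male":   [f"M{i}" for i in range(90)],   # 90 males
    "female": [f"F{i}" for i in range(80)],   # 80 females
    "other":  [f"O{i}" for i in range(30)],   # 30 others
}

sample_size = 20
population_size = sum(len(group) for group in strata.values())  # 200

random.seed(3)
sample = []
for name, group in strata.items():
    # Proportional allocation: 20 * 90/200 = 9, 20 * 80/200 = 8, 20 * 30/200 = 3.
    n = round(sample_size * len(group) / population_size)
    sample.extend(random.sample(group, n))  # random selection *within* each stratum

print(len(sample))   # 9 + 8 + 3 = 20
```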

  • Clustered sampling: In this sampling method, the population is divided into clusters, and then a random sample of clusters is included in the final sample. Clustered sampling, distinct from stratified sampling, involves subgroups (clusters) that each exhibit characteristics similar to the whole sample. In the case of small clusters, all members can be included in the final sample, whereas for larger clusters, individuals within each cluster may be sampled using the methods described above; this approach is referred to as multistage sampling. Cluster sampling is well-suited for large and widely distributed populations; however, there is a potential risk of sampling error, because ensuring that the sampled clusters truly represent the entire population can be challenging. 

Clustered Sampling

For example, Researchers conducting a nationwide health study can select specific geographic clusters, like cities or regions, instead of trying to survey the entire population individually. Within each chosen cluster, they sample individuals, providing a representative subset without the logistical challenges of attempting a nationwide survey. 

Uses of probability sampling  

Probability sampling methods find widespread use across diverse research disciplines because of their ability to yield representative and unbiased samples. The advantages of employing probability sampling include the following: 

  • Representativeness  

Probability sampling assures that every element in the population has a non-zero chance of being included in the sample, ensuring representativeness of the entire population and keeping research bias to a minimum. The researcher can acquire higher-quality data via probability sampling, increasing confidence in the conclusions. 

  • Statistical inference  

Statistical methods, like confidence intervals and hypothesis testing, depend on probability sampling to generalize findings from a sample to the broader population. Probability sampling methods ensure unbiased representation, allowing inferences about the population based on the characteristics of the sample. 
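As a concrete illustration of such inference, the short Python sketch below (standard library only; the population is simulated, and the numbers are arbitrary) draws a simple random sample and builds an approximate 95% confidence interval for the population mean:

```python
import math
import random
import statistics

rng = random.Random(0)
# Simulated population: 10,000 values with mean ~50.
population = [rng.gauss(50, 10) for _ in range(10_000)]

# Probability sample of 400, then a 95% confidence interval for the mean.
sample = rng.sample(population, 400)
mean = statistics.mean(sample)
# Standard error of the mean; 1.96 is the z-value for 95% confidence.
se = statistics.stdev(sample) / math.sqrt(len(sample))
ci = (mean - 1.96 * se, mean + 1.96 * se)
```

Because the sample was drawn at random, this interval is a valid estimate for the whole population; the same arithmetic applied to a non-probability sample would carry no such guarantee.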

  • Precision and reliability  

The use of probability sampling improves the precision and reliability of study results. Because the probability of selecting any single element/individual is known, the chance variations that may occur in non-probability sampling methods are reduced, resulting in more dependable and precise estimations. 

  • Generalizability  

Probability sampling enables the researcher to generalize study findings to the entire population from which they were derived. The results produced through probability sampling methods are more likely to be applicable to the larger population, laying the foundation for making broad predictions or recommendations. 

  • Minimization of Selection Bias  

By ensuring that each member of the population has a known, non-zero chance of being selected in the sample, probability sampling lowers the possibility of selection bias. This reduces the impact of systematic errors that may occur in non-probability sampling methods, where data may be skewed toward a specific demographic due to inadequate representation of some segments of the population. 

What is non-probability sampling?  

Non-probability sampling methods involve selecting individuals based on non-random criteria, often relying on the researcher’s judgment or predefined criteria. While it is easier and more economical, it tends to introduce sampling bias, resulting in weaker inferences compared to probability sampling techniques in research. 

Types of Non-probability Sampling   

Non-probability sampling methods are further classified as convenience sampling, consecutive sampling, quota sampling, purposive or judgmental sampling, and snowball sampling. Let’s explore these types of sampling methods in detail. 

  • Convenience sampling:  In convenience sampling, individuals are recruited directly from the population based on their accessibility and proximity to the researcher. It is a simple, inexpensive, and practical method of sample selection, yet convenience sampling suffers from both sampling and selection bias due to a lack of appropriate population representation. 


For example, imagine you’re a researcher investigating smartphone usage patterns in your city. The most convenient way to select participants is by approaching people in a shopping mall on a weekday afternoon. However, this convenience sampling method may not be an accurate representation of the city’s overall smartphone usage patterns as the sample is limited to individuals present at the mall during weekdays, excluding those who visit on other days or never visit the mall.

  • Consecutive sampling: Participants in consecutive sampling (or sequential sampling) are chosen based on their availability and desire to participate in the study as they become available. This strategy entails sequentially recruiting individuals who fulfill the researcher’s requirements. 

For example, in researching the prevalence of stroke in a hospital, instead of randomly selecting patients from the entire population, the researcher can opt to include all eligible patients admitted over a three-month period. Participants are then consecutively recruited upon admission during that timeframe, forming the study sample. 

  • Quota sampling:  The selection of individuals in quota sampling is based on non-random criteria, in which only participants with certain traits, in proportions representative of the population, are included. Quota sampling involves setting predetermined quotas for specific subgroups based on key demographics or other relevant characteristics. This sampling method involves dividing the population into mutually exclusive subgroups and then selecting sample units until the set quota is reached.  


For example, in a survey on a college campus to assess student interest in a new policy, the researcher should establish quotas aligned with the distribution of student majors, ensuring representation from various academic disciplines. If the campus has 20% biology majors, 30% engineering majors, 20% business majors, and 30% liberal arts majors, participants should be recruited to mirror these proportions. 
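Translating such proportions into recruitment targets is simple arithmetic; here is a small Python sketch (the function name and figures are illustrative):

```python
def quota_targets(proportions, sample_size):
    """Turn subgroup proportions into per-group recruitment quotas."""
    return {group: round(sample_size * share)
            for group, share in proportions.items()}

# Hypothetical campus distribution from the example, for a sample of 100.
majors = {"biology": 0.20, "engineering": 0.30,
          "business": 0.20, "liberal arts": 0.30}
targets = quota_targets(majors, 100)
# → {'biology': 20, 'engineering': 30, 'business': 20, 'liberal arts': 30}
```

The key difference from stratified sampling is what happens next: recruitment within each quota is non-random (whoever is available is taken until the quota fills), rather than a random draw from each stratum.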

  • Purposive or judgmental sampling: In purposive sampling, the researcher leverages expertise to select a sample relevant to the study’s specific questions. This sampling method is commonly applied in qualitative research, mainly when aiming to understand a particular phenomenon, and is suitable for smaller population sizes. 


For example, imagine a researcher who wants to study public policy issues for a focus group. The researcher might purposely select participants with expertise in economics, law, and public administration to take advantage of their knowledge and ensure a depth of understanding.  

  • Snowball sampling:  This sampling method is used when accessing the population is challenging. It involves collecting the sample through a chain-referral process, where each recruited candidate aids in finding others. These candidates share common traits, representing the targeted population. This method is often used in qualitative research, particularly when studying phenomena related to stigmatized or hidden populations. 


For example, in a study focusing on understanding the experiences and challenges of individuals in hidden or stigmatized communities (e.g., LGBTQ+ individuals in specific cultural contexts), the snowball sampling technique can be employed. The researcher initiates contact with one community member, who then assists in identifying additional candidates until the desired sample size is achieved.
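The chain-referral process can be pictured as a traversal of a referral network: each recruited participant adds the people they refer to the recruitment queue. A minimal Python sketch (the network and names are hypothetical):

```python
def snowball_sample(referrals, seeds, target_size):
    """Chain-referral sampling: start from seed participants and
    follow their referrals until the target size is reached."""
    sample, frontier = [], list(seeds)
    seen = set(seeds)
    while frontier and len(sample) < target_size:
        person = frontier.pop(0)       # recruit the next referred person
        sample.append(person)
        for peer in referrals.get(person, []):
            if peer not in seen:       # each candidate is referred once
                seen.add(peer)
                frontier.append(peer)
    return sample

# Hypothetical referral network within a hard-to-reach community.
network = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"],
           "D": [], "E": [], "F": []}
sample = snowball_sample(network, seeds=["A"], target_size=4)
# → ['A', 'B', 'C', 'D']
```

Note how the final sample is entirely determined by who the seed knows, which is exactly why snowball samples are reachable but not representative.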

Uses of non-probability sampling  

Non-probability sampling approaches are employed in qualitative or exploratory research where the goal is to investigate underlying population traits rather than generalizability. Non-probability sampling methods are also helpful for the following purposes: 

  • Generating a hypothesis  

In the initial stages of exploratory research, non-probability methods such as purposive or convenience sampling allow researchers to quickly gather information and generate hypotheses that help build a future research plan.  

  • Qualitative research  

Qualitative research is usually focused on understanding the depth and complexity of human experiences, behaviors, and perspectives. Non-probability methods like purposive or snowball sampling are commonly used to select participants with specific traits that are relevant to the research question.  

  • Convenience and pragmatism  

Non-probability sampling methods are valuable when resources and time are limited or when preliminary data is required for a pilot study. For example, a researcher might conduct a survey at a local shopping mall to gather opinions on a consumer product because of the ease of access to potential participants.  

Probability vs Non-probability Sampling Methods  

     
  • Selection of participants: Probability sampling uses random selection of participants from the population via randomization methods; non-probability sampling uses non-random selection based on convenience or criteria. 
  • Representativeness: Probability sampling is likely to yield a representative sample of the whole population, allowing for generalizations; non-probability sampling may not, resulting in poor generalizability. 
  • Precision and accuracy: Probability sampling provides more precise and accurate estimates of population characteristics; non-probability sampling may have less precision and accuracy due to non-random selection. 
  • Bias: Probability sampling minimizes selection bias; non-probability sampling may introduce selection bias if criteria are subjective and not well-defined. 
  • Statistical inference: Probability sampling is suited for statistical inference, hypothesis testing, and making generalizations to the population; non-probability sampling is less suited for these purposes. 
  • Application: Probability sampling is useful for quantitative research where generalizability is crucial; non-probability sampling is commonly used in qualitative and exploratory research where in-depth insights are the goal. 

Frequently asked questions  

  • What is multistage sampling ? Multistage sampling is a probability sampling approach that involves the progressive selection of samples in stages, going from larger clusters to a smaller number of participants, making it suited for large-scale research on very large populations.  
  • What are the methods of probability sampling? Probability sampling methods are simple random sampling, stratified random sampling, systematic sampling, cluster sampling, and multistage sampling.
  • How to decide which type of sampling method to use? Choose a sampling method based on your goals, population, and resources: probability sampling when statistical inference is needed, and non-probability sampling for efficiency or qualitative insights. Also consider the population's characteristics and size, and alignment with the study objectives.
  • What are the methods of non-probability sampling? Non-probability sampling methods are convenience sampling, consecutive sampling, purposive sampling, snowball sampling, and quota sampling.
  • Why are sampling methods used in research? Sampling methods in research are employed to efficiently gather representative data from a subset of a larger population, enabling valid conclusions and generalizations while minimizing costs and time.  



Sampling Methods – Types, Techniques and Examples


Sampling refers to the process of selecting a subset of data from a larger population or dataset in order to analyze or make inferences about the whole population.

In other words, sampling involves taking a representative sample of data from a larger group or dataset in order to gain insights or draw conclusions about the entire group.

Sampling Methods

Sampling methods refer to the techniques used to select a subset of individuals or units from a larger population for the purpose of conducting statistical analysis or research.

Sampling is an essential part of research because it allows researchers to draw conclusions about a population without having to collect data from every member of that population, which can be time-consuming, expensive, or even impossible.

Types of Sampling Methods

Sampling can be broadly categorized into two main categories:

Probability Sampling

This type of sampling is based on the principles of random selection, and it involves selecting samples in a way that every member of the population has an equal chance of being included in the sample. Probability sampling is commonly used in scientific research and statistical analysis, as it provides a representative sample that can be generalized to the larger population.

Types of Probability Sampling:

  • Simple Random Sampling: In this method, every member of the population has an equal chance of being selected for the sample. This can be done using a random number generator or by drawing names out of a hat, for example.
  • Systematic Sampling: In this method, the population is first divided into a list or sequence, and then every nth member is selected for the sample. For example, if every 10th person is selected from a list of 100 people, the sample would include 10 people.
  • Stratified Sampling: In this method, the population is divided into subgroups or strata based on certain characteristics, and then a random sample is taken from each stratum. This is often used to ensure that the sample is representative of the population as a whole.
  • Cluster Sampling: In this method, the population is divided into clusters or groups, and then a random sample of clusters is selected. Then, all members of the selected clusters are included in the sample.
  • Multi-Stage Sampling : This method combines two or more sampling techniques. For example, a researcher may use stratified sampling to select clusters, and then use simple random sampling to select members within each cluster.
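To make the mechanics concrete, here is a short Python sketch of systematic sampling, the second method in the list above (a random start followed by every k-th member; the function name and data are illustrative):

```python
import random

def systematic_sample(population, n, seed=None):
    """Systematic sampling: pick a random start, then take every k-th element."""
    k = len(population) // n                     # sampling interval
    start = random.Random(seed).randrange(k)     # random starting point in [0, k)
    return [population[start + i * k] for i in range(n)]

people = list(range(100))                        # a list of 100 population members
sample = systematic_sample(people, 10, seed=3)   # every 10th person from a random start
```

This matches the example above: selecting every 10th member from a list of 100 yields a sample of 10, with the random start ensuring each member has an equal chance of selection.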

Non-probability Sampling

This type of sampling does not rely on random selection, and it involves selecting samples in a way that does not give every member of the population an equal chance of being included in the sample. Non-probability sampling is often used in qualitative research, where the aim is not to generalize findings to a larger population, but to gain an in-depth understanding of a particular phenomenon or group. Non-probability sampling methods can be quicker and more cost-effective than probability sampling methods, but they may also be subject to bias and may not be representative of the larger population.

Types of Non-probability Sampling:

  • Convenience Sampling: In this method, participants are chosen based on their availability or willingness to participate. This method is easy and convenient but may not be representative of the population.
  • Purposive Sampling: In this method, participants are selected based on specific criteria, such as their expertise or knowledge on a particular topic. This method is often used in qualitative research, but may not be representative of the population.
  • Snowball Sampling: In this method, participants are recruited through referrals from other participants. This method is often used when the population is hard to reach, but may not be representative of the population.
  • Quota Sampling: In this method, a predetermined number of participants are selected based on specific criteria, such as age or gender. This method is often used in market research, but may not be representative of the population.
  • Volunteer Sampling: In this method, participants volunteer to participate in the study. This method is often used in research where participants are motivated by personal interest or altruism, but may not be representative of the population.

Applications of Sampling Methods

Applications of Sampling Methods from different fields:

  • Psychology : Sampling methods are used in psychology research to study various aspects of human behavior and mental processes. For example, researchers may use stratified sampling to select a sample of participants that is representative of the population based on factors such as age, gender, and ethnicity. Random sampling may also be used to select participants for experimental studies.
  • Sociology : Sampling methods are commonly used in sociological research to study social phenomena and relationships between individuals and groups. For example, researchers may use cluster sampling to select a sample of neighborhoods to study the effects of economic inequality on health outcomes. Stratified sampling may also be used to select a sample of participants that is representative of the population based on factors such as income, education, and occupation.
  • Social sciences: Sampling methods are commonly used in social sciences to study human behavior and attitudes. For example, researchers may use stratified sampling to select a sample of participants that is representative of the population based on factors such as age, gender, and income.
  • Marketing : Sampling methods are used in marketing research to collect data on consumer preferences, behavior, and attitudes. For example, researchers may use random sampling to select a sample of consumers to participate in a survey about a new product.
  • Healthcare : Sampling methods are used in healthcare research to study the prevalence of diseases and risk factors, and to evaluate interventions. For example, researchers may use cluster sampling to select a sample of health clinics to participate in a study of the effectiveness of a new treatment.
  • Environmental science: Sampling methods are used in environmental science to collect data on environmental variables such as water quality, air pollution, and soil composition. For example, researchers may use systematic sampling to collect soil samples at regular intervals across a field.
  • Education : Sampling methods are used in education research to study student learning and achievement. For example, researchers may use stratified sampling to select a sample of schools that is representative of the population based on factors such as demographics and academic performance.

Examples of Sampling Methods

Probability Sampling Methods Examples:

  • Simple random sampling Example : A researcher randomly selects participants from the population using a random number generator or drawing names from a hat.
  • Stratified random sampling Example : A researcher divides the population into subgroups (strata) based on a characteristic of interest (e.g. age or income) and then randomly selects participants from each subgroup.
  • Systematic sampling Example : A researcher selects participants at regular intervals from a list of the population.

Non-probability Sampling Methods Examples:

  • Convenience sampling Example: A researcher selects participants who are conveniently available, such as students in a particular class or visitors to a shopping mall.
  • Purposive sampling Example : A researcher selects participants who meet specific criteria, such as individuals who have been diagnosed with a particular medical condition.
  • Snowball sampling Example : A researcher selects participants who are referred to them by other participants, such as friends or acquaintances.

How to Conduct Sampling Methods

Here are some general steps to conduct sampling methods:

  • Define the population: Identify the population of interest and clearly define its boundaries.
  • Choose the sampling method: Select an appropriate sampling method based on the research question, characteristics of the population, and available resources.
  • Determine the sample size: Determine the desired sample size based on statistical considerations such as margin of error, confidence level, or power analysis.
  • Create a sampling frame: Develop a list of all individuals or elements in the population from which the sample will be drawn. The sampling frame should be comprehensive, accurate, and up-to-date.
  • Select the sample: Use the chosen sampling method to select the sample from the sampling frame. The sample should be selected randomly, or if using a non-random method, every effort should be made to minimize bias and ensure that the sample is representative of the population.
  • Collect data: Once the sample has been selected, collect data from each member of the sample using appropriate research methods (e.g., surveys, interviews, observations).
  • Analyze the data: Analyze the data collected from the sample to draw conclusions about the population of interest.
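Steps 3 and 5 above can be sketched in code. The Python sketch below uses the standard sample-size formula for estimating a proportion (assuming simple random sampling, a 95% confidence level, and the conservative p = 0.5); the sampling frame and names are illustrative:

```python
import math
import random

def required_sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Step 3: sample size for estimating a proportion, n = z^2 p(1-p) / e^2."""
    return math.ceil((confidence_z ** 2 * p * (1 - p)) / margin_of_error ** 2)

def draw_sample(sampling_frame, n, seed=None):
    """Step 5: random selection (without replacement) from the sampling frame."""
    return random.Random(seed).sample(sampling_frame, n)

n = required_sample_size(0.05)                    # 5% margin of error → n = 385
frame = [f"person_{i}" for i in range(10_000)]    # step 4: the sampling frame
sample = draw_sample(frame, n, seed=42)
```

Note that the classic result of n ≈ 385 for a 5% margin of error holds regardless of how large the frame is, which is why sampling scales so well to big populations.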

When to use Sampling Methods

Sampling methods are used in research when it is not feasible or practical to study the entire population of interest. Sampling allows researchers to study a smaller group of individuals, known as a sample, and use the findings from the sample to make inferences about the larger population.

Sampling methods are particularly useful when:

  • The population of interest is too large to study in its entirety.
  • The cost and time required to study the entire population are prohibitive.
  • The population is geographically dispersed or difficult to access.
  • The research question requires specialized or hard-to-find individuals.
  • The data collected is quantitative and statistical analyses are used to draw conclusions.

Purpose of Sampling Methods

The main purpose of sampling methods in research is to obtain a representative sample of individuals or elements from a larger population of interest, in order to make inferences about the population as a whole. By studying a smaller group of individuals, known as a sample, researchers can gather information about the population that would be difficult or impossible to obtain from studying the entire population.

Sampling methods allow researchers to:

  • Study a smaller, more manageable group of individuals, which is typically less time-consuming and less expensive than studying the entire population.
  • Reduce the potential for data collection errors and improve the accuracy of the results by minimizing sampling bias.
  • Make inferences about the larger population with a certain degree of confidence, using statistical analyses of the data collected from the sample.
  • Improve the generalizability and external validity of the findings by ensuring that the sample is representative of the population of interest.

Characteristics of Sampling Methods

Here are some characteristics of sampling methods:

  • Randomness : Probability sampling methods are based on random selection, meaning that every member of the population has an equal chance of being selected. This helps to minimize bias and ensure that the sample is representative of the population.
  • Representativeness : The goal of sampling is to obtain a sample that is representative of the larger population of interest. This means that the sample should reflect the characteristics of the population in terms of key demographic, behavioral, or other relevant variables.
  • Size : The size of the sample should be large enough to provide sufficient statistical power for the research question at hand. The sample size should also be appropriate for the chosen sampling method and the level of precision desired.
  • Efficiency : Sampling methods should be efficient in terms of time, cost, and resources required. The method chosen should be feasible given the available resources and time constraints.
  • Bias : Sampling methods should aim to minimize bias and ensure that the sample is representative of the population of interest. Bias can be introduced through non-random selection or non-response, and can affect the validity and generalizability of the findings.
  • Precision : Sampling methods should be precise in terms of providing estimates of the population parameters of interest. Precision is influenced by sample size, sampling method, and level of variability in the population.
  • Validity : The validity of the sampling method is important for ensuring that the results obtained from the sample are accurate and can be generalized to the population of interest. Validity can be affected by sampling method, sample size, and the representativeness of the sample.

Advantages of Sampling Methods

Sampling methods have several advantages, including:

  • Cost-Effective : Sampling methods are often much cheaper and less time-consuming than studying an entire population. By studying only a small subset of the population, researchers can gather valuable data without incurring the costs associated with studying the entire population.
  • Convenience : Sampling methods are often more convenient than studying an entire population. For example, if a researcher wants to study the eating habits of people in a city, it would be very difficult and time-consuming to study every single person in the city. By using sampling methods, the researcher can obtain data from a smaller subset of people, making the study more feasible.
  • Accuracy: When done correctly, sampling methods can be very accurate. By using appropriate sampling techniques, researchers can obtain a sample that is representative of the entire population. This allows them to make accurate generalizations about the population as a whole based on the data collected from the sample.
  • Time-Saving: Sampling methods can save a lot of time compared to studying the entire population. By studying a smaller sample, researchers can collect data much more quickly than they could if they studied every single person in the population.
  • Less Bias : Sampling methods can reduce bias in a study. If a researcher were to study the entire population, it would be very difficult to eliminate all sources of bias. However, by using appropriate sampling techniques, researchers can reduce bias and obtain a sample that is more representative of the entire population.

Limitations of Sampling Methods

  • Sampling Error : Sampling error is the difference between the sample statistic and the population parameter. It is the result of selecting a sample rather than the entire population. The larger the sample, the lower the sampling error. However, no matter how large the sample size, there will always be some degree of sampling error.
  • Selection Bias: Selection bias occurs when the sample is not representative of the population. This can happen if the sample is not selected randomly or if some groups are underrepresented in the sample. Selection bias can lead to inaccurate conclusions about the population.
  • Non-response Bias : Non-response bias occurs when some members of the sample do not respond to the survey or study. This can result in a biased sample if the non-respondents differ from the respondents in important ways.
  • Time and Cost : While sampling can be cost-effective, it can still be expensive and time-consuming to select a sample that is representative of the population. Depending on the sampling method used, it may take a long time to obtain a sample that is large enough and representative enough to be useful.
  • Limited Information : Sampling can only provide information about the variables that are measured. It may not provide information about other variables that are relevant to the research question but were not measured.
  • Generalization : The extent to which the findings from a sample can be generalized to the population depends on the representativeness of the sample. If the sample is not representative of the population, it may not be possible to generalize the findings to the population as a whole.
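The first limitation, sampling error shrinking as sample size grows, is easy to demonstrate empirically. Below is a small Python simulation (the population and sample sizes are arbitrary) that compares the average error of small and large random samples against the true population mean:

```python
import random
import statistics

rng = random.Random(0)
# Simulated population of 50,000 values.
population = [rng.uniform(0, 100) for _ in range(50_000)]
true_mean = statistics.mean(population)

def avg_abs_error(sample_size, trials=200):
    """Average |sample mean - population mean| over repeated random samples."""
    errors = [abs(statistics.mean(rng.sample(population, sample_size)) - true_mean)
              for _ in range(trials)]
    return statistics.mean(errors)

small, large = avg_abs_error(25), avg_abs_error(400)
# Larger samples give a smaller average sampling error, but never zero.
```

This also illustrates the text's caveat: the error for n = 400 is much smaller than for n = 25, yet it remains non-zero no matter how large the sample.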

About the author: Muhammad Hassan, Researcher, Academic Writer, Web developer



Sampling methods, types & techniques.

Your comprehensive guide to the different sampling methods available to researchers – and how to know which is right for your research.

Author: Will Webster

What is sampling?

In survey research, sampling is the process of using a subset of a population to represent the whole population. To help illustrate this further, let’s look at data sampling methods with examples below.

Let’s say you wanted to do some research on everyone in North America. To ask every person would be almost impossible. Even if everyone said “yes”, carrying out a survey across different states, in different languages and timezones, and then collecting and processing all the results, would take a long time and be very costly.

Sampling allows large-scale research to be carried out with a more realistic cost and time-frame because it uses a smaller number of individuals in the population with representative characteristics to stand in for the whole.

However, when you decide to sample, you take on a new task. You have to decide who is part of your sample list and how to choose the people who will best represent the whole population. How you go about that is what the practice of sampling is all about.

population to a sample

Sampling definitions

  • Population: The total number of people or things you are interested in
  • Sample: A smaller number within your population that will represent the whole
  • Sampling: The process and method of selecting your sample

Free eBook: 2024 Market Research Trends

Why is sampling important?

Although the idea of sampling is easiest to understand when you think about a very large population, it makes sense to use sampling methods in research studies of all types and sizes. After all, if you can reduce the effort and cost of doing a study, why wouldn’t you? And because sampling allows you to research larger target populations using the same resources as you would smaller ones, it dramatically opens up the possibilities for research.

Sampling is a little like having gears on a car or bicycle. Instead of always turning a set of wheels of a specific size and being constrained by their physical properties, it allows you to translate your effort to the wheels via the different gears, so you’re effectively choosing bigger or smaller wheels depending on the terrain you’re on and how much work you’re able to do.

Sampling allows you to “gear” your research so you’re less limited by the constraints of cost, time, and complexity that come with different population sizes.

It allows us to do things like carry out exit polls during elections, map the spread and effects of epidemics across geographical areas, and carry out nationwide census research that provides a snapshot of society and culture.

Types of sampling

Sampling strategies in research vary widely across different disciplines and research areas, and from study to study.

There are two major types of sampling methods: probability and non-probability sampling.

  • Probability sampling, also known as random sampling, is a kind of sample selection where randomization is used instead of deliberate choice. Each member of the population has a known, non-zero chance of being selected.
  • Non-probability sampling techniques are where the researcher deliberately picks items or individuals for the sample based on non-random factors such as convenience, geographic availability, or costs.

As we delve into these categories, it’s essential to understand the nuances and applications of each method to ensure that the chosen sampling strategy aligns with the research goals.

Probability sampling methods

There’s a wide range of probability sampling methods to explore and consider. Here are some of the best-known options.

1. Simple random sampling

With simple random sampling , every element in the population has an equal chance of being selected as part of the sample. It’s something like picking a name out of a hat. Simple random sampling can be done by anonymizing the population – e.g. by assigning each item or person in the population a number and then picking numbers at random.

Pros: Simple random sampling is easy to do and cheap. Designed to ensure that every member of the population has an equal chance of being selected, it reduces the risk of bias compared to non-random sampling.

Cons: It offers no control for the researcher and may lead to unrepresentative groupings being picked by chance.

simple random sample
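As a quick illustration (a minimal sketch, not tied to any particular survey tool; the population size and seed are made up), simple random sampling can be done in a few lines of Python by numbering the population and drawing without replacement:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw n members uniformly at random, without replacement.

    Like picking names out of a hat: every member has an equal
    chance of ending up in the sample.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return rng.sample(list(population), n)

# Hypothetical population of 2,000 numbered individuals
population = range(2000)
sample = simple_random_sample(population, 200, seed=42)
```

In practice you would replace the numbered range with your actual sampling frame (a list of names, email addresses, or IDs).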

2. Systematic sampling

With systematic sampling the random selection only applies to the first item chosen. A rule then applies so that every nth item or person after that is picked.

Best practice is to sort your list in a random order to ensure that selections won’t be accidentally clustered together. This is commonly achieved using a random number generator. If that’s not available, you might order your list alphabetically by first name and then pick every fifth name, for example, to reduce the risk of bias.

Next, you need to decide your sampling interval – for example, if your sample will be 10% of your full list, your sampling interval is one in 10 – and pick a random start between one and 10 – for example three. This means you would start with person number three on your list and pick every tenth person.

Pros: Systematic sampling is efficient and straightforward, especially when dealing with populations that have a clear order. It ensures a uniform selection across the population.

Cons: There’s a potential risk of introducing bias if there’s an unrecognized pattern in the population that aligns with the sampling interval.
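The two steps just described – pick a random start, then take every nth item – can be sketched as follows (illustrative only; the list of 100 people and the 10% interval are assumptions):

```python
import random

def systematic_sample(population, interval, seed=None):
    """Random start in [0, interval), then every interval-th item."""
    items = list(population)
    rng = random.Random(seed)
    start = rng.randrange(interval)  # random start within the first interval
    return items[start::interval]

# A 10% sample of 100 people uses a sampling interval of 10
people = [f"person_{i}" for i in range(100)]
sample = systematic_sample(people, interval=10, seed=3)
```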

3. Stratified sampling

Stratified sampling involves random selection within predefined groups. It’s a useful method for researchers wanting to determine what aspects of a sample are highly correlated with what’s being measured. They can then decide how to subdivide (stratify) it in a way that makes sense for the research.

For example, you want to measure the height of students at a college where 80% of students are female and 20% are male. We know that gender is highly correlated with height, and if we took a simple random sample of 200 students (out of the 2,000 who attend the college), we could by chance get 200 females and not one male. This would bias our results and we would underestimate the height of students overall. Instead, we could stratify by gender and make sure that 20% of our sample (40 students) are male and 80% (160 students) are female.

Pros: Stratified sampling enhances the representation of all identified subgroups within a population, leading to more accurate results in heterogeneous populations.

Cons: This method requires accurate knowledge about the population’s stratification, and its design and execution can be more intricate than other methods.

stratified sample
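The height example above can be sketched in code: split the population into strata, then randomly sample each stratum in proportion to its share of the whole (a simplified sketch; the student records are fabricated, and with uneven stratum sizes the rounding can make the total differ slightly from the requested sample size):

```python
import random

def stratified_sample(population, key, sample_size, seed=None):
    """Proportional stratified sampling: each stratum contributes
    a share of the sample equal to its share of the population."""
    rng = random.Random(seed)
    strata = {}
    for item in population:
        strata.setdefault(key(item), []).append(item)  # group into strata
    sample = []
    for members in strata.values():
        k = round(sample_size * len(members) / len(population))
        sample.extend(rng.sample(members, k))  # random draw within stratum
    return sample

# 2,000 students: 80% female, 20% male (as in the example above)
students = [("female", i) for i in range(1600)] + [("male", i) for i in range(400)]
sample = stratified_sample(students, key=lambda s: s[0], sample_size=200, seed=1)
# Guarantees 160 female and 40 male students in a sample of 200
```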

4. Cluster sampling

With cluster sampling, groups rather than individual units of the target population are selected at random for the sample. These might be pre-existing groups, such as people in certain zip codes or students belonging to an academic year.

Cluster sampling can be done by selecting the entire cluster, or in the case of two-stage cluster sampling, by randomly selecting the cluster itself, then selecting at random again within the cluster.

Pros: Cluster sampling is economically beneficial and logistically easier when dealing with vast and geographically dispersed populations.

Cons: Due to potential similarities within clusters, this method can introduce a greater sampling error compared to other methods.
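Two-stage cluster sampling, as described above, randomly picks clusters first and then units within each chosen cluster. A sketch (the zip-code clusters and counts are hypothetical):

```python
import random

def two_stage_cluster_sample(clusters, n_clusters, per_cluster, seed=None):
    """Stage 1: randomly select whole clusters.
    Stage 2: randomly select units within each chosen cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)  # stage 1
    sample = []
    for cluster in chosen:
        sample.extend(rng.sample(cluster, per_cluster))  # stage 2
    return sample

# Hypothetical: 20 zip codes with 50 residents each
zip_codes = [[f"zip{z}-res{r}" for r in range(50)] for z in range(20)]
sample = two_stage_cluster_sample(zip_codes, n_clusters=4, per_cluster=10, seed=7)
```

For one-stage cluster sampling you would skip stage 2 and include every member of each chosen cluster.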

Non-probability sampling methods

The non-probability sampling methodology doesn’t offer the same bias-removal benefits as probability sampling, but there are times when these types of sampling are chosen for expediency or simplicity. Here are some forms of non-probability sampling and how they work.

1. Convenience sampling

People or elements in a sample are selected on the basis of their accessibility and availability. If you are doing a research survey and you work at a university, for example, a convenience sample might consist of students or co-workers who happen to be on campus with open schedules who are willing to take your questionnaire.

This kind of sample can have value, especially if it’s done as an early or preliminary step, but significant bias will be introduced.

Pros: Convenience sampling is the most straightforward method, requiring minimal planning, making it quick to implement.

Cons: Due to its non-random nature, the method is highly susceptible to bias, and the results often generalize poorly beyond the sample itself.

convenience sample

2. Quota sampling

Like the probability-based stratified sampling method, this approach aims to achieve a spread across the target population by specifying who should be recruited for a survey according to certain groups or criteria.

For example, your quota might include a certain number of males and a certain number of females. Alternatively, you might want your samples to be at a specific income level or in certain age brackets or ethnic groups.

Pros: Quota sampling ensures certain subgroups are adequately represented, making it great for when random sampling isn’t feasible but representation is necessary.

Cons: The selection within each quota is non-random, and researchers’ discretion can influence the representation – both of which increase the risk of bias.

3. Purposive sampling

Participants for the sample are chosen consciously by researchers based on their knowledge and understanding of the research question at hand or their goals.

Also known as judgment sampling, this technique is unlikely to result in a representative sample, but it is a quick and fairly easy way to get a range of results or responses.

Pros: Purposive sampling targets specific criteria or characteristics, making it ideal for studies that require specialized participants or specific conditions.

Cons: It’s highly subjective and based on researchers’ judgment, which can introduce biases and limit the study’s real-world application.

4. Snowball or referral sampling

With this approach, people recruited to be part of a sample are asked to invite those they know to take part, who are then asked to invite their friends and family and so on. The participation radiates through a community of connected individuals like a snowball rolling downhill.

Pros: Especially useful for hard-to-reach or secretive populations, snowball sampling is effective for certain niche studies.

Cons: The method can introduce bias due to the reliance on participant referrals, and the choice of initial seeds can significantly influence the final sample.

snowball sample

What type of sampling should I use?

Choosing the right sampling method is a pivotal aspect of any research process, but it can be a stumbling block for many.

Here’s a structured approach to guide your decision.

1) Define your research goals

If you aim to get a general sense of a larger group, simple random or stratified sampling could be your best bet. For focused insights or studying unique communities, snowball or purposive sampling might be more suitable.

2) Assess the nature of your population

The nature of the group you’re studying can guide your method. For a diverse group with different categories, stratified sampling can ensure all segments are covered. If they’re widely spread geographically, cluster sampling becomes useful. If they’re arranged in a certain sequence or order, systematic sampling might be effective.

3) Consider your constraints

Your available time, budget and ease of accessing participants matter. Convenience or quota sampling can be practical for quicker studies, but they come with some trade-offs. If reaching everyone in your desired group is challenging, snowball or purposive sampling can be more feasible.

4) Determine the reach of your findings

Decide if you want your findings to represent a much broader group. For a wider representation, methods that include everyone fairly (like probability sampling) are a good option. For specialized insights into specific groups, non-probability sampling methods can be more suitable.

5) Get feedback

Before fully committing, discuss your chosen method with others in your field and consider a test run.

Avoid or reduce sampling errors and bias

Using a sample is a kind of short-cut. If you could ask every single person in a population to take part in your study and have each of them reply, you’d have a highly accurate (and very labor-intensive) project on your hands.

But since that’s not realistic, sampling offers a “good-enough” solution that sacrifices some accuracy for the sake of practicality and ease. How much accuracy you lose out on depends on how well you control for sampling error, non-sampling error, and bias in your survey design . Our blog post helps you to steer clear of some of these issues.

How to choose the correct sample size

Finding the best sample size for your target population is something you’ll need to do again and again, as it’s different for every study.

To make life easier, we’ve provided a sample size calculator . To use it, you need to know your:

  • Population size
  • Confidence level
  • Margin of error (confidence interval)

If any of those terms are unfamiliar, have a look at our blog post on determining sample size for details of what they mean and how to find them.
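Sample size calculators like this typically rest on Cochran’s formula with a finite population correction. A minimal sketch (assuming maximum variability, p = 0.5, and the standard z-scores for common confidence levels):

```python
import math

def required_sample_size(population_size, confidence=0.95, margin=0.05, p=0.5):
    """Cochran's formula plus finite population correction.

    p = 0.5 assumes maximum variability, which gives a
    conservative (largest) required sample size.
    """
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]  # common z-scores
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population size
    n = n0 / (1 + (n0 - 1) / population_size)      # finite population correction
    return math.ceil(n)

# Population of 10,000 at 95% confidence with a +/-5% margin of error
n = required_sample_size(10_000)  # -> 370
```

Note how weakly the result depends on population size: a population of one billion at the same confidence and margin still needs only 385 respondents.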


Sampling Methods in Research: Types, Techniques, & Examples

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Sampling methods in psychology refer to strategies used to select a subset of individuals (a sample) from a larger population, to study and draw inferences about the entire population. Common methods include random sampling, stratified sampling, cluster sampling, and convenience sampling. Proper sampling ensures representative, generalizable, and valid research results.
  • Sampling : the process of selecting a representative group from the population under study.
  • Target population : the total group of individuals from which the sample might be drawn.
  • Sample: a subset of individuals selected from a larger population for study or investigation. Those included in the sample are termed “participants.”
  • Generalizability : the ability to apply research findings from a sample to the broader target population, contingent on the sample being representative of that population.

For instance, if the advert for volunteers is published in the New York Times, this limits how much the study’s findings can be generalized to the whole population, because NYT readers may not represent the entire population in certain respects (e.g., politically, socio-economically).

The Purpose of Sampling

In psychological research, we are interested in learning about large groups of people who have something in common. We call the group we are interested in studying our “target population.”

In some types of research, the target population might be as broad as all humans, while in other types it might be a smaller group, such as teenagers, preschool children, or people who misuse drugs.

Sample Target Population

Studying every person in a target population is more or less impossible. Hence, psychologists select a sample or sub-group of the population that is likely to be representative of the target population we are interested in.

This is important because we want to generalize from the sample to the target population. The more representative the sample, the more confident the researcher can be that the results can be generalized to the target population.

One of the problems that can occur when selecting a sample from a target population is sampling bias. Sampling bias refers to situations where the sample does not reflect the characteristics of the target population.

Many psychology studies have a biased sample because they have used an opportunity sample that comprises university students as their participants (e.g., Asch).

OK, so you’ve thought up this brilliant psychological study and designed it perfectly. But who will you try it out on, and how will you select your participants?

There are various sampling methods. The one chosen will depend on a number of factors (such as time, money, etc.).

Probability and Non-Probability Samples

Random Sampling

Random sampling is a type of probability sampling where everyone in the entire target population has an equal chance of being selected.

This is similar to the national lottery. If the “population” is everyone who bought a lottery ticket, then everyone has an equal chance of winning the lottery (assuming they all have one ticket each).

Random samples require naming or numbering the target population and then using some raffle method to choose those to make up the sample. Random samples are the best method of selecting your sample from the population of interest.

  • The advantages are that your sample should represent the target population and eliminate sampling bias.
  • The disadvantage is that it is very difficult to achieve (i.e., time, effort, and money).

Stratified Sampling

During stratified sampling , the researcher identifies the different types of people that make up the target population and works out the proportions needed for the sample to be representative.

A list is made of each variable (e.g., IQ, gender, etc.) that might have an effect on the research. For example, if we are interested in the money spent on books by undergraduates, then the main subject studied may be an important variable.

For example, students studying English Literature may spend more money on books than engineering students, so if our sample over-represents either group, our results will not be accurate.

We have to determine the relative percentage of each group at a university, e.g., Engineering 10%, Social Sciences 15%, English 20%, Sciences 25%, Languages 10%, Law 5%, and Medicine 15%. The sample must then contain all these groups in the same proportion as the target population (university students).
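Working through the percentages above, the quota for each stratum follows directly from its share of the population (a sample size of 100 is assumed here purely for illustration):

```python
# Stratum shares from the university example above
proportions = {
    "Engineering": 0.10, "Social Sciences": 0.15, "English": 0.20,
    "Sciences": 0.25, "Languages": 0.10, "Law": 0.05, "Medicine": 0.15,
}
sample_size = 100
# Each subject's quota is its population share times the sample size
quota = {subject: round(sample_size * share) for subject, share in proportions.items()}
# e.g. 25 Sciences students and 5 Law students, summing to the full sample
```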

  • The disadvantage of stratified sampling is that gathering such a sample would be extremely time-consuming and difficult to do. This method is rarely used in Psychology.
  • However, the advantage is that the sample should be highly representative of the target population, and therefore we can generalize from the results obtained.

Opportunity Sampling

Opportunity sampling is a method in which participants are chosen based on their ease of availability and proximity to the researcher, rather than using random or systematic criteria. It’s a type of convenience sampling.

An opportunity sample is obtained by asking members of the population of interest if they would participate in your research. An example would be selecting a sample of students from those coming out of the library.

  • This is a quick and easy way of choosing participants (advantage)
  • It may not provide a representative sample and could be biased (disadvantage).

Systematic Sampling

Systematic sampling is a method where every nth individual is selected from a list or sequence to form a sample, ensuring even and regular intervals between chosen subjects.

Participants are systematically selected (i.e., orderly/logical) from the target population, like every nth participant on a list of names.

To take a systematic sample, you list all the population members and then decide upon a sample you would like. By dividing the number of people in the population by the number of people you want in your sample, you get a number we will call n.

If you take every nth name, you will get a systematic sample of the correct size. If, for example, you wanted to sample 150 children from a school of 1,500, you would take every 10th name.

  • The advantage of this method is that it should provide a representative sample.

Sample size

The sample size is a critical factor in determining the reliability and validity of a study’s findings. While increasing the sample size can enhance the generalizability of results, it’s also essential to balance practical considerations, such as resource constraints and diminishing returns from ever-larger samples.

Reliability and Validity

Reliability refers to the consistency and reproducibility of research findings across different occasions, researchers, or instruments. A small sample size may lead to inconsistent results due to increased susceptibility to random error or the influence of outliers. In contrast, a larger sample minimizes these errors, promoting more reliable results.

Validity pertains to the accuracy and truthfulness of research findings. For a study to be valid, it should accurately measure what it intends to do. A small, unrepresentative sample can compromise external validity, meaning the results don’t generalize well to the larger population. A larger sample captures more variability, ensuring that specific subgroups or anomalies don’t overly influence results.

Practical Considerations

Resource Constraints : Larger samples demand more time, money, and resources. Data collection becomes more extensive, data analysis more complex, and logistics more challenging.

Diminishing Returns : While increasing the sample size generally leads to improved accuracy and precision, there’s a point where adding more participants yields only marginal benefits. For instance, going from 50 to 500 participants might significantly boost a study’s robustness, but jumping from 10,000 to 10,500 might not offer a comparable advantage, especially considering the added costs.

Sampling Methods: A guide for researchers

Affiliation: Arizona School of Dentistry & Oral Health, A.T. Still University, Mesa, AZ, USA. [email protected]

PMID: 37553279

Sampling is a critical element of research design. Different methods can be used for sample selection to ensure that members of the study population reflect both the source and target populations, including probability and non-probability sampling. Power and sample size are used to determine the number of subjects needed to answer the research question. Characteristics of individuals included in the sample population should be clearly defined to determine eligibility for study participation and improve power. Sample selection methods differ based on study design. The purpose of this short report is to review common sampling considerations and related errors.

Keywords: research design; sample size; sampling.

Copyright © 2023 The American Dental Hygienists’ Association.


Qualitative, Quantitative, and Mixed Methods Research Sampling Strategies. By Timothy C. Guetterman. Last reviewed and modified: 26 February 2020. DOI: 10.1093/obo/9780199756810-0241

Sampling is a critical, often overlooked aspect of the research process. The importance of sampling extends to the ability to draw accurate inferences, and it is an integral part of qualitative guidelines across research methods. Sampling considerations are important in quantitative and qualitative research when considering a target population and when drawing a sample that will either allow us to generalize (i.e., quantitatively) or go into sufficient depth (i.e., qualitatively). While quantitative research is generally concerned with probability-based approaches, qualitative research typically uses nonprobability purposeful sampling approaches. Scholars generally focus on two major sampling topics: sampling strategies and sample sizes. Or simply, researchers should think about who to include and how many; both of these concerns are key. Mixed methods studies have both qualitative and quantitative sampling considerations. However, mixed methods studies also have unique considerations based on the relationship of quantitative and qualitative research within the study.

Sampling in Qualitative Research

Sampling in qualitative research may be divided into two major areas: overall sampling strategies and issues around sample size. Sampling strategy refers to the process of sampling and how to design a sampling plan. Qualitative sampling typically follows a nonprobability-based approach, such as purposive or purposeful sampling, where participants or other units of analysis are selected intentionally for their ability to provide information to address the research questions. Sample size refers to how many participants or other units are needed to address the research questions. The methodological literature about sampling tends to fall into these two broad categories, though some articles, chapters, and books cover both concepts. Others have connected sampling to the type of qualitative design that is employed. Additionally, researchers might consider discipline-specific sampling issues, as much research does tend to operate within disciplinary views and constraints. Scholars in many disciplines have examined sampling around specific topics, research problems, or disciplines and provide guidance for making sampling decisions, such as appropriate strategies and sample size.

  • Family Day Care
  • Federal Government Programs and Issues
  • Feminization of Labor in Academia
  • Finance, Education
  • Financial Aid
  • Formative Assessment
  • Future-Focused Education
  • Gender and Achievement
  • Gender and Alternative Education
  • Gender, Power and Politics in the Academy
  • Gender-Based Violence on University Campuses
  • Gifted Education
  • Global Mindedness and Global Citizenship Education
  • Global University Rankings
  • Governance, Education
  • Grounded Theory
  • Growth of Effective Mental Health Services in Schools in t...
  • Higher Education and Globalization
  • Higher Education and the Developing World
  • Higher Education Faculty Characteristics and Trends in the...
  • Higher Education Finance
  • Higher Education Governance
  • Higher Education Graduate Outcomes and Destinations
  • Higher Education in Africa
  • Higher Education in China
  • Higher Education in Latin America
  • Higher Education in the United States, Historical Evolutio...
  • Higher Education, International Issues in
  • Higher Education Management
  • Higher Education Policy
  • Higher Education Research
  • Higher Education Student Assessment
  • High-stakes Testing
  • History of Early Childhood Education in the United States
  • History of Education in the United States
  • History of Technology Integration in Education
  • Homeschooling
  • Inclusion in Early Childhood: Difference, Disability, and ...
  • Inclusive Education
  • Indigenous Education in a Global Context
  • Indigenous Learning Environments
  • Indigenous Students in Higher Education in the United Stat...
  • Infant and Toddler Pedagogy
  • Inservice Teacher Education
  • Integrating Art across the Curriculum
  • Intelligence
  • Intensive Interventions for Children and Adolescents with ...
  • International Perspectives on Academic Freedom
  • Intersectionality and Education
  • Knowledge Development in Early Childhood
  • Leadership Development, Coaching and Feedback for
  • Leadership in Early Childhood Education
  • Leadership Training with an Emphasis on the United States
  • Learning Analytics in Higher Education
  • Learning Difficulties
  • Learning, Lifelong
  • Learning, Multimedia
  • Learning Strategies
  • Legal Matters and Education Law
  • LGBT Youth in Schools
  • Linguistic Diversity
  • Linguistically Inclusive Pedagogy
  • Literacy Development and Language Acquisition
  • Literature Reviews
  • Mathematics Identity
  • Mathematics Instruction and Interventions for Students wit...
  • Mathematics Teacher Education
  • Measurement for Improvement in Education
  • Measurement in Education in the United States
  • Meta-Analysis and Research Synthesis in Education
  • Methodological Approaches for Impact Evaluation in Educati...
  • Methodologies for Conducting Education Research
  • Mindfulness, Learning, and Education
  • Motherscholars
  • Multiliteracies in Early Childhood Education
  • Multiple Documents Literacy: Theory, Research, and Applica...
  • Multivariate Research Methodology
  • Museums, Education, and Curriculum
  • Music Education
  • Narrative Research in Education
  • Native American Studies
  • Nonformal and Informal Environmental Education
  • Note-Taking
  • Numeracy Education
  • One-to-One Technology in the K-12 Classroom
  • Online Education
  • Open Education
  • Organizing for Continuous Improvement in Education
  • Organizing Schools for the Inclusion of Students with Disa...
  • Outdoor Play and Learning
  • Outdoor Play and Learning in Early Childhood Education
  • Pedagogical Leadership
  • Pedagogy of Teacher Education, A
  • Performance Objectives and Measurement
  • Performance-based Research Assessment in Higher Education
  • Performance-based Research Funding
  • Phenomenology in Educational Research
  • Philosophy of Education
  • Physical Education
  • Podcasts in Education
  • Policy Context of United States Educational Innovation and...
  • Politics of Education
  • Portable Technology Use in Special Education Programs and ...
  • Post-humanism and Environmental Education
  • Pre-Service Teacher Education
  • Problem Solving
  • Productivity and Higher Education
  • Professional Development
  • Professional Learning Communities
  • Program Evaluation
  • Programs and Services for Students with Emotional or Behav...
  • Psychology Learning and Teaching
  • Psychometric Issues in the Assessment of English Language ...
  • Qualitative Data Analysis Techniques
  • Qualitative, Quantitative, and Mixed Methods Research Samp...
  • Queering the English Language Arts (ELA) Writing Classroom
  • Race and Affirmative Action in Higher Education
  • Reading Education
  • Refugee and New Immigrant Learners
  • Relational and Developmental Trauma and Schools
  • Relational Pedagogies in Early Childhood Education
  • Reliability in Educational Assessments
  • Religion in Elementary and Secondary Education in the Unit...
  • Researcher Development and Skills Training within the Cont...
  • Research-Practice Partnerships in Education within the Uni...
  • Response to Intervention
  • Restorative Practices
  • Risky Play in Early Childhood Education
  • Role of Gender Equity Work on University Campuses through ...
  • Scale and Sustainability of Education Innovation and Impro...
  • Scaling Up Research-based Educational Practices
  • School Accreditation
  • School Choice
  • School Culture
  • School District Budgeting and Financial Management in the ...
  • School Improvement through Inclusive Education
  • School Reform
  • Schools, Private and Independent
  • School-Wide Positive Behavior Support
  • Science Education
  • Secondary to Postsecondary Transition Issues
  • Self-Regulated Learning
  • Self-Study of Teacher Education Practices
  • Service-Learning
  • Severe Disabilities
  • Single Salary Schedule
  • Single-sex Education
  • Single-Subject Research Design
  • Social Context of Education
  • Social Justice
  • Social Network Analysis
  • Social Pedagogy
  • Social Science and Education Research
  • Social Studies Education
  • Sociology of Education
  • Standards-Based Education
  • Statistical Assumptions
  • Student Access, Equity, and Diversity in Higher Education
  • Student Assignment Policy
  • Student Engagement in Tertiary Education
  • Student Learning, Development, Engagement, and Motivation ...
  • Student Participation
  • Student Voice in Teacher Development
  • Sustainability Education in Early Childhood Education
  • Sustainability in Early Childhood Education
  • Sustainability in Higher Education
  • Teacher Beliefs and Epistemologies
  • Teacher Collaboration in School Improvement
  • Teacher Evaluation and Teacher Effectiveness
  • Teacher Preparation
  • Teacher Training and Development
  • Teacher Unions and Associations
  • Teacher-Student Relationships
  • Teaching Critical Thinking
  • Technologies, Teaching, and Learning in Higher Education
  • Technology Education in Early Childhood
  • Technology, Educational
  • Technology-based Assessment
  • The Bologna Process
  • The Regulation of Standards in Higher Education
  • Theories of Educational Leadership
  • Three Conceptions of Literacy: Media, Narrative, and Gamin...
  • Tracking and Detracking
  • Traditions of Quality Improvement in Education
  • Transformative Learning
  • Transitions in Early Childhood Education
  • Tribally Controlled Colleges and Universities in the Unite...
  • Understanding the Psycho-Social Dimensions of Schools and ...
  • University Faculty Roles and Responsibilities in the Unite...
  • Using Ethnography in Educational Research
  • Value of Higher Education for Students and Other Stakehold...
  • Virtual Learning Environments
  • Vocational and Technical Education
  • Wellness and Well-Being in Education
  • Women's and Gender Studies
  • Young Children and Spirituality
  • Young Children's Learning Dispositions
  • Young Children's Working Theories
  • Privacy Policy
  • Cookie Policy
  • Legal Notice
  • Accessibility

Powered by:

  • [81.177.182.154]
  • 81.177.182.154
  • Tools and Resources
  • Customer Services
  • Business Education
  • Business Law
  • Business Policy and Strategy
  • Entrepreneurship
  • Human Resource Management
  • Information Systems
  • International Business
  • Negotiations and Bargaining
  • Operations Management
  • Organization Theory
  • Organizational Behavior
  • Problem Solving and Creativity
  • Research Methods
  • Social Issues
  • Technology and Innovation Management
  • Share This Facebook LinkedIn Twitter

Article contents

Sampling strategies for quantitative and qualitative business research.

  • Vivien Lee, Psychology, University of Minnesota
  • Richard N. Landers, Psychology, University of Minnesota
  • https://doi.org/10.1093/acrefore/9780190224851.013.216
  • Published online: 23 March 2022

Sampling refers to the process used to identify and select cases for analysis (i.e., a sample) with the goal of drawing meaningful research conclusions. Sampling is integral to the overall research process, as it has substantial implications for the quality of research findings. Inappropriate sampling techniques can lead to problems of interpretation, such as drawing invalid conclusions about a population. Whereas sampling in quantitative research focuses on maximizing the statistical representativeness of a population by a chosen sample, sampling in qualitative research generally focuses on the complete representation of a phenomenon of interest. Because of this core difference in purpose, many sampling considerations differ between qualitative and quantitative approaches despite a shared general purpose: careful selection of cases to maximize the validity of conclusions.

Achieving generalizability, the extent to which observed effects from one study can be used to predict the same and similar effects in different contexts, drives most quantitative research. Obtaining a representative sample with characteristics that reflect a targeted population is critical to making accurate statistical inferences, which is core to such research. Such samples can be best acquired through probability sampling, a procedure in which all members of the target population have a known and random chance of being selected. However, probability sampling techniques are uncommon in modern quantitative research because of practical constraints; non-probability sampling, such as by convenience, is now normative. When sampling this way, special attention should be given to statistical implications of issues such as range restriction and omitted variable bias. In either case, careful planning is required to estimate an appropriate sample size before the start of data collection.

In contrast to generalizability, transferability, the degree to which study findings can be applied to other contexts, is the goal of most qualitative research. This approach is more concerned with providing information to readers and less concerned with making generalizable broad claims for readers. Similar to quantitative research, choosing a population and sample are critical for qualitative research, to help readers determine likelihood of transfer, yet representativeness is not as crucial. Sample size determination in qualitative research is drastically different from that of quantitative research, because sample size determination should occur during data collection, in an ongoing process in search of saturation, which focuses on achieving theoretical completeness instead of maximizing the quality of statistical inference.

Theoretically speaking, although quantitative and qualitative research have distinct statistical underpinnings that should drive different sampling requirements, in practice both rely heavily on non-probability samples, and the implications of non-probability sampling are often not well understood. Although non-probability samples do not automatically generate poor-quality data, incomplete consideration of case selection strategy can harm the validity of research conclusions. The nature and number of cases collected must be determined cautiously to respect research goals and the underlying scientific paradigm employed. Understanding the commonalities and differences in sampling between quantitative and qualitative research can help researchers better identify high-quality research designs across paradigms.

Keywords: non-probability sampling, convenience sampling, sample size, quantitative research, qualitative research

Printed from Oxford Research Encyclopedias, Business and Management. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 29 August 2024


An Bras Dermatol, v.91(3), May-Jun 2016

Sampling: how to select participants in my research study? *

Jeovany Martínez-Mesa

1 Faculdade Meridional (IMED) - Passo Fundo (RS), Brazil.

David Alejandro González-Chica

2 University of Adelaide - Adelaide, Australia.

Rodrigo Pereira Duquia

3 Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA) - Porto Alegre (RS), Brazil.

Renan Rangel Bonamigo

João Luiz Bastos

4 Universidade Federal de Santa Catarina (UFSC) - Florianópolis (SC), Brazil.

In this paper, the basic elements related to the selection of participants for health research are discussed. Sample representativeness, the sample frame, types of sampling, and the impact that nonrespondents may have on the results of a study are described. The whole discussion is supported by practical examples to facilitate the reader's understanding.

To introduce readers to issues related to sampling.

INTRODUCTION

The essential topics related to the selection of participants for health research are: 1) whether to work with samples or to include the whole reference population in the study (a census); 2) the sample frame; 3) the sampling process; and 4) the potential effects nonrespondents might have on study results. In the sections that follow, we address each of these aspects with theoretical and practical examples for better understanding.

TO SAMPLE OR NOT TO SAMPLE

In a previous paper, we discussed the necessary parameters on which to estimate the sample size. 1 We define sample as a finite part or subset of participants drawn from the target population. In turn, the target population corresponds to the entire set of subjects whose characteristics are of interest to the research team. Based on results obtained from a sample, researchers may draw their conclusions about the target population with a certain level of confidence, following a process called statistical inference. When the sample contains fewer individuals than the minimum necessary, but the representativeness is preserved, statistical inference may be compromised in terms of precision (prevalence studies) and/or statistical power to detect the associations of interest. 1 On the other hand, samples without representativeness may not be a reliable source to draw conclusions about the reference population (i.e., statistical inference is not deemed possible), even if the sample size reaches the required number of participants. Lack of representativeness can occur as a result of flawed selection procedures (sampling bias) or when the probability of refusal/non-participation in the study is related to the object of research (nonresponse bias). 1 , 2

Although most studies are performed using samples, whether or not they represent any target population, census-based estimates should be preferred whenever possible. 3 , 4 For instance, if all cases of melanoma are available on a national or regional database, and information on the potential risk factors is also available, it would be preferable to conduct a census instead of investigating a sample.

However, there are several theoretical and practical reasons that prevent us from carrying out census-based surveys, including:

  • Ethical issues: it is unethical to include a greater number of individuals than that effectively required;
  • Budgetary limitations: the high costs of a census survey often limit its use as a strategy to select participants for a study;
  • Logistics: censuses often impose great challenges in terms of required staff, equipment, etc. to conduct the study;
  • Time restrictions: the amount of time needed to plan and conduct a census-based survey may be excessive; and,
  • Unknown target population size: if the study objective is to investigate the presence of premalignant skin lesions in illicit drugs users, lack of information on all existing users makes it impossible to conduct a census-based study.

All these reasons explain why samples are more frequently used. However, researchers must be aware that sample results can be affected by random error (or sampling error). 3

To exemplify this concept, consider a research study aiming to estimate the prevalence of premalignant skin lesions (outcome) among individuals >18 years residing in a specific city (target population). The city has a total population of 4,000 adults, but the investigator decided to collect data on a representative sample of 400 participants, detecting an 8% prevalence of premalignant skin lesions. A week later, the researcher selects another sample of 400 participants from the same target population to confirm the results, but this time observes a 12% prevalence of premalignant skin lesions. Based on these findings, is it possible to assume that the prevalence of lesions increased from the first to the second week? The answer is probably not. Each time we select a new sample, we are very likely to obtain a different result. These fluctuations are attributed to "random error." They occur because the individuals composing different samples are not the same, even though they were selected from the same target population. Therefore, the parameters of interest may vary randomly from one sample to another. Despite this fluctuation, if it were possible to obtain 100 different samples of the same population, approximately 95 of them would provide prevalence estimates very close to the real value in the target population - the value we would observe if we investigated all 4,000 adults residing in the city.

Thus, during sample size estimation the investigator must specify in advance the maximum acceptable random error for the study. Most population-based studies use a random error ranging from 2 to 5 percentage points. Nevertheless, the researcher should be aware that the smaller the random error considered in the study, the larger the required sample size. 1
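
The fluctuation described above can be illustrated with a short simulation. The population size (4,000), sample size (400), and an assumed true prevalence of 10% follow the article's example; everything else is a sketch, not part of the original study:

```python
import random

# Hypothetical population of 4,000 adults with a true lesion prevalence of 10%.
random.seed(42)
population = [1] * 400 + [0] * 3600   # 1 = premalignant lesion present

# Draw 100 independent samples of n = 400 and record each prevalence estimate.
prevalences = []
for _ in range(100):
    sample = random.sample(population, 400)
    prevalences.append(100 * sum(sample) / 400)

# Each sample gives a slightly different estimate (e.g., 8% one week, 12% the
# next), but most estimates cluster close to the true 10% value.
print(min(prevalences), max(prevalences))
print(sum(prevalences) / len(prevalences))
```

Running this shows estimates scattered around 10%, which is exactly the random error the text describes: no single sample reproduces the population value, but roughly 95 of the 100 estimates fall within about two standard errors of it.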

SAMPLE FRAME

The sample frame is the group of individuals that can be selected from the target population given the sampling process used in the study. For example, to identify cases of cutaneous melanoma the researcher may consider to utilize as sample frame the national cancer registry system or the anatomopathological records of skin biopsies. Given that the sample may represent only a portion of the target population, the researcher needs to examine carefully whether the selected sample frame fits the study objectives or hypotheses, and especially if there are strategies to overcome the sample frame limitations (see Chart 1 for examples and possible limitations).

Chart 1. Examples of sample frames and potential limitations as regards representativeness

Population census:
  • If the census was not conducted in recent years, areas with high migration might be outdated
  • Homeless or itinerant people cannot be represented

Hospital or health services records:
  • Usually include only data on affected people (a limitation, depending on the study objectives)
  • Depending on the service, data may be incomplete and/or outdated
  • If the lists are from public units, results may differ from those of people who seek private services

School lists:
  • School lists are currently available only in the public sector
  • Children/teenagers not attending school will not be represented
  • Lists become quickly outdated
  • There will be problems in areas with a high percentage of school absenteeism

Lists of phone numbers:
  • Several population groups are not represented: individuals with no phone line at home (low-income families, young people who use only cell phones), those who spend less time at home, etc.

Mailing lists:
  • Individuals with multiple email addresses have a higher chance of selection compared to individuals with only one address
  • Individuals without an email address may differ from those who have one, according to age, education, etc.

Sampling can be defined as the process through which individuals or sampling units are selected from the sample frame. The sampling strategy needs to be specified in advance, given that the sampling method may affect the sample size estimation. 1 , 5 Without a rigorous sampling plan the estimates derived from the study may be biased (selection bias). 3

TYPES OF SAMPLING

In Figure 1, we depict a summary of the main sampling types. There are two major sampling types: probabilistic and nonprobabilistic.

Figure 1. Sampling types used in scientific studies

NONPROBABILISTIC SAMPLING

In nonprobabilistic sampling, some individuals from the target population have no chance (a null probability) of being selected. This type of sampling does not render a representative sample; therefore, the observed results are usually not generalizable to the target population. Still, unrepresentative samples may be useful for some specific research objectives, and may help answer particular research questions, as well as contribute to the generation of new hypotheses. 4 The different types of nonprobabilistic sampling are detailed below.

Convenience sampling: participants are selected consecutively, in order of appearance, according to their convenient accessibility (also known as consecutive sampling). The sampling process ends when the intended total number of participants (sample saturation) and/or the time limit (time saturation) is reached. Randomized clinical trials are usually based on convenience sampling. After sampling, participants are usually randomly allocated to the intervention or control group (randomization). 3 Although randomization is a probabilistic process to obtain two comparable groups (treatment and control), the samples used in these studies are generally not representative of the target population.
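
The distinction between a convenience sample and random allocation can be made concrete with a minimal sketch; the volunteer list and group sizes below are invented for illustration:

```python
import random

# A hypothetical convenience sample: 20 volunteers enrolled in order of arrival.
random.seed(7)
volunteers = [f"participant_{i}" for i in range(20)]

# Randomization: the allocation to treatment or control is probabilistic,
# even though the sample itself was gathered by convenience.
shuffled = volunteers[:]
random.shuffle(shuffled)
treatment, control = shuffled[:10], shuffled[10:]

print(len(treatment), len(control))  # 10 10
```

This is why a trial can have valid internal comparisons (treatment vs. control) while still lacking representativeness with respect to the target population.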

Purposive sampling: this is used when a diverse sample is necessary or the opinion of experts in a particular field is the topic of interest. This technique was used in the study by Roubille et al, in which recommendations for the treatment of comorbidities in patients with rheumatoid arthritis, psoriasis, and psoriatic arthritis were made based on the opinion of a group of experts. 6

Quota sampling: according to this sampling technique, the population is first classified by characteristics such as gender, age, etc. Subsequently, sampling units are selected to complete each quota. For example, in the study by Larkin et al., the combination of vemurafenib and cobimetinib versus placebo was tested in patients with locally-advanced melanoma, stage IIIC or IV, with BRAF mutation. 7 The study recruited 495 patients from 135 health centers located in several countries. In this type of study, each center has a "quota" of patients.

"Snowball" sampling : in this case, the researcher selects an initial group of individuals. Then, these participants indicate other potential members with similar characteristics to take part in the study. This is frequently used in studies investigating special populations, for example, those including illicit drugs users, as was the case of the study by Gonçalves et al, which assessed 27 users of cocaine and crack in combination with marijuana. 8

PROBABILISTIC SAMPLING

In probabilistic sampling, all units of the target population have a nonzero probability of taking part in the study. If all participants are equally likely to be selected, equiprobabilistic sampling is being used, and the probability of being selected by the research team may be expressed by the formula P = 1/N, where P equals the probability of taking part in the study and N corresponds to the size of the target population. The main types of probabilistic sampling are described below.
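
Plugging in the numbers from the earlier prevalence example (4,000 adults, sample of 400; these figures are the article's illustration, not new data), the formula works out as follows:

```python
# Equiprobabilistic selection probabilities for the running example.
N = 4000  # size of the target population
n = 400   # planned sample size

p_single_draw = 1 / N  # P = 1/N: probability of being picked on one draw
p_inclusion = n / N    # overall chance of ending up in a sample of n units

print(p_single_draw)  # 0.00025
print(p_inclusion)    # 0.1
```

So each adult has a 1-in-4,000 chance on any single draw, and a 10% chance of appearing in the final sample of 400.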

Simple random sampling: in this case, we have a full list of sample units or participants (sample basis), and we randomly select individuals using a table of random numbers. An example is the study by Pimenta et al, in which the authors obtained a listing from the Health Department of all elderly enrolled in the Family Health Strategy and, by simple random sampling, selected a sample of 449 participants. 9

Systematic random sampling: participants are selected at fixed intervals, defined in advance, from a ranked list. For example, in the study of Kelbore et al, children assisted at the Pediatric Dermatology Service were selected to evaluate factors associated with atopic dermatitis, always taking every second child in order of consultation. 10

Stratified sampling: the target population is first divided into separate strata. Then, samples are selected within each stratum, either through simple or systematic sampling. The number of individuals selected in each stratum can be fixed or proportional to the size of the stratum. With proportional allocation, each individual is equally likely to be selected; with fixed allocation, selection probabilities differ across strata, so the statistical analysis usually requires sampling weights (the inverse of the probability of selection, or 1/P). An example is the study conducted in South Australia to investigate factors associated with vitamin D deficiency in preschool children. Using the national census as the sample frame, households were randomly selected in each stratum and all children in the age group of interest identified in the selected houses were investigated. 11

Cluster sampling: in this type of probabilistic sampling, groups such as health facilities, schools, etc., are sampled. In the above-mentioned study, the selection of households is an example of cluster sampling. 11

Complex or multi-stage sampling: this probabilistic method combines different strategies in the selection of the sampling units. An example is the study of Duquia et al to assess the prevalence of, and factors associated with, sunscreen use in adults. 12 The sampling process included two stages. Using the 2000 Brazilian demographic census as the sampling frame, all 404 census tracts from Pelotas (Southern Brazil) were listed in ascending order of family income. A sample of 120 tracts was systematically selected (first-stage sampling units). In the second stage, 12 households in each of these census tracts (second-stage sampling units) were systematically drawn. All adult residents of these households were included in the study (third-stage sampling units). All these stages have to be considered in the statistical analysis to provide correct estimates.
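
Three of the designs above can be sketched in a few lines of code. The frame of 1,000 numbered individuals, the 800/200 urban-rural split, and the fixed allocation of 50 per stratum are all invented for the illustration:

```python
import random

# A made-up sample frame of 1,000 numbered individuals.
random.seed(1)
frame = list(range(1000))

# Simple random sampling: draw 100 units directly from the full frame.
simple = random.sample(frame, 100)

# Systematic random sampling: a random start, then every k-th unit (k = N/n).
k = len(frame) // 100
start = random.randrange(k)
systematic = frame[start::k]

# Stratified sampling with fixed allocation: 50 units per stratum regardless
# of stratum size, so the analysis needs weights 1/P, where P is the
# within-stratum selection probability (n_stratum / N_stratum).
strata = {"urban": frame[:800], "rural": frame[800:]}
stratified, weights = [], {}
for name, units in strata.items():
    chosen = random.sample(units, 50)
    stratified.extend(chosen)
    weights[name] = len(units) / 50  # inverse selection probability

print(len(simple), len(systematic), len(stratified))  # 100 100 100
print(weights)  # {'urban': 16.0, 'rural': 4.0}
```

Note how the fixed allocation oversamples the smaller "rural" stratum (each rural unit represents 4 people, each urban unit 16), which is exactly why sampling weights are needed in the analysis.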

NONRESPONDENTS

Frequently, sample sizes are increased by 10% to compensate for potential nonresponses (refusals/losses). 1 Let us imagine that in a study to assess the prevalence of premalignant skin lesions there is a higher percentage of nonrespondents among men (10%) than among women (1%). If the highest percentage of nonresponse occurs because these men are not at home during the scheduled visits, and these participants are more likely to be exposed to the sun, the number of skin lesions will be underestimated. For this reason, it is strongly recommended to collect and describe some basic characteristics of nonrespondents (sex, age, etc.) so they can be compared to the respondents to evaluate whether the results may have been affected by this systematic error.
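
The underestimation described above can be checked with simple arithmetic. The 10%/1% nonresponse rates come from the text; the group sizes, true prevalences, and the assumed drop among responding men are hypothetical:

```python
# Hypothetical numbers: 200 men and 200 women invited, with true lesion
# prevalence of 20% in men (more sun-exposed) and 10% in women.
men, women = 200, 200
prev_men, prev_women = 0.20, 0.10

true_prevalence = (men * prev_men + women * prev_women) / (men + women)

# Differential nonresponse: 10% of men and 1% of women are never found at home.
resp_men = men * (1 - 0.10)
resp_women = women * (1 - 0.01)

# If the men who are missed are disproportionately the sun-exposed ones,
# suppose prevalence among *responding* men drops to 17%.
observed = (resp_men * 0.17 + resp_women * prev_women) / (resp_men + resp_women)

print(round(true_prevalence, 3))  # 0.15
print(round(observed, 3))         # 0.133 - underestimates the true 15%
```

The observed 13.3% understates the true 15% even though the overall response rate is high, which is why comparing the basic characteristics of respondents and nonrespondents matters.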

Often, in study protocols, refusal to participate or to sign the informed consent is listed as an "exclusion criterion". However, this is not correct: these individuals are eligible for the study and need to be reported as "nonrespondents".

SAMPLING METHOD ACCORDING TO THE TYPE OF STUDY

In general, clinical trials aim to obtain a homogeneous sample, which is not necessarily representative of any target population. Clinical trials often recruit those participants who are most likely to benefit from the intervention. 3 Thus, the stricter criteria for inclusion and exclusion of subjects in clinical trials often make it difficult to locate participants: after verification of the eligibility criteria, just one out of ten possible candidates will enter the study. Therefore, clinical trials usually show limitations in generalizing the results to the entire population of patients with the disease; results apply only to those with characteristics similar to the sample included in the study. These peculiarities of clinical trials justify the need to conduct multicenter and/or global studies to accelerate the recruitment rate and to reach, in a shorter time, the number of patients required for the study. 13

In observational studies, in turn, building a solid sampling plan is important because of the great heterogeneity usually observed in the target population; this heterogeneity has to be reflected in the sample. A cross-sectional population-based study aiming to assess disease estimates or identify risk factors often uses complex probabilistic sampling, because sample representativeness is crucial. In a case-control study, however, we face the challenge of selecting two different samples for the same study. One sample is formed by the cases, which are identified based on the diagnosis of the disease of interest. The other consists of controls, which need to be representative of the population that originated the cases. Improper selection of control individuals may introduce selection bias into the results. Thus, the concern with representativeness in this type of study is established based on the relationship between cases and controls (comparability).

In cohort studies, individuals are recruited based on exposure (exposed and unexposed subjects) and are followed over time to evaluate the occurrence of the outcome of interest. The baseline sample may be representative of the target population (population-based cohort studies) or not. However, in the successive follow-ups of the cohort, study participants must remain a representative sample of those included at baseline. 14 , 15 In this type of study, losses over time may cause follow-up bias.

Researchers need to decide during the planning stage of the study if they will work with the entire target population or a sample. Working with a sample involves different steps, including sample size estimation, identification of the sample frame, and selection of the sampling method to be adopted.

Financial Support: None.

* Study performed at Faculdade Meridional - Escola de Medicina (IMED) - Passo Fundo (RS), Brazil.

  • Open access
  • Published: 27 August 2024

Experience Sampling as a dietary assessment method: a scoping review towards implementation

  • Joke Verbeke 1 &
  • Christophe Matthys 1 , 2  

International Journal of Behavioral Nutrition and Physical Activity, volume 21, Article number: 94 (2024)


Accurate and feasible assessment of dietary intake remains challenging for research and healthcare. Experience Sampling Methodology (ESM) is a real-time, real-life data-capturing method with low burden and good feasibility that has not yet been fully explored as an alternative dietary assessment method.

This scoping review is the first to explore the implementation of ESM as an alternative to traditional dietary assessment methods by mapping the methodological considerations to apply ESM and formulating recommendations to develop an Experience Sampling-based Dietary Assessment Method (ESDAM). The scoping review methodology framework was followed by searching PubMed (including OVID) and Web of Science from 2012 until 2024.

Screening of 646 articles resulted in 39 included articles describing 24 studies. ESM was mostly applied for qualitative dietary assessment (i.e. type of consumed foods) ( n  = 12), next to semi-quantitative dietary assessment (i.e. frequency of consumption, no portion size) ( n  = 7), and quantitative dietary assessment (i.e. type and portion size of consumed foods) ( n  = 5). Most studies used ESM to assess the intake of selected foods. Two studies applied ESM as an alternative to traditional dietary assessment methods, assessing total dietary intake quantitatively (i.e. all food groups). ESM duration ranged from 4 to 30 days, and most studies applied ESM for 7 days ( n  = 15). Sampling schedules were mostly semi-random ( n  = 12) or fixed ( n  = 9), with prompts starting at 8–10 AM and ending at 8–12 PM. ESM questionnaires were adapted from existing questionnaires or based on food consumption data or focus group discussions, and response options were mostly presented as multiple choice. The recall period to report dietary intake in ESM prompts varied from 15 min to 3.5 h.

Conclusions

Most studies used ESM for 7 days with fixed or semi-random sampling during waking hours and 2-h recall periods. An ESDAM can be developed starting from a food record approach (actual intake) or a validated food frequency questionnaire (long-term or habitual intake). Actual dietary intake can be measured by ESM through short, intensive, fixed sampling schedules, while habitual dietary intake measurement by ESM allows for longer, less frequent, semi-random sampling schedules. ESM sampling protocols should be developed carefully to optimize the feasibility and accuracy of dietary data.

Research on health and nutrition relies on accurate assessment of dietary intake [ 1 ]. However, dietary intake is a complex exposure variable with high inter- and intra-individual variability, consisting of different components ranging from micronutrients, macronutrients, food groups, and meals to the dietary pattern as a whole. Therefore, measuring dietary intake accurately and feasibly is challenging for both researchers and healthcare professionals [ 2 , 3 , 4 ]. Only a few established nutritional biomarkers are available and, therefore, no objective method exists to reflect true dietary intake or the dietary pattern as a whole in epidemiological research [ 2 , 3 ]. Instead, most dietary assessment methods rely on self-report. Food records, referred to as the "gold standard", together with 24-h dietary recalls provide the most detailed dietary data, while Food Frequency Questionnaires (FFQs) reflect habitual (i.e. long-term usual) dietary intake, which is the variable of interest in most diet-disease research [ 4 , 5 , 6 ]. Food records, 24-h dietary recalls, and FFQs have known limitations and challenges, including recall bias, social-desirability bias, misreporting, and burdensomeness, contributing to inherent measurement error in dietary intake data [ 2 , 6 ]. A review by Kirkpatrick et al. showed that feasibility, including cost-effectiveness and ease of use, rather than appropriateness for study design and purpose, is the main determinant for researchers in selecting a dietary assessment method, at the expense of data quality and accuracy [ 7 ]. To advance nutritional research and enhance the quality of dietary data, exploring the implementation of new methodologies is warranted to improve feasibility and overcome the limitations of current dietary assessment methods.

Experience Sampling Methodology (ESM), an umbrella term including Ecological Momentary Assessment (EMA), ambulatory assessment, and the structured diary method, refers to intensive longitudinal assessment and real-time data-capturing methods [ 8 ]. Participants are asked to respond to short questions sent through smartphone prompt messages or beeps at random moments during the day to assess experiences or behaviors and moment-to-moment changes in daily life [ 9 ]. Originating from the field of psychology and behavioral sciences, ESM typically assesses current mood, cognitions, perceptions, or behaviors and descriptors of the momentary context (i.e. location, company) [ 9 ]. Usually, assessments are collected in a random time sampling protocol; yet assessments can also be triggered by an event (event-contingent sampling), at fixed time points, or randomly within fixed time intervals (semi-random). ESM questionnaires are usually designed to be completed in under 2 min and consist of open-ended questions, visual analogue scales, checklists, or self-report Likert scales. Several ESM survey applications (i.e. m-Path, PsyMate, PocketQ) are currently available in which the sampling protocol and questionnaires can be customized to the study design and aim [ 10 , 11 ]. It has been shown that ESM reduces recall bias, reactivity bias, and misreporting in psychology and behavioral research through its design of unannounced, rapid, real-life, real-time repeated assessments [ 12 ]. For this reason, Experience Sampling might be an interesting new methodology to explore as an alternative dietary assessment methodology. The design of ESM could overcome the recall bias, reactivity bias, social desirability bias, and misreporting seen in traditional dietary assessment methods. However, the application of ESM for dietary assessment is new. Defining and balancing ESM methodological considerations, i.e. study duration, frequency and timing of sampling (signaling technique), and formulation of questions and answer options, is a delicate matter and crucial in balancing feasibility with data accuracy [ 13 ].

The application of ESM in the field of dietary assessment has not been fully explored yet. Schembre et al . reviewed ESM for dietary behavior for the first time [ 12 ]. However, it has not yet been assessed how ESM could be implemented as an alternative dietary assessment method aiming to estimate daily energy, nutrient, and food group intake quantitatively.

Therefore, this scoping review investigates how Experience Sampling Methodology can be implemented to develop an Experience Sampling-based dietary assessment method as an alternative to traditional dietary assessment methods to measure daily energy, nutrient, and food group intake quantitatively. This review aims to map ESM sampling protocols and questionnaire designs used to assess dietary intake. Additionally, the findings of this review will be combined with best practices to develop ESMs and dietary assessment methods to formulate key recommendations for the development of an Experience Sampling-based Dietary Assessment Method (ESDAM). The following questions will be answered:

How is ESM applied in literature to assess dietary intake - focusing on methodological considerations (i.e. development and formulation of questions and answers, selection and consideration of prompting schedule (timing and frequency))?

How can ESM specifically be applied for quantitative assessment of total dietary intake (i.e. as an alternative to traditional dietary assessment method)?

This scoping review followed the methodological framework for scoping reviews of Arksey and O'Malley, which was further developed by Levac et al. [ 14 , 15 ]. A scoping review approach was chosen to explore and map the design aspects and considerations for developing experience sampling methods to assess dietary intake as an alternative to traditional dietary assessment methods, which is novel. Moreover, this review will formulate design recommendations to apply ESM as a dietary assessment method and will serve as a starting point to develop an ESDAM. An a priori protocol was developed based on the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) and the Joanna Briggs Institute Scoping Review protocol template (Supplementary Material) [ 16 , 17 ]. According to the Arksey and O'Malley methodological framework, the iterative nature of scoping reviews may include further refinement of the search strategy and the inclusion and exclusion criteria during the initial review process due to the unknown breadth of the topic [ 14 ]. Therefore, adaptations made to the methodology described in the a priori protocol based on initial searches are described below. This scoping review was reported according to the PRISMA extension for scoping reviews (PRISMA-ScR) [ 18 ].

Search strategy and screening

The search strategy was developed based on key words and MeSH terms for "dietary assessment" and "experience sampling" (Supplementary Material). The term "ecological momentary assessment" was included as a synonym of ESM. The electronic databases PubMed (including MEDLINE) and Web of Science were searched for relevant literature published between January 2012 and February 9th, 2024. The year 2012 was chosen as the lower limit for inclusion since this review focuses on the use of ESM through digital tools (i.e. smartphones, web-based or mobile applications), which has grown especially since the introduction of smartphone applications in 2008. Therefore, the time frame of this review is focused on literature published in the last 12 years. The reference lists of all included articles were screened for additional studies.

The initial search strategy described in the protocol was developed based on the assumption that research using ESM as an alternative to traditional dietary assessment was limited. Therefore, initially, research using ESM in the broader field of health research was included to obtain more evidence on methodological considerations of the application of ESM. In line with the Arksey and O'Malley methodological framework, the inclusion criteria were adapted following initial searches, along with discussion and consensus between the reviewer (JV) and principal investigator (CM). The inclusion criteria were narrowed to research applying ESM to measure dietary intake quantitatively or qualitatively, since literature was also available in the field of dietary behaviour in relation to contextual factors (Table  1 ). Studies measuring dietary behaviour only (i.e. cravings, hunger, eating disorder behaviour, dietary lapses), without assessing dietary intake, were excluded. Event-based ESM as a dietary assessment method was excluded since this was deemed a methodology similar to the food record and, therefore, did not serve the purpose of this review to explore a new methodology for dietary assessment that overcomes the limitations of traditional dietary assessment methods. All inclusion and exclusion criteria are presented in Table  1 .

All records were exported and uploaded into the review software Rayyan. Duplicates were identified through the software, followed by manual screening by the reviewer for confirmation and removal of duplicates. One reviewer (JV) screened the retrieved articles first by title and abstract, followed by a full-text screening [ 19 , 20 , 21 ]. In case of hesitancy on inclusion of articles, the reviewer (JV) consulted the principal investigator (CM) to reach consensus. In line with established scoping review methods, methodological quality assessment was not performed [ 14 , 18 ]. Since this review aims to shed light on design aspects and considerations of ESM, and thus focuses on the application of the methodology used in the articles rather than the study outcome, quality assessment was not considered relevant for this purpose.

Data extraction

Data were extracted in an Excel table describing the authors, title, year of publication, signalling technique, timing of prompts, study duration, dietary variables measured, answer window, (formulation of) questions, response options, notification method, indication of qualitative or quantitative dietary assessment, delivery method, population and study name. All data were described qualitatively. Studies applying ESM for dietary assessment were categorized in separate tables for ESM used for qualitative dietary assessment (i.e. assessment of type of foods consumed without portion size, not allowing estimation of nutrient intake), ESM used for semi-quantitative dietary assessment (i.e. assessment of type of foods or frequency of consumption of foods, not allowing estimation of nutrient intake), and ESM used for quantitative dietary assessment (i.e. assessment of type of foods consumed and portion size, allowing estimation of nutrient intake).

Literature search and study characteristics

The electronic databases search resulted in 701 articles of which 55 duplicates were identified and removed. Next, 646 articles were screened by title and abstract of which 591 were excluded according to the exclusion criteria (Fig.  1 ). The remaining 55 articles were screened by full text. After exclusion of 16 articles following full text screening, 39 articles were selected for inclusion (Table  2 ). The included articles describe 24 individual studies of which the Mother’s and Their Children’s Health (MATCH) study was described most frequently ( n  = 12, 25%). Most studies were published in 2018 ( n  = 7), followed by 2020 ( n  = 6) and 2022 ( n  = 6). Students, including both high school and higher education students, were the study population in most EMA or ESM studies included ( n  = 10, 43%). Two studies applied the ESM methodology to assess dietary behaviour including dietary variables of children with mothers as proxy. Five studies referred to their methodology using the terminology ‘ESM’ while the other studies used ‘EMA’ as terminology.

figure 1

PRISMA flow diagram of the screening and selection process

Application of ESM for dietary assessment in literature

Dietary variables measured through ESM

Most studies assessed consumption of specific foods only [ 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 42 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 ]. Tables 2 , 3 and 4 provide an overview of the included studies described in the manuscripts, with descriptions of specific ESM methodology characteristics for qualitative, semi-quantitative and quantitative dietary assessment respectively. Four studies used ESM to assess snack consumption [ 45 , 46 , 47 , 48 , 49 , 50 , 51 ]. Four studies focused on snack and sugar-sweetened beverage (SSB) consumption only [ 22 , 36 , 44 , 52 , 53 ]. Piontak et al. applied ESM to assess unhealthy food consumption, including fast food, caffeinated drinks and not consuming any fruit or vegetables [ 35 ]. Two studies focused on palatable food consumption, of which the study of Cummings et al. assessed palatable food consumption together with highly processed food intake [ 37 , 54 ]. Lin et al. applied ESM to measure empty-calorie food and beverage consumption, while Boronat et al. assessed Mediterranean diet food consumption [ 39 , 55 ]. Two studies assessed the occurrence of food consumption only, without assessing the type of foods consumed [ 40 , 41 ]. The study of de Rivaz et al. assessed the largest type of meal consumed in between signals [ 56 ]. Three studies aimed to assess total dietary intake, of which the study of Lucassen et al. evaluated approaches to assess both actual and habitual dietary intake using ESM [ 43 , 57 , 58 , 59 ].

Qualitative versus quantitative dietary assessment through ESM

As shown in Table  2 , twelve studies performed qualitative dietary assessment (i.e. assessing type of foods consumed without quantification) (Table  2 ). Seven studies performed semi-quantitative dietary assessment (i.e. assessing frequency of meals/eating occasions or number of servings of food categories not allowing nutrient calculation) [ 44 , 49 , 50 , 52 , 53 , 54 , 55 , 56 ] (Table  3 ). Quantitative dietary assessment, in line with the aim of traditional dietary assessment methods (i.e. assessment of both type and quantity of foods consumed allowing to estimate nutrient intake), was performed in four studies of which Wouters et al . and Richard et al . assessed snack intake only while Jeffers et al . and Lucassen et al . assessed overall dietary intake (i.e. all food groups) [ 45 , 46 , 47 , 48 , 51 , 57 , 58 ] (Table  4 ).

Study duration, ESM timing and signaling technique

The study duration of ESM dietary assessment varied from four to thirty days, and most studies ( n  = 15) had a study duration of seven days of ESM dietary assessment. The study of Piontak et al. had the longest duration, with 30 days of ESM assessment [ 35 ]. The semi-random sampling scheme (i.e. random sampling within multiple fixed time intervals) was applied most frequently ( n  = 12), followed by the fixed sampling scheme (i.e. sampling at fixed times) ( n  = 9). Random sampling (i.e. completely random sampling) was chosen in three studies [ 34 , 36 , 55 ]. A mixed sampling approach was applied in three studies, of which Lucassen et al. tested and compared both a fixed sampling and a semi-random sampling approach to assess overall dietary intake [ 22 , 42 , 57 , 59 ]. Two studies applied different sampling schemes during the weekend compared to weekdays [ 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 ]. Sampling time windows were adapted to the daily structure of the study population, i.e. the shifts of shift-workers, the school hours of students or (self-reported) waking hours (Table  2 ). The sampling time window of the included studies started between 6 and 10 AM and ended between 8 PM and midnight. One study applied a 24-h sampling time window since the study population consisted of nurses working in shifts [ 39 ].
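The fixed and semi-random schemes summarized above can be illustrated with a short sketch (the 8 AM–10 PM window and 2-h interval are assumed defaults, not taken from any particular study):

```python
import random

def fixed_schedule(start_h=8, end_h=22, interval_h=2):
    """Fixed sampling: prompts at the same clock times every day,
    one at the end of each interval in the waking window."""
    return [f"{h:02d}:00" for h in range(start_h + interval_h, end_h + 1, interval_h)]

def semi_random_schedule(start_h=8, end_h=22, interval_h=2):
    """Semi-random sampling: one prompt at a random minute within
    each fixed window, so every window is still covered."""
    prompts = []
    for w in range(start_h, end_h, interval_h):
        m = random.randrange(interval_h * 60)  # random offset in the window
        prompts.append(f"{w + m // 60:02d}:{m % 60:02d}")
    return prompts

print(fixed_schedule())  # ['10:00', '12:00', '14:00', '16:00', '18:00', '20:00', '22:00']
```

Both schemes produce one prompt per window; the semi-random variant preserves unpredictability (reducing reactivity) while still guaranteeing that no part of the waking day goes unsampled.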

Formulation of ESM questions

Different types of questions and phrasings can be identified in the studies using ESM for dietary assessment. Two studies used indirect phrasing (i.e. 'What were you doing?') followed by multiple-choice answer options including e.g. physical activity, eating, rest [ 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 ]. Seven studies used direct phrasing (i.e. 'Did you eat?'), applied both in real-time prompts (i.e. 'Were you eating or drinking anything – in this moment?') and in retrospective prompts (i.e. 'Did you eat anything since the last signal?'), without specifying particular foods [ 22 , 38 , 40 , 41 , 45 , 46 , 47 , 48 , 56 , 58 ]. Thirteen studies used direct and specific phrasing regarding consumption of specified foods (i.e. 'Did you eat any snacks or sugar sweetened beverages since the last signal?') [ 35 , 36 , 37 , 39 , 43 , 44 , 50 , 51 , 52 , 53 , 54 , 55 , 57 ]. The time period in retrospective prompts with direct phrasing varied. Ten studies assessed consumption since the last signal, three studies assessed the past 2 h, and one study each assessed the preceding 15 min, 1 h, 2.5 h, 3 h and 3.5 h respectively [ 41 , 42 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 56 ]. The MATCH study used two different retrospective time periods: the first prompt of the day requested reporting since waking up, and the following prompts covered the last 2 h [ 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 ]. Forman et al. used prompts requesting participants to report snack intake between the last prompt of the previous day and falling asleep, and between waking up and receiving the first prompt [ 49 ]. The study of Bruening et al. combined real-time prompts, to report what participants were doing the moment before receiving the prompt, with retrospective prompts to report what they were doing during the past 3 h [ 34 ].

Formulation of ESM response options

Binary (i.e. yes or no) response options were provided in eleven studies, followed by an open field, a built-in search function or multiple-choice bullets to specify the type of food or drinks consumed in five studies [ 22 , 35 , 37 , 38 , 40 , 41 , 42 , 45 , 46 , 47 , 48 , 52 , 53 , 56 , 58 ]. Food lists shown as response options to indicate food consumption were based on National Health Surveys, validated Food Frequency Questionnaires, other validated questionnaires, the National Food Composition Database or results from focus group discussions. Eight studies requested participants to indicate the quantities of the foods consumed by open field (i.e. in grams or milliliters), Visual Analogue Scale (VAS) sliders (i.e. from zero to 100) or multiple-choice options (i.e. small, medium, large) [ 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 54 , 56 , 57 ].

This review reveals that ESM has been applied to assess dietary intake in various research settings using different design approaches. However, most studies assessed consumption of specific foods only, focusing on the foods of interest related to the research question. Snack consumption and, in general, unhealthy foods were the foods of interest for which ESM was used most often. Due to its momentary nature, ESM may be especially suitable to measure these specific foods, which are often (unconsciously) missed or underreported using traditional dietary assessment methods. Findings from our review show that ESM applied to dietary assessment shares features of both 24-h dietary recalls (24HRs) and food frequency questionnaires (FFQs). Aside from the recall-based reporting and multiple-choice assessment of specific foods, found in 24HRs and FFQs respectively, ESM is a new methodology compared to traditional dietary assessment methods. Our review also suggests that ESM lends itself well to assessing total dietary intake quantitatively, although this application remains less explored. Moreover, most studies using ESM for dietary assessment were behavioral science research (i.e. psychological aspects of eating behavior), which highlights the novelty of, and need for, ESM specifically designed for dietary assessment and research on diet-health associations.

Recommendations to develop an Experience Sampling-based Dietary Assessment Method

The implementation of ESM will differ depending on which health behavior is being measured and in which research field it is being applied [ 13 , 60 ]. This section describes recommendations for the methodological implementation of ESM as an alternative dietary assessment methodology to measure total dietary intake quantitatively, based on the findings of this review, the recommendations of the open handbook for ESM by Myin-Germeys et al., and practices in traditional dietary assessment development [ 13 ].

Recommendations for study duration, ESM timing and frequency

All ESM study characteristics (study duration, sampling frequency, timing, recall period) are interrelated and cannot be evaluated individually.

ESM study duration (i.e. number of days) and sampling frequency (i.e. number of prompts per day) should be reconciled and should be inversely adapted to one another (i.e. short study duration allows for higher sampling frequency per day and vice versa) to maintain low burden and good feasibility.

Our review showed that an ESM study duration of 7 days is most common; however, reporting fatigue might arise from day 4 onwards in the case of high sampling frequency (i.e. fixed sampling every 2 h), similar to what is experienced with food records [ 61 ].

The frequency and timing of ESM prompts should be adapted to waking hours covering the typical eating episodes of the target study population. Typically, studies used waking hours starting around 7 AM and ending at 10 PM; however, a preliminary short survey can identify the actual waking hours of the target study population and allow the schedule to be adapted accordingly.

Waking hours, and consequently sampling frequency, could differ on weekend days (i.e. more frequent prompts, longer waking hours), as seen in some studies in our review. Short recall periods (i.e. the last hours or the previous day) are suggested to be better than longer recalls of weeks or months [ 62 ]. Obtaining more accurate dietary intake data, with lower recall bias and social desirability bias through reduced awareness of being measured, requires short recall periods of 1 up to 3.5 h, with a 2-h recall most commonly applied, as demonstrated by our review. In this way, ESM allows for near real-time measurement of dietary intake.

Furthermore, study duration, sampling frequency and timing differ depending on whether the aim is to measure actual dietary intake or habitual dietary intake, and should be adapted accordingly.

Recommendations for the ESM signaling technique for actual versus habitual dietary intake

Measuring actual dietary intake using an intensive prompting schedule can only be performed for short periods, preferably three to four days, due to the risk of response fatigue, as seen similarly in food records. As demonstrated by Lucassen et al., actual intake can be measured by ESM applying a fixed sampling approach, which samples every time window during the waking hours (i.e. sampling every 2 h between 7 AM and 10 PM on dietary intake during the past 2 h) [ 58 ].

Habitual dietary intake can be measured by ESM applying a semi-random sampling approach, which samples every time window during waking hours multiple times over a longer period (i.e. sampling three time windows per day on dietary intake during the past 2 h for two weeks, until every time window is sampled three times) [ 58 ]. Measuring habitual dietary intake by ESM using a less intensive sampling frequency allows for a longer study duration (i.e. multiple weeks). Lastly, a combination of fixed and (semi-)random sampling schedules can be applied. Whether measuring actual or habitual dietary intake, it is recommended to compose a sampling schedule with time windows covering all waking hours to ensure all eating occasions can be sampled [ 12 ]. Additionally, the sampling schedule should cover weekend days in addition to weekdays to be able to sample the variability in dietary intake. Moreover, to capture the variability of dietary intake, several waves of ESM measurement periods could be implemented, alternated with no-measurement periods. On the other hand, the application of multiple waves is associated with higher dropout rates, especially with increased time in between waves [ 13 ].
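A minimal sketch of the two signaling strategies, assuming a waking period split into eight 2-h windows; the greedy day-filling used for the habitual schedule is our own illustration, not the protocol of Lucassen et al.:

```python
WINDOWS = list(range(8))  # eight 2-h windows covering the waking day

def actual_intake_schedule(n_days=3):
    """Fixed sampling for actual intake: every window prompted on
    every day of a short (3-4 day) intensive protocol."""
    return {day: WINDOWS[:] for day in range(n_days)}

def habitual_intake_schedule(windows_per_day=3, samples_per_window=3):
    """Less intensive sampling for habitual intake: only a few windows
    per day, extended over enough days that every window is sampled
    the target number of times."""
    remaining = {w: samples_per_window for w in WINDOWS}
    days = []
    while any(remaining.values()):
        # greedily prompt the windows that still need the most samples
        todays = [w for w in sorted(remaining, key=remaining.get, reverse=True)
                  if remaining[w] > 0][:windows_per_day]
        for w in todays:
            remaining[w] -= 1
        days.append(sorted(todays))
    return days
```

With the defaults, the actual-intake protocol yields 8 prompts/day for 3 days, while the habitual-intake protocol spreads 24 prompts over more days at 3 prompts/day, trading daily burden against study duration exactly as described above.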

In conclusion, the ESM signaling technique, frequency, timing, recall period and duration of sampling should be carefully adapted to one another to ensure accurate dietary intake data, low burden and optimal feasibility. As recommended by Myin-Germeys et al., a pilot study allows all ESM design characteristics to be evaluated so that optimal data quality is obtained while the protocol remains feasible [ 13 ].

Recommendations for ESM questions and response options

Questionnaires for ESM should be carefully developed and require methodological rigor [ 63 ]. As stated by Myin-Germeys et al., there are currently no specific guidelines on how to develop questionnaires for ESM [ 63 ]. According to our review, however, most studies adapt existing questionnaires to implement in ESM research. Still, few studies in our review describe methodologically which adaptations are made, or how, to fit the ESM format. First, a timeframe should be chosen on which the question will reflect. Although ESM ideally consists of questions on momentary variables, this is less suitable for measuring dietary intake. As dietary intake does not take place continuously, momentary questions (i.e. 'What are you eating in this moment?') would lead to a large amount of missing data and, consequently, large measurement error in daily dietary intake estimations. Instead, time intervals lend themselves better to assessing dietary intake with ESM. The time interval on which the question reflects should be clearly stated (i.e. 'What did you eat during the last two hours?'). As mentioned previously, in the case of an interval-contingent (semi-random) ESM approach, the constitution of contiguous time intervals that cover the complete waking-hour time frame (i.e. waking hours between 7 AM and 10 PM with semi-random ESM sampling in intervals of two hours) is recommended to reduce the risk of missing eating occasions [ 12 ]. Therefore, following the latter approach, it is most feasible to choose the same time frame on which the question reflects as the time intervals of the prompts (i.e. semi-random sampling in time intervals of two hours with the question 'What did you eat since the last signal?'). The time frame on which the question reflects should be chosen based on expected events of dietary intake (i.e. every two or three hours) and depends on the dietary habits of the target population, which are culture specific. Myin-Germeys et al. recommend keeping questions short and to the point so that they fit the screen of the mobile device and allow for quick responses [ 63 ]. Furthermore, implicit assessments (i.e. 'Have you eaten since the last signal?') are recommended over explicit assessments (i.e. 'Did you eat fast food since the last signal?') to inhibit reactivity bias. Questionnaire length is important to consider, as it is recommended to maintain a completion time of maximum three minutes to keep the burden low [ 63 ]. Although in traditional ESM research questionnaires of up to 30 items are accepted, in the field of dietary assessment this would be equivalent to a short FFQ and can be considered too burdensome when presented all at once at every prompt, reducing compliance. Moreover, ESM research in the field of psychology, where it originated, most often uses scales (i.e. Likert scales, visual scales) as response options. Unlike many psychological variables (i.e. mood, emotions), dietary intake can be assessed quantitatively and precisely, which allows for more specific response options.

Recommendations for developing an ESM sampling scheme based on an FFQ or food record

Questions and response options for ESM dietary assessment can be adapted from existing questionnaires, as demonstrated by the studies in our review. In the field of dietary assessment, ESM could therefore be applied to validated dietary assessment questionnaires such as validated food frequency questionnaires (FFQs) or (web-based) food records, as proposed in Fig.  2 .

Fig. 2: Recommendations to implement experience sampling for actual and habitual dietary assessment

Starting from the food record approach, a general open question (e.g. 'Did you eat anything since the last signal?') could be followed by a question to specify the consumed foods, using an open text field or food groups originating from a national food consumption database. Portion sizes of consumed foods could be reported in an open text field with standard units (e.g. milliliters, grams) or common household measures (e.g. tablespoons, glasses).
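As a rough illustration of the gated food-record flow just described — an implicit gate question followed, only on a 'yes' answer, by food-group and portion-size items — the following sketch builds the question sequence for a single prompt. The food groups, units and field names are hypothetical placeholders, not entries from any national food consumption database:

```python
# Illustrative placeholders, not items from a real food composition database.
FOOD_GROUPS = ["bread/cereals", "fruit", "vegetables", "dairy", "snacks"]
UNITS = ["grams", "milliliters", "tablespoons", "glasses"]

def build_prompt(ate_anything: bool, foods=None):
    """Return the question sequence for one ESM prompt.

    The implicit gate question comes first; follow-up items asking for the
    consumed foods and their portion sizes appear only on a 'yes' answer,
    which keeps most prompts very short.
    """
    questions = [{"id": "gate",
                  "text": "Did you eat or drink anything since the last signal?",
                  "type": "yes/no"}]
    if not ate_anything:
        return questions
    questions.append({"id": "foods",
                      "text": "Which foods did you consume?",
                      "type": "multi-select",
                      "options": FOOD_GROUPS + ["other (free text)"]})
    for food in (foods or []):
        questions.append({"id": f"portion_{food}",
                          "text": f"How much {food} did you have?",
                          "type": "amount",
                          "units": UNITS})
    return questions
```

A 'no' answer ends the prompt after a single tap, which is one concrete way the gated design keeps participant burden low.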

Starting from the FFQ approach, the food groups assessed in FFQs could be regrouped into a limited number, and the questions reformulated to assess dietary intake in near real time. Consumption of all food groups could be assessed at each prompt, or a different set of food groups could be assessed at each prompt. In the latter case, the study needs to be designed so that consumption of each food group is assessed multiple times at each interval, to compensate for unanswered prompts with missing data. Moreover, the ordering of questions needs to be considered, as consumption of certain food groups may need to be assessed at the same prompt to reduce ambiguity (e.g. fried food consumption needs to be assessed before fast food consumption to avoid response overlap). Asking the same set of questions at each prompt may feel repetitive but might reduce burden [ 63 ]. A control question can be added to detect careless responding.

Application of ESM as alternative dietary assessment method in literature

Most studies used ESM to measure food consumption qualitatively (i.e. type of foods consumed) or semi-quantitatively (i.e. frequency of consumption of specific foods) rather than quantitatively (i.e. type and quantity of foods consumed), which would serve the same purpose as traditional dietary assessment methods. Questions were most often formulated with direct phrasing, asking about consumption of specific foods since the last signal. Answers were most often binary (i.e. yes/no, indicating consumption of specific foods since the last signal), combined with options to specify the type and/or frequency or amount of foods consumed. Only the studies of Jeffers et al . and Lucassen et al . applied ESM to measure total dietary intake quantitatively; of these, Lucassen et al . specifically evaluated ESM as an alternative methodology for dietary assessment [ 57 , 58 ].

Although both event-contingent and signal-contingent approaches are used for dietary assessment, signal-contingent ESM approaches might provide promising opportunities to overcome the limitations and biases of traditional dietary assessment methods [ 12 ]. Near-real-time data collection combined with (semi-)random sampling shows potential to reduce participant burden, both through its low registration intensity and through its shorter questions with simple response options. Moreover, (semi-)random sampling might make participants less aware of being measured, possibly lowering social-desirability bias and, together with the short recall period, yielding more accurate data. Combined with modern technology such as mobile applications, feasibility could be enhanced as well. Adapting questions and response options from either a validated FFQ or a food record allows relatively easy implementation of ESM as an alternative dietary assessment method for total dietary intake (i.e. all food groups). However, validity and reliability need to be evaluated in the target population, as for traditional dietary assessment methods.

The systematic review and meta-analysis of Perski et al . reviewed the use of ESM to assess five key health behaviors, including dietary behavior [ 60 ]. Similar to our findings, all four studies described by Perski et al . assessed dietary intake of specific foods only through ESM, rather than the total dietary pattern (i.e. all food groups). Moreover, Perski et al . also included event-contingent sampling approaches (i.e. registering dietary intake as it occurs). As highlighted by Schembre et al ., event-contingent sampling entails limitations and biases similar to those of traditional dietary assessment methods, such as social-desirability bias and burden [ 27 ]. This is not surprising, as event-contingent sampling can be seen as an approach similar to the traditional food record; for this reason it does not serve the purpose of this review, namely defining a new methodology to overcome the limitations of current traditional dietary assessment methods. Similarly, photo-based methodologies (i.e. using images as a food diary with event-based sampling) are unlikely to overcome the limitations of traditional dietary assessment methods, due to the large measurement error in estimating portion sizes and types of foods, and were for this reason excluded from our review [ 3 ]. Most importantly, the four studies on dietary behavior included in the meta-analysis of Perski et al . lacked specific details on ESM design characteristics or on the methodological implications of ESM as an alternative dietary assessment method. Still, the potential of ESM to obtain more accurate and reliable dietary data is highlighted, together with the need for proper validation.

Altogether, the lack of detail on important methodological aspects of ESM hinders drawing conclusions on common practices for implementing ESM for quantitative dietary assessment. Perski et al . likewise emphasize the need for more elaboration on methodological aspects in order to provide a summary of best practices for implementing ESM for specific health behaviors, including dietary behavior [ 60 ]. Our scoping review meets this need with key methodological recommendations for developing an experience sampling dietary assessment method for total dietary intake, alongside an elaboration of commonly applied ESM design characteristics.

Limitations and strengths

An important limitation of this scoping review, inherent to scoping reviews, is the less rigorous search strategy and screening process. This may have resulted in an incomplete overview of studies describing ESM for dietary assessment. Still, this review does not aim to assess study outcomes but rather to evaluate how ESM can be applied methodologically for dietary assessment. Its strength therefore lies in the assessment and description of ESM approaches specifically, providing insight into their use for quantitative dietary assessment as an alternative to traditional dietary assessment methods. To our knowledge, this has previously been done only by Schembre et al. [ 12 ], and our scoping review is the first to describe practical recommendations for developing an ESM for total dietary assessment (i.e. all food groups). Additionally, only two studies were identified that applied ESM for total dietary assessment. Consequently, limited evidence-based information was available in the literature on the development of ESM characteristics (prompting schedule, duration, questionnaire design) for quantitative assessment of total dietary intake. Nevertheless, studies on qualitative and semi-quantitative dietary assessment using ESM were described and, together with the guidelines of Myin-Germeys et al., form the basis of practical guidelines for designing an ESM protocol for quantitative assessment of total dietary intake.

This review shows that ESM is increasingly applied in research to measure dietary intake. However, few studies have applied ESM to assess total dietary intake quantitatively with the same purpose as traditional dietary assessment methods. Still, the methodological characteristics of ESM show promising possibilities to overcome the limitations of classic dietary assessment methods. This paper provides guidance and a starting point for the development of an Experience Sampling Dietary Assessment Method to assess total dietary intake quantitatively, based on recent literature and theoretical background. Thorough evaluation and validation studies are needed to test the full potential of ESM as a feasible and accurate alternative to traditional dietary assessment methods.

Availability of data and materials

The data that support the findings of this manuscript are available from the corresponding author upon reasonable request. The review protocol can be downloaded at: KU Leuven repository.

Abbreviations

  • EMA: Ecological Momentary Assessment
  • Experience Sampling-based Dietary Assessment Method
  • ESM: Experience Sampling Method
  • FFQ: Food Frequency Questionnaire
  • MATCH: Mothers' and Their Children's Health
  • PRISMA-P: Preferred Reporting Items for Systematic review and Meta-Analysis Protocols
  • PRISMA-ScR: Preferred Reporting Items for Systematic review and Meta-Analysis extension for scoping reviews
  • SSB: Sugar Sweetened Beverages
  • VAS: Visual Analog Scale

Hebert JR, Hurley TG, Steck SE, Miller DR, Tabung FK, Peterson KE, et al. Considering the value of dietary assessment data in informing nutrition-related health policy. Adv Nutr. 2014;5(4):447–55.


Liang S, Nasir RF, Bell-Anderson KS, Toniutti CA, O’Leary FM, Skilton MR. Biomarkers of dietary patterns: a systematic review of randomized controlled trials. Nutr Rev. 2022;80(8):1856–95.

Bingham S, Carroll RJ, Day NE, Ferrari P, Freedman L, Kipnis V, et al. Bias in dietary-report instruments and its implications for nutritional epidemiology. Public Health Nutr. 2002;5(6a):915–23.


Kirkpatrick SI, Baranowski T, Subar AF, Tooze JA, Frongillo EA. Best Practices for Conducting and Interpreting Studies to Validate Self-Report Dietary Assessment Methods. J Acad Nutr Diet. 2019;119(11):1801–16.

Bennett DA, Landry D, Little J, Minelli C. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology. BMC Med Res Methodol. 2017;17(1):146.

Satija A, Yu E, Willett WC, Hu FB. Understanding nutritional epidemiology and its role in policy. Adv Nutr. 2015;6(1):5–18.

Kirkpatrick SI, Reedy J, Butler EN, Dodd KW, Subar AF, Thompson FE, et al. Dietary assessment in food environment research: a systematic review. Am J Prev Med. 2014;46(1):94–102.

The Science of Real-Time Data Capture: Self-Reports in Health Research: Oxford University Press; 2007. Available from: https://doi.org/10.1093/oso/9780195178715.001.0001 .

Verhagen SJ, Hasmi L, Drukker M, van Os J, Delespaul PA. Use of the experience sampling method in the context of clinical trials. Evid Based Ment Health. 2016;19(3):86–9.

Csikszentmihalyi M. Handbook of research methods for studying daily life: Guilford Press; 2011.

Mestdagh M, Verdonck S, Piot M, Niemeijer K, Kilani G, Tuerlinckx F, et al. m-Path: an easy-to-use and highly tailorable platform for ecological momentary assessment and intervention in behavioral research and clinical practice. Front Digit Health. 2023;5:1182175.

Schembre SM, Liao Y, O’Connor SG, Hingle MD, Shen SE, Hamoy KG, et al. Mobile Ecological Momentary Diet Assessment Methods for Behavioral Research: Systematic Review. JMIR Mhealth Uhealth. 2018;6(11): e11170.

Dejonckheere E, Erbas Y. Designing an experience sampling study. In: Myin-Germeys I, Kuppens P, editors. The open handbook of experience sampling methodology: A step-by-step guide to designing, conducting, and analyzing ESM studies. Center for Research on Experience Sampling and Ambulatory Methods Leuven; 2021. p. 33–70.

Arksey H, O’Malley L. Scoping Studies: Towards a Methodological Framework. International Journal of Social Research Methodology: Theory & Practice. 2005;8:19–32.


Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5(1):69.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.

JBI. Scoping Review Network resources [cited 2022 October 28]. Available from: https://jbi.global/scoping-review-network/resources .

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018;169(7):467–73.

Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1(1):10.

Khangura S, Polisena J, Clifford TJ, Farrah K, Kamel C. Rapid review: an emerging approach to evidence synthesis in health technology assessment. Int J Technol Assess Health Care. 2014;30(1):20–7.

Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5(1):56.

Grenard JL, Stacy AW, Shiffman S, Baraldi AN, MacKinnon DP, Lockhart G, et al. Sweetened drink and snacking cues in adolescents: a study using ecological momentary assessment. Appetite. 2013;67:61–73.

Dunton GF, Dzubur E, Huh J, Belcher BR, Maher JP, O’Connor S, et al. Daily Associations of Stress and Eating in Mother-Child Dyads. Health Educ Behav. 2017;44(3):365–9.

Dunton GF, Liao Y, Dzubur E, Leventhal AM, Huh J, Gruenewald T, et al. Investigating within-day and longitudinal effects of maternal stress on children’s physical activity, dietary intake, and body composition: Protocol for the MATCH study. Contemp Clin Trials. 2015;43:142–54.

O’Connor SG, Ke W, Dzubur E, Schembre S, Dunton GF. Concordance and predictors of concordance of children’s dietary intake as reported via ecological momentary assessment and 24 h recall. Public Health Nutr. 2018;21(6):1019–27.

O’Connor SG, Koprowski C, Dzubur E, Leventhal AM, Huh J, Dunton GF. Differences in Mothers’ and Children’s Dietary Intake during Physical and Sedentary Activities: An Ecological Momentary Assessment Study. J Acad Nutr Diet. 2017;117(8):1265–71.

Liao Y, Schembre SM, O’Connor SG, Belcher BR, Maher JP, Dzubur E, et al. An Electronic Ecological Momentary Assessment Study to Examine the Consumption of High-Fat/High-Sugar Foods, Fruits/Vegetables, and Affective States Among Women. J Nutr Educ Behav. 2018;50(6):626–31.

Mason TB, Naya CH, Schembre SM, Smith KE, Dunton GF. Internalizing symptoms modulate real-world affective response to sweet food and drinks in children. Behav Res Ther. 2020;135: 103753.

Mason TB, O’Connor SG, Schembre SM, Huh J, Chu D, Dunton GF. Momentary affect, stress coping, and food intake in mother-child dyads. Health Psychol. 2019;38(3):238–47.

Mason TB, Smith KE, Dunton GF. Maternal parenting styles and ecological momentary assessment of maternal feeding practices and child food intake across middle childhood to early adolescence. Pediatr Obes. 2020;15(10): e12683.

Do B, Yang CH, Lopez NV, Mason TB, Margolin G, Dunton GF. Investigating the momentary association between maternal support and children’s fruit and vegetable consumption using ecological momentary assessment. Appetite. 2020;150: 104667.

Naya CH, Chu D, Wang WL, Nicolo M, Dunton GF, Mason TB. Children’s Daily Negative Affect Patterns and Food Consumption on Weekends: An Ecological Momentary Assessment Study. J Nutr Educ Behav. 2022;54(7):600–9.

Lopez NV, Lai MH, Yang CH, Dunton GF, Belcher BR. Associations of Maternal and Paternal Parenting Practices With Children’s Fruit and Vegetable Intake and Physical Activity: Preliminary Findings From an Ecological Momentary Study. JMIR Form Res. 2022;6(8): e38326.

Bruening M, van Woerden I, Todd M, Brennhofer S, Laska MN, Dunton G. A Mobile Ecological Momentary Assessment Tool (devilSPARC) for Nutrition and Physical Activity Behaviors in College Students: A Validation Study. J Med Internet Res. 2016;18(7): e209.

Piontak JR, Russell MA, Danese A, Copeland WE, Hoyle RH, Odgers CL. Violence exposure and adolescents’ same-day obesogenic behaviors: New findings and a replication. Soc Sci Med. 2017;189:145–51.

Campbell KL, Babiarz A, Wang Y, Tilton NA, Black MM, Hager ER. Factors in the home environment associated with toddler diet: an ecological momentary assessment study. Public Health Nutr. 2018;21(10):1855–64.

Cummings JR, Mamtora T, Tomiyama AJ. Non-food rewards and highly processed food intake in everyday life. Appetite. 2019;142: 104355.

Maher JP, Harduk M, Hevel DJ, Adams WM, McGuirt JT. Momentary Physical Activity Co-Occurs with Healthy and Unhealthy Dietary Intake in African American College Freshmen. Nutrients. 2020;12(5):1360.

Lin TT, Park C, Kapella MC, Martyn-Nemeth P, Tussing-Humphreys L, Rospenda KM, et al. Shift work relationships with same- and subsequent-day empty calorie food and beverage consumption. Scand J Work Environ Health. 2020;46(6):579–88.

Yong JYY, Tong EMW, Liu JCJ. When the camera eats first: Exploring how meal-time cell phone photography affects eating behaviours. Appetite. 2020;154:104787.

Goldstein SP, Hoover A, Evans EW, Thomas JG. Combining ecological momentary assessment, wrist-based eating detection, and dietary assessment to characterize dietary lapse: A multi-method study protocol. Digit Health. 2021;7:2055207620988212.

Chmurzynska A, Mlodzik-Czyzewska MA, Malinowska AM, Radziejewska A, Mikołajczyk-Stecyna J, Bulczak E, et al. Greater self-reported preference for fat taste and lower fat restraint are associated with more frequent intake of high-fat food. Appetite. 2021;159:105053.

Barchitta M, Maugeri A, Favara G, Magnano San Lio R, Riela PM, Guarnera L, et al. Development of a Web-App for the Ecological Momentary Assessment of Dietary Habits among College Students: The HEALTHY-UNICT Project. Nutrients. 2022;14(2):330.

Spook JE, Paulussen T, Kok G, Van Empelen P. Monitoring dietary intake and physical activity electronically: feasibility, usability, and ecological validity of a mobile-based Ecological Momentary Assessment tool. J Med Internet Res. 2013;15(9): e214.

Wouters S, Jacobs N, Duif M, Lechner L, Thewissen V. Affect and between-meal snacking in daily life: the moderating role of gender and age. Psychol Health. 2018;33(4):555–72.

Wouters S, Jacobs N, Duif M, Lechner L, Thewissen V. Negative affective stress reactivity: The dampening effect of snacking. Stress Health. 2018;34(2):286–95.

Wouters S, Thewissen V, Duif M, Lechner L, Jacobs N. Assessing Energy Intake in Daily Life: Signal-Contingent Smartphone Application Versus Event-Contingent Paper and Pencil Estimated Diet Diary. Psychol Belg. 2016;56(4):357–69.

Wouters S, Thewissen V, Duif M, van Bree RJ, Lechner L, Jacobs N. Habit strength and between-meal snacking in daily life: the moderating role of level of education. Public Health Nutr. 2018;21(14):2595–605.

Forman EM, Shaw JA, Goldstein SP, Butryn ML, Martin LM, Meiran N, et al. Mindful decision making and inhibitory control training as complementary means to decrease snack consumption. Appetite. 2016;103:176–83.

Richard A, Meule A, Reichenberger J, Blechert J. Food cravings in everyday life: An EMA study on snack-related thoughts, cravings, and consumption. Appetite. 2017;113:215–23.

Richard A, Meule A, Blechert J. Implicit evaluation of chocolate and motivational need states interact in predicting chocolate intake in everyday life. Eat Behav. 2019;33:1–6.

Zenk SN, Horoi I, McDonald A, Corte C, Riley B, Odoms-Young AM. Ecological momentary assessment of environmental and personal factors and snack food intake in African American women. Appetite. 2014;83:333–41.

Ghosh Roy P, Jones KK, Martyn-Nemeth P, Zenk SN. Contextual correlates of energy-dense snack food and sweetened beverage intake across the day in African American women: An application of ecological momentary assessment. Appetite. 2019;132:73–81.

Ortega A, Bejarano CM, Hesse DR, Reed D, Cushing CC. Temporal discounting modifies the effect of microtemporal hedonic hunger on food consumption: An ecological momentary assessment study. Eat Behav. 2022;48: 101697.

Boronat A, Clivillé-Pérez J, Soldevila-Domenech N, Forcano L, Pizarro N, Fitó M, et al. Mobile Device-assisted Dietary Ecological Momentary Assessments for the Evaluation of the Adherence to the Mediterranean Diet in a Continuous Manner. J Vis Exp. 2021(175).

de Rivaz R, Swendsen J, Berthoz S, Husky M, Merikangas K, Marques-Vidal P. Associations between Hunger and Psychological Outcomes: A Large-Scale Ecological Momentary Assessment Study. Nutrients. 2022;14(23).

Lucassen DA, Brouwer-Brolsma EM, Slotegraaf AI, Kok E, Feskens EJM. DIetary ASSessment (DIASS) Study: Design of an Evaluation Study to Assess Validity, Usability and Perceived Burden of an Innovative Dietary Assessment Methodology. Nutrients. 2022;14(6). https://doi.org/10.3390/nu14061156 .

Jeffers AJ, Mason TB, Benotsch EG. Psychological eating factors, affect, and ecological momentary assessed diet quality. Eat Weight Disord. 2020;25(5):1151–9.

Lucassen DA, Brouwer-Brolsma EM, Boshuizen HC, Mars M, de Vogel-Van den Bosch J, Feskens EJ. Validation of the smartphone-based dietary assessment tool “Traqq” for assessing actual dietary intake by repeated 2-h recalls in adults: comparison with 24-h recalls and urinary biomarkers. Am J Clin Nutr. 2023;117(6):1278–87.


Perski O, Keller J, Kale D, Asare BY, Schneider V, Powell D, et al. Understanding health behaviours in context: A systematic review and meta-analysis of ecological momentary assessment studies of five key health behaviours. Health Psychol Rev. 2022;16(4):576–601.

Thompson FE, Subar AF. Chapter 1 - Dietary Assessment Methodology. In: Coulston AM, Boushey CJ, Ferruzzi MG, Delahanty LM, editors. Nutrition in the Prevention and Treatment of Disease (Fourth Edition): Academic Press; 2017. p. 5–48.

Shiffman S, Balabanis MH, Gwaltney CJ, Paty JA, Gnys M, Kassel JD, et al. Prediction of lapse from associations between smoking and situational antecedents assessed by ecological momentary assessment. Drug Alcohol Depend. 2007;91(2-3):159–68.

Eisele G, Kasanova Z, Houben M. Questionnaire design and evaluation. In: Myin-Germeys I, Kuppens P, editors. The open handbook of Experience Sampling Methodology: A step-by-step guide to designing, conducting, and analyzing ESM studies. Center for Research on Experience Sampling and Ambulatory Methods Leuven; 2021. p. 71–90.


Acknowledgements

Not applicable.

Funding

This work was supported by a PhD fellowship Strategic Basic research grant (1S96721N) of Research Foundation Flanders (FWO) and KU Leuven Internal Funds (C3/22/50). The funders had no role in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations

Clinical and Experimental Endocrinology, Department of Chronic Diseases and Metabolism, KU Leuven, Leuven, Belgium

Joke Verbeke & Christophe Matthys

Department of Endocrinology, University Hospitals Leuven, Leuven, Belgium

Christophe Matthys


Contributions

JV conducted the review and screened the articles. CM acted as second reviewer in cases of doubt about article inclusion during the screening process. JV extracted the data and wrote the manuscript. CM revised the manuscript and supervised the research.

Corresponding author

Correspondence to Christophe Matthys .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Verbeke, J., Matthys, C. Experience Sampling as a dietary assessment method: a scoping review towards implementation. Int J Behav Nutr Phys Act 21 , 94 (2024). https://doi.org/10.1186/s12966-024-01643-1


Received : 23 February 2024

Accepted : 14 August 2024

Published : 27 August 2024

DOI : https://doi.org/10.1186/s12966-024-01643-1


  • Nutrition Assessment
  • Mobile Health
  • Epidemiology

International Journal of Behavioral Nutrition and Physical Activity

ISSN: 1479-5868



  • Open access
  • Published: 28 August 2024

Assessment of transport phenomena in catalyst effectiveness for chemical polyolefin recycling

  • Shibashish D. Jaydev 1   na1 ,
  • Antonio J. Martín   ORCID: orcid.org/0000-0003-3482-8829 1   na1 ,
  • David Garcia   ORCID: orcid.org/0009-0008-7563-2199 2 ,
  • Katia Chikri 1 &
  • Javier Pérez-Ramírez   ORCID: orcid.org/0000-0002-5805-7355 1  

Nature Chemical Engineering ( 2024 )


  • Characterization and analytical techniques
  • Chemical engineering
  • Fluid dynamics

Since the dawn of agitated brewing in the Paleolithic era, effective mixing has enabled efficient reactions. Emerging catalytic chemical polyolefin recycling processes present unique challenges, considering that the polymer melt has a viscosity three orders of magnitude higher than that of honey. The lack of protocols to achieve effective mixing may have resulted in suboptimal catalyst effectiveness. In this study, we have tackled the hydrogenolysis of commercial-grade high-density polyethylene and polypropylene to show how different stirring strategies can create differences of up to 85% and 40% in catalyst effectiveness and selectivity, respectively. The reaction develops near the H 2 –melt interface, with the extension of the interface and access to catalyst particles the main performance drivers. Leveraging computational fluid dynamics simulations, we have identified a power number of 15,000–40,000 to maximize the catalyst effectiveness factor and optimize stirring parameters. This temperature- and pressure-independent model holds across a viscosity range of 1–1,000 Pa s. Temperature gradients may quickly become relevant for reactor scale-up.


Some 12,000 years ago, our ancestors already possessed a rudimentary understanding of the benefits of agitation during brewing, when process efficiency was not a pressing concern 1 . The landscape changed dramatically with the inauguration of the chemical industry after the construction of the first soda ash plant in Widnes in 1847 2 . This marked the start of an era in which reducing costs and environmental pollution in large-scale processes became the imperative. Fast forward to the 20th century, pioneers such as G. Damköhler 3 , E. W. Thiele 4 and J. J. Carberry 5 , 6 conceptualized the notion of the ‘catalyst effectiveness’ and devised quantitative criteria to assess it in heterogeneously catalyzed reactions, integrating the concepts of catalyst design and reaction engineering. The catalytic chemical recycling of polyolefins, with the potential for processing more than 60% of global plastic waste 7 , is a prominent example of a catalyst design and reaction engineering challenge for contemporary chemical engineers 8 .

Focusing on the emerging hydrogenolysis strategy to tackle polyolefin waste, design efforts to find active catalytic surfaces offering control over the cleavage of polyolefins, which are more chemically resistant than functionalized polymers 9 , 10 , have steadily gained momentum over the past decade 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 . In contrast, reports on reaction engineering are scarce, despite distinctive features requiring careful attention, as extrapolation from already mature and related technologies, such as pyrolysis 19 , 20 , 21 , 22 , is not straightforward. The main reason is that the multiphasic operation, typically in batch mode, compounded by the presence of non-Newtonian polymer melts with viscosities exceeding that of honey by up to three orders of magnitude, can lead to highly ineffective mixing if technologies with high power per unit volume, such as mechanical stirring, are not used 23 , 24 . A recent report has raised awareness about transport phenomena in this field, exemplified by the application of classical diffusion theory to analyze external mass transport limitations in polyolefin depolymerization under equilibrium 25 . The inability of high-average-molecular-weight polyolefin chains to access the interior of porous catalyst particles has also been claimed to be a factor 26 , 27 , 28 , whereas for low-average-molecular-weight polyolefins, the accessibility of different types of plastic to the pores is mostly seen as a catalyst design strategy to tune selectivity 29 , 30 .

Increasing attention is directed toward low-molecular-weight plastics (<100 kDa, with limited commercial relevance) compared with consumer-grade plastics with molecular weights ( M w ) > 100 kDa. Magnetic (or often unspecified) stirring is, however, becoming the most popular mixing strategy regardless of the M w of the studied plastic (Fig. 1a and Supplementary Tables 1 and 2 ). The lack of quantitative criteria regarding stirring in current testing protocols 31 raises the question of the impact on catalyst effectiveness, with a lack of standardization and limited catalyst benchmarking, which are among the most prominent obstacles to the scaling up of catalytic technologies. Plastic recycling urgently needs modern chemical engineering tools to fully exploit catalyst design efforts.

figure 1

a , Aggregated number of scientific publications on the hydrogenolysis/hydrocracking of polyolefins, classified according to the stirring configuration and molecular weight. Low M w is defined as <100 kDa and high M w is defined as >100 kDa. See Supplementary Table 2 for numerical values . b , Dependency of the viscosity of PP 340 and HDPE 200 on shear rate at different temperatures (Supplementary Tables 3 and 4 ). The approximate stirring rates required to reach the equivalent shear rates in a typical reactor used for the catalytic tests in this study are included. More details are available in Supplementary Note 1 . The viscosity of water at 298 K is shown for comparison. c , Maximum viscosity of fluids amenable to the magnetic or mechanical stirrers usually available in research laboratories. High-molecular-weight (and some low-molecular-weight) plastics require mechanical stirring, as shown in Supplementary Videos 1 and 2 .

Source data

Here we report quantitative guidelines for maximizing three-phase contact in this field of reaction engineering and demonstrate them for the hydrogenolysis of commercial-grade high-density polyethylene (HDPE) and polypropylene (PP). The guidelines are derived from a combination of experimental, theoretical and simulation studies, which led to a simple quantitative criterion based on the dimensionless power number to optimize catalyst effectiveness factors.

Viscosity of molten polyolefins and stirring performance

The mixing of highly viscous substances (viscosity ( μ ) ≥ 10 Pa s) 23 is mainly characterized by the difficulty of reaching turbulent flow, as the high viscous energy dissipation leads to excessive power consumption and local hot spots in the reaction medium. Polymer melts are non-Newtonian fluids whose viscosity is dictated by the shear rate (a measure of the rate at which parallel internal surfaces slide past one another; Extended Data Table 1 ) and temperature, potentially leading to local variations in the reactor that could affect the mixing regime during operation 32 , 33 . We selected HDPE ( M w  = 200 kDa, denoted HDPE 200 ) and PP ( M w  = 340 kDa, denoted PP 340 ) grades found in consumer goods such as plastic caps, cars and textiles; the characteristic M w used to label each polymer was obtained from its melt flow index 34 .

We first conducted a rheological analysis, the results of which are shown in Fig. 1b (see Supplementary Tables 3 and 4 and Methods for details). As expected, increasing the shear rate decreased the viscosity at all tested temperatures, converging toward a distinctive value for each polymer (~500 Pa s for HDPE 200 and ~320 Pa s for PP 340 ). Laminar flows with inherently low mixing capabilities are thus expected in polyolefin chemical recycling (Supplementary Note 1 ) 35 . Estimation of the stirring rates ( N ) required in a typical reactor vessel to reach the equivalent shear rates (secondary horizontal axis in Fig. 1b ) revealed that temperature effects and local variations in the viscosity of the melt under typical stirring rates can be disregarded (see Supplementary Note 1 for details).
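The conversion between stirring rate and shear rate used for the secondary axis of Fig. 1b can be approximated with a narrow-gap concentric-cylinder (Couette) model of the kind invoked in Supplementary Note 1. A minimal sketch, in which the stirrer radius and the stirrer–wall gap are illustrative assumptions rather than the dimensions of the actual reactor:

```python
import math

def shear_rate_from_rpm(rpm, r_stirrer=0.0125, gap=0.004):
    """Narrow-gap Couette estimate of the shear rate at the stirrer surface:
    gamma_dot ~ omega * r_stirrer / gap, with omega the angular velocity
    (rad s^-1). r_stirrer and gap (m) are illustrative, not the paper's
    geometry."""
    omega = 2.0 * math.pi * rpm / 60.0
    return omega * r_stirrer / gap  # s^-1
```

With these placeholder dimensions, 750 r.p.m. maps to a shear rate of roughly 250 s<sup>−1</sup>, well within the high-shear plateau of the flow curves in Fig. 1b, which is one way to see why local viscosity variations can be disregarded under typical stirring rates.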

The torque ( τ ) required to stir a non-Newtonian fluid is proportional to the product of its viscosity and a power law of the stirring rate, that is, τ   ∝   μN α (ref. 36 ), where α is an experimentally determined parameter dependent on the fluid. Taking into account that molten HDPE 200 and PP 340 have viscosities approximately one million times greater than that of water at room temperature ( \(\mu_{{\rm{H}}_2{\rm{O}},298{\rm{K}}}\)  ≈ 0.001 Pa s), we verified that the magnetic stirrers commonly available in laboratories are incapable of stirring high-molecular-weight polyolefins and presumably do not allow good control over the stirring rate for low-molecular-weight ones, as illustrated in Fig. 1c and Supplementary Videos 1 and 2 . Magnetic stirrers are suitable for viscosities lower than ~1.5 Pa s (ref. 37 ), whereas mechanical stirrers can be functional up to 10 5  Pa s (ref. 38 ).
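These viscosity limits translate into a trivial but useful pre-test check; the thresholds below are the approximate values quoted above (~1.5 Pa s for magnetic and ~10<sup>5</sup> Pa s for mechanical stirrers):

```python
def suitable_stirrers(viscosity, magnetic_limit=1.5, mechanical_limit=1e5):
    """Return the laboratory stirrer types able to handle a fluid of the
    given viscosity (Pa s), using the approximate limits cited in the text."""
    options = []
    if viscosity <= magnetic_limit:
        options.append("magnetic")
    if viscosity <= mechanical_limit:
        options.append("mechanical")
    return options

# Water at 298 K can be stirred either way; molten HDPE200 (~500 Pa s)
# exceeds the magnetic limit and requires mechanical stirring.
```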

Catalyst evaluation and computational fluid dynamics simulations

Experiments were performed in a four-parallel reactor set-up (see Methods , Supplementary Fig. 1 and Supplementary Video 3 for the stirring configuration). Ruthenium nanoparticles supported on titania (Ru/TiO 2 ), a state-of-the-art catalyst for the conversion of HDPE 200 , were used throughout the study (see Supplementary Fig. 2 for the characterization of the Ru/TiO 2 catalyst) 39 . The factors driving the performance (listed in Extended Data Fig. 1 ), within the experimental limitations described in Supplementary Note 2 , were systematically varied and translated into computational fluid dynamics (CFD) simulations, including the experimentally obtained viscosity dependencies, to describe the hydrogen–catalyst–melt contact over time (see Methods for a general description, Supplementary Note 3 for the scope and Supplementary Fig. 3 for convergence plots).

Initial experiments revealed optimum hydrogen pressures for HDPE 200 and PP 340 in accordance with the literature, likely due to competitive adsorption of the polyolefin and hydrogen on the metal surface (Supplementary Fig. 5 ); a pressure of 20 bar was selected for subsequent tests (Extended Data Fig. 2 , Supplementary Fig. 4 and Supplementary Tables 5 and 6 ) 40 , 41 . Simulations at different temperatures confirmed minimal differences in the distributions of viscosity and Reynolds number (Extended Data Fig. 1 and Supplementary Table 7 ), in line with Fig. 1b , which shows viscosity values almost independent of temperature at shear rates equivalent to stirring rates above approximately 15 r.p.m. in laboratory reactors (Supplementary Note 1 ). A commonly reported reaction temperature (498 K) was thus chosen for further analyses after performance tests (Extended Data Fig. 1 ) 42 , 43 .

Internal and external mass transport limitations

We first evaluated the ability of polymer chains to penetrate micropores and mesopores (see Supplementary Note 4 for the micrometer-sized pores usually found in shaped catalysts). The Freely Jointed Chain model (Supplementary Note 5 ) predicts a typical dimension for the folded chain ( Λ ) of ~22 nm for HDPE 200 44 , 45 , 46 . For chain lengths below Λ , polymer chains tend to gradually favor the linear conformation 47 . The relevant pore sizes that polyolefins, or their liquid products formed during reaction, may be unable to penetrate thus range from Λ  ≈ 1 nm (C 6 ) to Λ  ≈ 100 nm (high-molecular-weight polyethylene, M w  ≈ 5,000 kDa). These scales are represented in Fig. 2a and suggest that internal mass transport limitations in porous catalysts may become increasingly relevant, even for polyolefins with very high M w , as the reaction progresses toward shorter-chain products; further studies are therefore required. Internal mass transport phenomena were disregarded for simplicity here, as the Ru/TiO 2 used in our study showed an average pore size of 6 nm and a very low specific pore volume of 0.02 cm 3  g −1 . Experiments at different stirring rates using a particle diameter of 0.6 mm support this assumption (Supplementary Fig. 6 ).
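The ~22 nm folded-chain dimension can be reproduced with a Gaussian-chain (radius-of-gyration) estimate. The C–C bond length, CH<sub>2</sub> repeat-unit mass and characteristic ratio C<sub>∞</sub> used below are assumed textbook values for polyethylene, not the parameters of Supplementary Note 5:

```python
import math

def chain_size_nm(m_w_kda, m_repeat=14.0, l_cc=0.154, c_inf=8.0):
    """Radius of gyration of an ideal polyethylene chain,
    Rg = sqrt(C_inf * n * l^2 / 6), with n backbone bonds of length l (nm).
    m_repeat (g mol^-1 per CH2 unit) and c_inf are assumed textbook values."""
    n_bonds = m_w_kda * 1000.0 / m_repeat
    return math.sqrt(c_inf * n_bonds * l_cc ** 2 / 6.0)
```

Under these assumptions the estimate gives roughly 21 nm for a 200 kDa chain and just over 100 nm for 5,000 kDa, in line with the Λ scales quoted above; the size scales with the square root of the chain length.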

figure 2

a , Typical magnitudes of relevant length scales in the catalytic processing of consumer-grade plastics. Chain size refers to the typical dimensions of folded chains in low- and high-molecular-weight polymers. Reaction front refers to the characteristic penetration length of hydrogen in the melt before its concentration drops below 10% of its value at the H 2 -melt interface. b , Simulated decay of relative hydrogen concentration with distance from the H 2 –melt interface for different reaction rate constants ( k r ). \({\rm{C}}_{{{\rm{H}}_2},{\rm{int}}}\) refers to the concentration of H 2 at the H 2 -melt interface. Pseudo-first-order kinetics ( \(r=k_{\rm{r}}c_{{\rm{H}}_2}\) ) were considered, as explained in more detail in Supplementary Note 5 . The area shaded in gray indicates the typical range of k r values calculated in our experiments and reported in the literature. c , Schematic representation of the circulation of catalyst particles in the reaction vessel, with those exposed to hydrogen and polymer melt shown in green as active particles toward hydrogenolysis, as deduced from b , highlighting the fact that the reaction is mostly constrained to the vicinity of the H 2 –melt interface.

Regarding external mass transport limitations, an earlier study determined negligible external hydrogen gradients to catalyst particles immersed in the melt if equilibrium bulk concentrations of H 2 are reached 25 . However, the simulation of H 2 diffusion into molten HDPE 200 in the absence of reaction for a range of hydrogen pressures, times and viscosities (Supplementary Fig. 7 and Supplementary Notes 6 and 7 ) revealed a characteristic time for equilibration beyond typically reported reaction times. In view of this, we computed the decay of the H 2 concentration at the H 2 –melt interface assuming the direct reaction of H 2 with the melt after estimating that the observed reaction rate is around five times that of the diffusion rate of H 2 , as provided by the Hatta number (Extended Data Table 2 and Supplementary Note 7 ) 48 . Figure 2b shows the results for different reaction rate constants ( k r ), defined according to the expression for pseudo-first-order kinetics, \(r=k_{\rm{r}}c_{{\rm{H}}_2}\) , where r is the rate of the reaction and \(c_{{\rm{H}}_2}\) is the concentration of H 2 . A suitable range of k r was estimated from the typical hydrogen consumption and reaction times observed in our study and reported in the literature (see Supplementary Note 7 for details) 30 . Poorly active catalysts not yielding any liquid products are characterized by k r values of ∼ 1.5 × 10 −3  s −1 , whereas highly active systems able to provide 100% conversion into methane are expected to present k r values of ∼ 0.1 s −1 . These values translate into a range of concentration decays, highlighted in gray in Fig. 2b . Typically, we obtained values for k r of ∼ 0.01 s −1 . As observed in Fig. 2b , the concentration of hydrogen drops below 10% of the interface value within a few millimeters in all cases, strongly suggesting that the reaction is mostly confined to the vicinity of the H 2 –melt interface with a typical length ( λ ) of ∼ 10 −3  m, leaving most of the melt non-reactive (Fig. 2a ). A representation in which only catalyst particles exposed to this region are active toward the reaction (Fig. 2c ) is thus a suitable approximation to study the role of agitation in catalytic performance.
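The millimeter-scale reaction front follows from steady reaction–diffusion with the pseudo-first-order kinetics above, for which c(x)/c<sub>int</sub> = exp(−x√(k<sub>r</sub>/D)). A sketch, where the H<sub>2</sub> diffusivity in the melt is an assumed order-of-magnitude value, not the one used in the simulations:

```python
import math

def penetration_depth(k_r, d_h2=1e-8, fraction=0.10):
    """Distance from the H2-melt interface at which c/c_int drops to
    `fraction`, for c(x)/c_int = exp(-x * sqrt(k_r / d_h2)).
    d_h2 (m^2 s^-1) is an assumed order-of-magnitude diffusivity of H2
    in the melt; k_r (s^-1) is the pseudo-first-order rate constant."""
    return -math.log(fraction) * math.sqrt(d_h2 / k_r)
```

With k<sub>r</sub> = 0.01 s<sup>−1</sup> this yields about 2.3 mm, and even the slowest case (1.5 × 10<sup>−3</sup> s<sup>−1</sup>) stays below ~6 mm under the assumed diffusivity, consistent with the few-millimeter reaction front λ discussed above.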

Impact of catalyst particle circulation on performance

The previous analysis shows the benefit of stirring configurations that maximize the presence of catalyst particles in the vicinity of the H 2 –melt interface. A first analysis based on the ratio of gravitational to viscous forces, given by the Archimedes number (Ar; Extended Data Table 2 ), predicted Ar = 10 −8 –10 −7 , indicating that the density of the catalyst is expected to play a negligible role in particle motion (Supplementary Note 8 ). Nevertheless, the average catalyst particle diameter ( d p ) is important, as it determines the tendency of particles to follow melt streamlines according to the Stokes number (Stk; Extended Data Table 2 ) 49 . Taking λ as the characteristic length, Stk ≈ 10 −2 –10 0 for d p  = 10 −4 –10 −3  m. These values indicate that small catalyst particles will closely follow streamlines, whereas larger ones may deviate from them to an extent comparable to λ .
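The Ar and Stk definitions (Extended Data Table 2) are direct to evaluate. The melt and particle property values below are illustrative assumptions (a TiO<sub>2</sub>-based particle density and a ~500 Pa s melt), so the absolute numbers serve mainly to demonstrate the strong scaling with d<sub>p</sub>:

```python
def archimedes(d_p, rho_m=800.0, rho_p=4000.0, mu=500.0, g=9.81):
    """Archimedes number, Ar = g * d_p^3 * rho_m * (rho_p - rho_m) / mu^2:
    ratio of gravitational to viscous forces. All property values are
    illustrative assumptions, not the paper's inputs."""
    return g * d_p ** 3 * rho_m * (rho_p - rho_m) / mu ** 2

def stokes(d_p, u, char_len, rho_p=4000.0, mu=500.0):
    """Stokes number, Stk = rho_p * d_p^2 * u / (18 * mu * L): particle
    relaxation time over the flow time scale L / u."""
    return rho_p * d_p ** 2 * u / (18.0 * mu * char_len)
```

Ar scales with d<sub>p</sub><sup>3</sup> and Stk with d<sub>p</sub><sup>2</sup>, so larger sieve fractions deviate from streamlines far more readily than smaller ones; the absolute magnitudes shift with the assumed melt properties and particle velocity.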

We evaluated the importance of d p by comparing the catalytic performance of three different catalyst sieve fractions (0–0.2, 0.2–0.4 and 0.4–0.6 mm) in the hydrogenolysis of HDPE 200 (Fig. 3a and Supplementary Tables 5 and 6 ). Equivalent experiments with PP 340 did not show substantial variation in the total yield due to its low reactivity effectively limiting the performance of the catalyst (Supplementary Fig. 8 ). However, the same trend was confirmed by using the shorter and thus more reactive PP 12 (Supplementary Table 8 and Supplementary Fig. 9 ). We found the smallest sieve fraction to be beneficial, producing a 40% greater yield of the C 1 –C 45 products compared with the largest sieve fraction under the tested conditions. CFD simulations for d p  = 0.2, 0.4 and 0.6 mm, keeping the same stirrer geometry, predicted differences in the particle trajectories. Larger particles on average required longer times to leave the bottom of the reactor and tended toward a more irregular occupancy of the vessel volume (Fig. 3b and Supplementary Fig. 10 for the three modeled particle sizes). Having determined the benefits of smaller sieve fractions, d p  = 0.2 mm was used for the rest of the simulations in this study.

figure 3

a , Variation in the product distribution for the hydrogenolysis of HDPE 200 with catalyst sieve fraction using a propeller stirrer after 4 h. b , Corresponding three-phase CFD simulations using discrete phase modeling showing the trajectories of 0.2 and 0.6 mm catalyst particles under steady-state conditions: the total number of particles ( n p , with n p  = 196 for 0.2 mm and n p  = 58 for 0.6 mm; top) and representative initial trajectories of individual particles (bottom). c , Variation in the product distribution for the hydrogenolysis of HDPE 200 with stirrer type after 4 h. d , Corresponding CFD simulations (top views) of catalyst particle trajectories for different stirrers, colored according to the H 2 fraction in the vicinity. Simulations for other sieve fractions and parallel analyses for PP 340 can be found in Supplementary Figs. 8 – 10 and Supplementary Tables 5 – 7 . Simulated particle trajectories are presented in Supplementary Video 4 . Reaction and simulation conditions: T  = 498 K, \({p}_{{{\rm{H}}}_{2}}\)  = 20 bar, catalyst/plastic ratio = 0.05 and stirring rate = 750 r.p.m.

The stirrer imposes the flow pattern that catalyst particles follow (Supplementary Fig. 11 ). Figure 3c shows the product distribution for the catalytic hydrogenolysis of HDPE 200 using three different stirrer geometries under the same conditions (Supplementary Tables 5 and 6 ; see Supplementary Fig. 12 for PP 340 ). The yield of C 1 –C 45 products was not greatly affected by the stirrer type, whereas the product distribution shifted from gas to liquid fractions, with the amount of gaseous product decreasing in the order impeller > propeller > turbine. Because hydrogenolysis proceeds as a series of reactions, tuning the activity through the stirring strategy also tunes the selectivity; stirring conditions must therefore be reported to facilitate benchmarking. The total number of carbon–carbon bonds cleaved followed the same trend, as determined using the recently published procedure for counting backbone scission, isomerization and demethylation events 15 (Supplementary Table 9 ). Figure 3d and Supplementary Video 4 reveal critical differences between the stirrers. Propellers tend to split the catalyst particles into two separate zones with either high or low H 2 concentration. Impellers tend to keep catalyst particles circulating around the mid plane, where the H 2 concentration is high due to the V shape adopted by the H 2 –melt interface; impellers are thus better suited to optimizing catalyst use. The turbine is inefficient in transferring particles to H 2 -rich zones, leading to the modest generation of gaseous products (which require more molecules of H 2 per molecule of polymer). These effects can be quantitatively captured by considering the maximum value of the vertical component of the particle Reynolds number (Re p, z ,max ; Extended Data Figs. 3 and 4 , Supplementary Notes 9 and 10 , Supplementary Tables 7 , 10 and 11 , and Supplementary Fig. 13 ) as a first performance descriptor. Re p, z ,max can be derived from the melt properties, stirring rate, and particle and stirrer geometries, and can be linked to activity and therefore to changes in selectivity (Extended Data Fig. 2 ), offering a first tool to predict performance trends.
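Re<sub>p,z</sub> itself is simple to evaluate once the vertical particle velocity is extracted from the CFD fields; a sketch with illustrative melt properties (the paper derives u<sub>z</sub> from the simulations, not from an analytical expression):

```python
def particle_reynolds_z(d_p, u_z, rho_m=800.0, mu=500.0):
    """Vertical particle Reynolds number, Re_p,z = rho_m * d_p * |u_z| / mu.
    rho_m and mu are illustrative melt properties, not the paper's values."""
    return rho_m * d_p * abs(u_z) / mu

def re_p_z_max(d_p, u_z_samples, rho_m=800.0, mu=500.0):
    """Descriptor Re_p,z,max: the maximum Re_p,z over sampled vertical
    particle velocities along simulated trajectories."""
    return max(particle_reynolds_z(d_p, u, rho_m, mu) for u in u_z_samples)
```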

Criterion to maximize the catalyst effectiveness factor

The yield of C 1 –C 45 products did not increase monotonically with stirring rate, as shown in the catalytic tests for both polymers (Fig. 4a ). The existence of an optimum rate, in accord with some literature reports on the conversion of low-molecular-weight plastics 50 , 51 , led us to study the influence of the stirring rate on the extent of the H 2 –melt interface. CFD simulations developed for the three stirrer types (Fig. 4b ) predicted the potential of propellers and impellers to increase the interface. The ability of the simulations to reproduce the V shape of the H 2 –melt interface for highly viscous plastics is confirmed in Supplementary Video 3 . CFD simulations of the impeller at different stirring rates strongly hint at a relationship between the stirring rate and the H 2 –melt interfacial area (Fig. 4c ). Small variations in the distance between the base of the stirrer and the bottom of the vessel also led to small changes in the H 2 –melt interface; however, an excessive distance (with the top of the stirrer at the free melt surface) led to a decrease in the interface (Supplementary Fig. 14 ). In general, the shear-thinning character of molten plastics makes stirring effective only within the volume swept by the stirrer under rotation, or slightly beyond it, making it advisable to minimize the distance between the stirrer and the reactor walls.

figure 4

a , Variation in the product distribution with stirring rates for HDPE 200 and PP 340 with the impeller stirrer. b , c , Two-phase CFD simulations of the hydrogen fraction in the mid z – x plane for different stirrer types ( b ) and different stirring rates for the impeller stirrer ( c ). d , Correlation between the effectiveness factor, defined as the ratio between the yield of C 1 –C 45 and maximum yield of C 1 –C 45 in a , and the modified power number for HDPE 200 and PP 340 , calculated using the stirring rates in a and the simulated fraction of H 2 in Extended Data Fig. 5 . Reaction and simulation conditions: T  = 498 K, \({p}_{{{\rm{H}}}_{2}}\)  = 20 bar and catalyst/plastic ratio = 0.05.

Given the difficulty of calculating the extent of the H 2 –melt interface, we defined as a proxy the fraction of hydrogen ( \(\chi_{{\rm{H}}_2}\) ; Extended Data Table 2 ) in the volume contained between the bottom of the stirrer and the position of the H 2 –melt interface in the absence of stirring (Extended Data Fig. 1 ). The average Reynolds number in this region could serve as a descriptor for \(\chi_{{\rm{H}}_2}\) , as more turbulence (larger Re values) may lead to more pronounced hills and valleys on the surface of the melt. However, Re is not directly observable. For the case of Re  ≪  1 studied here, the power number and the Reynolds number are inversely proportional. The power number ( N p ) expresses the relationship between resistance and inertial forces and can be written in terms of \(\chi_{{\rm{H}}_2}\) and observable variables such as the torque ( τ ), the stirring rate ( N ), the average density of the melt ( \({\bar{\rho }}\) ) and the reactor diameter ( D ) (equation ( 1 ), Extended Data Table 2 and Supplementary Note 11 ) 24 , 52 .
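Equation (1) itself is not reproduced here, but with the standard definition of the power number, N<sub>p</sub> = P/(ρ̄N³D⁵) and P = 2πNτ, the torque and stirring rate recorded during a test give N<sub>p</sub> directly. A sketch, where the density and diameter values are illustrative and the choice of D follows whatever convention Extended Data Table 2 defines:

```python
import math

def power_number(torque, n_rps, rho=800.0, d=0.02):
    """Power number from measured torque (N m) and stirring rate (rev s^-1):
    N_p = P / (rho * N^3 * D^5) with P = 2*pi*N*torque, i.e.
    N_p = 2*pi*torque / (rho * N^2 * D^5).
    rho (kg m^-3) and d (m) are illustrative placeholder values."""
    return 2.0 * math.pi * torque / (rho * n_rps ** 2 * d ** 5)
```

At fixed torque, N<sub>p</sub> falls with the inverse square of the stirring rate, which is one way to see why a target N<sub>p</sub> window maps onto a window of stirring rates for a given melt and geometry.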

Extended Data Fig. 5 shows the relationship between \(\chi_{{\rm{H}}_2}\) and N p obtained from CFD simulations for HDPE 200 and PP 340 under the same conditions as used in Fig. 4a . The propeller and impeller stirrers yielded volcano behavior, with a maximum \(\chi_{{\rm{H}}_2}\)  ≈ 0.20–0.30 for N p  ≈ 10 4 –10 5 , shifted toward slightly lower values in the case of PP 340 . The difference between the optimal rates in Fig. 4a and Extended Data Fig. 5 can be ascribed to the lower average viscosity during reaction compared with that assumed in the simulations (Supplementary Note 12 ), although in practice this has a small impact, as \(\chi_{{\rm{H}}_2}\) remains around 0.2 for a broad range of stirring rates.

Plots of the effectiveness factor ( η ), defined as the ratio between the yield of C 1 –C 45 products and the maximum yield of C 1 –C 45 products over a series of experiments (equation ( 2 ) and Extended Data Table 1 ), versus the corresponding N p values based on the results presented in Fig. 4a show the optimal N p ranges for the two polymers ( \(N_{{\rm{p}},{\rm{HDPE}}_{200}}\)  ≈ 2 × 10 4 to 3 × 10 4 and \(N_{{\rm{p}},{\rm{PP}}_{340}}\)  ≈ 1.5 × 10 4 to 2.5 × 10 4 ) to achieve high η values (Fig. 4d ) and serve as a guide for the design of catalytic tests for performance optimization. From equation ( 1 ) and Extended Data Fig. 5 , it is possible, for a given stirrer geometry (stirrer type and D ), to select the stirring rate ( N ) and torque ( τ ) to be applied to deliver the desired N p value. Nevertheless, torque control is not a widely available feature of current reactor systems for catalyst evaluation, hindering the applicability of this criterion.
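The effectiveness factor of equation (2) amounts to normalizing each yield in a series by the series maximum; for a hypothetical set of yields:

```python
def effectiveness_factors(yields):
    """eta_i = Y_i / max(Y) across a series of experiments (equation (2)).
    `yields` would be the C1-C45 yields from runs at different stirring
    rates; the numbers used below are hypothetical."""
    y_max = max(yields)
    return [y / y_max for y in yields]

# Hypothetical yields at three increasing stirring rates; the optimum is
# the run with eta = 1, and eta < 1 flags sub-optimal stirring.
etas = effectiveness_factors([10.0, 25.0, 20.0])
```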

Use of the concentric cylinders model to describe the stirrer geometry (Supplementary Note 1 ) gives access to analytical relationships between viscosity, shear rate and torque, leading to an alternative expression for N p (Fig. 5 and Supplementary Note 11 ) that now includes contributions from the melt properties, stirring rate, fluid dynamics (through \(\chi_{{\rm{H}}_2}\) ), and reactor and stirrer geometry (through D , D r and L ; Extended Data Fig. 1 ). All of the variables are either directly observable or design parameters, except \(\chi_{{\rm{H}}_2}\) , which is available from Extended Data Fig. 1 and Supplementary Table 12 and depends on the stirrer type and plastic under treatment. Practitioners of catalysis can thus select appropriate combinations of stirring rate and reactor geometry to achieve the optimal N p ranges for a certain plastic. We note that deviations from the optimal range led to differences of up to 85% in activity and 40% in selectivity (Fig. 4a ).

figure 5

Parameters that can be approximated under typical reaction conditions or are known a priori are indicated. Ranges of optimal stirring rates for a given reactor and stirrer geometry can thus be calculated. \({\bar{\mu }}\) refers to the average viscosity of the melt, D r refers to the diameter of the stirrer and L to the height of the stirrer blades (Extended Data Table 1 ).

In the most common case, where the geometries of the stirrer and reactor are given, a first approximation to the optimal ranges of N p can be obtained from the values provided in Fig. 5 . Melt densities and average viscosities at typical operation temperatures (Fig. 1b ) and a reasonable value for \(\chi_{{\rm{H}}_2}\) of ∼ 0.2 (Extended Data Fig. 5 ) allow a straightforward calculation of the stirring rate ranges. For example, for D  = 2 cm, L  = 1 cm and D r  = 2.5 cm, the approximate ranges for high catalyst effectiveness factors would be N  = 880–1,300 r.p.m. for HDPE 200 and N  = 760–1,100 r.p.m. for PP 340 . This criterion was shown to be valid in the range of most reported operation pressures (20–30 bar), with lower pressures (10 bar) showing behavior compatible with H 2 depletion as the limiting factor (Supplementary Table 5 and Supplementary Fig. 15 ). These results, together with the small variation in viscosity at commonly applied temperatures (Fig. 1 ), make the criterion pressure- and temperature-independent under most reported conditions.

Model scope and future directions

As the average chain length of the hydrocarbons decreases due to cleavage, so does the viscosity, spanning six orders of magnitude until reaching values close to that of water (Fig. 1 ). The ability of the criterion to predict performance as the reaction progresses was thus investigated next.

We hypothesized that the transition from the initial non-Newtonian character to a Newtonian one, which facilitates the creation of turbulence 24 , may change the structure of the H 2 –melt interface. The Freely Jointed Chain model predicts a transition chain length of around C 200 (Supplementary Note 5 ). With this in mind, we simulated stirring patterns for HDPE 100 (non-Newtonian, a proxy for low conversion stages), a hypothetical C 200 under Newtonian and non-Newtonian regimes, and eicosane (C 20 , Newtonian, a proxy for high conversion stages). The results clearly reflect the transition from a single H 2 –melt interface to an abundance of H 2 bubbles populating the melt (Extended Data Fig. 6 and Supplementary Fig. 16 ), in agreement with direct observations of turbulence starting to dominate as the viscosity decreases (Supplementary Video 3 ). We then performed catalytic tests on HDPE 100 and eicosane (Supplementary Table 8 and Fig. 6a,b ), calculated \(\chi_{{\rm{H}}_2}\) (Supplementary Table 13 and Supplementary Fig. 17 ) and applied the criterion (Fig. 6c ). The non-Newtonian melts of HDPE 200 and HDPE 100 exhibited very similar trends, with identical optimal stirring rates (although different N p values due to their different viscosities), whereas eicosane displayed a C-shaped relationship between effectiveness and N p , clearly suggesting the need for a different modeling strategy for the later stages of the reaction (or for the catalytic hydrogenolysis of surrogate molecules or the often-used very-low-molecular-weight plastics). The transition seems to occur at N p  = 10 2 –10 3 , corresponding to viscosities of around 3–30 Pa s at 1,000 r.p.m., thereby validating the proposed criterion up to the later stages of the reaction.

figure 6

a , b , Variation in the product distribution with stirring rate and two-phase CFD simulations of the hydrogen fraction in the mid z – x plane for HDPE 100 ( a ) and eicosane ( b ). c , Correlation between the effectiveness factor, defined as the ratio between the yield of C 1 –C 45 and the maximum yield of C 1 –C 45 for HDPE and between the yield of methane and the maximum yield of methane for eicosane, and the modified power number, calculated using the simulated H 2 fractions in Extended Data Fig. 5 , using an impeller as stirrer. d , Temperature distribution in the mid x – y plane for different stirrer geometries when the thermocouple reaches the operation temperature (498 K, at the position indicated). e , Temporal evolution of the temperature distribution in the x – z plane for different reactor diameters ( D r ). Reaction and simulated conditions: T  = 498 K, \({p}_{{{\rm{H}}}_{2}}\)  = 20 bar and catalyst/plastic ratio = 0.05.

In addition to the analysis of mass transport limitations, we also investigated heat transport constraints. We simulated the largest possible temperature gradient within the reactor during operation with three different stirrer configurations when the temperature at the thermocouple reaches the set temperature (498 K in our case, equal to that imposed on the reactor walls). Figure 6d shows the temperature distribution in the reactor, which resembles the Reynolds number distribution (Extended Data Fig. 2 ), with gradients of approximately 100 K for the best impeller and propeller geometries. This led us to conduct temporal simulations to predict the time for the gradient to fall below 10 K. Figure 6e (top) shows a time of 15 min for the worst-case scenario of walls at the set temperature and the interior at room temperature at t  = 0 with a stagnant and non-reactive melt (see Methods for more details), representing only 6% of the operation time (4 h). From a forward-looking perspective, more refined models able to predict product distributions stepwise, and thus recommend optimal operation times, will become possible once kinetic descriptions for the catalyst under study are incorporated. Alternatively, developing operando tools to track viscosity could also guide the choice of optimal reaction times. We also highlight the generality of the applied analysis, which could be adapted to future reactor architectures operating in continuous mode. In this direction, processes such as continuous reactive extrusion 53 are first steps, which would also enable the online analysis of products.
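The ~15 min equilibration time is consistent with a back-of-the-envelope conduction estimate, t ≈ L²/α with α = k/(ρc<sub>p</sub>); the thermophysical properties below are assumed textbook values for a polyolefin melt, not the simulation inputs:

```python
def conduction_time(length, k=0.25, rho=800.0, c_p=2500.0):
    """Characteristic conduction time t ~ L^2 / alpha, alpha = k / (rho*c_p).
    k (W m^-1 K^-1), rho (kg m^-3) and c_p (J kg^-1 K^-1) are assumed
    textbook values for a polyolefin melt; length is in meters."""
    alpha = k / (rho * c_p)
    return length ** 2 / alpha
```

For a 1 cm length scale this gives roughly 800 s (~13 min), the same order as the 15 min simulated for a stagnant melt.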

The importance of stirring strategies for optimizing the potential of catalytic materials in hydrogenolysis and hydrocracking has been quantitatively established for virgin consumer-grade polyolefins. Our results show that mechanical stirring is highly recommended, even for low-molecular-weight plastics. The reaction can be assumed to develop in a millimeter-scale region next to the H 2 –melt interface for moderately and highly active catalysts. Stirring can thus be seen as a means to maximize the presence of particles in this region. Stirrer geometries largely determine the location of particles and thus the performance, which decreases in the order impeller > propeller > turbine. A temperature- and pressure-independent criterion to maximize the catalyst effectiveness factor has been developed based on the observed correlation between the power number and the extent of the H 2 –melt interface. For a given stirrer and reactor geometry, stirring rates for non-Newtonian melts (viscosities greater than 3–30 Pa s) can easily be calculated to operate within the optimal power number values (2 × 10 4 to 3 × 10 4 for HDPE 200 and 1.5 × 10 4 to 2.5 × 10 4 for PP 340 ). Future criteria incorporating heat transport gradients will be key for the successful scale-up of this technology. This work provides readily implementable tools to maximize and tune the performance of catalysts, facilitating the standardization of catalyst evaluation and underscoring the key role of engineering considerations in catalyst development programs.

Rheological measurements

The viscosity of HDPE 200 and PP 340 (Supplementary Table 1 ) was measured using an Ares-G2 rheometer equipped with a separate motor and transducer (TA Instruments). A parallel plate geometry with a diameter of 25 mm was used. The temperature was regulated by convection, and flow curves were generated using a shear rate sweep from 50 s −1 to 0.01 s −1 . The viscosity–shear rate data were fitted assuming the plastic melts to behave as Carreau fluids.
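The Carreau model used for the fits has the standard form μ(γ̇) = μ<sub>∞</sub> + (μ<sub>0</sub> − μ<sub>∞</sub>)[1 + (λγ̇)²]^((n−1)/2); the parameter values below are illustrative placeholders, not the fitted values for HDPE<sub>200</sub> or PP<sub>340</sub>:

```python
def carreau(gamma_dot, mu_0=5000.0, mu_inf=500.0, lam=1.0, n=0.4):
    """Carreau viscosity (Pa s) as a function of shear rate (s^-1).
    mu_0: zero-shear viscosity; mu_inf: infinite-shear plateau; lam:
    relaxation time (s); n: power-law index. All parameter values are
    illustrative placeholders."""
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)
```

The model recovers a zero-shear plateau at low shear rates and the shear-thinning convergence toward μ<sub>∞</sub> at high shear rates, matching the qualitative shape of the flow curves in Fig. 1b.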

Catalyst synthesis

Commercially available titanium oxide (anatase, Sigma–Aldrich) was used as the support and calcined in ceramic crucibles in static nitrogen (5 h, 573 K and 5 K min −1 ) before synthesis. Typically, 0.30 g TiO 2 was dry-impregnated with 1.04 cm 3 of a solution of ruthenium nitrosyl nitrate (Ru(NO)(NO 3 ) x (OH) y , x  +  y  = 3) in dilute HNO 3 (0.015 g cm −3 ; Sigma–Aldrich) to achieve a nominal 5 wt% loading, mixing with a glass-coated magnetic stir bar (VWR Chemicals) until the solvent had evaporated. Residual solvent was removed under vacuum (12 h, 353 K and 80 mbar). Finally, the sample was heated in nitrogen at 573 K. The prepared catalyst was then pressed and sieved into different sieve fractions (0.0−0.2, 0.2−0.4 and 0.4−0.6 mm).

Catalyst characterization

High-angle annular dark-field scanning transmission electron microscopy and energy-dispersive X-ray spectroscopy were carried out on a probe-corrected Titan Themis microscope operated at 300 kV. Samples were pretreated to remove adventitious compounds in Ar/O 2 plasma before being inserted into the microscope. X-ray photoelectron spectroscopy (XPS) was conducted using a Physical Electronics Quantera SXM spectrometer. Monochromatic Al Kα radiation (1,486.6 eV) generated by an electron beam (15 kV and 49.3 W) was used to irradiate the samples with a spot size of 200 μm. The finely ground samples were pressed into indium foil (99.9%, Alfa Aesar) and then mounted onto the sample holder. During measurement, electron and ion neutralizers were operated simultaneously to suppress undesired sample charging. High-resolution spectra were obtained using pass energies of 55 eV, while the Au f 7/2 signal at 84 ± 0.1 eV was used for calibration. All calculations were performed with the CasaXPS (Casa software), and the relative sensitivity factors used for quantification were taken from the instrument. All spectra were deconvoluted into Gaussian–Lorentzian components after application of the Shirley background. X-ray diffraction was conducted using a Rigaku SmartLab diffractometer equipped with a D/teX Ultra 250 detector and Cu Kα radiation ( λ  = 0.1541 nm) operating in Bragg−Brentano geometry. Data were acquired in the 2 θ range of 20–80° with an angular step size of 0.025° and a counting time of 1.5 s per step. Temperature-programmed desorption experiments were conducted using a Micromeritics AutoChem II 2920 instrument. Signals were acquired with a pre-equipped thermal conductivity detector (TCD) and an attached Pfeiffer Vacuum OmniStar GSD 320 O mass spectrometer (MS). Samples of ~0.1 g were dried at 473 K for 1 h, followed by saturation with either n -C 7 H 16 or 2,4-dimethylpentane (contained in an attached vaporizer). 
In the subsequent desorption experiment, a heating ramp (313–873 K at 10 K min −1 ) was applied while using both the TCD and MS to monitor the evolution of the probe gas.
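The diffraction angles from the XRD measurements can be converted to lattice spacings via Bragg's law, n λ = 2 d sin θ, with the Cu Kα wavelength given above; a minimal sketch, in which the example 2 θ value is illustrative (the anatase (101) reflection near 25.3°), not taken from the measured patterns.

```python
import math

# Convert a 2-theta value (deg) to a d-spacing (nm) via Bragg's law, n*lambda = 2*d*sin(theta).
WAVELENGTH = 0.1541  # nm, Cu K-alpha, as used in the XRD measurements

def d_spacing(two_theta_deg, n=1):
    theta = math.radians(two_theta_deg / 2)
    return n * WAVELENGTH / (2 * math.sin(theta))

# Illustrative value: the anatase (101) reflection near 2-theta = 25.3 deg
print(f"d(101) ~ {d_spacing(25.3):.3f} nm")  # ~0.352 nm
```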

Catalyst evaluation

The catalysts were evaluated in a parallel batch reactor set-up (BüchiGlasUster) fitted with an electrical heating jacket and an active cooling unit (chilled water system). Typically, 0.5 g virgin polyethylene or polypropylene and 0.025 g catalyst were placed inside a glass inset, which was then placed inside the reactor (Supplementary Fig. 1 ). Catalyst particles were always found at the bottom of the reactor after reaction, regardless of the catalyst activity. The distance between the base of the stirrer and the bottom of the reactor was minimized; we also recommend using a melt volume that closely matches the envelope of the stirrer, which minimizes the melt zones between the stirrer blades and reactor walls where stirring is highly ineffective owing to the shear-thinning nature of molten plastics. The reactors were flushed first with nitrogen and then with hydrogen before being pressurized to the desired pressure. All reactor systems were equipped with mechanical stirring, temperature/pressure control systems and gas sampling lines. A schematic of the reactor set-up, including the most relevant dimensions, is shown in Extended Data Fig. 1 , and the experimental limitations are described in Supplementary Note 2 . Instantaneous values of temperature, pressure and stirring torque were recorded using the SYSTAG Flexsys software 54 . The temperature in the reactor was measured using a thermocouple immersed in the polyolefin melt in the space between the stirrer and the reactor wall. Before the reaction, the reactor was weighed along with all of its contents. The reaction mixture was heated and stirred (at 498 K and 750 r.p.m. unless otherwise specified) for a set reaction time. After the reaction, the vessels were cooled with circulating chilled water. Three different commercially available stirrer types (acquired from BüchiGlasUster) were used, namely, a turbine, a propeller and an impeller. All three had the same blade height (1 cm) and diameter (2.8 cm). 
The glass inset containing the reaction mixture had a diameter of 3 cm (Extended Data Fig. 1 ) and was inserted into a stainless-steel reactor placed in a heating jacket. The glass inset was fabricated so that its outer diameter closely fitted the inner diameter of the reactor (Supplementary Fig. 1 ). A detailed description of the experimentally available range of operating conditions is provided in Supplementary Note 2 .
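To give a feel for the flow regime implied by this geometry, the stirrer tip speed and impeller Reynolds number can be estimated from the reported dimensions and stirring rate; a minimal sketch in which the melt density and apparent viscosity are assumed illustrative values, not the measured ones.

```python
import math

# Stirrer tip speed and impeller Reynolds number for the reported geometry
# (diameter 2.8 cm, 750 r.p.m.). Density and viscosity below are assumed
# illustrative values for a polyolefin melt, not measured quantities.

d = 0.028        # m, stirrer diameter
n = 750 / 60     # rev/s
rho = 800.0      # kg m^-3, assumed melt density
mu = 100.0       # Pa s, assumed apparent melt viscosity

v_tip = math.pi * d * n          # m/s, blade tip speed
re_imp = rho * n * d ** 2 / mu   # impeller Reynolds number, Re = rho*n*d^2/mu

print(f"tip speed ~ {v_tip:.2f} m/s, Re ~ {re_imp:.1e}")  # deeply laminar regime
```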

Product analysis

The gaseous products were collected from the headspace of the reactor using a sampling cylinder and analyzed using a gas chromatograph (HP Agilent 6890) equipped with a 25 m × 0.53 mm × 20 μm column (Agilent J&W PoraPlot Q) and a flame ionization detector (FID). A temperature ramp of 308–573 K (5 K min −1 ) was applied, while the inlet and FID were held at 573 K and 473 K, respectively. The gas chromatograph (GC) columns were calibrated following a procedure reported elsewhere for products C 1 –C 45 (ref. 55 ); the calibration was performed for products C 1 –C 5 , detected by FID, using a standard refinery gas mixture (Agilent P/N 5080-8755). The reactor inset was then weighed to calculate the amount of gas formed. The products remaining inside the inset were dissolved in dichloromethane using sonication and filtered through a syringe filter for GC-FID and 1 H NMR analysis. GC-FID analysis was performed on a GC (HP Agilent 6890) equipped with a 15 m × 0.25 mm × 0.10 μm column (HP DB-5 HT). A temperature ramp of 313–648 K (4 K min −1 ) was applied, while the FID was held at 613 K. The initial and final hold times were set at 2 and 10 min, respectively. Calibration was performed for C 7 –C 40 alkanes using a certified reference mixture (C 7 –C 40 in hexane, 1 mg cm −3 , traceCERT, Sigma–Aldrich). For each 1 H NMR experiment, conducted on a 300 MHz Bruker Ultrashield spectrometer, 0.45 cm 3 of sample and 0.05 cm 3 of [D 2 ]dichloromethane were mixed and analyzed using a solvent suppression method reported elsewhere 2 . The signals in the aliphatic region of the 1 H NMR spectra were numerically integrated to determine the ratio of primary to secondary carbon atoms, with the area corresponding to each carbon type normalized by the number of hydrogen atoms bonded to that carbon.
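The normalization step at the end of the NMR analysis can be sketched as follows; the integral values below are hypothetical placeholders, for illustration only (the measured integrals are not reproduced here).

```python
# Sketch of the 1H NMR normalization: integrated areas in the aliphatic region
# are divided by the number of H atoms per carbon type, so that the resulting
# values compare carbon counts rather than proton counts.
# Integral values are hypothetical, for illustration only.

area_ch3 = 0.90   # integral of the CH3 (primary carbon) region; 3 H per carbon
area_ch2 = 9.00   # integral of the CH2 (secondary carbon) region; 2 H per carbon

n_primary = area_ch3 / 3
n_secondary = area_ch2 / 2
print(f"primary/secondary carbon ratio: {n_primary / n_secondary:.3f}")  # 0.067, i.e. ~1:15
```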

CFD simulations

CFD simulations were performed with Ansys Academic Research Fluent (Release 2023 R1) using a double-precision steady-state solver. Supplementary Note 3 describes the scope and limitations of the simulations when applied to polyolefin hydrogenolysis. The geometry of the reactor was created in Ansys SpaceClaim on a 1:1 scale with the actual dimensions. The geometry was meshed using the watertight geometry workflow of Ansys Fluent Meshing, resulting in 0.5 million polyhedral cells. The stirring within the reactor was simulated using the moving reference frame method. Fluid flow was computed using the renormalization group (RNG) k–ε model with swirl-dominated flow and the Menter–Lechner near-wall treatment. The experimentally measured viscosity of the plastic melt was described by the Carreau–Yasuda model and implemented in Fluent as an interpreted function. The power number was calculated as a user-defined function within Fluent. The pressure–velocity coupled solver used the Rhie–Chow momentum-based flux type auto-selected by Fluent, which was run until residuals of 10 −5 were reached for continuity, the x , y and z velocities, the turbulent kinetic energy ( k ) and its rate of dissipation ( ε ).
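The Carreau–Yasuda model used for the melt viscosity has the form η(γ̇) = η ∞ + (η 0 − η ∞ )[1 + (λγ̇) a ] (n−1)/a ; a minimal sketch follows, in which all parameter values are placeholders chosen only to illustrate shear thinning, not the fitted values for these melts.

```python
# Carreau-Yasuda viscosity model, as used for the plastic melt in the CFD set-up:
#   eta(g) = eta_inf + (eta_0 - eta_inf) * (1 + (lam*g)**a) ** ((n - 1) / a)
# Parameter values are placeholders; the fitted values for the melts are not given here.

def carreau_yasuda(shear_rate, eta_0=1e4, eta_inf=1.0, lam=1.0, n=0.4, a=2.0):
    return eta_inf + (eta_0 - eta_inf) * (1 + (lam * shear_rate) ** a) ** ((n - 1) / a)

# Viscosity falls with shear rate (shear thinning), plateauing at eta_0 at low shear
for g in (0.01, 1.0, 100.0):
    print(f"shear rate {g:8.2f} 1/s -> viscosity {carreau_yasuda(g):10.1f} Pa s")
```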

The two-phase system of hydrogen and molten polyolefin was simulated using the volume-of-fluid (VOF) method with sharp interfaces, continuum surface stress and the no-slip condition. The VOF simulations were achieved by solving the continuity equation ( 3 ), where ρ q , α q and v q refer to the density, volume fraction and velocity of a given phase q , \({\dot{m}}_{sq}\) and \({\dot{m}}_{qs}\) refer to the mass transfer between phases q and s within the multiphase system, t refers to time and S indicates a mass transfer source (none in this case). The validity of this equation is subject to equation ( 4 ). A condition of no mass transfer between the phases was assumed, which results in equation ( 5 ). The volume fraction of the secondary phase (in this case the molten plastic) was computed first, followed by that of the primary phase (hydrogen) using the constraint that the sum of the volume fractions of all phases must equal 1. The volume fraction equation was solved using an implicit time-discretization scheme given by equation ( 6 ), where V refers to the volume of the cell, U f refers to the volume flux through the face based on the normal velocity, n refers to the previous iteration step, n  + 1 to the current iteration step, α q,f is the face value of the q th volume fraction computed through either the first- or second-order upwind scheme and Δ t is the time step. The volume used to compute the H 2 fraction was a cylinder delimited by the free surface of the melt and the reactor walls, extending to 2 mm below the stirrer base; preliminary tests showed that this volume offered the best compromise for capturing the H 2 –melt interface under all of the tested conditions.
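For reference, the phase continuity relation referred to as equation (3), together with constraints (4) and (5), can be written out from the symbols defined above. This is a hedged reconstruction following the standard VOF formulation; the typeset equations in the published article take precedence.

```latex
% Hedged reconstruction of equations (3)-(5) from the symbols defined in the text,
% following the standard VOF formulation; the published forms take precedence.
\frac{1}{\rho_q}\left[\frac{\partial}{\partial t}\left(\alpha_q\rho_q\right)
  + \nabla\cdot\left(\alpha_q\rho_q\mathbf{v}_q\right)\right]
  = \frac{1}{\rho_q}\left[S + \sum_{s}\left(\dot{m}_{sq}-\dot{m}_{qs}\right)\right]
  \tag{3}

\sum_{q}\alpha_q = 1 \tag{4}

% With no interphase mass transfer and no source, (3) reduces to:
\frac{\partial}{\partial t}\left(\alpha_q\rho_q\right)
  + \nabla\cdot\left(\alpha_q\rho_q\mathbf{v}_q\right) = 0
  \qquad \left(\dot{m}_{sq}=\dot{m}_{qs}=S=0\right) \tag{5}
```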

Particles within the melt were simulated using the discrete phase modeling method of Ansys Fluent with the particles being allowed to interact with the continuous phase. Particles were introduced at the bottom of the vessel during simulations to reflect the fact that they were always found at the bottom of the reactor after reaction regardless of the catalyst activity. Virtual mass force and pressure gradient force models were employed along with two-way turbulence coupling. The trajectory of each particle was computed using the force balance on the particle provided by equation ( 7 ), where v p and v m are the velocities of the particles and fluid (multiphase mix of plastic melt and hydrogen), respectively, g x is the gravitational acceleration (9.81 m s −2 ), and ρ p and ρ m are the densities of the particle and fluid, respectively. The drag force ( F D ) was computed using equation ( 8 ), where C D is the coefficient of drag, Re p is the particle Reynolds number (computed with equation ( 9 )) and d p is the particle diameter. F x 1 represents the virtual mass force (equation ( 10 )) and F x 2 is the pressure gradient force (equation ( 11 )).
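The particle Reynolds number of equation (9) can be sketched together with a drag closure; the Schiller–Naumann correlation below is an assumed spherical-drag model (the specific C D law used in the simulations is not stated here), and the fluid properties are illustrative values.

```python
# Sketch of the particle drag terms (cf. equations (8)-(9) in the text).
# Assumptions: Schiller-Naumann correlation for C_D (the drag law actually used
# in the simulations is not stated); fluid properties are illustrative values.

def particle_reynolds(rho_m, d_p, v_rel, mu):
    """Re_p = rho_m * d_p * |v_p - v_m| / mu, as in equation (9)."""
    return rho_m * d_p * abs(v_rel) / mu

def drag_coefficient(re_p):
    """Schiller-Naumann spherical-drag correlation, valid up to Re_p ~ 1000."""
    return 24.0 / re_p * (1.0 + 0.15 * re_p ** 0.687)

# Illustrative case: 0.2 mm particle moving at 0.1 m/s relative to a viscous melt
re_p = particle_reynolds(rho_m=800.0, d_p=2e-4, v_rel=0.1, mu=100.0)
print(f"Re_p = {re_p:.2e}")  # creeping-flow regime: drag is essentially Stokesian
```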

The sources of the discrete phase model were updated every flow iteration. The particles were injected into the two-phase system of molten polyolefin and hydrogen after the VOF model had been fully resolved. Particle injection was achieved using an injection file generated by a Python script to randomly generate particles at coordinates ( x , y , z ) near the base of the stirrer.

Hydrogen diffusion across a static polyolefin melt was simulated with the COMSOL Multiphysics software using a time-dependent dilute species diffusion model (in all three dimensions) constructed using Fick’s law of diffusion. The concentration of hydrogen was varied from 25 to 100 mol m −3 , corresponding to different pressures of hydrogen. The diffusivity of hydrogen through the polyolefin was varied between 10 −8 and 5 × 10 −8  m 2  s −1 , corresponding to different polyolefin M w , as explained in Supplementary Note 6 . The time-dependent temperature profile in a static polyolefin melt (with the reactor wall temperature set at 498 K) was simulated with the COMSOL Multiphysics software 56 .
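An order-of-magnitude check of the hydrogen penetration time across a static melt layer follows from the Fickian scaling t ~ L 2 /D; in this sketch the 1 cm melt depth is an assumed illustrative value, while the diffusivity range is the one quoted above.

```python
# Order-of-magnitude hydrogen penetration time across a static melt layer,
# using the Fickian scaling t ~ L^2 / D. The 1 cm melt depth is an assumed
# illustrative value; the diffusivity range is the one quoted in the text.

L = 1e-2  # m, assumed melt depth
for D in (1e-8, 5e-8):  # m^2 s^-1, diffusivity range corresponding to different Mw
    t = L ** 2 / D
    print(f"D = {D:.0e} m^2/s -> t ~ {t:.0f} s ({t / 60:.1f} min)")
```

Even at the upper end of the diffusivity range, penetration takes tens of minutes, which illustrates why static (unstirred) melts can be transport-limited.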

The steady-state temperature profiles in polyolefin melts stirred with various stirrer geometries were simulated by adding energy dissipation equations to the volume-of-fluid and discrete particle method (VOF-DPM) model in ANSYS Fluent (ref. 57 ). The heat transfer coefficients and heat capacities were taken from libraries embedded within ANSYS. As the stirring simulations were always conducted under steady-state conditions, the reported results correspond to the case where the temperature at the position of the thermocouple reached the set temperature.

Data availability

The data presented in the figures of this paper are publicly available via Zenodo at https://doi.org/10.5281/zenodo.10812922 (ref. 58 ). Other supporting data are available from the corresponding authors upon request. Source data are provided with this paper.

Code availability

No new code was generated in this work.

Hayden, B., Canuel, N. & Shanse, J. What was brewing in the Natufian? An archaeological assessment of brewing technology in the Epipaleolithic. J. Archaeol. Method Theory 20 , 102–150 (2013).

Williams, F. J. Widnes and the early chemical industry 1847–71. A case of occupational mobility in the industrial revolution. Trans. Hist. Soc. Lancs. Ches. 134 , 89–105 (1984).

Damköhler, G. Einflüsse der Strömung, Diffusion und des Wärmeüberganges auf die Leistung von Reaktionsöfen: I. Allgemeine Gesichtspunkte für die Übertragung eines chemischen Prozesses aus dem Kleinen ins Große. Z. Elektrochem. Angew. Phys. Chem. 42 , 846–862 (1936).

Thiele, E. W. Relation between catalytic activity and size of particle. Ind. Eng. Chem. Res. 31 , 916–921 (1939).

Pereira, C. J., Carberry, J. J. & Varma, A. Uniqueness criteria for first order catalytic reactions with external transport limitations. Chem. Eng. Sci. 34 , 249–255 (1979).

Carberry, J. J. in Chemical and Catalytic Reaction Engineering 474–519 (Dover Publications, 1991).

Geyer, R., Jambeck, J. R. & Law, K. L. Production, use, and fate of all plastics ever made. Sci. Adv. 3 , e1700782 (2017).

Mitchell, S., Martín, A. J. & Pérez-Ramírez, J. Transcending scales in catalysis for sustainable development. Nat. Chem. Eng. 1 , 13–15 (2024).

Shirazimoghaddam, S., Amin, I., Faria Albanese, J. A. & Shiju, N. R. Chemical recycling of used PET by glycolysis using niobia-based catalysts. ACS Eng. Au 3 , 37–44 (2023).

Sullivan, K. P. et al. Mixed plastics waste valorization through tandem chemical oxidation and biological funneling. Science 378 , 207–211 (2022).

Martín, A. J., Mondelli, C., Jaydev, S. D. & Pérez-Ramírez, J. Catalytic processing of plastic waste on the rise. Chem 7 , 1487–1533 (2021).

Vollmer, I. et al. Beyond mechanical recycling: giving new life to plastic waste. Angew. Chem. Int. Ed. 59 , 15402–15423 (2020).

Li, H. et al. Expanding plastics recycling technologies: chemical aspects, technology status and challenges. Green Chem. 24 , 8899–9002 (2022).

Hancock, J. N. & Rorrer, J. E. Hydrogen-free catalytic depolymerization of waste polyolefins at mild temperatures. Appl. Catal. B 338 , 123071 (2023).

Jaydev, S. D., Usteri, M. E., Martín, A. J. & Pérez-Ramírez, J. Identifying selective catalysts in polypropylene hydrogenolysis by decoupling scission pathways. Chem Catal. 3 , 100564 (2023).

Jaydev, S. D., Martín, A. J. & Pérez-Ramírez, J. Direct conversion of polypropylene into liquid hydrocarbons on carbon-supported platinum catalysts. ChemSusChem 14 , 5179–5185 (2021).

Kots, P. A. et al. Polypropylene plastic waste conversion to lubricants over Ru/TiO 2 catalysts. ACS Catal. 11 , 8104–8115 (2021).

Celik, G. et al. Upcycling single-use polyethylene into high-quality liquid products. ACS Cent. Sci. 5 , 1795–1803 (2019).

Abbas-Abadi, M. S., Haghighi, M. N. & Yeganeh, H. Evaluation of pyrolysis product of virgin high density polyethylene degradation using different process parameters in a stirred reactor. Fuel Process. Technol. 109 , 90–95 (2013).

Elordi, G., Olazar, M., Lopez, G., Artetxe, M. & Bilbao, J. Product yields and compositions in the continuous pyrolysis of high-density polyethylene in a conical spouted bed reactor. Ind. Eng. Chem. Res. 50 , 6650–6659 (2011).

Conesa, J. A., Font, R., Marcilla, A. & Garcia, A. N. Pyrolysis of polyethylene in a fluidized bed reactor. Energy Fuels 8 , 1238–1246 (1994).

Mastral, J. F., Berrueco, C. & Ceamanos, J. Pyrolysis of high-density polyethylene in free-fall reactors in series. Energy Fuels 20 , 1365–1371 (2006).

Todd, D. B. in Handbook of Industrial Mixing 987–1025 (John Wiley, 2003).

Zlokarnik, M. in Stirring 76–96 (Wiley-VCH, 2001).

Ge, J. & Peters, B. Mass transfer in catalytic depolymerization: external effectiveness factors and serendipitous processivity in stagnant and stirred melts. Chem. Eng. J. 466 , 143251 (2023).

Rejman, S. et al. Transport limitations in polyolefin cracking at the single catalyst particle level. Chem. Sci. 14 , 10068–10080 (2023).

Liu, K. & Meuzelaar, H. L. C. Catalytic reactions in waste plastics, HDPE and coal studied by high-pressure thermogravimetry with on-line GC/MS. Fuel Process. Technol. 49 , 1–15 (1996).

Serrano, D. P., Aguado, J., Escola, J. M. & Rodríguez, J. M. Influence of nanocrystalline HZSM-5 external surface on the catalytic cracking of polyolefins. J. Anal. Appl. Pyrolysis 74 , 353–360 (2005).

Jerdy, A. C. et al. Deconvoluting the roles of polyolefin branching and unsaturation on depolymerization reactions over acid catalysts. Appl. Catal. B 337 , 122986 (2023).

Tennakoon, A. et al. Catalytic upcycling of high-density polyethylene via a processive mechanism. Nat. Catal. 3 , 893–901 (2020).

Lee, Y. H., Sun, J., Scott, S. L. & Abu-Omar, M. M. Quantitative analyses of products and rates in polyethylene depolymerization and upcycling. STAR Protoc. 4 , 102575 (2023).

Henzler, H.-J. & Obernosterer, G. Effect of mixing behaviour on gas-liquid mass transfer in highly viscous, stirred non-Newtonian liquids. Chem. Eng. Technol. 14 , 1–10 (1991).

Kang, Q., Liu, J., Feng, X., Yang, C. & Wang, J. Isolated mixing regions and mixing enhancement in a high-viscosity laminar stirred tank. Chin. J. Chem. Eng. 41 , 176–192 (2022).

Bremner, T., Rudin, A. & Cook, D. G. Melt flow index values and molecular weight distributions of commercial thermoplastics. J. Appl. Polym. Sci. 41 , 1617–1627 (1990).

Ng, K. Y. & Erwin, L. Experiments in extensive mixing in laminar flow. I. Simple illustrations. Polym. Eng. Sci. 21 , 212–217 (1981).

Cortada-Garcia, M., Dore, V., Mazzei, L. & Angeli, P. Experimental and CFD studies of power consumption in the agitation of highly viscous shear thinning fluids. Chem. Eng. Res. Des. 119 , 171–182 (2017).

Magnetic stirrers frequently asked questions. Fisher Scientific https://www.fishersci.se/se/en/scientific-products/featured-categories/magnetic-stirrers/frequently-asked-questions.html (2024).

Viscous slurry machinery. Siehe https://www.sieheindustry.com/product_category/viscous-slurry-machinery (2024).

Jaydev, S. D. et al. Consumer grade polyethylene recycling via hydrogenolysis on ultrafine supported ruthenium nanoparticles. Angew. Chem. Int. Ed. 63 , e202317526 (2023).

Chen, L. et al. Effect of reaction conditions on the hydrogenolysis of polypropylene and polyethylene into gas and liquid alkanes. React. Chem. Eng. 7 , 844–854 (2022).

Chen, L., Moreira, J. B., Meyer, L. C. & Szanyi, J. Efficient and selective dual-pathway polyolefin hydro-conversion over unexpectedly bifunctional M/TiO 2 -anatase catalysts. Appl. Catal. B 335 , 122897 (2023).

Rorrer, J. E., Beckham, G. T. & Román-Leshkov, Y. Conversion of polyolefin waste to liquid alkanes with Ru-based catalysts under mild conditions. JACS Au 1 , 8–12 (2021).

Rorrer, J. E., Troyano-Valls, C., Beckham, G. T. & Román-Leshkov, Y. Hydrogenolysis of polypropylene and mixed polyolefin plastic waste over Ru/C to produce liquid alkanes. ACS Sustain. Chem. Eng. 9 , 11661–11666 (2021).

Shinohara, K., Yanagisawa, M. & Makida, Y. Direct observation of long-chain branches in a low-density polyethylene. Sci. Rep. 9 , 9791 (2019).

Gedde, U. W. et al. Molecular structure, crystallization behavior, and morphology of fractions obtained from an extrusion grade high-density polyethylene. Polym. Eng. Sci. 28 , 1289–1303 (1988).

Sakai, T. in Physics of Polymer Gels 1–22 (Wiley‐VCH, 2020).

Keller, A. & O’Connor, A. A study on the relation between chain folding and chain length in polyethylene. Polymer 1 , 163–168 (1960).

Levenspiel, O. in Chemical Reaction Engineering 566–606 (John Wiley, 1999).

Brennen, C. E. in Fundamentals of Multiphase Flow 252–271 (Cambridge Univ. Press, 2005).

Mason, A. H. et al. Rapid atom-efficient polyolefin plastics hydrogenolysis mediated by a well-defined single-site electrophilic/cationic organo-zirconium catalyst. Nat. Commun. 13 , 7187 (2022).

Edenfield, W. C. et al. Rapid polyolefin plastic hydrogenolysis mediated by single-site heterogeneous electrophilic/cationic organo-group IV catalysts. ACS Catal. 14 , 554–565 (2024).

Hemrajani, R. R. & Tatterson, G. B. in Handbook of Industrial Mixing 345–390 (John Wiley, 2003).

Chandrasekaran, S. et al. Recent advances in metal sulfides: from controlled fabrication to electrocatalytic, photocatalytic and photoelectrochemical water splitting and beyond. Chem. Soc. Rev. 48 , 4178–4280 (2019).

SYSTAG. System Technik AG https://www.systag.ch/de/labor-reaktor-systeme/produkte/flexyconcept/flexysys/ (accessed 14 August 2024).

Rome, K. & Mcintyre, A. Intelligent use of relative response factors in gas chromatography-flame ionisation detection. Chromatogr. Today 52–56 (2012).

Theory for heat transfer in fluids. COMSOL https://doc.comsol.com/6.1/docserver/#!/com.comsol.help.heat/heat_ug_theory.07.008.html (2024).

Energy equation. ANSYS https://www.afs.enea.it/project/neptunius/docs/fluent/html/th/node302.htm (2024).

Jaydev, S. D., Martín, A. J., García, D., Chikri, K. & Pérez-Ramírez, J. Assessment of transport phenomena in catalyst effectiveness for chemical polyolefin recycling. Zenodo https://doi.org/10.5281/zenodo.10812922 (2024).

Acknowledgements

This study was supported by ETH Zurich through an ETH Research Grant (ETH-40 20-2, to J.P.-R.) and NCCR Catalysis (grant no. 180544, to J.P.-R.), a National Centre of Competence in Research funded by the Swiss National Science Foundation. G. Pagani is thanked for viscometry measurements. F. Krumeich is thanked for TEM characterization. T. P. Araújo and V. Giulimondi are thanked for XPS characterization.

Open access funding provided by Swiss Federal Institute of Technology Zurich.

Author information

These authors contributed equally: Shibashish D. Jaydev, Antonio J. Martín.

Authors and Affiliations

Institute for Chemical and Bioengineering, Department of Chemistry and Applied Biosciences, ETH Zurich, Zurich, Switzerland

Shibashish D. Jaydev, Antonio J. Martín, Katia Chikri & Javier Pérez-Ramírez

Büchi, Uster, Switzerland

David Garcia

Contributions

S.D.J. conceptualized and performed the experiments and analyzed the data. A.J.M. conceptualized the experiments, analyzed the data and supervised the study. D.G. carried out simulations. K.C. performed the experiments and analyzed the data. J.P.-R. conceptualized the experiments, conceived and supervised the entire study. All of the authors were involved in the development of the paper.

Corresponding authors

Correspondence to Antonio J. Martín or Javier Pérez-Ramírez .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature Chemical Engineering thanks the anonymous reviewers for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Geometry and reaction variables.

Schematic representation of the main components of the testing setup (Supplementary Fig. 1 ) replicated in CFD simulations, including reactor vessel and stirrer types. Main dimensions are indicated, including those of the vessel, the height of the modelled polymer melt, and the diameter ( D ) and blade height ( L ) common to the three stirrer types. The mesh used for the propeller type in CFD simulations is shown as representative. The main experimentally imposed or monitored operation variables with their ranges are provided. Stirring of polyethylene with and without catalyst particles in the experimental setup is available in Supplementary Video 3 .

Extended Data Fig. 2 Influence of pressure and temperature on performance and viscosity.

a,b , Variation of product distribution with pressure ( a ) and temperature ( b ) for hydrogenolysis of HDPE 200 . Corresponding results for PP 340 are available in Supplementary Fig. 4 . c , Three-phase CFD simulations of viscosity distribution (top) and Reynolds number (bottom) at the mid x - y plane of the stirrer at different temperatures. Values in Supplementary Table 7 . Reaction and simulated conditions: d p  = 0.0-0.2 mm (0.2 mm for simulations), stirrer = impeller, stirring rate = 750 r.p.m.

Extended Data Fig. 3 Influence of vertical motion of catalyst particles on performance.

a , Three-phase CFD simulated distributions using discrete phase modelling from a top view of catalyst particle velocity along the z -axis. b , Correlation between total yield to C 1 -C 45 products and selectivity to liquid products (C 6 -C 45 ) with average particle z -Reynolds number for HDPE 200 and PP 340 . Simulated particle trajectories are available in Supplementary Video 4 . Reaction and simulated conditions: T  = 498 K, p H2  = 20 bar, catalysts/plastic = 0.05, stirring rate = 750 r.p.m.

Extended Data Fig. 4 Shape factor.

Variation of the shape factor ( K s ) defined as the ratio of the maximum vertical velocity of catalyst particles and velocity of the tip of the stirrer, determined via CFD simulations, for three different stirrer types. Values can be found in Supplementary Table 8 .

Extended Data Fig. 5 Simulated hydrogen fractions.

Variation of the hydrogen fraction in the melt volume with power number at different stirring rates calculated from CFD simulations corresponding to those shown in Fig. 5 for the three stirrer types for HDPE 200 (top) and PP 340 (bottom). Values can be found in Supplementary Table 10 .

Extended Data Fig. 6 Effect of the transition from non-Newtonian to Newtonian melts on the H2–melt interface.

Two-phase CFD simulations of the hydrogen fraction in the mid z - x plane for HDPE 200 ( ∼ C 10000 ), HDPE 100 ( ∼ C 5000 ), a generic C 200 alkane showing non-Newtonian (left) and Newtonian (right) character, and eicosane (C 21 ), resembling the evolution from non-Newtonian to Newtonian character over the course of hydrogenolysis. The direction of change of other relevant features is indicated.

Supplementary information

Supplementary Information

Supplementary Notes 1–12, Figs. 1–17, Tables 1–13 and references.

Supplementary Video 1

Mixing with a magnetic stirrer: water, LDPE 35 and HDPE 100 .

Supplementary Video 2

Mixing with a magnetic stirrer: water, PP 12 and PP 340 .

Supplementary Video 3

Mixing with a mechanical stirrer: eicosane, LDPE 35 and HDPE 100 .

Supplementary Video 4

Trajectories of 0.2 mm particles at 1,750 r.p.m. for impeller, propeller and turbine stirrers.

Source Data Fig. 1

Raw data for plots in Fig. 1.

Source Data Fig. 2

Raw data for plots in Fig. 2.

Source Data Fig. 3

Raw data for plots in Fig. 3.

Source Data Fig. 4

Raw data for plots in Fig. 4.

Source Data Fig. 6

Raw data for plots in Fig. 6.

Source Data Extended Data Fig. 2

Raw data for plots in Extended Data Fig. 2.

Source Data Extended Data Fig. 3

Raw data for plots in Extended Data Fig. 3.

Source Data Extended Data Fig. 4

Raw data for plots in Extended Data Fig. 4.

Source Data Extended Data Fig. 5

Raw data for plots in Extended Data Fig. 5.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Jaydev, S.D., Martín, A.J., Garcia, D. et al. Assessment of transport phenomena in catalyst effectiveness for chemical polyolefin recycling. Nat Chem Eng (2024). https://doi.org/10.1038/s44286-024-00108-3

Download citation

Received : 23 March 2024

Accepted : 25 July 2024

Published : 28 August 2024

DOI : https://doi.org/10.1038/s44286-024-00108-3

quantitative research sampling procedures

IMAGES

  1. Sampling Techniques For Quantitative Research

    quantitative research sampling procedures

  2. We used Quantitative Sampling Procedure in Research paper : r

    quantitative research sampling procedures

  3. Sampling Method

    quantitative research sampling procedures

  4. schematic representation of sampling procedures for quantitative and

    quantitative research sampling procedures

  5. Explain Different Types of Sampling Methods With Sample

    quantitative research sampling procedures

  6. Types Of Sampling Methods

    quantitative research sampling procedures

VIDEO

  1. SAMPLING PROCEDURE AND SAMPLE (QUALITATIVE RESEARCH)

  2. QUANTITATIVE METHODOLOGY (Part 2 of 3):

  3. Introduction to Statistical Sampling (5 Minutes)

  4. Field Sampling Procedures Manual (FSPM) Updates: Chapters 5

  5. IAS

  6. 2022-02-08 Practical Research 2

COMMENTS

  1. Sampling Methods

    Probability sampling methods. Probability sampling means that every member of the population has a chance of being selected. It is mainly used in quantitative research. If you want to produce results that are representative of the whole population, probability sampling techniques are the most valid choice.

  2. 3.4 Sampling Techniques in Quantitative Research

    3.4 Sampling Techniques in Quantitative Research Target Population. The target population includes the people the researcher is interested in conducting the research and generalizing the findings on. 40 For example, if certain researchers are interested in vaccine-preventable diseases in children five years and younger in Australia. The target population will be all children aged 0-5 years ...

  3. Sampling Methods & Strategies 101 (With Examples)

    Probability-based sampling methods are most commonly used in quantitative research, especially when it's important to achieve a representative sample that allows the researcher to generalise their findings. Non-probability sampling, on the other hand, refers to sampling methods in which the selection of participants is not statistically random.

  4. What are sampling methods and how do you choose the best one?

    We could choose a sampling method based on whether we want to account for sampling bias; a random sampling method is often preferred over a non-random method for this reason. Random sampling examples include: simple, systematic, stratified, and cluster sampling. Non-random sampling methods are liable to bias, and common examples include ...

  5. Sampling Techniques for Quantitative Research

    Types of Sampling Techniques in Quantitative Research. There are two main types of sampling techniques are observed—probability and non-probability sampling (Malhotra & Das, 2010; Sekaran & Bougie, 2016 ). If the population is known and each element has an equal chance of being picked, then probability sampling applies.

  6. Methodology Series Module 5: Sampling Strategies

    The method by which the researcher selects the sample is the ' Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on researcher's choice, population that accessible & available.

  7. Sampling Methods

    Abstract. Knowledge of sampling methods is essential to design quality research. Critical questions are provided to help researchers choose a sampling method. This article reviews probability and non-probability sampling methods, lists and defines specific sampling techniques, and provides pros and cons for consideration.

  8. Part I: Sampling, Data Collection, & Analysis in Quantitative Research

    Obtaining Samples for Population Generalizability. In quantitative research, a population is the entire group that the researcher wants to draw conclusions about.. A sample is the specific group that the researcher will actually collect data from. A sample is always a much smaller group of people than the total size of the population.

  9. Sampling Methods for Research: Types, Uses, and Examples

    Let's take a look at the most common sampling methods. Types of sampling methods. There are two main sampling methods: probability sampling and non-probability sampling. ... Probability sampling is used in quantitative research, so it provides data on the survey topic in terms of numbers. Probability relates to mathematics, hence the name ...

  11. What are Sampling Methods? Techniques, Types, and Examples

    Understand sampling methods in research, from simple random sampling to stratified, systematic, and cluster sampling. Learn how these sampling techniques boost data accuracy and representation, ensuring robust, reliable results. Check this article to learn about the different sampling techniques, types, and examples.

  12. Sampling Methods

    The data collected is quantitative and statistical analyses are used to draw conclusions. Purpose of Sampling Methods. The main purpose of sampling methods in research is to obtain a representative sample of individuals or elements from a larger population of interest, in order to make inferences about the population as a whole. ...

  13. Sampling methods in Clinical Research; an Educational Review

    Sampling types. There are two major categories of sampling methods (figure 1): 1) probability sampling methods, where all subjects in the target population have equal chances to be selected in the sample [1, 2]; and 2) non-probability sampling methods, where the sample population is selected in a non-systematic process that does not guarantee ...

  14. Sampling Methods, Types & Techniques

    There are two major types of sampling methods: probability and non-probability sampling. Probability sampling, also known as random sampling, is a kind of sample selection where randomization is used instead of deliberate choice. Each member of the population has a known, non-zero chance of being selected.

  15. (PDF) Sampling Methods in Research: A Review

    Linear systematic sampling is a statistical sampling technique that involves selecting every kth element from a list or population after a random starting point has been determined. This method ...
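Linear systematic sampling as described here can be sketched directly: pick a random start in the first sampling interval, then step through the frame. The list of 100 IDs below is a hypothetical frame, and the function name is this sketch's own, not from any of the sources listed.

```python
import random

random.seed(7)  # reproducible random starting point

def systematic_sample(population, k):
    """Linear systematic sampling: choose a random start in [0, k),
    then take every kth element of the list."""
    start = random.randrange(k)
    return population[start::k]

# Hypothetical frame of 100 numbered individuals, sampling interval k = 10
population = list(range(100))
sample = systematic_sample(population, k=10)
print(sample)  # 10 elements, each exactly 10 apart
```

Note that systematic sampling is only as random as the ordering of the frame: a list sorted by a variable related to the outcome can bias the sample.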

  16. Sampling Methods In Research: Types, Techniques, & Examples

    Sampling methods in psychology refer to strategies used to select a subset of individuals (a sample) from a larger population, to study and draw inferences about the entire population. Common methods include random sampling, stratified sampling, cluster sampling, and convenience sampling. Proper sampling ensures representative, generalizable, and valid research results.

  17. Sampling Techniques (Probability) for Quantitative Social Science

    Different sampling methods are used depending on the aim of the study and whether the research question seeks a confident answer about the population of interest.

  18. Sampling Methods: A guide for researchers

    Sampling is a critical element of research design. Different methods can be used for sample selection to ensure that members of the study population reflect both the source and target populations, including probability and non-probability sampling. Power and sample size are used to determine the number of subjects needed to answer the research ...
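The sample-size question raised in this snippet is often answered, for estimating a proportion, with Cochran's formula n = z²p(1−p)/e². The sketch below assumes the common worst case p = 0.5 and a 95% confidence level (z = 1.96); the function name is illustrative, not from the source.

```python
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
    """Cochran's formula: sample size needed to estimate a proportion p
    within a given margin of error, at ~95% confidence when z = 1.96."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Worst case (p = 0.5) with a 5% margin of error
print(sample_size_for_proportion())  # 385
```

Using p = 0.5 maximizes p(1−p), so the result is a conservative upper bound when the true proportion is unknown.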

  19. Qualitative, Quantitative, and Mixed Methods Research Sampling

    However, mixed methods studies also have unique considerations based on the relationship of quantitative and qualitative research within the study. Sampling in Qualitative Research Sampling in qualitative research may be divided into two major areas: overall sampling strategies and issues around sample size.

  20. PDF Chapter 8: Quantitative Sampling

    4. Cluster Sampling. Cluster sampling addresses two problems: researchers lack a good sampling frame for a geographically dispersed population, and the cost to reach a sampled element is very high. Instead of using a single sampling frame, researchers use a sampling design that involves multiple stages and clusters.
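The multi-stage design described above can be sketched as a two-stage cluster sample: first sample clusters, then sample elements within the chosen clusters. The frame of 20 "schools" with 30 "pupils" each is hypothetical.

```python
import random

random.seed(1)  # reproducible example

# Hypothetical frame: 20 clusters (e.g., schools), each with 30 pupils,
# identified as (cluster_id, pupil_id) pairs
clusters = {c: [(c, i) for i in range(30)] for c in range(20)}

# Stage 1: randomly select 5 clusters, so only those need a full frame
chosen_clusters = random.sample(sorted(clusters), k=5)

# Stage 2: randomly select 10 pupils within each chosen cluster
sample = []
for c in chosen_clusters:
    sample.extend(random.sample(clusters[c], k=10))

print(len(sample))  # 5 clusters x 10 pupils = 50
```

The practical gain is that a complete element-level frame is only needed inside the five selected clusters, not for the whole population.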

  21. Sampling Strategies for Quantitative and Qualitative Business Research

    Such samples are best acquired through probability sampling, a procedure in which all members of the target population have a known and random chance of being selected. However, probability sampling techniques are uncommon in modern quantitative research because of practical constraints; non-probability sampling, such as by convenience, is ...

  22. Sampling: how to select participants in my research study?

    The essential topics related to the selection of participants for health research are: 1) whether to work with samples or include the whole reference population in the study (census); 2) the sample basis; 3) the sampling process; and 4) the potential effects nonrespondents might have on study results. We will refer to each of these aspects ...
