Types of Bias in Research | Definition & Examples

Research bias results from any deviation from the truth, causing distorted results and wrong conclusions. Bias can occur at any phase of your research, including during data collection, data analysis, interpretation, or publication. Research bias can occur in both qualitative and quantitative research.

Understanding research bias is important for several reasons.

  • Bias exists in all research, across research designs, and is difficult to eliminate.
  • Bias can occur at any stage of the research process .
  • Bias impacts the validity and reliability of your findings, leading to misinterpretation of data.

It is almost impossible to conduct a study without some degree of research bias. It’s crucial for you to be aware of the potential types of bias, so you can minimize them.

For example, consider a study evaluating a weight-loss program. The program's success rate will likely be affected if participants start to drop out (attrition). Participants who become disillusioned due to not losing weight may drop out, while those who succeed in losing weight are more likely to continue. This in turn may bias the findings towards more favorable results.

Table of contents

  • Information bias
  • Interviewer bias
  • Publication bias
  • Researcher bias
  • Response bias
  • Selection bias
  • Cognitive bias
  • How to avoid bias in research
  • Other types of research bias
  • Frequently asked questions about research bias

Information bias, also called measurement bias, arises when key study variables are inaccurately measured or classified. It occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall among participants.

The main types of information bias are:

  • Recall bias
  • Observer bias
  • Performance bias
  • Regression to the mean (RTM)

Recall bias is a type of information bias. It occurs when respondents are asked to recall events in the past, and it is common in studies that involve self-reporting.

For example, over a period of four weeks, you might ask students to keep a journal, noting how much time they spent on their smartphones along with any symptoms like muscle twitches, aches, or fatigue. Because participants must rely on memory, those who experience symptoms may recall their smartphone use differently from those who do not.

As a rule of thumb, infrequent events (e.g., buying a house or a car) will be memorable for longer periods of time than routine events (e.g., daily use of public transportation). You can reduce recall bias by running a pilot survey and carefully testing recall periods. If possible, test both shorter and longer periods, checking for differences in recall.

Suppose you are running a case–control study on the relationship between childhood diet and a later cancer diagnosis, in which parents are asked to recall what their children ate. You compare two groups:

  • A group of children who have been diagnosed, called the case group
  • A group of children who have not been diagnosed, called the control group

Since the parents are being asked to recall what their children generally ate over a period of several years, there is high potential for recall bias in the case group.

The best way to reduce recall bias is by ensuring your control group will have similar levels of recall bias to your case group. Parents of children who have childhood cancer, which is a serious health problem, are likely to be quite concerned about what may have contributed to the cancer.

Thus, if asked by researchers, these parents are likely to think very hard about what their child ate or did not eat in their first years of life. Parents of children with other serious health problems (aside from cancer) are also likely to be quite concerned about any diet-related question that researchers ask about.

Observer bias is the tendency of researchers to see what they expect or want to see, rather than what is actually occurring. Observer bias can affect the results in observational and experimental studies, where subjective judgment (such as assessing a medical image) or measurement (such as rounding blood pressure readings up or down) is part of the data collection process.

Observer bias leads to over- or underestimation of true values, which in turn compromises the validity of your findings. You can reduce observer bias by using single-blinded and double-blinded research methods.

Based on discussions you had with other researchers before starting your observations, you are inclined to think that medical staff tend to simply call each other when they need specific patient details or have questions about treatments.

At the end of the observation period, you compare notes with your colleague. You conclude that medical staff tend to favor phone calls when seeking information, while your colleague noted that medical staff mostly rely on face-to-face discussions. Seeing that your expectations may have influenced your observations, you and your colleague decide to conduct semi-structured interviews with medical staff to clarify the observed events. Note: Observer bias and actor–observer bias are not the same thing.

Performance bias is unequal care between study groups. It occurs mainly in medical research experiments, when participants have knowledge of the planned intervention, therapy, or drug trial before it begins.

Studies about nutrition, exercise outcomes, or surgical interventions are very susceptible to this type of bias. It can be minimized by using blinding, which prevents participants and/or researchers from knowing who is in the control or treatment groups. If blinding is not possible, then using objective outcomes (such as hospital admission data) is the best approach.

When the subjects of an experimental study change or improve their behavior because they are aware they are being studied, this is called the Hawthorne effect (or observer effect). Similarly, the John Henry effect occurs when members of a control group are aware they are being compared to the experimental group. This causes them to alter their behavior in an effort to compensate for their perceived disadvantage.

Regression to the mean (RTM) is a statistical phenomenon that refers to the fact that a variable that shows an extreme value on its first measurement will tend to be closer to the center of its distribution on a second measurement.

Medical research is particularly sensitive to RTM. Here, interventions aimed at a group or a characteristic that is very different from the average (e.g., people with high blood pressure) will appear to be successful because of the regression to the mean. This can lead researchers to misinterpret results, describing a specific intervention as causal when the change in the extreme groups would have happened anyway.

In general, among people with depression, certain physical and mental characteristics have been observed to deviate from the population mean.

This could lead you to think that the intervention was effective when those treated showed improvement on measured post-treatment indicators, such as reduced severity of depressive episodes.

However, given that such characteristics deviate more from the population mean in people with depression than in people without depression, this improvement could be attributed to RTM.
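To make RTM concrete, here is a minimal Python simulation we have added for illustration (it is not from the original article, and all numbers are arbitrary). A group is selected for an extreme first measurement, and its second measurement drifts back toward the population mean even though no intervention is applied.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stable underlying trait for 10,000 people (population mean 100,
# SD 10), measured twice with independent measurement noise.
true_score = rng.normal(100, 10, size=10_000)
first = true_score + rng.normal(0, 10, size=10_000)
second = true_score + rng.normal(0, 10, size=10_000)

# Select only the people whose FIRST measurement was extreme (top 5%),
# as an intervention study targeting an extreme group would.
extreme = first > np.quantile(first, 0.95)

print(f"Extreme group, first measurement:  {first[extreme].mean():.1f}")
print(f"Extreme group, second measurement: {second[extreme].mean():.1f}")
# The second-measurement mean sits closer to 100 even though nothing
# was done to the group: that drift is regression to the mean.
```

Because part of any extreme score is measurement noise, the noise does not repeat on the second measurement, which is why the extreme group "improves" on its own.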

Interviewer bias stems from the person conducting the research study. It can result from the way they ask questions or react to responses, but also from any aspect of their identity, such as their sex, ethnicity, social class, or perceived attractiveness.

Interviewer bias distorts responses, especially when the characteristics relate in some way to the research topic. Interviewer bias can also affect the interviewer’s ability to establish rapport with the interviewees, causing them to feel less comfortable giving their honest opinions about sensitive or personal topics.

Participant: “I like to solve puzzles, or sometimes do some gardening.”

You: “I love gardening, too!”

In this case, seeing your enthusiastic reaction could lead the participant to talk more about gardening.

Establishing trust between you and your interviewees is crucial in order to ensure that they feel comfortable opening up and revealing their true thoughts and feelings. At the same time, being overly empathetic can influence the responses of your interviewees, as seen above.

Publication bias occurs when the decision to publish research findings is based on their nature or the direction of their results. Studies reporting results that are perceived as positive, statistically significant, or favoring the study hypotheses are more likely to be published due to publication bias.

Publication bias is related to data dredging (also called p-hacking), where statistical tests on a set of data are run until something statistically significant happens. As academic journals tend to prefer publishing statistically significant results, this can pressure researchers to only submit statistically significant results. P-hacking can also involve excluding participants or stopping data collection once a p value of 0.05 is reached. However, this leads to false positive results and an overrepresentation of positive results in published academic literature.
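To see why stopping data collection as soon as p < 0.05 inflates false positives, here is a short Python simulation (our own illustrative sketch, not from the original article). Both groups are drawn from the same distribution, so every "significant" result is a false positive; peeking repeatedly pushes the error rate well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 1,000 simulated "studies" in which the null hypothesis is TRUE:
# both groups come from the same distribution, so every significant
# result is a false positive.
false_positives = 0
n_studies = 1_000
for _ in range(n_studies):
    a = rng.normal(0, 1, size=100)
    b = rng.normal(0, 1, size=100)
    # Optional stopping: test after every 10 participants per group
    # and stop as soon as p < 0.05.
    for n in range(10, 101, 10):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_positives += 1
            break

print(f"False-positive rate with optional stopping: "
      f"{false_positives / n_studies:.1%}")  # well above the nominal 5%
```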

Researcher bias occurs when the researcher's beliefs or expectations influence the research design or data collection process. Researcher bias can be deliberate (such as claiming that an intervention worked even if it didn't) or unconscious (such as letting personal feelings, stereotypes, or assumptions influence research questions).

The unconscious form of researcher bias is associated with the Pygmalion effect (or Rosenthal effect), where the researcher's high expectations (e.g., that patients assigned to a treatment group will succeed) lead to better performance and better outcomes.

Researcher bias is also sometimes called experimenter bias, but it applies to all types of investigative projects, rather than only to experimental designs .

For example, compare a neutrally phrased question with a leading one:

  • Good question: What are your views on alcohol consumption among your peers?
  • Bad question: Do you think it’s okay for young people to drink so much?

Response bias is a general term used to describe a number of different situations where respondents tend to provide inaccurate or false answers to self-report questions, such as those asked on surveys or in structured interviews.

This happens because when people are asked a question (e.g., during an interview ), they integrate multiple sources of information to generate their responses. Because of that, any aspect of a research study may potentially bias a respondent. Examples include the phrasing of questions in surveys, how participants perceive the researcher, or the desire of the participant to please the researcher and to provide socially desirable responses.

Response bias also occurs in experimental medical research. When outcomes are based on patients’ reports, a placebo effect can occur. Here, patients report an improvement despite having received a placebo, not an active medical treatment.

While interviewing a student, you ask them:

“Do you think it’s okay to cheat on an exam?”

Because cheating is widely condemned, the student is likely to answer “no” regardless of their actual behavior: the answer is cued by what seems socially acceptable rather than by the truth.

Common types of response bias are:

  • Acquiescence bias
  • Demand characteristics
  • Social desirability bias
  • Courtesy bias
  • Question-order bias
  • Extreme responding

Acquiescence bias is the tendency of respondents to agree with a statement when faced with binary response options like “agree/disagree,” “yes/no,” or “true/false.” Acquiescence is sometimes referred to as “yea-saying.”

This type of bias occurs either due to the participant’s personality (i.e., some people are more likely to agree with statements than disagree, regardless of their content) or because participants perceive the researcher as an expert and are more inclined to agree with the statements presented to them.

Q: Are you a social person?

  • Yes
  • No

People who are inclined to agree with statements presented to them are at risk of selecting the first option, even if it isn’t fully supported by their lived experiences.

In order to control for acquiescence, consider tweaking your phrasing to encourage respondents to make a choice truly based on their preferences. Here’s an example:

Q: What would you prefer?

  • A quiet night in
  • A night out with friends

Demand characteristics are cues that could reveal the research agenda to participants, risking a change in their behaviors or views. Ensuring that participants are not aware of the research objectives is the best way to avoid this type of bias.

Suppose a researcher interviews patients at several points after an operation, asking them to rate their pain. On each occasion, patients reported their pain as being less than prior to the operation. While at face value this seems to suggest that the operation does indeed lead to less pain, there is a demand characteristic at play. During the interviews, the researcher would unconsciously frown whenever patients reported more post-op pain. This increased the risk of patients figuring out that the researcher was hoping that the operation would have an advantageous effect.

Social desirability bias is the tendency of participants to give responses that they believe will be viewed favorably by the researcher or other participants. It often affects studies that focus on sensitive topics, such as alcohol consumption or sexual behavior.

You are conducting face-to-face semi-structured interviews with a number of employees from different departments. When asked whether they would be interested in a smoking cessation program, participants expressed widespread enthusiasm for the idea. Note, however, that this enthusiasm may partly reflect a desire to give the socially acceptable answer rather than genuine interest.

Note that while social desirability and demand characteristics may sound similar, there is a key difference between them. Social desirability is about conforming to social norms, while demand characteristics revolve around the purpose of the research.

Courtesy bias stems from a reluctance to give negative feedback, so as to be polite to the person asking the question. Small-group interviewing where participants relate in some way to each other (e.g., a student, a teacher, and a dean) is especially prone to this type of bias.

Question order bias

Question order bias occurs when the order in which interview questions are asked influences the way the respondent interprets and evaluates them. This occurs especially when previous questions provide context for subsequent questions.

When answering subsequent questions, respondents may orient their answers to previous questions (called a halo effect), which can lead to systematic distortion of the responses.

Extreme responding is the tendency of a respondent to answer in the extreme, choosing the lowest or highest response available, even if that is not their true opinion. Extreme responding is common in surveys using Likert scales , and it distorts people’s true attitudes and opinions.

A respondent's disposition towards the survey can be a source of extreme responding, as can cultural factors. For example, people from collectivist cultures tend to exhibit extreme responses in terms of agreement, while respondents indifferent to the questions asked may exhibit extreme responses in terms of disagreement.

Selection bias is a general term describing situations where bias is introduced into the research from factors affecting the study population.

Common types of selection bias are:

  • Sampling (or ascertainment) bias
  • Attrition bias
  • Self-selection (or volunteer) bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias

Sampling bias occurs when your sample (the individuals, groups, or data you obtain for your research) is selected in a way that is not representative of the population you are analyzing. Sampling bias threatens the external validity of your findings and influences the generalizability of your results.

The easiest way to prevent sampling bias is to use a probability sampling method. This way, each member of the population you are studying has an equal chance of being included in your sample.

Sampling bias is often referred to as ascertainment bias in the medical field.
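As a minimal sketch of one probability sampling method, simple random sampling, here is how a sample might be drawn in Python. This example is ours, not from the original article, and the sampling frame and sizes are hypothetical.

```python
import random

# Hypothetical sampling frame: a complete list of the population of
# interest (here, fictional student IDs).
population = [f"student_{i}" for i in range(20_000)]

# Simple random sampling: every member of the frame has an equal
# chance of selection, which protects against sampling bias.
random.seed(42)
sample = random.sample(population, k=500)

print(len(sample), sample[:3])
```

Note that this only works if the sampling frame genuinely covers the whole population; a frame that omits part of the population reintroduces undercoverage bias, discussed below.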

Attrition bias occurs when participants who drop out of a study systematically differ from those who remain in the study. Attrition bias is especially problematic in randomized controlled trials for medical research because participants who do not like the experience or have unwanted side effects can drop out and affect your results.

You can minimize attrition bias by offering incentives for participants to complete the study (e.g., a gift card if they successfully attend every session). It’s also a good practice to recruit more participants than you need, or minimize the number of follow-up sessions or questions.

You provide a treatment group with weekly one-hour sessions over a two-month period, while a control group attends sessions on an unrelated topic. You complete five waves of data collection to compare outcomes: a pretest survey, three surveys during the program, and a posttest survey.

Self-selection or volunteer bias

Self-selection bias (also called volunteer bias) occurs when individuals who volunteer for a study have particular characteristics that matter for the purposes of the study.

Volunteer bias leads to biased data, as the respondents who choose to participate will not represent your entire target population. You can avoid this type of bias by using random assignment, i.e., placing participants in a control group or a treatment group after they have volunteered to participate in the study.
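Below is a small Python sketch of random assignment after volunteering (our illustration; the names and group sizes are invented). Shuffling the volunteer pool before splitting it makes each volunteer equally likely to land in either group, so volunteer characteristics spread evenly across conditions.

```python
import random

# Hypothetical pool of people who have already volunteered.
volunteers = [f"participant_{i}" for i in range(200)]

# Random assignment AFTER volunteering: shuffle the pool and split it,
# so each volunteer is equally likely to land in either group.
random.seed(7)
random.shuffle(volunteers)
treatment_group = volunteers[:100]
control_group = volunteers[100:]

print(len(treatment_group), len(control_group))
```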

Closely related to volunteer bias is nonresponse bias , which occurs when a research subject declines to participate in a particular study or drops out before the study’s completion.

For example, suppose you recruit volunteers for a health study at a hospital. Considering that the hospital is located in an affluent part of the city, volunteers are more likely to have a higher socioeconomic standing, higher education, and better nutrition than the general population.

Survivorship bias occurs when you do not evaluate your data set in its entirety: for example, by only analyzing the patients who survived a clinical trial.

This strongly increases the likelihood that you draw (incorrect) conclusions based upon those who have passed some sort of selection process—focusing on “survivors” and forgetting those who went through a similar process and did not survive.

Note that “survival” does not always mean that participants died! Rather, it signifies that participants did not successfully complete the intervention.

A classic example is concluding that dropping out of college is a path to success because several famous billionaires did exactly that. However, most college dropouts do not become billionaires. In fact, there are many more aspiring entrepreneurs who dropped out of college to start companies and failed than succeeded.

Nonresponse bias occurs when those who do not respond to a survey or research project are different from those who do in ways that are critical to the goals of the research. This is very common in survey research, when participants are unable or unwilling to participate due to factors like lack of the necessary skills, lack of time, or guilt or shame related to the topic.

You can mitigate nonresponse bias by offering the survey in different formats (e.g., an online survey, but also a paper version sent via post), ensuring confidentiality, and sending reminders to complete the survey.

For example, when surveying a neighborhood, you might notice that your surveys were conducted during business hours, when working-age residents were less likely to be home, so their views end up underrepresented.

Undercoverage bias occurs when you only sample from a subset of the population you are interested in. Online surveys can be particularly susceptible to undercoverage bias. Despite being more cost-effective than other methods, they can introduce undercoverage bias as a result of excluding people who do not use the internet.

Cognitive bias refers to a set of predictable (i.e., nonrandom) errors in thinking that arise from our limited ability to process information objectively. Rather, our judgment is influenced by our values, memories, and other personal traits. These create “mental shortcuts” that help us process information intuitively and decide faster. However, cognitive bias can also cause us to misunderstand or misinterpret situations, information, or other people.

Because of cognitive bias, people often perceive events to be more predictable after they happen.

Although there is no general agreement on how many types of cognitive bias exist, some common types are:

  • Anchoring bias
  • Framing effect
  • Actor–observer bias
  • Availability heuristic (or availability bias)
  • Confirmation bias
  • Halo effect
  • The Baader–Meinhof phenomenon

Anchoring bias

Anchoring bias is people’s tendency to fixate on the first piece of information they receive, especially when it concerns numbers. This piece of information becomes a reference point or anchor. Because of that, people base all subsequent decisions on this anchor. For example, initial offers have a stronger influence on the outcome of negotiations than subsequent ones.

Framing effect

Framing effect refers to our tendency to decide based on how the information about the decision is presented to us. In other words, our response depends on whether the option is presented in a negative or positive light, e.g., gain or loss, reward or punishment, etc. This means that the same information can be more or less attractive depending on the wording or what features are highlighted.

Actor–observer bias

Actor–observer bias occurs when you attribute the behavior of others to internal factors, like skill or personality, but attribute your own behavior to external or situational factors.

In other words, when you are the actor in a situation, you are more likely to link events to external factors, such as your surroundings or environment. However, when you are observing the behavior of others, you are more likely to associate behavior with their personality, nature, or temperament.

One interviewee recalls a morning when it was raining heavily. They were rushing to drop off their kids at school in order to get to work on time. As they were driving down the highway, another car cut them off as they were trying to merge. They tell you how frustrated they felt and exclaim that the other driver must have been a very rude person.

At another point, the same interviewee recalls that they did something similar: accidentally cutting off another driver while trying to take the correct exit. However, this time, the interviewee claimed that they always drive very carefully, blaming their mistake on poor visibility due to the rain.

Availability heuristic

The availability heuristic (or availability bias) describes the tendency to evaluate a topic using the information we can most quickly call to mind, i.e., the information that is most available to us. However, this is not necessarily the best information; it is simply the most vivid or recent. Even so, due to this mental shortcut, we tend to assume that what we can recall must be right and to ignore other information.

Confirmation bias

Confirmation bias is the tendency to seek out information in a way that supports our existing beliefs while also rejecting any information that contradicts those beliefs. Confirmation bias is often unintentional but still results in skewed results and poor decision-making.

Let’s say you grew up with a parent in the military. Chances are that you have a lot of complex emotions around overseas deployments. This can lead you to over-emphasize findings that “prove” that your lived experience is the case for most families, neglecting other explanations and experiences.

Halo effect

The halo effect refers to situations whereby our general impression about a person, a brand, or a product is shaped by a single trait. It happens, for instance, when we automatically make positive assumptions about people based on something positive we notice, while in reality, we know little about them.

The Baader-Meinhof phenomenon

The Baader-Meinhof phenomenon (or frequency illusion) occurs when something that you recently learned seems to appear “everywhere” soon after it was first brought to your attention. However, this is not the case. What has increased is your awareness of something, such as a new word or an old song you never knew existed, not their frequency.

While very difficult to eliminate entirely, research bias can be mitigated through proper study design and implementation. Here are some tips to keep in mind as you get started.

  • Clearly explain in your methodology section how your research design will help you meet the research objectives and why this is the most appropriate research design.
  • In quantitative studies, make sure that you use probability sampling to select the participants. If you’re running an experiment, make sure you use random assignment to assign your control and treatment groups.
  • Account for participants who withdraw or are lost to follow-up during the study. If they are withdrawing for a particular reason, it could bias your results. This applies especially to longer-term or longitudinal studies.
  • Use triangulation to enhance the validity and credibility of your findings.
  • Phrase your survey or interview questions in a neutral, non-judgmental tone. Be very careful that your questions do not steer your participants in any particular direction.
  • Consider using a reflexive journal. Here, you can log the details of each interview, paying special attention to any influence you may have had on participants. You can include these in your final analysis.
Other types of research bias

  • Baader–Meinhof phenomenon
  • Sampling bias
  • Ascertainment bias
  • Self-selection bias
  • Hawthorne effect
  • Omitted variable bias
  • Pygmalion effect
  • Placebo effect

Frequently asked questions about research bias

Research bias affects the validity and reliability of your research findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

Observer bias occurs when the researcher’s assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to justify others’ behavior and external factors (e.g., difficult circumstances) to justify the same behavior in themselves.

Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey are different from those who did not, in ways that are relevant to the research topic. Nonresponse can happen because people are either not willing or not able to participate.


The Ultimate Guide to Qualitative Research - Part 1: The Basics


Bias in research

In a purely objective world, research bias would not exist because knowledge would be a fixed and unmovable resource; either one knows about a particular concept or phenomenon, or they don't. However, qualitative research and the social sciences both acknowledge that subjectivity and bias exist in every aspect of the social world, which naturally includes the research process too. This bias is manifest in the many different ways that knowledge is understood, constructed, and negotiated, both in and out of research.


Understanding research bias has profound implications for data collection methods and data analysis , requiring researchers to take particular care of how to account for the insights generated from their data .

Research bias, often unavoidable, is a systematic error that can creep into any stage of the research process , skewing our understanding and interpretation of findings. From data collection to analysis, interpretation , and even publication , bias can distort the truth we seek to capture and communicate in our research.

It’s also important to distinguish between bias and subjectivity, especially when engaging in qualitative research . Most qualitative methodologies are based on epistemological and ontological assumptions that there is no such thing as a fixed or objective world that exists “out there” that can be empirically measured and understood through research. Rather, many qualitative researchers embrace the socially constructed nature of our reality and thus recognize that all data is produced within a particular context by participants with their own perspectives and interpretations. Moreover, the researcher’s own subjective experiences inevitably shape how they make sense of the data. These subjectivities are considered to be strengths, not limitations, of qualitative research approaches, because they open new avenues for knowledge generation. This is also why reflexivity is so important in qualitative research. When we refer to bias in this guide, on the other hand, we are referring to systematic errors that can negatively affect the research process but that can be mitigated through researchers’ careful efforts.

To fully grasp what research bias is, it's essential to understand the dual nature of bias. Bias is not inherently evil. It's simply a tendency, inclination, or prejudice for or against something. In our daily lives, we're subject to countless biases, many of which are unconscious. They help us navigate our world, make quick decisions, and understand complex situations. But when conducting research, these same biases can cause significant issues.


Research bias can affect the validity and credibility of research findings, leading to erroneous conclusions. It can emerge from the researcher's subconscious preferences or the methodological design of the study itself. For instance, if a researcher unconsciously favors a particular outcome of the study, this preference could affect how they interpret the results, leading to a type of bias known as confirmation bias.

Research bias can also arise due to the characteristics of study participants. If the researcher selectively recruits participants who are more likely to produce desired outcomes, this can result in selection bias.

Another form of bias can stem from data collection methods . If a survey question is phrased in a way that encourages a particular response, this can introduce response bias. Moreover, inappropriate survey questions can have a detrimental effect on future research if such studies are seen by the general population as biased toward particular outcomes depending on the preferences of the researcher.

Bias can also occur during data analysis . In qualitative research for instance, the researcher's preconceived notions and expectations can influence how they interpret and code qualitative data, a type of bias known as interpretation bias. It's also important to note that quantitative research is not free of bias either, as sampling bias and measurement bias can threaten the validity of any research findings.

Given these examples, it's clear that research bias is a complex issue that can take many forms and emerge at any stage in the research process. This section will delve deeper into specific types of research bias, provide examples, discuss why it's an issue, and provide strategies for identifying and mitigating bias in research.

What is an example of bias in research?

Bias can appear in numerous ways. One example is confirmation bias, where the researcher has a preconceived explanation for what is going on in their data, and any disconfirming evidence is (unconsciously) ignored. For instance, a researcher conducting a study on daily exercise habits might be inclined to conclude that meditation practices lead to greater engagement in exercise because that researcher has personally experienced these benefits. However, conducting rigorous research entails assessing all the data systematically and verifying one’s conclusions by checking for both supporting and refuting evidence.


What is a common bias in research?

Confirmation bias is one of the most common forms of bias in research. It happens when researchers unconsciously focus on data that supports their ideas while ignoring or undervaluing data that contradicts their ideas. This bias can lead researchers to mistakenly confirm their theories, despite having insufficient or conflicting evidence.

What are the different types of bias?

There are several types of research bias, each presenting unique challenges. Some common types include:

Confirmation bias: As already mentioned, this happens when a researcher focuses on evidence supporting their theory while overlooking contradictory evidence.

Selection bias: This occurs when the researcher's method of choosing participants skews the sample in a particular direction.

Response bias: This happens when participants in a study respond inaccurately or falsely, often due to misleading or poorly worded questions.

Observer bias (or researcher bias): This occurs when the researcher unintentionally influences the results because of their expectations or preferences.

Publication bias: This type of bias arises when studies with positive results are more likely to get published, while studies with negative or null results are often ignored.

Analysis bias: This type of bias occurs when the data is manipulated or analyzed in a way that leads to a particular result, whether intentionally or unintentionally.


What is an example of researcher bias?

Researcher bias, also known as observer bias, can occur when a researcher's expectations or personal beliefs influence the results of a study. For instance, if a researcher believes that a particular therapy is effective, they might unconsciously interpret ambiguous results in a way that supports the efficacy of the therapy, even if the evidence is not strong enough.

Even quantitative research methodologies are not immune from bias from researchers. Market research surveys or clinical trial research, for example, may encounter bias when the researcher chooses a particular population or methodology to achieve a specific research outcome. Questions in customer feedback surveys whose data is employed in quantitative analysis can be structured in such a way as to bias survey respondents toward certain desired answers.


Identifying and avoiding bias in research

As we will remind you throughout this chapter, bias is not a phenomenon that can be removed altogether, nor should we think of it as something that should be eliminated. In a subjective world involving humans as researchers and research participants, bias is unavoidable and almost necessary for understanding social behavior. The section on reflexivity later in this guide will highlight how different perspectives among researchers and human subjects are addressed in qualitative research. That said, bias in excess can place the credibility of a study's findings into serious question. Scholars who read your research need to know what new knowledge you are generating, how it was generated, and why the knowledge you present should be considered persuasive. With that in mind, let's look at how bias can be identified and, where it interferes with research, minimized.

How do you identify bias in research?

Identifying bias involves a critical examination of your entire research study involving the formulation of the research question and hypothesis , the selection of study participants, the methods for data collection, and the analysis and interpretation of data. Researchers need to assess whether each stage has been influenced by bias that may have skewed the results. Tools such as bias checklists or guidelines, peer review , and reflexivity (reflecting on one's own biases) can be instrumental in identifying bias.

How do you identify research bias?

Identifying research bias often involves careful scrutiny of the research methodology and the researcher's interpretations. Was the sample of participants relevant to the research question ? Were the interview or survey questions leading? Were there any conflicts of interest that could have influenced the results? It also requires an understanding of the different types of bias and how they might manifest in a research context. Does the bias occur in the data collection process or when the researcher is analyzing data?

Research transparency requires a careful accounting of how the study was designed, conducted, and analyzed. In qualitative research involving human subjects, the researcher is responsible for documenting the characteristics of the research population and research context. With respect to research methods, the procedures and instruments used to collect and analyze data are described in as much detail as possible.

While describing study methodologies and research participants in painstaking detail may sound cumbersome, a clear and detailed description of the research design is necessary for good research. Without this level of detail, it is difficult for your research audience to identify whether bias exists, where bias occurs, and to what extent it may threaten the credibility of your findings.

How to recognize bias in a study?

Recognizing bias in a study requires a critical approach. The researcher should question every step of the research process: Was the sample of participants selected with care? Did the data collection methods encourage open and sincere responses? Did personal beliefs or expectations influence the interpretation of the results? External peer reviews can also be helpful in recognizing bias, as others might spot potential issues that the original researcher missed.

The subsequent sections of this chapter will delve into the impacts of research bias and strategies to avoid it. Through these discussions, researchers will be better equipped to handle bias in their work and contribute to building more credible knowledge.

Unconscious biases, also known as implicit biases, are attitudes or stereotypes that influence our understanding, actions, and decisions in an unconscious manner. These biases can inadvertently infiltrate the research process, skewing the results and conclusions. This section aims to delve deeper into understanding unconscious bias, its impact on research, and strategies to mitigate it.

What is unconscious bias?

Unconscious bias refers to prejudices or social stereotypes about certain groups that individuals form outside their conscious awareness. Everyone holds unconscious beliefs about various social and identity groups, and these biases stem from a tendency to organize social worlds into categories.


How does unconscious bias infiltrate research?

Unconscious bias can infiltrate research in several ways. It can affect how researchers formulate their research questions or hypotheses , how they interact with participants, their data collection methods, and how they interpret their data . For instance, a researcher might unknowingly favor participants who share similar characteristics with them, which could lead to biased results.

Implications of unconscious bias

The implications of unconscious research bias are far-reaching. It can compromise the validity of research findings , influence the choice of research topics, and affect peer review processes . Unconscious bias can also lead to a lack of diversity in research, which can severely limit the value and impact of the findings.

Strategies to mitigate unconscious research bias

While it's challenging to completely eliminate unconscious bias, several strategies can help mitigate its impact. These include being aware of potential unconscious biases, practicing reflexivity , seeking diverse perspectives for your study, and engaging in regular bias-checking activities, such as bias training and peer debriefing .

By understanding and acknowledging unconscious bias, researchers can take steps to limit its impact on their work, leading to more robust findings.

Why is researcher bias an issue?

Research bias is a pervasive issue that researchers must diligently consider and address. It can significantly impact the credibility of findings. Here, we break down the ramifications of bias into two key areas.

How bias affects validity

Research validity refers to the accuracy of the study findings, or the coherence between the researcher’s findings and the participants’ actual experiences. When bias sneaks into a study, it can distort findings and move them further away from the realities that were shared by the research participants. For example, if a researcher's personal beliefs influence their interpretation of data , the resulting conclusions may not reflect what the data show or what participants experienced.

The transferability problem

Transferability is the extent to which your study's findings can be applied beyond the specific context or sample studied. Applying knowledge from one context to a different context is how we can progress and make informed decisions. In quantitative research , the generalizability of a study is a key component that shapes the potential impact of the findings. In qualitative research , all data and knowledge that is produced is understood to be embedded within a particular context, so the notion of generalizability takes on a slightly different meaning. Rather than assuming that the study participants are statistically representative of the entire population, qualitative researchers can reflect on which aspects of their research context bear the most weight on their findings and how these findings may be transferable to other contexts that share key similarities.

How does bias affect research?

Research bias, if not identified and mitigated, can significantly impact research outcomes. The ripple effects of research bias extend beyond individual studies, impacting the body of knowledge in a field and influencing policy and practice. Here, we delve into three specific ways bias can affect research.

Distortion of research results

Bias can lead to a distortion of your study's findings. For instance, confirmation bias can cause a researcher to focus on data that supports their interpretation while disregarding data that contradicts it. This can skew the results and create a misleading picture of the phenomenon under study.

Undermining scientific progress

When research is influenced by bias, it not only misrepresents participants’ realities but can also impede scientific progress. Biased studies can lead researchers down the wrong path, resulting in wasted resources and efforts. Moreover, it could contribute to a body of literature that is skewed or inaccurate, misleading future research and theories.

Influencing policy and practice based on flawed findings

Research often informs policy and practice. If the research is biased, it can lead to the creation of policies or practices that are ineffective or even harmful. For example, a study with selection bias might conclude that a certain intervention is effective, leading to its broad implementation. However, suppose the transferability of the study's findings was not carefully considered. In that case, it may be risky to assume that the intervention will work as well in different populations, which could lead to ineffective or inequitable outcomes.


While it's almost impossible to eliminate bias in research entirely, it's crucial to mitigate its impact as much as possible. By employing thoughtful strategies at every stage of research, we can strive towards rigor and transparency, enhancing the quality of our findings. This section will delve into specific strategies for avoiding bias.

How do you know if your research is biased?

Determining whether your research is biased involves a careful review of your research design, data collection , analysis , and interpretation . It might require you to reflect critically on your own biases and expectations and how these might have influenced your research. External peer reviews can also be instrumental in spotting potential bias.

Strategies to mitigate bias

Minimizing bias involves careful planning and execution at all stages of a research study. These strategies could include formulating clear, unbiased research questions , ensuring that your sample meaningfully represents the research problem you are studying, crafting unbiased data collection instruments, and employing systematic data analysis techniques. Transparency and reflexivity throughout the process can also help minimize bias.

Mitigating bias in data collection

To mitigate bias in data collection, ensure your questions are clear, neutral, and not leading. Triangulation, or using multiple methods or data sources, can also help to reduce bias and increase the credibility of your findings.

Mitigating bias in data analysis

During data analysis , maintaining a high level of rigor is crucial. This might involve using systematic coding schemes in qualitative research or appropriate statistical tests in quantitative research . Regularly questioning your interpretations and considering alternative explanations can help reduce bias. Peer debriefing , where you discuss your analysis and interpretations with colleagues, can also be a valuable strategy.

By using these strategies, researchers can significantly reduce the impact of bias on their research, enhancing the quality and credibility of their findings and contributing to a more robust and meaningful body of knowledge.

Impact of cultural bias in research

Cultural bias is the tendency to interpret and judge phenomena by standards inherent to one's own culture. Given the increasingly multicultural and global nature of research, understanding and addressing cultural bias is paramount. This section will explore the concept of cultural bias, its impacts on research, and strategies to mitigate it.

What is cultural bias in research?

Cultural bias refers to the potential for a researcher's cultural background, experiences, and values to influence the research process and findings. This can occur consciously or unconsciously and can lead to misinterpretation of data, unfair representation of cultures, and biased conclusions.

How does cultural bias infiltrate research?

Cultural bias can infiltrate research at various stages. It can affect the framing of research questions , the design of the study, the methods of data collection , and the interpretation of results . For instance, a researcher might unintentionally design a study that does not consider the cultural context of the participants, leading to a biased understanding of the phenomenon being studied.

Implications of cultural bias

The implications of cultural bias are profound. Cultural bias can skew your findings, limit the transferability of results, and contribute to cultural misunderstandings and stereotypes. This can ultimately lead to inaccurate or ethnocentric conclusions, further perpetuating cultural bias and inequities.

As a result, many social science fields like sociology and anthropology have been critiqued for cultural biases in research. Some of the earliest research inquiries in anthropology, for example, have had the potential to reduce entire cultures to simplistic stereotypes when compared to mainstream norms. A contemporary researcher respecting ethical and cultural boundaries, on the other hand, should seek to properly place their understanding of social and cultural practices in sufficient context without inappropriately characterizing them.

Strategies to mitigate cultural bias

Mitigating cultural bias requires a concerted effort throughout the research study. These efforts could include educating oneself about other cultures, being aware of one's own cultural biases, incorporating culturally diverse perspectives into the research process, and being sensitive and respectful of cultural differences. It might also involve including team members with diverse cultural backgrounds or seeking external cultural consultants to challenge assumptions and provide alternative perspectives.

By acknowledging and addressing cultural bias, researchers can contribute to more culturally competent, equitable, and valid research. This not only enriches the scientific body of knowledge but also promotes cultural understanding and respect.



Keep in mind that bias is a force to be mitigated, not a phenomenon that can be eliminated altogether, and the subjectivities of each person are what make our world so complex and interesting. As things are continuously changing and adapting, research knowledge is also continuously being updated as we further develop our understanding of the world around us.



Bias in research

Joanna Smith, School of Human and Health Sciences, University of Huddersfield, Huddersfield, UK
Helen Noble, School of Nursing and Midwifery, Queen's University Belfast, Belfast, UK

https://doi.org/10.1136/eb-2014-101946

The aim of this article is to outline types of ‘bias’ across research designs, and to consider strategies to minimise bias. Evidence-based nursing, defined as the “process by which evidence, nursing theory, and clinical expertise are critically evaluated and considered, in conjunction with patient involvement, to provide the delivery of optimum nursing care,” 1 is central to the continued development of the nursing profession. Implementing evidence into practice requires nurses to critically evaluate research, in particular assessing the rigour with which methods were undertaken and factors that may have biased findings.

What is bias in relation to research and why is understanding bias important?

Although different study designs have specific methodological challenges and constraints, bias can occur at each stage of the research process (table 1). In quantitative research, validity and reliability are assessed using statistical tests that estimate the size of error in samples and by calculating the significance of findings (typically p values or CIs). The tests and measures used to establish the validity and reliability of quantitative research cannot be applied to qualitative research. However, in the broadest context, these terms are applicable, with validity referring to the integrity and application of the methods and the precision with which the findings accurately reflect the data, and reliability referring to consistency within the analytical processes. 4

Table 1: Types of research bias (table not reproduced).
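As a rough illustration of the statistical summaries mentioned above, the following Python snippet (our own sketch, with made-up numbers, not taken from the article) computes a p value and an approximate 95% CI for a difference in group means.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
control = rng.normal(120, 15, size=80)    # e.g., blood pressure readings
treatment = rng.normal(115, 15, size=80)

# Significance of the difference in means (p value)...
t_stat, p_value = stats.ttest_ind(treatment, control)

# ...and an approximate 95% CI for that difference.
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"p = {p_value:.3f}, 95% CI for the difference: "
      f"({ci[0]:.1f}, {ci[1]:.1f})")
```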

How is bias minimised when undertaking research?

Bias exists in all study designs, and although researchers should attempt to minimise bias, outlining potential sources of bias enables greater critical evaluation of the research findings and conclusions. Researchers bring to each study their experiences, ideas, prejudices and personal philosophies, which if accounted for in advance of the study, enhance the transparency of possible research bias. Clearly articulating the rationale for and choosing an appropriate research design to meet the study aims can reduce common pitfalls in relation to bias. Ethics committees have an important role in considering whether the research design and methodological approaches are biased, and suitable to address the problem being explored. Feedback from peers, funding bodies and ethics committees is an essential part of designing research studies, and often provides valuable practical guidance in developing robust research.

In quantitative studies, selection bias is often reduced by the random selection of participants, and in the case of clinical trials randomisation of participants into comparison groups. However, not accounting for participants who withdraw from the study or are lost to follow-up can result in sample bias or change the characteristics of participants in comparison groups. 7 In qualitative research, purposeful sampling has advantages when compared with convenience sampling in that bias is reduced because the sample is constantly refined to meet the study aims. Premature closure of the selection of participants before analysis is complete can threaten the validity of a qualitative study. This can be overcome by continuing to recruit new participants into the study during data analysis until no new information emerges, known as data saturation. 8

In quantitative studies, a well-designed research protocol explicitly outlining data collection and analysis can assist in reducing bias. Feasibility studies are often undertaken to refine protocols and procedures. Bias can be reduced by maximising follow-up and, where appropriate, in randomised controlled trials, analysis should be based on the intention-to-treat principle: a strategy that assesses clinical effectiveness because not everyone complies with treatment and the treatment people receive may be changed according to how they respond. Qualitative research has been criticised for lacking transparency in relation to the analytical processes employed. 4 Qualitative researchers must demonstrate rigour, associated with openness, relevance to practice, and congruence of the methodological approach. Although other researchers may interpret the data differently, appreciating and understanding how the themes were developed is an essential part of demonstrating the robustness of the findings. Strategies for reducing bias include respondent validation, constant comparison across participant accounts, representing deviant cases and outliers, prolonged involvement or persistent observation of participants, independent analysis of the data by other researchers, and triangulation. 4
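To make the intention-to-treat principle concrete, here is a minimal Python sketch; the trial data and the column names (assigned_arm, received, outcome) are invented for illustration, not drawn from the article. The point is simply that participants are analysed by the arm they were randomised to, regardless of the treatment they actually received.

```python
# Minimal intention-to-treat sketch; data and column names are invented.
import pandas as pd

trial = pd.DataFrame({
    "assigned_arm": ["treatment", "treatment", "control", "control"],
    "received":     ["treatment", "control",   "control", "control"],  # one participant switched
    "outcome":      [12.1, 9.4, 8.7, 9.0],
})

# Intention to treat: group by the arm participants were randomised to,
# not by the treatment they actually received.
itt_means = trial.groupby("assigned_arm")["outcome"].mean()
print(itt_means)
```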

In summary, minimising bias is a key consideration when designing and undertaking research. Researchers have an ethical duty to outline the limitations of studies and account for potential sources of bias. This will enable health professionals and policymakers to evaluate and scrutinise study findings, and consider these when applying findings to practice or policy.

  • Wakefield AJ, Murch SH, Anthony A, et al. Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet 1998;351:637–41. (Retracted.)
  • The Lancet. Retraction—ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet 2010;375:445.
  • Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991;337:867–72.
  • Petticrew M, Egan M, Thomson H, et al. Publication bias in qualitative research: what becomes of qualitative research presented at conferences? J Epidemiol Community Health 2008;62:552–4.
  • Francis JJ, Johnston M, Robertson C, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health 2010;25:1229–45.

Competing interests None.



Confronting Bias

Bias in Research


Additional Reading

  • Smith J, Noble H. Bias in research. Evidence-Based Nursing 2014;17:100–101.
  • Simundić, A. (2013). Bias in research. Biochemia Medica, 23(1), 12. doi:10.11613/BM.2013.003
  • "Big Pharma Entanglement with Biomedical Science," in James, Jack. The Health of Populations: Beyond Medicine. Elsevier Science & Technology, 2015.


  • "How to Limit Bias in Experimental Research," in Experimental Research Methods in Orthopedics and Trauma, edited by Hamish Simpson and Peter Augat. Thieme Medical Publishers, 2015.
  • Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63(1), 539–569. doi:10.1146/annurev-psych-120710-100452

Words to Know

Expectancy Effect -- A particular type of experimenter effect in which the expectations of the experimenter as to the likely outcome of the experiment act as a self-fulfilling prophecy, biasing the results in the direction of the expectation.

Experimenter Effect -- A biasing effect on the results of an experiment caused by expectations or preconceptions on the part of the experimenter. Also called experimenter bias.

Response Bias -- In psychometrics, any systematic tendency of a respondent to choose a particular response category in a multiple-choice questionnaire for an extraneous reason, unrelated to the variable that the response is supposed to indicate but related to the content or meaning of the question.

Definitions from Colman, A. (2015). A Dictionary of Psychology. Oxford University Press.

Understanding research bias is important for several reasons: first, bias exists in all research, across research designs, and is difficult to eliminate; second, bias can occur at each stage of the research process; third, bias impacts the validity and reliability of study findings, and misinterpretation of data can have important consequences for practice. The controversial study that suggested a link between the measles-mumps-rubella vaccine and autism in children 2 resulted in a rare retraction of the published study because of media reports that highlighted significant bias in the research process. 3 Bias occurred on several levels: the process of selecting participants was misrepresented; the sample size was too small to infer any firm conclusion from the data analysis; and the results were overstated, suggesting caution against widespread vaccination and an urgent need for further research. However, in the time between the original publication and later research refuting the original findings, uptake of the measles-mumps-rubella vaccine in Britain declined, resulting in a 25-fold increase in measles in the 10-year period following the original publication.

Design Bias

Researchers may engage in poorly designed research, which could increase the likelihood of bias. Poor research design may occur when the research questions and aims are not aligned with the research methods, or when researchers choose a biased research question.

[Figure: the research design cycle, comprising research problem, methods, data gathering, data analysis, and report writing]

Selection or Participant Bias

Research that relies on recruiting or selecting participants may result in selection or participant bias in a number of ways. For instance, participant recruitment might unintentionally target or exclude a specific population, or researchers may not appropriately account for participant withdrawal.


Analysis Bias

Researchers may unknowingly bias their results during data analysis by looking for or focusing on results that support their hypotheses or personal beliefs.


Publication Bias

Not all research articles are published. Publication or reporting bias occurs when publishers are more likely to publish articles showing positive results or statistically significant findings. Research showing negative results may be equally important to the contribution of knowledge in the field but may be less likely to be published.

Conflict of Interest

Bias in research may occur when researchers have a conflict of interest, a personal interest that conflicts with their professional obligations. Researchers should always be transparent in disclosing how their work was funded and what conflicts of interest, if any, exist.

This content was inspired and informed by the following resources: Smith J, Noble H. Bias in research. Evidence-Based Nursing 2014;17:100–101; Research Bias; Academic Integrity: Avoiding Plagiarism and Understanding Research Ethics: Research Ethics (University of Pittsburgh Libraries).


Grad Coach

Research Bias 101: What You Need To Know

By: Derek Jansen (MBA) | Expert Reviewed By: Dr Eunice Rautenbach | September 2022

If you’re new to academic research, research bias (also sometimes called researcher bias) is one of the many things you need to understand to avoid compromising your study. If you’re not careful, research bias can ruin the credibility of your study. 

In this post, we’ll unpack the thorny topic of research bias. We’ll explain what it is , look at some common types of research bias and share some tips to help you minimise the potential sources of bias in your research.

Overview: Research Bias 101

  • What is research bias (or researcher bias)?
  • Bias #1 – Selection bias
  • Bias #2 – Analysis bias
  • Bias #3 – Procedural (admin) bias

So, what is research bias?

Well, simply put, research bias is when the researcher – that’s you – intentionally or unintentionally skews the process of a systematic inquiry , which then of course skews the outcomes of the study . In other words, research bias is what happens when you affect the results of your research by influencing how you arrive at them.

For example, if you planned to research the effects of remote working arrangements across all levels of an organisation, but your sample consisted mostly of management-level respondents , you’d run into a form of research bias. In this case, excluding input from lower-level staff (in other words, not getting input from all levels of staff) means that the results of the study would be ‘biased’ in favour of a certain perspective – that of management.

Of course, if your research aims and research questions were only interested in the perspectives of managers, this sampling approach wouldn’t be a problem – but that’s not the case here, as there’s a misalignment between the research aims and the sample .

Now, it’s important to remember that research bias isn’t always deliberate or intended. Quite often, it’s just the result of a poorly designed study, or practical challenges in terms of getting a well-rounded, suitable sample. While perfect objectivity is the ideal, some level of bias is generally unavoidable when you’re undertaking a study. That said, as a savvy researcher, it’s your job to reduce potential sources of research bias as much as possible.

To minimize potential bias, you first need to know what to look for . So, next up, we’ll unpack three common types of research bias we see at Grad Coach when reviewing students’ projects . These include selection bias , analysis bias , and procedural bias . Keep in mind that there are many different forms of bias that can creep into your research, so don’t take this as a comprehensive list – it’s just a useful starting point.


Bias #1 – Selection Bias

First up, we have selection bias . The example we looked at earlier (about only surveying management as opposed to all levels of employees) is a prime example of this type of research bias. In other words, selection bias occurs when your study’s design automatically excludes a relevant group from the research process and, therefore, negatively impacts the quality of the results.

With selection bias, the results of your study will be biased towards the group that it includes or favours, meaning that you’re likely to arrive at prejudiced results . For example, research into government policies that only includes participants who voted for a specific party is going to produce skewed results, as the views of those who voted for other parties will be excluded.

Selection bias commonly occurs in quantitative research, as the sampling strategy adopted can have a major impact on the statistical results. That said, selection bias does of course also come up in qualitative research, as there's still plenty of room for skewed samples. So, it's important to pay close attention to the makeup of your sample and make sure that you adopt a sampling strategy that aligns with your research aims. Of course, you'll seldom achieve a perfect sample, and that's okay. But you need to be aware of how your sample may be skewed and factor this into your thinking when you analyse the resultant data.


Bias #2 – Analysis Bias

Next up, we have analysis bias . Analysis bias occurs when the analysis itself emphasises or discounts certain data points , so as to favour a particular result (often the researcher’s own expected result or hypothesis). In other words, analysis bias happens when you prioritise the presentation of data that supports a certain idea or hypothesis , rather than presenting all the data indiscriminately .

For example, if your study was looking into consumer perceptions of a specific product, you might present more analysis of data that reflects positive sentiment toward the product, and give less real estate to the analysis that reflects negative sentiment. In other words, you’d cherry-pick the data that suits your desired outcomes and as a result, you’d create a bias in terms of the information conveyed by the study.

Although this kind of bias is common in quantitative research, it can just as easily occur in qualitative studies, given the amount of interpretive power the researcher has. This may not be intentional or even noticed by the researcher, given the inherent subjectivity in qualitative research. As humans, we naturally search for and interpret information in a way that confirms or supports our prior beliefs or values (in psychology, this is called “confirmation bias”). So, don’t make the mistake of thinking that analysis bias is always intentional and you don’t need to worry about it because you’re an honest researcher – it can creep up on anyone .

To reduce the risk of analysis bias, a good starting point is to determine your data analysis strategy in as much detail as possible, before you collect your data . In other words, decide, in advance, how you’ll prepare the data, which analysis method you’ll use, and be aware of how different analysis methods can favour different types of data. Also, take the time to reflect on your own pre-conceived notions and expectations regarding the analysis outcomes (in other words, what do you expect to find in the data), so that you’re fully aware of the potential influence you may have on the analysis – and therefore, hopefully, can minimize it.


Bias #3 – Procedural Bias

Last but definitely not least, we have procedural bias , which is also sometimes referred to as administration bias . Procedural bias is easy to overlook, so it’s important to understand what it is and how to avoid it. This type of bias occurs when the administration of the study, especially the data collection aspect, has an impact on either who responds or how they respond.

A practical example of procedural bias would be when participants in a study are required to provide information under some form of constraint. For example, participants might be given insufficient time to complete a survey, resulting in incomplete or hastily filled-out forms that don't necessarily reflect how they really feel. This can happen really easily if, for example, you innocently ask your participants to fill out a survey during their lunch break.

Another form of procedural bias can happen when you improperly incentivise participation in a study. For example, offering a reward for completing a survey or interview might incline participants to provide false or inaccurate information just to get through the process as fast as possible and collect their reward. It could also potentially attract a particular type of respondent (a freebie seeker), resulting in a skewed sample that doesn’t really reflect your demographic of interest.

The format of your data collection method can also potentially contribute to procedural bias. If, for example, you decide to host your survey or interviews online, this could unintentionally exclude people who are not particularly tech-savvy, don’t have a suitable device or just don’t have a reliable internet connection. On the flip side, some people might find in-person interviews a bit intimidating (compared to online ones, at least), or they might find the physical environment in which they’re interviewed to be uncomfortable or awkward (maybe the boss is peering into the meeting room, for example). Either way, these factors all result in less useful data.

Although procedural bias is more common in qualitative research, it can come up in any form of fieldwork where you’re actively collecting data from study participants. So, it’s important to consider how your data is being collected and how this might impact respondents. Simply put, you need to take the respondent’s viewpoint and think about the challenges they might face, no matter how small or trivial these might seem. So, it’s always a good idea to have an informal discussion with a handful of potential respondents before you start collecting data and ask for their input regarding your proposed plan upfront.


Let’s Recap

Ok, so let’s do a quick recap. Research bias refers to any instance where the researcher, or the research design , negatively influences the quality of a study’s results, whether intentionally or not.

The three common types of research bias we looked at are:

  • Selection bias – where a skewed sample leads to skewed results
  • Analysis bias – where the analysis method and/or approach leads to biased results – and,
  • Procedural bias – where the administration of the study, especially the data collection aspect, has an impact on who responds and how they respond.

As I mentioned, there are many other forms of research bias, but we can only cover a handful here. So, be sure to familiarise yourself with as many potential sources of bias as possible to minimise the risk of research bias in your study.




8 Types of Research Bias and How to Avoid Them?

Appinio Research · 18.10.2023 · 39min read


Curious about how to ensure the integrity of your research? Ever wondered how research bias can impact your findings? How might it affect your data-driven decisions?

Join us on a journey through the intricate landscape of unbiased research as we delve deep into strategies and real-world examples to guide you toward more reliable insights.

What is Bias in Research?

Research bias, often simply referred to as bias, is a systematic error or deviation from the true results or inferences in research. It occurs when the design, conduct, or interpretation of a study systematically skews the findings in a particular direction, leading to inaccurate or misleading results. Bias can manifest in various forms and at different stages of the research process, and it can compromise the validity and reliability of research outcomes.

Key Aspects of Research Bias

  • Systematic Error: Bias is not a random occurrence but a systematic error that consistently influences research outcomes.
  • Influence on Results: Bias can lead to overestimating or underestimating effects, associations, or relationships studied.
  • Unintentional or Intentional: Bias can be unintentional, stemming from flaws in study design, data collection, or analysis. In some cases, it can also be introduced intentionally, leading to deliberate distortion of results.
  • Impact on Decision-Making: Research bias can have significant consequences, affecting decisions in fields ranging from healthcare and policy to marketing and academia.

Understanding and recognizing the various types and sources of bias is crucial for researchers to minimize its impact and produce credible, objective, and actionable research findings.

Importance of Avoiding Research Bias

Avoiding research bias is paramount for several compelling reasons, as it directly affects the quality and integrity of research outcomes. Here's why researchers and decision-makers should prioritize bias mitigation:

  • Credibility and Trustworthiness: Research bias undermines the credibility and trustworthiness of research findings. Biased results can erode public trust, damage an organization's reputation, and hinder the acceptance of research in the scientific community.
  • Informed Decision-Making: Research serves as the foundation for informed decision-making in various fields. Bias can lead to erroneous conclusions, potentially leading to misguided policies, ineffective treatments, or poor business strategies.
  • Resource Allocation: Bias can result in the misallocation of valuable resources. When resources are allocated based on biased research, they may not effectively address the intended issues or challenges.
  • Ethical Considerations: Introducing bias, whether intentionally or unintentionally, raises ethical concerns in research. Ethical research practices demand objectivity, transparency, and fairness in the pursuit of knowledge.
  • Advancement of Knowledge: Research contributes to the advancement of knowledge and innovation. Bias hinders scientific progress by introducing errors and distorting the true nature of phenomena, hindering the development of accurate theories and solutions.
  • Public Health and Safety: In fields like healthcare, bias can have life-and-death implications. Biased medical research can lead to the adoption of less effective or potentially harmful treatments, putting patient health and safety at risk.
  • Economic Impact: In business and economics, biased research can result in poor investment decisions, market strategies, and financial losses. Avoiding bias is essential for achieving sound economic outcomes.

The importance of avoiding research bias cannot be overstated. Recognizing bias, implementing strategies to mitigate it, and promoting transparent and unbiased research practices are essential steps to ensure that research contributes meaningfully to advancing knowledge, informed decision-making, and the well-being of individuals and society as a whole.

Common Types of Research Bias

Research bias can manifest in various forms, each with unique characteristics and implications. Understanding these common types of research bias is essential for recognizing and mitigating their effects on your research.

Selection Bias

Selection bias occurs when the sample used in a study does not represent the target population, leading to distorted results. It can happen when certain groups are systematically more or less likely to be included in the study, introducing bias.

Causes of Selection Bias:

  • Volunteer Bias: Participants self-select to participate in a study, and their motivations or characteristics differ from those who do not volunteer.
  • Convenience Sampling: Researchers choose participants who are readily available but may not be representative of the broader population.
  • Non-Response Bias: Occurs when a significant portion of selected participants does not respond or drops out during the study, potentially due to differing characteristics.

Mitigation Strategies:

  • Random Sampling: Select participants randomly from the target population to ensure equal representation.
  • Stratified Sampling: Divide the population into subgroups and sample proportionally from each subgroup.
  • Use of Control Groups: Compare the study group to a control group to help account for potential selection bias.
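As a minimal illustration of the random and stratified sampling strategies listed above, the following Python sketch draws both kinds of sample from an invented population frame; the strata and sample sizes are purely illustrative.

```python
# Minimal sketch of simple vs stratified random sampling; the population
# frame and age-group strata are invented for illustration.
import random

population = [{"id": i, "age_group": "18-34" if i % 3 else "65+"} for i in range(300)]

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, k=30)

# Stratified sampling: sample proportionally within each stratum so that
# smaller groups are not crowded out by chance.
strata = {}
for person in population:
    strata.setdefault(person["age_group"], []).append(person)

stratified_sample = []
for members in strata.values():
    k = max(1, round(30 * len(members) / len(population)))
    stratified_sample.extend(random.sample(members, k=k))
```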

Sampling Bias

Sampling bias arises when the individuals or items in your sample are not chosen randomly or are not representative of the broader population. It can lead to inaccurate generalizations and skewed conclusions.

Causes of Sampling Bias:

  • Sampling Frame Issues: When the list or database used to select the sample is incomplete or outdated.
  • Self-Selection: Participants choose to be part of the sample, introducing bias if their motivations differ from non-participants.
  • Undercoverage: When certain groups are underrepresented in the sample due to difficulties in reaching or including them.
Mitigation Strategies:

  • Random Sampling: Employ random selection methods to ensure every individual or item has an equal chance of being included.
  • Stratified Sampling: Divide the population into homogeneous subgroups and sample proportionally from each subgroup.
  • Quota Sampling: Set quotas for specific demographics to ensure representation.

Measurement Bias

Measurement bias occurs when the methods used to collect data are inaccurate or systematically flawed, leading to incorrect conclusions. This bias can affect both quantitative and qualitative data.

Causes of Measurement Bias:

  • Instrument Flaws: When the measurement tools used are inherently unreliable or imprecise.
  • Data Collection Errors: Mistakes made during data collection, such as misinterpretation of responses or inconsistent recording.
  • Response Bias: Participants may provide socially desirable responses, leading to measurement errors. Various further biases can arise from the structure of the questionnaire itself and psychologically influence participants' answers.
Mitigation Strategies:

  • Use Reliable Instruments: Select measurement tools that have been validated and are known for their accuracy.
  • Pilot Testing: Test data collection procedures to identify and address potential sources of measurement bias.
  • Blinding: Keep researchers unaware of specific measurements to minimize subjectivity.

Reporting Bias

Reporting bias involves selectively reporting results that support a particular hypothesis while ignoring or downplaying contrary findings. It can lead to a skewed representation of the evidence.

Causes of Reporting Bias:

  • Publication Pressure: Researchers may prioritize publishing positive or significant results, leaving negative or inconclusive findings unreported.
  • Editorial Bias: Journals may preferentially accept studies with significant results, discouraging the publication of less exciting findings.
  • Confirmation Bias: Researchers may unintentionally focus on, emphasize, or interpret data that aligns with their hypotheses.
Mitigation Strategies:

  • Transparent Reporting: Share all research findings, whether they support your hypotheses or not.
  • Pre-Registration: Register your research design and hypotheses before data collection, reducing the temptation to selectively report.
  • Peer Review: Engage in peer review to ensure a balanced and comprehensive presentation of your research.

Confirmation Bias

Confirmation bias is the tendency to seek out or interpret information in a way that confirms pre-existing beliefs or expectations. It can cloud objectivity and lead to the misinterpretation of data.

Causes of Confirmation Bias:

  • Cognitive Biases: Researchers may unconsciously filter or interpret data in a way that aligns with their preconceptions.
  • Selective Information Search: Researchers might seek out information that supports their hypotheses while ignoring contradictory evidence.
  • Interpretation Bias: Even when presented with neutral data, researchers may interpret it to fit their expectations.
Mitigation Strategies:

  • Blinding: Keep researchers unaware of the study's hypotheses to prevent bias in data interpretation.
  • Objectivity Training: Train researchers to approach research with open minds and to recognize and challenge their biases.
  • Diverse Perspectives: Collaborate with colleagues with different viewpoints to reduce the impact of confirmation bias.

Publication Bias

Publication bias occurs when studies with positive or significant results are more likely to be published, skewing the overall literature. Unpublished studies with negative or null findings remain hidden.

Causes of Publication Bias:

  • Journal Preferences: Journals may favor publishing studies with significant results, leading to the underrepresentation of negative or null findings.
  • Researcher Publication Bias: Researchers may prioritize submitting and resubmitting studies with positive results for publication.
Mitigation Strategies:

  • Publication of Negative Results: Encourage publishing studies with negative or null findings.
  • Meta-analysis: Combine results from multiple studies to assess the overall effect, considering both published and unpublished studies.
  • Journal Policies: Support journals that promote balanced publication practices.

Recall Bias

Recall bias arises when participants in a study inaccurately remember or report past events or experiences. It can compromise the accuracy of historical data.

Causes of Recall Bias:

  • Memory Decay: Memories naturally fade over time, making it challenging to recall distant events accurately.
  • Social Desirability Bias: Participants may provide responses they believe are socially acceptable or favorable.
  • Leading Questions: The phrasing of questions can influence participants' recollections.
Mitigation Strategies:

  • Use of Objective Data Sources: Whenever possible, rely on documented records, medical charts, or other objective sources of information.
  • Minimize Leading Questions: Craft questions carefully to avoid suggesting specific responses.
  • Consider Timing: Be aware of how the timing of data collection may affect participants' recall.

Observer Bias

Observer bias occurs when researchers' expectations or preconceived notions influence their observations and interpretations of data. It can introduce subjectivity into the research process.

Causes of Observer Bias:

  • Expectation Effects: Researchers may see what they expect or want to see in their observations.
  • Interpretation Biases: Researchers may interpret ambiguous data in a way that confirms their hypotheses.
  • Confirmation Bias: Researchers may selectively focus on evidence that supports their expectations.
Mitigation Strategies:

  • Blinding: Keep researchers unaware of the study's hypotheses to minimize their influence on observations.
  • Inter-rater Reliability: Ensure agreement among multiple observers by using consistent criteria for data collection.
  • Training and Awareness: Train researchers to recognize and mitigate their biases, promoting more objective observations.

Understanding and identifying these common types of research bias is the first step toward conducting rigorous and reliable research. By implementing effective mitigation strategies and fostering a culture of transparency and objectivity, you can enhance the credibility and impact of your research. It's not just about avoiding pitfalls but also about ensuring that your findings stand up to scrutiny and contribute to the broader body of knowledge in your field.

Remember, research is a continuous journey of discovery, and the quest for unbiased, evidence-based insights is at its core. Embracing these principles will not only strengthen your research but also empower you to make more informed decisions, drive positive change, and ultimately, advance both your individual goals and the greater collective knowledge of society.

What Causes Research Bias?

Research bias can stem from various sources, and gaining a deeper understanding of these causes is vital for effectively addressing and preventing bias in your research endeavors. Let's explore these causes in detail:

Inherent Biases

Inherent biases are those that are an intrinsic part of the research process itself and can be challenging to eliminate entirely. They often result from limitations or constraints in a study's design, data collection, or analysis.

Key Characteristics:

  • Inherent to Study Design: These biases are ingrained in the very design or structure of a study.
  • Difficult to Eliminate: Since they are innate, completely eradicating them may not be feasible.
  • Potential to Skew Findings: Inherent biases can lead to skewed or inaccurate results.

Examples of Inherent Biases:

  • Sampling Bias: Due to inherent limitations in data collection methods.
  • Selection Bias: As a result of constraints in participant recruitment.
  • Time-Order Bias: Arising from changes over time, which may be challenging to control.

Systematic Biases

Systematic biases result from consistent errors or flaws in the research process, which can lead to predictable patterns of deviation from the truth. Unlike inherent biases, systematic biases can be addressed with deliberate efforts.

Key Characteristics:

  • Consistent Patterns: These biases produce recurring errors or distortions.
  • Identifiable Sources: The sources of systematic biases can often be pinpointed and addressed.
  • Amenable to Mitigation: Conscious adjustments can reduce or eliminate systematic biases.

Examples of Systematic Biases:

  • Measurement Bias: When measurement tools are systematically flawed, leading to inaccuracies.
  • Reporting Bias: Stemming from the selective reporting of results to favor certain outcomes.
  • Confirmation Bias: Arising from researchers' preconceived notions or hypotheses.

Non-Systematic Biases

Non-systematic biases are random errors that can occur in the research process but are neither consistent nor predictable. They introduce variability and can affect individual data points but may not systematically impact the overall study.

Key Characteristics:

  • Random Occurrence: Non-systematic biases are not tied to specific patterns or sources.
  • Unpredictable: They may affect one data point but not another unexpectedly.
  • Potential for Random Variation: Non-systematic biases introduce noise into data.

Examples of Non-Systematic Biases:

  • Sampling Error: Natural fluctuations in data points due to random chance.
  • Non-Response Bias: When non-responders differ from responders randomly.

Cognitive Biases

Cognitive biases are biases rooted in human psychology and decision-making processes. They can influence how researchers perceive, interpret, and make sense of data, often unconsciously.

Key Characteristics:

  • Psychological Origin: Cognitive biases originate from the way our brains process information.
  • Subjective Interpretation: They affect how researchers subjectively interpret data.
  • Affect Decision-Making: Cognitive biases can influence researchers' decisions throughout the research process.

Examples of Cognitive Biases:

  • Confirmation Bias: Seeking information that confirms pre-existing beliefs.
  • Anchoring Bias: Relying too heavily on the first piece of information encountered.
  • Hindsight Bias: Seeing events as having been predictable after they've occurred.

Understanding these causes of research bias is crucial for researchers at all stages of their work. It enables you to identify potential sources of bias, take proactive measures to minimize bias and foster a research environment that prioritizes objectivity and rigor. By acknowledging the inherent biases in research, recognizing systematic and non-systematic biases, and being aware of the cognitive biases that can affect decision-making, you can conduct more reliable and credible research.

How to Detect Research Bias?

Detecting research bias is a crucial step in maintaining the integrity of your study and ensuring the reliability of your findings. Let's explore some effective methods and techniques for identifying bias in your research.

Data Analysis Techniques

Utilizing appropriate data analysis techniques is crucial in detecting and addressing research bias. Here are some strategies to consider:

  • Statistical Analysis: Employ rigorous statistical methods to examine the data. Look for anomalies, inconsistencies, or patterns that may indicate bias, such as skewed distributions or unexpected correlations.
  • Sensitivity Analysis: Conduct sensitivity analyses by varying key parameters or assumptions in your analysis. This helps assess the robustness of your results and identifies whether bias may be influencing your findings.
  • Subgroup Analysis: If your study involves diverse groups or populations, perform subgroup analyses to explore whether bias may be affecting specific subsets differently.
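As a minimal sketch of the sensitivity analysis described above, the snippet below re-runs a simple two-group comparison after varying one assumption (here, excluding an extreme observation); the data are invented and the t-test is just one possible choice of method.

```python
# Minimal sensitivity-analysis sketch with invented data: run a test,
# then repeat it after trimming a suspicious extreme value.
from scipy import stats

group_a = [4.1, 4.3, 3.9, 4.5, 4.2, 9.8]   # 9.8 looks like an outlier
group_b = [3.6, 3.8, 3.5, 3.9, 3.7, 3.6]

t_full = stats.ttest_ind(group_a, group_b)

trimmed_a = [x for x in group_a if x < 8]  # vary the inclusion assumption
t_trimmed = stats.ttest_ind(trimmed_a, group_b)

# If the two p values lead to different conclusions, the result is
# sensitive to the outlier and potential bias should be investigated.
print(t_full.pvalue, t_trimmed.pvalue)
```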

Peer Review

Peer review is a fundamental process for evaluating research and identifying potential bias. Here's how it can assist in detecting bias:

  • External Evaluation: Involve independent experts in your field who can objectively assess your research methods, data, and interpretations. They may identify overlooked sources of bias or offer suggestions for improvement.
  • Bias Assessment: Ask peer reviewers specifically to scrutinize your study for any signs of bias. Encourage them to assess the transparency of your methods and reporting.
  • Replicability: Peer reviewers can also assess the replicability of your study, ensuring that others can reproduce your findings independently.

Cross-Validation

Cross-validation is a technique that involves comparing the results of your research with external or independent sources to identify potential bias:

  • External Data Sources: Compare your findings with data from external sources, such as government statistics, industry reports, or previous research. Significant disparities may signal bias.
  • Expert Consultation: Seek feedback from experts who are not directly involved in your research. Their impartial perspectives can help identify any biases in your study design, data collection, or interpretation.
  • Historical Comparisons: If applicable, compare your current research with historical data to assess whether trends or patterns have changed over time, which could indicate bias.

By employing these methods and techniques, you can proactively detect and address research bias, ultimately enhancing the credibility and reliability of your research findings.

How to Avoid Research Bias?

Effectively avoiding research bias is a fundamental aspect of conducting high-quality research. Implementing specific strategies can help researchers minimize the impact of bias and enhance the validity and reliability of their findings. Let's delve into these strategies in detail:

1. Randomization

Randomization is a method used to allocate participants or data points to different groups or conditions in an entirely random way. It helps ensure that each participant has an equal chance of being assigned to any group, reducing the potential for bias in group assignments.

Key Aspects:

  • Random Assignment: Randomly assigning participants to experimental or control groups.
  • Equal Opportunity: Ensuring every participant has an equal likelihood of being in any group.
  • Minimizing Bias: Reduces the risk of selection bias by distributing potential biases equally across groups.

Benefits:

  • Balanced Groups: Randomization creates comparable groups in terms of potential confounding variables.
  • Minimizes Selection Bias: Eliminates researcher or participant biases in group allocation.
  • Enhanced Causality: Strengthens the ability to make causal inferences from research findings.

Implementation Strategies:

  • Simple Randomization: Assign participants or data points to groups using a random number generator or drawing lots.
  • Stratified Randomization: Divide the population into subgroups based on relevant characteristics (e.g., age, gender) and then randomly assign within those subgroups.
  • Blocked Randomization: Create blocks of participants, ensuring each block contains an equal number from each group.

In a clinical trial testing a new drug, researchers use simple randomization to allocate participants into two groups: one receiving the new drug and the other receiving a placebo. This helps ensure that patient characteristics, such as age or gender, do not systematically favor one group over another, minimizing bias in the study's results.
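A minimal Python sketch of the simple randomization used in the trial example above; the participant IDs and group sizes are invented.

```python
# Minimal simple-randomization sketch; participant IDs are invented.
import random

participants = [f"P{i:03d}" for i in range(1, 101)]
random.shuffle(participants)

# After shuffling, the first half goes to the drug arm and the second
# half to placebo: every participant had an equal chance of either.
drug_group = participants[:50]
placebo_group = participants[50:]
```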

2. Blinding and Double-Blinding

Blinding involves keeping either the participants or the researchers (single-blinding) or both (double-blinding) unaware of certain aspects of the study, such as group assignments or treatment conditions. This prevents the introduction of bias due to expectations or knowledge of the study's hypotheses.

Key Aspects:

  • Single-Blinding: Either participants or researchers are unaware of crucial information.
  • Double-Blinding: Both participants and researchers are unaware of crucial information.
  • Placebo Control: Often used in pharmaceutical research to ensure blinding.

Benefits:

  • Minimizes Observer Bias: Researchers' expectations do not influence data collection or interpretation.
  • Prevents Participant Bias: Participants' awareness of their group assignment does not affect their behavior or responses.
  • Enhances Study Validity: Blinding reduces the risk of bias influencing study outcomes.

Implementation Strategies:

  • Use of Placebos: In clinical trials, a placebo group is often included to maintain blinding.
  • Blinding Procedures: Establish protocols to ensure that those who need to be blinded are kept unaware of relevant information.
  • Blinding Verification: Conduct assessments to confirm that blinding has been maintained throughout the study.

In a double-blind drug trial, neither the participants nor the researchers know whether they are receiving or administering the experimental drug or a placebo. This prevents biases in reporting and evaluating the drug's effects, ensuring that results are objective and reliable.
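One common way to operationalise this, sketched minimally below under invented names, is to label treatment kits with neutral codes and have an independent party hold the code key until the analysis is locked; neither participants nor researchers ever see the assignments.

```python
# Minimal double-blinding sketch; kit codes and group sizes are invented.
import random

arms = ["drug"] * 50 + ["placebo"] * 50
random.shuffle(arms)

# Researchers and participants see only the kit code; this key would be
# held by an independent statistician until the analysis is complete.
blinding_key = {f"KIT-{i:03d}": arm for i, arm in enumerate(arms, start=1)}
```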

3. Standardization of Procedures

Standardization involves creating and following consistent, well-defined procedures throughout a study. This ensures that data collection, measurements, and interventions are carried out uniformly, minimizing potential sources of bias.

Key Aspects:

  • Detailed Protocols: Developing clear and precise protocols for data collection or interventions.
  • Consistency: Ensuring that all research personnel adhere to the established procedures.
  • Reducing Variability: Minimizing variation in how processes are carried out.

Benefits:

  • Increased Reliability: Standardized procedures lead to more consistent and reliable data.
  • Minimized Measurement Bias: Reduces the likelihood of measurement errors or inconsistencies.
  • Easier Replication: Standardization facilitates replication by providing a clear roadmap for future studies.

Implementation Strategies:

  • Protocol Development: Create detailed step-by-step protocols for data collection, interventions, or experiments.
  • Training: Train all research personnel thoroughly on standardized procedures.
  • Quality Control: Implement quality control measures to monitor and ensure adherence to protocols.

In a psychological study, researchers standardize the procedure for administering a cognitive test to all participants. They use the same test materials, instructions, and environmental conditions for every participant to ensure that the data collected are not influenced by variations in how the test is administered.

4. Sample Size Considerations

Sample size considerations involve determining the appropriate number of participants or data points needed for a study. Inadequate sample sizes can lead to underpowered studies that fail to detect meaningful effects, while excessively large samples can be resource-intensive without adding substantial value.

Key Aspects:

  • Power Analysis: Calculating the required sample size based on expected effect sizes and desired statistical power.
  • Effect Size Considerations: Ensuring the sample size is sufficient to detect the effect size of interest.
  • Resource Constraints: Balancing the need for a larger sample with available resources.

Benefits:

  • Improved Statistical Validity: Adequate sample sizes increase the likelihood of detecting actual effects.
  • Generalizability: Larger samples enhance the generalizability of study findings to the target population.
  • Resource Efficiency: Avoiding extensive samples conserves resources.

Implementation Strategies:

  • Power Analysis Software: Use statistical software to perform power analyses.
  • Pilot Studies: Conduct pilot studies to estimate effect sizes and inform sample size calculations.
  • Consider Practical Constraints: Factor in time, budget, and other practical limitations when determining sample sizes.

In a medical research study evaluating the efficacy of a new treatment, researchers conduct a power analysis to determine the required sample size. This analysis considers the expected effect size, desired level of statistical power, and available resources to ensure that the study can reliably detect the treatment's effects.
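A minimal sketch of such an a priori power analysis using the statsmodels Python library; the effect size, alpha, and power values below are illustrative, not taken from the example.

```python
# Minimal power-analysis sketch for a two-sample t-test design;
# all parameter values are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # expected standardized difference (Cohen's d)
    alpha=0.05,               # two-sided significance level
    power=0.80,               # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"Required sample size per group: about {n_per_group:.0f}")  # ~64
```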

5. Replication

Replication involves conducting the same study or experiment multiple times to assess the consistency and reliability of the findings. Replication is a critical step in research, as it helps validate the results and ensures that they are not due to chance or bias.

Key Aspects:

  • Exact or Conceptual Replication: Replicating the study with the same methods (exact) or similar methods addressing the same research question (conceptual).
  • Independent Replication: Replication by different research teams or in other settings.
  • Enhanced Confidence: Replication builds confidence in the robustness of research findings.

Benefits:

  • Enhanced Reliability: Replicated findings are more reliable and less likely to be influenced by bias.
  • Verification of Results: Replication verifies the validity of initial study results.
  • Error Detection: Identifies potential sources of bias or errors in the original study.

Implementation Strategies:

  • Plan for Replication: Include replication as part of the research design from the outset.
  • Collaboration: Collaborate with other researchers or research teams to conduct independent replications.
  • Transparent Reporting: Clearly document replication methods and results for transparency.

A psychology study that originally reported a significant effect of a particular intervention on memory performance is replicated by another research team using the same methods and procedures. If the replication study also finds a significant impact, it provides additional support for the initial findings and reduces the likelihood of bias influencing the results.

6. Transparent Reporting

Transparent reporting involves thoroughly documenting all aspects of a research study, from its design and methodology to its results and conclusions. Transparent reporting ensures that other researchers can assess the study's validity and detect any potential sources of bias.

Key Aspects:

  • Comprehensive Documentation: Detailed reporting of study design, procedures, data collection, and analysis.
  • Open Access to Data: Sharing data and materials to allow for independent verification and analysis.
  • Disclosure of Conflicts: Transparent reporting includes disclosing any potential conflicts of interest that could introduce bias.

Benefits:

  • Accountability: Transparent reporting holds researchers accountable for their methods and results.
  • Enhanced Credibility: Transparent research is more credible and less likely to be influenced by bias.
  • Reproducibility: Other researchers can replicate and verify study findings.

Implementation Strategies:

  • Use of Reporting Guidelines: Follow established reporting guidelines specific to your field (e.g., CONSORT for clinical trials, STROBE for observational studies).
  • Data Sharing: Make research data and materials available to others through data repositories or supplementary materials.
  • Peer Review: Engage in peer review to ensure clear and comprehensive reporting.

A scientific journal article reporting the results of a research study includes detailed descriptions of the study design, methods, statistical analyses, and potential limitations. The authors also provide access to the raw data and materials used in the study, allowing other researchers to assess the study's validity and potential bias. This transparent reporting enhances the credibility of the research.

Real-World Examples of Research Bias

To better understand the pervasive nature of research bias and its implications, let's delve into additional real-world examples that illustrate various types of research bias beyond those previously discussed.

Pharmaceutical Industry Influence on Clinical Trials

Bias Type: Funding Bias, Sponsorship Bias

Example: The pharmaceutical industry often sponsors clinical trials to evaluate the safety and efficacy of new drugs. In some cases, studies sponsored by pharmaceutical companies have been found to report more favorable outcomes for their products compared to independently funded research.

Explanation: Funding bias occurs when the financial interests of the sponsor influence study design, data collection, and reporting. In these instances, there may be pressure to emphasize positive results or downplay adverse effects to promote the marketability of the drug.

Impact: This bias can have severe consequences for patient safety and public health, as it can lead to the approval and widespread use of drugs that may not be as effective or safe as initially reported.

Social Desirability Bias in Self-reported Surveys

Bias Type: Response Bias

Example: Researchers conducting surveys on sensitive topics such as drug use, sexual behavior, or income levels often encounter social desirability bias. Respondents may provide answers they believe are socially acceptable or desirable rather than accurate.

Explanation: Social desirability bias is rooted in the tendency to present oneself in a favorable light. Respondents may underreport stigmatized behaviors or overreport socially acceptable ones, leading to inaccurate data.

Impact: This bias can compromise the validity of survey research, especially in areas where honest reporting is crucial for public health interventions or policy decisions.

Non-Publication of Negative Clinical Trials

Bias Type: Publication Bias

Example: Clinical trials with negative or null results are less likely to be published than those with positive findings. This leads to an overrepresentation of studies showing treatment efficacy and an underrepresentation of trials indicating no effect.

Explanation: Publication bias occurs because journals often preferentially accept studies with significant results, while researchers and sponsors may be less motivated to publish negative findings. This skews the evidence base and can result in the overuse of specific treatments or interventions.

Impact: Patients and healthcare providers may make decisions based on incomplete or biased information, potentially exposing patients to ineffective or even harmful treatments.

Gender Bias in Medical Research

Bias Type: Gender Bias

Example: Historically, medical research has been biased toward male subjects, leading to a limited understanding of how diseases and treatments affect women. Clinical trials and studies often fail to include a representative number of female participants.

Explanation: Gender bias in research arises from the misconception that results from male subjects can be generalized to females. This bias can lead to treatments and medications that are less effective or safe for women.

Impact: Addressing gender bias is crucial for developing healthcare solutions that account for the distinct biological and physiological differences between genders and ensuring equitable access to effective treatments.

Political Bias in Climate Change Research

Bias Type: Confirmation Bias, Political Bias

Example: In climate change research, political bias can influence the framing, interpretation, and reporting of findings. Researchers aligned with certain political ideologies may downplay or exaggerate the significance of climate change based on their preconceptions.

Explanation: Confirmation bias comes into play when researchers seek data or interpretations that align with their political beliefs. This can result in research that is less objective and more susceptible to accusations of bias.

Impact: Political bias can undermine public trust in scientific research, impede policy-making, and hinder efforts to address critical issues such as climate change.

These diverse examples of research bias highlight the need for robust safeguards, transparency, and peer review in the research process. Recognizing and addressing bias is essential for maintaining the integrity of scientific inquiry and ensuring that research findings can be trusted and applied effectively.

Conclusion for Research Bias

Understanding and addressing research bias is critical in conducting reliable and trustworthy research. By recognizing the various types of bias, whether they are inherent, systematic, non-systematic, or cognitive, you can take proactive measures to minimize their impact. Strategies like randomization, blinding, standardization, and transparent reporting offer powerful tools to enhance the validity of your research.

Moreover, real-world examples highlight the tangible consequences of research bias and emphasize the importance of conducting research with integrity. Whether you're in the world of science, healthcare, marketing, or any other field, the pursuit of unbiased research is essential for making informed decisions that drive success. So, keep these insights in mind as you embark on your next research journey, and remember that a commitment to objectivity will always lead to better, more reliable outcomes.



Volume 3, Issue 1

Assessing the methodological quality and risk of bias of systematic reviews: primer for authors of overviews of systematic reviews

Carole Lunny (http://orcid.org/0000-0002-7825-6765) 1, 2; Salmaan Kanji 3, 4; Pierre Thabet 5; Anna-Bettina Haidich 6; Konstantinos I Bougioukas 6; and Dawid Pieper 7

1 Knowledge Translation Program, Li Ka Shing Knowledge Institute, St Michael's Hospital, Toronto, ON, Canada
2 Cochrane Hypertension Review Group, Cochrane Canada, Vancouver, BC, Canada
3 Ottawa Hospital, Ottawa, ON, Canada
4 Ottawa Hospital Research Institute, Ottawa, ON, Canada
5 School of Pharmaceutical Sciences, University of Ottawa, Hôpital Montfort, Ottawa, ON, Canada
6 Department of Hygiene, Social-Preventive Medicine and Medical Statistics, Aristotle University of Thessaloniki Faculty of Health Sciences, Thessaloniki, Central Macedonia, Greece
7 Faculty of Health Sciences Brandenburg, Brandenburg Medical School Theodor Fontane, Ruppin Clinics, Neuruppin, Brandenburg, Germany

Correspondence to: Dr Carole Lunny, Knowledge Translation Program, Li Ka Shing Knowledge Institute, St Michael's Hospital, Toronto, Canada; carole.lunny{at}ubc.ca

https://doi.org/10.1136/bmjmed-2023-000604


Keywords: epidemiology, research design, public health

Key messages

Systematic reviews underpin evidence based healthcare decision making, but flaws in their conduct may lead to biased estimates of intervention effects and hence invalid recommendations

Overviews of reviews (also known as umbrella reviews, meta-reviews, or reviews of reviews) evaluate biases at the systematic review level, among others, but proper use of tools for this purpose requires training, time, and an appreciation of their strengths and limitations

AMSTAR-2 and ROBIS are the two most popular and rigorous critical appraisal tools used for appraising systematic reviews

The AMSTAR-2 16-item checklist focuses on methodological quality of systematic reviews of healthcare interventions, and incorporates aspects of review conduct, reporting comprehensiveness, and risk of bias as specific items

ROBIS is a domain based tool with 19 items focusing on risk of biases in a systematic review (eg, selective reporting of outcomes or analyses) of healthcare interventions and contains items related to risk of bias in results and conclusions, relevance, and an item about risk of interpretation bias or 'spin'

Carole Lunny and colleagues consider methods such as AMSTAR-2 and ROBIS tools to evaluate the methodological quality and risk of bias of systematic reviews of intervention effects that are included in overviews of reviews

Introduction

Overviews of reviews, which synthesise the findings of systematic reviews, 1 have increased significantly in number over the past decade. 2 However, there is no consensus on the terminology used to describe them, with terms such as umbrella reviews, meta-reviews, and reviews of reviews used interchangeably to mean overviews of reviews. Methods research has been ongoing since the 2010s to develop effective approaches for conducting overviews of reviews and addressing their unique characteristics. 3–7 Overview authors use various approaches to assess the methodological quality and risk of bias in their included systematic reviews, and they apply these assessments to inform the overviews' results and conclusions. However, proper use of tools for this purpose requires training, time, and an appreciation of their strengths and limitations. This methods primer aims to address the inconsistency in assessing and reporting bias in systematic reviews of intervention effects included within overviews, and focuses on presenting the different validated tools, comparing them, and providing guidance on the interpretation and reporting of these assessments.

Assessing the methodological quality and risk of bias

Assessment tools.

In 2016, ROBIS was developed to assess risk of bias in systematic reviews. 11 ROBIS consists of three phases: assessment of relevance (optional), identification of bias concerns with the review process, and judgement of the overall risk of bias in the review. The tool focuses on four domains: study eligibility criteria, identification and selection of studies, data collection and study appraisal, and synthesis and findings. ROBIS helps reviewers identify potential biases in these domains by asking specific questions related to the review's methods and reporting. The tool underwent content validity and reliability testing to ensure its accuracy and consistency in assessing the risk of bias in systematic reviews.

In 2017, an update to AMSTAR, called AMSTAR-2, 10 was published to assess the methodological quality of systematic reviews; its development involved inter-rater reliability and usability testing. AMSTAR-2 consists of 16 items that evaluate various aspects of the systematic review process, including research question formulation, study selection and data extraction, assessment of risk of bias in individual studies, consideration of publication bias, and appropriate statistical analysis. The tool also assesses the overall methodological quality and risk of bias in the review, providing a comprehensive evaluation.

The decision about how to evaluate overall risk of bias for ROBIS is made at the assessors' discretion, as opposed to the AMSTAR-2 overall judgement, which is prescribed by AMSTAR-2 guidance. Examples of how to interpret methodological quality and risk of bias assessments, and how to make an overall judgement, are found in box 1.

Decision rules: how to decide that the results of a review are of high quality or at low risk of bias overall

Decision rules are a priori strategies that specify explicitly how each item is rated, as well as how an overall judgement is made about a specific systematic review with the AMSTAR-2 and ROBIS tools. In the case of AMSTAR-2, the authors of the tool stipulate how to come to an overall rating of the review, but not how to rate each item. For example, item 15 of AMSTAR-2 asks assessors whether an adequate investigation of publication bias (small study bias) was conducted and whether its likely effect on the results was discussed. However, the AMSTAR-2 team did not specify what happens when 10 studies or fewer are included (ie, the analysis will be underpowered to detect publication bias), what methods to detect publication bias are recommended, and, if publication bias is detected, how it should be discussed (ie, as a systematic review limitation).
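The choice of method is left to review and overview authors. Purely as an illustration of what such an investigation can look like, here is a minimal sketch of Egger's regression test for small-study effects, one commonly used approach; the effect sizes and standard errors are invented, and the 10-study check encodes the rule of thumb above rather than any official AMSTAR-2 guidance.

```python
# Minimal sketch of Egger's regression test for small-study effects,
# one common way to investigate possible publication bias.
# The data are invented for illustration.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.42, 0.31, 0.58, 0.12, 0.77, 0.25, 0.49, 0.36, 0.61, 0.18, 0.55])
ses = np.array([0.10, 0.15, 0.20, 0.08, 0.30, 0.12, 0.18, 0.14, 0.25, 0.09, 0.22])

if len(effects) <= 10:
    print("Warning: 10 or fewer studies; the test is underpowered.")

z = effects / ses              # standardised effect estimates
precision = 1.0 / ses          # regressor in Egger's model
fit = sm.OLS(z, sm.add_constant(precision)).fit()

intercept, p_value = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f} (p = {p_value:.3f})")
# An intercept significantly different from zero suggests funnel plot
# asymmetry, which may (but need not) reflect publication bias.
```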

Similarly, the ROBIS tool does not specify what decision rules should be used for assessment of risk of bias, nor how to come to an overall judgement. For example, item 4.6 of ROBIS ("Were biases in primary studies minimal or addressed in the synthesis?") is similar to item 12 of AMSTAR-2 ("If meta-analysis was performed, did the review authors assess the potential impact of risk of bias in individual studies on the results of the meta-analysis?"). Of note, risk of bias should be assessed in any systematic review regardless of whether a meta-analysis was performed. A possible decision rule for answering these two questions, when considering whether bias was addressed and considered in the results and their interpretation, could be to respond "Yes" or "Probably/Partial Yes" if:

All studies received a low risk of bias rating; or

Studies were judged at high risk of bias and sensitivity analyses (grouping high v low risk studies in a meta-analysis) or adjustment approaches were used

For a "No" response:

Important biases were suspected to have been in the included studies that have been ignored by the review authors; or

Risk of bias was not assessed at all in the included studies; or

Bias was assessed but authors did not incorporate it into findings, discussion, and conclusions

Based on the above decision rules, how would the following statement be rated? "We planned on conducting sensitivity analysis on the studies based on their level of risk of bias. Most of the included studies had a similar risk of bias across all the domains except for industry sponsorship bias and incomplete data for total testosterone. Due to the inadequate number of studies, we were not able to conduct a sensitivity analysis on the included studies based on industry sponsorship."

For overall judgements, a decision rule could be that if one or more ROBIS domains are at high risk of bias, then the overall review is deemed at high risk of bias. For AMSTAR-2, the authors of the tool have stipulated that the review is considered of low or critically low quality when any of the subset of seven 'critical' items has one or more critical flaws. While the decisions about how to rate the items and make overall judgements can be debated, the grounds on which overview authors make these decisions should be noted explicitly in the manuscript or in an appendix, so that the assessment results are transparent and reproducible.
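Because neither tool hard-codes these rules, some overview teams formalise their a priori decision rules so that assessments are reproducible. The sketch below is one hypothetical way to encode the example rules above in Python; the field names and the "any high-risk domain means high risk overall" rule are illustrative assumptions, not official AMSTAR-2 or ROBIS guidance.

```python
# Hypothetical encoding of the example decision rules discussed above.
# Field names and rules are illustrative, not official tool guidance.
from dataclasses import dataclass, field

@dataclass
class ReviewAssessment:
    all_studies_low_risk: bool   # every included study rated low risk
    high_risk_handled: bool      # sensitivity analysis or adjustment used
    bias_assessed: bool          # risk of bias assessed at all
    bias_incorporated: bool      # findings/discussion reflect the bias
    robis_domains: list = field(default_factory=list)  # eg ["low", "high"]

def rate_bias_item(a: ReviewAssessment) -> str:
    """Item-level rule mirroring ROBIS 4.6 / AMSTAR-2 item 12."""
    if a.all_studies_low_risk or a.high_risk_handled:
        return "Yes / Probably yes"
    if not a.bias_assessed or not a.bias_incorporated:
        return "No"
    return "Unclear"

def overall_robis(a: ReviewAssessment) -> str:
    """Example overall rule: any high risk domain makes the review high risk."""
    return "high" if "high" in a.robis_domains else "low"

review = ReviewAssessment(
    all_studies_low_risk=False,
    high_risk_handled=False,
    bias_assessed=True,
    bias_incorporated=False,
    robis_domains=["low", "low", "high", "low"],
)
print(rate_bias_item(review))  # -> No
print(overall_robis(review))   # -> high
```

Writing the rules down this explicitly, whether in prose, a table, or code, is what allows two independent assessors to reach the same answer for the same review.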

Cautionary note: empirical evidence does not currently support the assignment of scores to items that are met in a risk of bias tool followed by the summation or averaging of these scores to produce a numerical measure of risk of bias. A thoughtful, nuanced, and customised overall judgement is required that considers all items with suspected bias on the basis of specific context.

The AMSTAR-2 and ROBIS tools were designed to assess systematic reviews with pairwise meta-analysis only. A more recent tool under development aims to assess the potential biases and limitations in network meta-analyses. 12 13 Guidance documents (eg, Cochrane 14 and JBI 15 ) recommend that overview authors use ROBIS or AMSTAR-2 over other available tools when comparing and critically appraising systematic reviews. Figure 1 presents two example assessments conducted by our team: the ROBIS assessment of Normansell and colleagues 16 is presented at the domain level, and the AMSTAR-2 assessment of Puig and colleagues 17 is presented by item. Items are backed by quotes and rationales to support the answers chosen, for full transparency and to help when comparing assessments between two independent assessors (figure 2).


Figure 1 | Example assessments using ROBIS of Normansell 16 and AMSTAR-2 of Puig. 17 The ROBIS assessment is presented by domain and the AMSTAR-2 assessment by individual items. ROBIS's phase one, where the assessor considers the relevance of the systematic review questions to the overview's question, is not shown. The decision about how to evaluate overall risk of bias for ROBIS is made at the assessors' discretion, as opposed to the AMSTAR-2 overall judgement, which is prescribed by AMSTAR-2 guidance.

PICO framework stands for patient or problem, intervention or exposure, comparison or control, and outcomes. DLQI=dermatology life quality index; DMARDs=disease modifying anti-rheumatic drugs; PASI=psoriasis area and severity index; RCT=randomised controlled trial

Comparison of AMSTAR-2 and ROBIS

Both the AMSTAR-2 and ROBIS tools provide structured guidelines for reviewers to evaluate and report on methodological strengths and weaknesses as well as potential biases in systematic reviews, contributing to the overall reliability and credibility of the evidence presented. Considerable overlap exists between the items of the two tools (figure 1). In the documentation for each tool, AMSTAR-2 states that it was developed for systematic reviews of healthcare interventions whereas ROBIS states that it is aimed at reviews of healthcare interventions, diagnosis, prognosis, and biological cause. In practice, the ROBIS tool is generic and its signalling questions relate to interventions in the clinical or public health fields. Questions specific to systematic reviews of diagnosis, prognosis, and biological cause are not found in the tool. AMSTAR-2 was developed to assess methodological quality (which includes indicators of risk of bias) while ROBIS was developed primarily to assess risk of bias but also includes items that address methodological quality.

AMSTAR-2 focuses more on reporting comprehensiveness (eg, reporting of study designs for inclusion and reporting on excluded studies with justification) and methodological quality or transparency constructs (eg, pre-established protocol, sources of funding of primary studies, and reviewers' competing interests), whereas ROBIS focuses on items related to identification of the different biases (eg, selective reporting of outcomes or analyses and publication bias). Bias occurs when factors systematically affect the results and conclusions of a review and cause them to be systematically different from the truth. 1 Systematic reviews affected by bias can be inaccurate; for example, finding false positive or false negative intervention effects by systematically over or under estimating the true effect in the target population. Methodological quality focuses on methodological features associated with internal validity. In theory, assessing risk of bias is the preferred approach because a review might have good methodological quality while still being at high risk of bias. For example, a systematic review might have been conducted according to stated guidance, but some relevant databases were not searched for evidence (database selection bias), leaving out crucial primary studies that may affect the results of the review.

In general, assessors found that AMSTAR-2 was more straightforward and user friendly than ROBIS. 18 19 The two tools had similar inter-rater reliability. 18 20 21 The range in time taken to use AMSTAR-2 was similar to ROBIS (14-60 v 16-60 min) across three comparison studies 18 20 21 ( table 1 ). ROBIS users required training and practice in using the tool 22 23 and it was often understood and applied differently. 20 AMSTAR-2 has been criticised for unclear guidance on some items, 24–26 which can lead to varying interpretations and applications. ROBIS is accompanied by voluminous guidance, which can be difficult to manage by the user. 21–23
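As a purely illustrative aside on how such inter-rater reliability figures are produced, agreement between two assessors is typically summarised with a kappa statistic; the sketch below uses invented ratings and is not taken from any of the cited comparison studies.

```python
# Illustrative Cohen's kappa for two assessors rating the same items.
# The ratings are invented; kappa corrects raw agreement for chance.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["yes", "no", "partial", "yes", "yes", "no", "yes", "partial"]
rater_2 = ["yes", "no", "yes", "yes", "no", "no", "yes", "partial"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```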

Table 1 | Comparison of the AMSTAR-2 and ROBIS tools

While AMSTAR-2 and ROBIS are both widely used tools for assessing systematic reviews, in some situations, one may be preferred over the other. AMSTAR-2 may be preferred when:

the primary focus is evaluating the methodological quality of a systematic review of interventions;

the aim is to broadly assess aspects of review conduct, reporting comprehensiveness, and risk of bias; or

a relatively quick and easy to use tool is sought, because AMSTAR-2 has fewer items compared with ROBIS.

ROBIS may be preferred when:

the aim is to identify concerns with the review conduct that may point to risk of biases in the results and conclusions, as well as assessing relevance and minimising interpretation bias or ‘spin’;

a more nuanced tool is sought, which may involve more thoughtful assessment and time, because ROBIS contains more items compared with AMSTAR-2;

the aim is to assess multiple types of systematic reviews to compare risk of bias across them (eg, when preparing a clinical practice guideline).

Reporting and interpretation

When reporting and interpreting the overview results, assessors should note some key considerations with AMSTAR-2 and ROBIS assessments. Authors should first report methodological quality or bias assessment results by item, domain, and overall judgement. In addition, assessment should be reported at the outcome level as opposed to the systematic review level. 18 Several responses to AMSTAR-2 item 13 (whether risk of bias was discussed or interpreted) are possible when multiple outcomes (eg, mortality and adverse events) are reported in one systematic review. Ideally, results of intervention overviews should be reported by qualifying the inherent methodological quality or risk of bias in the included systematic reviews as potential limitations.

Subgrouping systematic reviews by low and high risk of bias using ROBIS can be a useful way to determine whether authors of reviews of interventions at high risk of bias overemphasised their findings and conclusions. Subgrouping also allows overview authors to exclude systematic reviews at high risk of bias from the synthesis. However, using a single criterion for inclusion in analyses (ie, only the systematic reviews at low risk of bias) can result in unintended loss of information through exclusion of important systematic review data (eg, by excluding the systematic review with the greatest number of unique trials).

Conclusions

Overviews are used by guideline developers and policy makers to summarise large bodies of evidence in consideration of interventions of interest on a given topic. Using the appropriate tools to critically appraise included systematic reviews of intervention effects means that a complete assessment of methodological quality and all the potential biases are considered. Systematic reviews vary considerably in their methods, how data are synthesised, and how results and conclusions are reported; therefore, an assessment of potential biases is necessary to consider their reproducibility, trustworthiness, and usefulness for end users. At this time, the recommended tools to assess methodological quality and bias among systematic reviews included in overviews are AMSTAR-2 and ROBIS. Proper use of these tools for this purpose requires training, time, and methodological insight.



Contributors CL conceived the idea of the study and drafted the manuscript; CL, PS, DW, KB, SK, and ABH edited the manuscript and read and approved the final manuscript.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests We have read and understood the BMJ policy on declaration of interests and declare the following interests: none.

Provenance and peer review Commissioned; externally peer reviewed.



Research bias: What it is, Types & Examples

Research bias occurs when the people conducting a study skew its findings, knowingly or not, toward a particular outcome.

Researchers can influence a systematic inquiry either unintentionally or deliberately while carrying it out. This influence is known as research bias, and it can distort your results just like any other sort of bias.

There are no hard and fast rules about when bias appears, which simply means that it can occur at any time. Experimental mistakes and a failure to consider all relevant factors can both lead to research bias.

Research bias is one of the most common reasons that study results lack credibility. Because bias often operates informally, you must be cautious when characterizing it, and to reduce or prevent its occurrence you need to be able to recognize its characteristics.

This article will cover what research bias is, its types, and how to avoid it.

Content Index

  • What is research bias?
  • How does research bias affect the research process?
  • Types of research bias with examples
  • How QuestionPro helps in reducing bias in a research process

What is research bias?

Research bias, often known as experimenter bias, is a systematic distortion in which the researchers conducting a study shape its findings, deliberately or not, toward a specific outcome.

Bias is a characteristic of the research process that makes results rely on experience and judgment rather than on data analysis alone. The most important thing to know about bias is that it is unavoidable in many fields, so understanding research bias and reducing the effects of biased views is an essential part of any research planning process.

For example, in social research it is easy to be drawn toward a certain point of view about your subjects, compromising fairness.

How does research bias affect the research process?

Research bias can seriously affect the research process, weakening its integrity and leading to misleading or erroneous results. Here are some examples of how this bias might affect the research process:

Distorted research design

When bias is present, study results can be skewed or wrong. It can make the study less trustworthy and valid. If bias affects how a study is set up, how data is collected, or how it is analyzed, it can cause systematic mistakes that move the results away from the true or unbiased values.

Invalid conclusions

It can make it hard to believe that the findings of a study are correct. Biased research can lead to unjustified or wrong claims because the results may not reflect reality or give a complete picture of the research question.

Misleading interpretations

Bias can lead to inaccurate interpretations of research findings. It can alter the overall comprehension of the research issue. Researchers may be tempted to interpret the findings in a way that confirms their previous assumptions or expectations, ignoring alternate explanations or contradictory evidence.

Ethical concerns

This bias poses ethical considerations. It can have negative effects on individuals, groups, or society as a whole. Biased research can misinform decision-making processes, leading to ineffective interventions, policies, or therapies.

Damaged credibility

Research bias undermines scientific credibility. Biased research can damage public trust in science. It may reduce reliance on scientific evidence for decision-making.

Types of research bias with examples

Bias can be seen in practically every aspect of quantitative research and qualitative research, and it can come from both the survey developer and the participants. Of all the types of bias in research, the sorts that come directly from the survey maker are the easiest to deal with. Let's look at some of the most typical research biases.


Design bias

Design bias happens when a researcher fails to account for their own biased views in designing an experiment. It relates to how the research is organized and the methods used; the researcher must show that they recognize this risk and have tried to mitigate its influence.

Another form of design bias develops after the research is completed and the results are analyzed. It occurs when the limitations that researchers were originally concerned about are not reflected when the findings are presented, which happens all too often.

For example, a researcher running a survey about health benefits may fail to acknowledge the limitations of the sample group. It's possible that the group tested was all male or all over a particular age.

Selection bias or sampling bias

Selection bias occurs when volunteers are chosen to represent your research population, but those with different experiences are ignored. 

In research, selection bias manifests itself in a variety of ways. When the sampling method itself favors certain respondents, this is known as sampling bias; for this reason, selection bias is also referred to as sampling bias.

For example, research on a disease that depended heavily on white male volunteers cannot be generalized to the full community, including women and people of other races or communities.

Procedural bias

Procedural bias is a sort of research bias that occurs when survey respondents are given insufficient time to complete surveys. As a result, participants are forced to submit half-formed thoughts, which do not accurately reflect their thinking.

Another source of this bias is relying on individuals who are forced to participate; they are likely to rush through the survey so they have time left to do other things.

For example, if you ask your employees to complete a survey during their break, they may feel pressured to finish quickly, which may compromise the validity of their results.

Publication or reporting bias

Publication bias is a sort of bias that influences research; it is also known as reporting bias. It refers to a situation in which positive outcomes are more likely to be reported than negative or null ones. Analysis bias can also make reporting bias more likely.

The publication standards for research articles in a specific field frequently reflect this bias. Researchers sometimes choose not to disclose their outcomes if they believe the data do not support their theory.

For example, of seven studies on the antidepressant drug reboxetine, only one was published; the others remained unpublished.

Measurement or data collection bias

A flaw in the data collection process or measuring technique causes measurement bias, which is also known as data collection bias. It occurs in both qualitative and quantitative research methodologies.

In quantitative research, data collection bias can occur when you use an approach that is not appropriate for your research population. Instrument bias is one of the most common forms of measurement bias in quantitative studies; a defective scale, for example, would generate instrument bias and invalidate the experimental process.

For example, administering a survey by email or on your website excludes people who do not have internet access, biasing the sample.

In qualitative research, data collection bias occurs when inappropriate survey questions are asked during an unstructured interview. Bad survey questions are those that lead the interviewee toward presumptions, and subjects are frequently hesitant to give socially unacceptable responses for fear of criticism.

For example, a subject may avoid answers that would make them come across as homophobic or racist in an interview.
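To make the effect of a defective instrument concrete, here is a tiny hypothetical simulation: a scale that reads 0.5 kg heavy shifts every measurement, and therefore the study's mean, by the same amount. Unlike random noise, this error does not average out as the sample grows. All numbers below are invented.

```python
# Tiny simulation of instrument bias: a miscalibrated scale adds a
# constant 0.5 kg offset to every reading. All numbers are invented.
import random

random.seed(0)
true_weights = [random.gauss(70, 10) for _ in range(1000)]  # true values, kg
measured = [w + 0.5 for w in true_weights]                  # biased readings

def mean(xs):
    return sum(xs) / len(xs)

print(f"true mean     = {mean(true_weights):.2f} kg")
print(f"measured mean = {mean(measured):.2f} kg (systematically 0.50 kg high)")
# Collecting more data narrows random error but leaves this offset intact.
```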

Some more types of bias in research include the ones listed here. Researchers must understand these biases and reduce them through rigorous study design, transparent reporting, and critical evidence review: 

  • Confirmation bias: Researchers often search for, evaluate, and prioritize material that supports their existing hypotheses or expectations, ignoring contradictory data. This can lead to a skewed perception of results and perhaps biased conclusions.
  • Cultural bias: Cultural bias arises when cultural norms, attitudes, or preconceptions influence the research process and the interpretation of results.
  • Funding bias: Funding bias takes place when research is backed by sponsors with a vested interest in the outcome. It can tilt research design, data collection, analysis, and interpretation toward the funding source's preferred result.
  • Observer bias: Observer bias arises when the researcher or observer affects participants’ replies or behavior. Collecting data might be biased by accidental clues, expectations, or subjective interpretations.


How QuestionPro helps in reducing bias in a research process

QuestionPro offers several features and functionalities that can contribute to reducing bias in the research process. Here's how QuestionPro can help:

Randomization

QuestionPro allows researchers to randomize the order of survey questions or response alternatives. Randomization helps to remove order effects and limit bias from the order in which participants encounter the items.
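As a generic illustration of the underlying idea (this is plain Python, not QuestionPro's API; the questions and function are invented), per-respondent randomization is simply an independent shuffle for each participant:

```python
# Generic illustration of question-order randomization (not QuestionPro's API).
import random

QUESTIONS = [
    "How satisfied are you with the product?",
    "How likely are you to recommend us?",
    "How would you rate our support?",
]

def randomized_order(questions, seed=None):
    """Return a per-respondent random ordering to wash out order effects."""
    order = list(questions)  # copy so the master list is untouched
    random.Random(seed).shuffle(order)
    return order

for respondent_id in range(3):
    print(respondent_id, randomized_order(QUESTIONS, seed=respondent_id))
```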

Branching and skip logic

Branching and skip logic capabilities in QuestionPro allow researchers to design customized survey pathways based on participants’ responses. It enables tailored questioning, ensuring that only pertinent questions are asked of participants. Bias generated by such inquiries is reduced by avoiding irrelevant or needless questions.
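Conceptually (again a hypothetical sketch rather than QuestionPro's actual implementation; the question ids and routes are invented), skip logic is a routing table that maps each answer to the next relevant question:

```python
# Generic illustration of branching/skip logic as a routing table
# (not QuestionPro's API; question ids and routes are invented).
SKIP_LOGIC = {
    "q1_owns_car": {"yes": "q2_car_brand", "no": "q3_transport_mode"},
    "q2_car_brand": {"*": "q4_done"},
    "q3_transport_mode": {"*": "q4_done"},
}

def next_question(current_id: str, answer: str) -> str:
    """Route to the next relevant question; '*' is a catch-all."""
    routes = SKIP_LOGIC.get(current_id, {})
    return routes.get(answer, routes.get("*", "q4_done"))

print(next_question("q1_owns_car", "no"))       # -> q3_transport_mode
print(next_question("q2_car_brand", "Toyota"))  # -> q4_done
```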

Diverse question types

QuestionPro supports a wide range of question types, including multiple-choice, Likert scale, matrix, and open-ended questions. Researchers can choose the most relevant question types to get unbiased data while avoiding leading or suggestive questions that may affect participants' responses.

Anonymous responses

QuestionPro enables researchers to collect anonymous responses, protecting the confidentiality of participants. It can encourage participants to provide more unbiased and equitable feedback, especially when dealing with sensitive or contentious issues.

Data analysis and reporting

QuestionPro has powerful data analysis and reporting options, such as charts, graphs, and statistical analysis tools. These features allow researchers to examine and interpret the collected data objectively, decreasing the role of bias in interpreting results.

Collaboration and peer review

QuestionPro supports peer review and researcher collaboration. It helps uncover and overcome biases in research planning, questionnaire formulation, and data analysis by involving several researchers and soliciting external opinions.

You must understand biases in research and how to deal with them. Knowing the different sorts of bias in research allows you to identify them readily, and having a clear picture of each helps you recognize bias in any form.

QuestionPro provides many research tools and settings that can assist you in dealing with research bias. Try QuestionPro today to undertake your original bias-free quantitative or qualitative research.


Frequently Asked Questions

Research bias affects the validity and dependability of your research’s findings, resulting in inaccurate interpretations of the data and incorrect conclusions.

Bias should be avoided in research to ensure that findings are accurate, valid, and objective.

 To avoid research bias, researchers should take proactive steps throughout the research process, such as developing a clear research question and objectives, designing a rigorous study, following standardized protocols, and so on.




  • Exercise type: Activity
  • Topic: Science & Society
  • Category: Research & Design
  • Category: Diversity in STEM

How bias affects scientific research


Purpose: Students will work in groups to evaluate bias in scientific research and engineering projects and to develop guidelines for minimizing potential biases.

Procedural overview: After reading the Science News for Students article “ Think you’re not biased? Think again ,” students will discuss types of bias in scientific research and how to identify it. Students will then search the Science News archive for examples of different types of bias in scientific and medical research. Students will read the National Institute of Health’s Policy on Sex as a Biological Variable and analyze how this policy works to reduce bias in scientific research on the basis of sex and gender. Based on their exploration of bias, students will discuss the benefits and limitations of research guidelines for minimizing particular types of bias and develop guidelines of their own.

Approximate class time: 2 class periods

Materials:

  • How Bias Affects Scientific Research student guide
  • Computer with access to the Science News archive
  • Interactive meeting and screen-sharing application for virtual learning (optional)

Directions for teachers:

One of the guiding principles of scientific inquiry is objectivity. Objectivity is the idea that scientific questions, methods and results should not be affected by the personal values, interests or perspectives of researchers. However, science is a human endeavor, and experimental design and analysis of information are products of human thought processes. As a result, biases may be inadvertently introduced into scientific processes or conclusions.

In scientific circles, bias is described as any systematic deviation between the results of a study and the “truth.” Bias is sometimes described as a tendency to prefer one thing over another, or to favor one person, thing or explanation in a way that prevents objectivity or that influences the outcome of a study or the understanding of a phenomenon. Bias can be introduced in multiple points during scientific research — in the framing of the scientific question, in the experimental design, in the development or implementation of processes used to conduct the research, during collection or analysis of data, or during the reporting of conclusions.

Researchers generally recognize several different sources of bias, each of which can strongly affect the results of STEM research. Three types of bias that often occur in scientific and medical studies are researcher bias, selection bias and information bias.

Researcher bias occurs when the researcher conducting the study is in favor of a certain result. Researchers can influence outcomes through their study design choices, including who they choose to include in a study and how data are interpreted. Selection bias can be described as an experimental error that occurs when the subjects of the study do not accurately reflect the population to whom the results of the study will be applied. This commonly happens as unequal inclusion of subjects of different races, sexes or genders, ages or abilities. Information bias occurs as a result of systematic errors during the collection, recording or analysis of data.

When bias occurs, a study’s results may not accurately represent phenomena in the real world, or the results may not apply in all situations or equally for all populations. For example, if a research study does not address the full diversity of people to whom the solution will be applied, then the researchers may have missed vital information about whether and how that solution will work for a large percentage of a target population.

Bias can also affect the development of engineering solutions. For example, a new technology product tested only with teenagers or young adults who are comfortable using new technologies may have user experience issues when placed in the hands of older adults or young children.

Want to make it a virtual lesson? Post the links to the  Science News for Students article “ Think you’re not biased? Think again ,” and the National Institutes of Health information on sickle-cell disease . A link to additional resources can be provided for the students who want to know more. After students have reviewed the information at home, discuss the four questions in the setup and the sickle-cell research scenario as a class. When the students have a general understanding of bias in research, assign students to breakout rooms to look for examples of different types of bias in scientific and medical research, to discuss the Science News article “ Biomedical studies are including more female subjects (finally) ” and the National Institute of Health’s Policy on Sex as a Biological Variable and to develop bias guidelines of their own. Make sure the students have links to all articles they will need to complete their work. Bring the groups back together for an all-class discussion of the bias guidelines they write.

Assign the Science News for Students article “ Think you’re not biased? Think again ” as homework reading to introduce students to the core concepts of scientific objectivity and bias. Request that they answer the first two questions on their guide before the first class discussion on this topic. In this discussion, you will cover the idea of objective truth and introduce students to the terminology used to describe bias. Use the background information to decide what level of detail you want to give to your students.

As students discuss bias, help them understand objective and subjective data and discuss the importance of gathering both kinds of data. Explain to them how these data differ. Some phenomena — for example, body temperature, blood type and heart rate — can be objectively measured. These data tend to be quantitative. Other phenomena cannot be measured objectively and must be considered subjectively. Subjective data are based on perceptions, feelings or observations and tend to be qualitative rather than quantitative. Subjective measurements are common and essential in biomedical research, as they can help researchers understand whether a therapy changes a patient’s experience. For instance, subjective data about the amount of pain a patient feels before and after taking a medication can help scientists understand whether and how the drug works to alleviate pain. Subjective data can still be collected and analyzed in ways that attempt to minimize bias.

Try to guide student discussion to include a larger context for bias by discussing the effects of bias on understanding of an “objective truth.” How can someone’s personal views and values affect how they analyze information or interpret a situation?

To help students understand potential effects of biases, present them with the following scenario based on information from the National Institutes of Health :

Sickle-cell disease is a group of inherited disorders that cause abnormalities in red blood cells. Most of the people who have sickle-cell disease are of African descent; it also appears in populations from the Mediterranean, India and parts of Latin America. Males and females are equally likely to inherit the condition. Imagine that a therapy was developed to treat the condition, and clinical trials enlisted only male subjects of African descent. How accurately would the results of that study reflect the therapy’s effectiveness for all people who suffer from sickle-cell disease?

In the sickle-cell scenario described above, scientists will have a good idea of how the therapy works for males of African descent. But they may not be able to accurately predict how the therapy will affect female patients or patients of different races or ethnicities. Ask the students to consider how they would devise a study that addressed all the populations affected by this disease.

Before students move on, have them answer the following questions. The first two should be answered for homework and discussed in class along with the remaining questions.

1. What is bias?

In common terms, bias is a preference for or against one idea, thing or person. In scientific research, bias is a systematic deviation between observations or interpretations of data and an accurate description of a phenomenon.

2. How can biases affect the accuracy of scientific understanding of a phenomenon? How can biases affect how those results are applied?

Bias can cause the results of a scientific study to be disproportionately weighted in favor of one result or group of subjects. This can cause misunderstandings of natural processes that may make conclusions drawn from the data unreliable. Biased procedures, data collection or data interpretation can affect the conclusions scientists draw from a study and the application of those results. For example, if the subjects that participate in a study testing an engineering design do not reflect the diversity of a population, the end product may not work as well as desired for all users.

3. Describe two potential sources of bias in a scientific, medical or engineering research project. Try to give specific examples.

Researchers can intentionally or unintentionally introduce biases as a result of their attitudes toward the study or its purpose or toward the subjects or a group of subjects. Bias can also be introduced by methods of measuring, collecting or reporting data. Examples of potential sources of bias include testing a small sample of subjects, testing a group of subjects that is not diverse and looking for patterns in data to confirm ideas or opinions already held.

4. How can potential biases be identified and eliminated before, during or after a scientific study?

Students should brainstorm ways to identify sources of bias in the design of research studies. They may suggest conducting implicit bias testing or interviews before a study can be started, developing guidelines for research projects, peer review of procedures and samples/subjects before beginning a study, and peer review of data and conclusions after the study is completed and before it is published. Students may focus on the ideals of transparency and replicability of results to help reduce biases in scientific research.

Obtain and evaluate information about bias

Students will now work in small groups to select and analyze articles for different types of bias in scientific and medical research. Students will start by searching the Science News or Science News for Students archives and selecting articles that describe scientific studies or engineering design projects. If the Science News or Science News for Students articles chosen by students do not specifically cite and describe a study, students should consult the Citations at the end of the article for links to related primary research papers. Students may need to read the methods section and the conclusions of the primary research paper to better understand the project’s design and to identify potential biases. Do not assume that every scientific paper features biased research.

Student groups should evaluate the study or engineering design project outlined in the article to identify any biases in the experimental design, data collection, analysis or results. Students may need additional guidance for identifying biases. Remind them of the prior discussion about sources of bias and task them to review information about indicators of bias. Possible indicators include extreme language such as all, none or nothing; emotional appeals rather than logical arguments; proportions of study subjects with specific characteristics such as gender, race or age; arguments that support or refute one position over another; and oversimplifications or overgeneralizations. Students may also want to look for clues related to the researchers' personal identity such as race, religion or gender. Information on political or religious points of view, sources of funding or professional affiliations may also suggest biases.

Students should also identify any deliberate attempts to reduce or eliminate bias in the project or its results. Then groups should come back together and share the results of their analysis with the class.

If students need support in searching the archives for appropriate articles, encourage groups to brainstorm search terms that may turn up related articles. Some potential search terms include bias, study, studies, experiment, engineer, new device, design, gender, sex, race, age, aging, young, old, weight, patients, survival or medical.

If you are short on time or students do not have access to the Science News or Science News for Students archive, you may want to provide articles for students to review. Some suggested articles are listed in the additional resources  below.

Once groups have selected their articles, students should answer the following questions in their groups.

1. Record the title and URL of the article and write a brief summary of the study or project.

Answers will vary, but students should accurately cite the article evaluated and summarize the study or project described in the article. Sample answer: We reviewed the Science News article “Even brain images can be biased,” which can be found at www.sciencenews.org/blog/scicurious/even-brain-images-can-be-biased. This article describes how scientific studies of human brains that involve electronic images of brains tend to include study subjects from wealthier and more highly educated households and how researchers set out to collect new data to make the database of brain images more diverse.

2. What sources of potential bias (if any) did you identify in the study or project? Describe any procedures or policies deliberately included in the study or project to eliminate biases.

The article “Even brain images can be biased” describes how scientists identified a sampling bias in studies of brain images that resulted from the way subjects were recruited. Most of these studies were conducted at universities, so many college students volunteer to participate, which resulted in the samples being skewed toward wealthier, educated, white subjects. Scientists identified a database of pediatric brain images and evaluated the diversity of the subjects in that database. They found that although the subjects in that database were more ethnically diverse than the U.S. population, the subjects were generally from wealthier households and the parents of the subjects tended to be more highly educated than average. Scientists applied statistical methods to weight the data so that study samples from the database would more accurately reflect American demographics.
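The "statistical methods to weight the data" mentioned in this answer are, in spirit, post-stratification. The sketch below is a hypothetical illustration with invented group shares, not the actual method or numbers from the study: subjects from over-represented groups are down-weighted so the weighted sample matches the target population.

```python
# Hypothetical post-stratification sketch with invented numbers:
# reweight an unrepresentative sample so group shares match the population.
sample_share = {"higher_income": 0.70, "lower_income": 0.30}
population_share = {"higher_income": 0.40, "lower_income": 0.60}

weights = {g: population_share[g] / sample_share[g] for g in sample_share}
print(weights)  # higher-income subjects down-weighted, lower-income up-weighted

# Weighted mean of some measured outcome (values invented):
outcome_by_group = {"higher_income": 0.82, "lower_income": 0.74}
weighted_mean = sum(
    sample_share[g] * weights[g] * outcome_by_group[g] for g in sample_share
)
print(round(weighted_mean, 3))  # equals the population-share weighted mean
```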

3. How could any potential biases in the study or design project have affected the results or application of the results to the target population?

Scientists studying the rate of brain development in children were able to recognize the sampling bias in the brain image database. When scientists were able to apply statistical methods to ensure a better representation of socioeconomically diverse samples, they saw a different pattern in the rate of brain development in children. Scientists learned that, in general, children's brains matured more quickly than they had previously thought. They were able to draw new conclusions about how certain factors, such as family wealth and education, affected the rate at which children's brains developed. But the scientists also suggested that they needed to perform additional studies with a deliberately selected group of children to ensure true diversity in the samples.

In this phase, students will review the Science News article “ Biomedical studies are including more female subjects (finally) ” and the NIH Policy on Sex as a Biological Variable , including the “ guidance document .” Students will identify how sex and gender biases may have affected the results of biomedical research before NIH instituted its policy. The students will then work with their group to recommend other policies to minimize biases in biomedical research.

To guide their development of proposed guidelines, students should answer the following questions in their groups.

1. How have sex and gender biases affected the value and application of biomedical research?

Gender and sex biases in biomedical research have diminished the accuracy and quality of research studies and reduced the applicability of results to the entire population. When girls and women are not included in research studies, the responses and therapeutic outcomes of approximately half of the target population for potential therapies remain unknown.

2. Why do you think the NIH created its policy to reduce sex and gender biases?

In the guidance document, the NIH states that “There is a growing recognition that the quality and generalizability of biomedical research depends on the consideration of key biological variables, such as sex.” The document goes on to state that many diseases and conditions affect people of both sexes, and restricting diversity of biological variables, notably sex and gender, undermines the “rigor, transparency, and generalizability of research findings.”

3. What impact has the NIH Policy on Sex as a Biological Variable had on biomedical research?

The NIH’s policy that sex is factored into research designs, analyses and reporting tries to ensure that when developing and funding biomedical research studies, researchers and institutes address potential biases in the planning stages, which helps to reduce or eliminate those biases in the final study. Including females in biomedical research studies helps to ensure that the results of biomedical research are applicable to a larger proportion of the population, expands the therapies available to girls and women and improves their health care outcomes.

4. What other policies do you think the NIH could institute to reduce biases in biomedical research? If you were to recommend one set of additional guidelines for reducing bias in biomedical research, what guidelines would you propose? Why?

Students could suggest that the NIH should have similar policies related to race, gender identity, wealth/economic status and age. Students should identify a category of bias or an underserved segment of the population that they think needs to be addressed in order to improve biomedical research and health outcomes for all people and should recommend guidelines to reduce bias related to that group. Students recommending guidelines related to race might suggest that some populations, such as African Americans, are historically underserved in terms of access to medical services and health care, and they might suggest guidelines to help reduce the disparity. Students might recommend that a certain percentage of each biomedical research project’s sample include patients of diverse racial and ethnic backgrounds.

5. What biases would your suggested policy help eliminate? How would it accomplish that goal?

Students should describe how their proposed policy would address a discrepancy in the application of biomedical research to the entire human population. Race can be considered a biological variable, like sex, and race has been connected to higher or lower incidence of certain characteristics or medical conditions, such as blood types or diabetes, which sometimes affect how the body responds to infectious agents, drugs, procedures or other therapies. By ensuring that people from diverse racial and ethnic groups are included in biomedical research studies, scientists and medical professionals can provide better medical care to members of those populations.

Class discussion about bias guidelines

Allow each group time to present its proposed bias-reducing guideline to another group and to receive feedback. Then provide groups with time to revise their guidelines, if necessary. Act as a facilitator while students conduct the class discussion. Use this time to assess individual and group progress. Students should demonstrate an understanding of different biases that may affect patient outcomes in biomedical research studies and in practical medical settings. As part of the group discussion, have students answer the following questions.

1. Why is it important to identify and eliminate biases in research and engineering design?

The goal of most scientific research and engineering projects is to improve the quality of life and the depth of understanding of the world we live in. By eliminating biases, we can better serve the entirety of the human population and the planet.

2. Were there any guidelines that were suggested by multiple groups? How do those actions or policies help reduce bias?

Answers will depend on the guidelines developed and recommended by other groups. Groups could suggest policies related to race, gender identity, wealth/economic status and age. Each group should clearly identify how its guidelines are designed to reduce bias and improve the quality of human life.

3. Which guidelines developed by your classmates do you think would most reduce the effects of bias on research results or engineering designs? Support your selection with evidence and scientific reasoning.

Answers will depend on the guidelines developed and recommended by other groups. Students should agree that guidelines that minimize inequities and improve health care outcomes for a larger group are preferred. Guidelines addressing inequities of race and wealth/economic status are likely to expand access to improved medical care for the largest percentage of the population. People who grow up in less economically advantaged settings have specific health issues related to nutrition and their access to clean water, for instance. Ensuring that people from the lowest economic brackets are represented in biomedical research improves their access to medical care and can dramatically change the length and quality of their lives.

Possible extension

Challenge students to honestly evaluate any biases they may have. Encourage them to take an Implicit Association Test (IAT) to identify any implicit biases they may not recognize. Harvard University has an online IAT platform where students can participate in different assessments to identify preferences and biases related to sex and gender, race, religion, age, weight and other factors. You may want to challenge students to take a test before they begin the activity, and then assign students to take a test after completing the activity to see if their preferences have changed. Students can report their results to the class if they want to discuss how awareness affects the expression of bias.

Additional resources

If you want additional resources for the discussion or to provide resources for student groups, check out the links below.

Additional Science News articles:

Even brain images can be biased

Data-driven crime prediction fails to erase human bias

What we can learn from how a doctor’s race can affect Black newborns’ survival

Bias in a common health care algorithm disproportionately hurts black patients

Female rats face sex bias too

There’s no evidence that a single ‘gay gene’ exists

Positive attitudes about aging may pay off in better health

What male bias in the mammoth fossil record says about the animal’s social groups

The man flu struggle might be real, says one researcher

Scientists may work to prevent bias, but they don’t always say so

The Bias Finders

Showdown at Sex Gap

University resources:

Project Implicit (take an Implicit Association Test)

Catalogue of Bias

Understanding Health Research


Turning a light on our implicit biases

Brett Milano

Harvard Correspondent

Social psychologist details research at University-wide faculty seminar

Few people would readily admit that they’re biased when it comes to race, gender, age, class, or nationality. But virtually all of us have such biases, even if we aren’t consciously aware of them, according to Mahzarin Banaji, Cabot Professor of Social Ethics in the Department of Psychology, who studies implicit biases. The trick is figuring out what they are so that we can interfere with their influence on our behavior.

Banaji was the featured speaker at an online seminar Tuesday, “Blindspot: Hidden Biases of Good People,” which was also the title of Banaji’s 2013 book, written with Anthony Greenwald. The presentation was part of Harvard’s first-ever University-wide faculty seminar.

“Precipitated in part by the national reckoning over race, in the wake of George Floyd, Breonna Taylor and others, the phrase ‘implicit bias’ has almost become a household word,” said moderator Judith Singer, Harvard’s senior vice provost for faculty development and diversity. Owing to the high interest on campus, Banaji was slated to present her talk on three different occasions, with the final one at 9 a.m. Thursday.

Banaji opened on Tuesday by recounting the “implicit association” experiments she had done at Yale and at Harvard. The assumptions underlying the research on implicit bias derive from well-established theories of learning and memory and the empirical results are derived from tasks that have their roots in experimental psychology and neuroscience. Banaji’s first experiments found, not surprisingly, that New Englanders associated good things with the Red Sox and bad things with the Yankees.

She then went further by replacing the sports teams with gay and straight, thin and fat, and Black and white. The responses were sometimes surprising: Shown a group of white and Asian faces, a test group at Yale associated the former more strongly with American symbols, though all the images were of U.S. citizens. In a further study, the faces of American-born celebrities of Asian descent were judged to be less American than those of white celebrities who were in fact European. “This shows how discrepant our implicit bias is from even factual information,” she said.

An institution that is almost 400 years old can hardly avoid revealing a history of biases, Banaji said, citing President Charles Eliot’s words on Dexter Gate, “Depart to serve better thy country and thy kind,” and asking the audience to think about what he may have meant by the last two words.

She cited Harvard’s current admission strategy of seeking geographic and economic diversity as an example of clear progress — if, as she said, “we are truly interested in bringing the best to Harvard.” She added, “We take these actions consciously, not because they are easy but because they are in our interest and in the interest of society.”

Moving beyond racial issues, Banaji suggested that we sometimes see only what we believe we should see. To illustrate, she showed a video clip of a basketball game and asked the audience to count the number of passes between players. Then the psychologist pointed out that something else had occurred in the video — a woman with an umbrella had walked through — but most watchers failed to register it. “You watch the video with a set of expectations, one of which is that a woman with an umbrella will not walk through a basketball game. When the data contradicts an expectation, the data doesn’t always win.”

Expectations, based on experience, may create associations such as “Valley Girl uptalk” being the equivalent of “not too bright.” But when a quirky way of speaking spreads to a large number of young people from certain generations, it stops being a useful guide. And yet, Banaji said, she has caught herself dismissing a great idea because it was presented in uptalk. She stressed that the appropriate course of action is not to ask the speaker to change the way she talks, but rather for her and other decision makers to recognize that people use language and accents to judge ideas at their own peril.

Banaji closed the talk with a personal story that showed how subtler biases work: She’d once turned down an interview because she had issues with the magazine for which the journalist worked.

The writer accepted this and mentioned she’d been at Yale when Banaji taught there. The professor then surprised herself by agreeing to the interview based on this fragment of shared history, which ought not to have influenced her. She urged her colleagues to consider how even positive actions, such as choosing whom to help, can perpetuate the status quo.

“You and I don’t discriminate the way our ancestors did,” she said. “We don’t go around hurting people who are not members of our own group. We do it in a very civilized way: We discriminate by who we help. The question we should be asking is, ‘Where is my help landing? Is it landing on the most deserved, or just on the one I shared a ZIP code with for four years?’”

To subscribe to short educational modules that help to combat implicit biases, visit outsmartinghumanminds.org.

BMJ Open Science, 5(1), 2021

Moving towards less biased research

Mark Yarborough

Bioethics Program, University of California Davis, Sacramento, California, USA

Introduction

Bias, perhaps best described as ‘any process at any stage of inference which tends to produce results or conclusions that differ systematically from the truth,’ can pollute the entire spectrum of research, including its design, analysis, interpretation and reporting. 1 It can taint entire bodies of research as much as it can individual studies. 2 3 Given this extensive detrimental impact, effective efforts to combat bias are critically important to biomedical research’s goal of improving healthcare. Champions for such efforts can currently be found among individual investigators, journals, research sponsors and research regulators. The central focus of this essay is assessing the effectiveness of some of the efforts currently being championed and proposing new ones.

Current efforts fall mainly into two domains, one meant to prevent bias and one meant to detect it. Much like a proverbial chain, efforts in either domain are hampered by their weakest components. Hence, it behoves us to constantly probe antibias tools so that we can identify weak components and seek ways to compensate for them. Further, given the high stakes—conclusions that align with rather than diverge from truth—it further behoves the biomedical research community to prioritise to the extent possible bias prevention over bias detection. The less likely any given study is to be tainted by bias, the fewer research publications reporting biased results there will be. The value of detected bias pales in comparison, for it extends only as far as those who are aware of that detection after the fact, meaning that biased conclusions at variance with the truth can mislead those unaware of the bias that taints them for as long as the affected publications endure.

With these preliminary considerations about bias in mind, let us first examine some current antibias efforts and probe their weaknesses. Doing so will show why we need to develop additional strategies for preventing bias in the first place, and space is set aside at the end to examine two related candidate strategies for how we could attempt to do that.

Current bias countermeasures

Table 1 reflects some current countermeasures being employed to combat various kinds of biases. Though the table is far from comprehensive (dozens of biases have been catalogued), 1 it does include major biases of concern, representative countermeasures to combat them, whether those countermeasures prevent or detect bias, and their likely relative strength.

| Bias example | Examples of harm resulting from bias | Current prevalent bias countermeasures | Countermeasures goal | Likely strength of the current bias countermeasures |
| --- | --- | --- | --- | --- |
| Sponsorship bias | Possible suppression of critical evidence | Disclosure of financial relationship | Bias detection | Weak |
| Selection, performance and detection biases | Publications that report what are likely to be false positive findings | | Bias detection and bias prevention | |
| Publication biases (eg, selective reporting and non-reporting of outcomes) | Inaccurate and/or irreproducible findings | | Bias detection | |

Sponsorship bias

The bias that probably draws the most attention is what is known as sponsorship bias, 4 5 wherein pecuniary interests undermine the disinterestedness meant to prevail in scientific investigations. 6 The most prominent countermeasure against it consists in multiple disclosure practices that flag financial relationships between scientists and private companies. For example, academic institutions may require faculty to disclose annually their financial relationships with private companies; research sponsors may require applicants to make such disclosures when submitting applications; and journals typically require authors to make such disclosures when submitting manuscripts. The right-hand column of table 1 prompts the question, ‘to what extent do such disclosures actually prevent sponsorship bias?’ There is now ample conceptual analysis 7–10 and empirical evidence produced over many years such that we can safely state that there is an over-reliance on disclosure.

This extensive prior work shows, for example, that journal disclosure policies targeting authors fail to capture many financial ties between researchers and industry. Recent studies show that consulting agreements between researchers and companies, as well as financial ties between biomedical companies and organisations that produce clinical practice guidelines, often go undisclosed. 11 12 Looking at journal disclosure policies, we see further evidence of disclosure’s limited ameliorative effect. A recent study that randomised article reviewers into one group that received financial interests disclosures along with the manuscripts to be reviewed and another group that did not found that the disclosures had no effect on reviewer assessments of the manuscripts. 13 Another recent study looked at editorial practices regarding the financial interests of authors at 30 leading medical journals and found that none had actual tools for determining whether and how disclosed financial relationships might have impacted any given research report. 14

Additional considerations help to further explain the weaknesses of journal disclosure policies. First, disclosures are usually mistimed. When financial relationships bias studies, that bias occurs long before anyone discloses the relationships in reports about the studies. 15 Second, only those designated as authors are subject to disclosure requirements, and often those who lead the design, conduct, analysis and reporting of a study are not in fact considered authors of it. 16 Private companies that sponsor the majority of drug studies, and/or the contract research organisations they hire, control the design, manage the conduct, and analyse the data, as well as write the articles reporting that analysis. 17 Journal disclosure mandates leave untouched the bias that these conflicted sponsors can introduce into clinical trials because of sizeable holes in the International Committee of Medical Journal Editors (ICMJE) authorship policy. Followed by an outsized portion of biomedical research journals, it ‘support[s] practices of commercial data control, content development and attribution that run counter to science’s values of openness, objectivity and truthfulness’ because ‘the ICMJE accepts the use of commercial editorial teams to produce manuscripts, which is a potential source of bias, and accepts private company ownership and analysis of clinical trial data.’ 16 In other words, even though readers of journals assume that journals accurately attribute those, and only those, who are responsible for the design, conduct, analysis and reporting of a study, authorship practices do not in fact require such accurate attribution. Thus, we are relying on disclosure, often after the fact of conducting a study, to combat the bias that financial entanglements can cause prior to a study’s launch, and the disclosure practices themselves often mistarget those who should be making the disclosures. The end result is that current disclosure practices can conceal rather than reveal the prospect of sponsorship bias.

Furthermore, even if disclosures were better targeted, this would not negate the potential that disclosures themselves have to cause unintended detrimental consequences. Commentators long ago noted that disclosing financial relationships may contribute to people having a sense of ‘moral license to (act in biased ways more) than they would without disclosure. With disclosure, (acting in a biased way) might seem like fair play. While most professionals might care about their (audience), disclosure (practices) can encourage these professionals to exhibit this concern in a merely perfunctory way.’ 18

There are two final considerations about disclosure that need to be noted. First, disclosure is not meant to actually detect bias. Rather, it is meant to alert people to its possibility. Thus, even though disclosure is our major tool for combating one of the most detrimental forms of bias, it is not clear what good it actually does, which leads us to the second consideration. Since disclosure does nothing to prevent sponsorship bias, more substantial countermeasures aimed at prevention are needed. It is beyond the scope of this essay to examine the suitability of possible countermeasures for preventing sponsorship bias, such as sequestering investigators from private companies whenever possible. 15 Referencing this one example, though, highlights the substantial difference there can be between detecting bias on the one hand and actually preventing it on the other, a topic we will return to later.

Returning for the moment, though, to detection of sponsorship bias, these collective concerns about the most prevalent safeguard against it suggest that it can facilitate rather than detect, let alone prevent, bias. By stopping at disclosure, it suggests that financial entanglements are often permissible; we just need to make sure they are relatively transparent to others. The end result is that there is a pall of uncertainty cast over a large body of published research, including a major portion of the clinical trials that society relies on to improve healthcare. 17

Additional major sources of bias

Evidence about the effectiveness of safeguards against other prominent sources of bias besides sponsorship bias is equally disconcerting. Consider, for example, biases that impact the design, conduct and reporting of preclinical animal studies. This class of studies is of particular concern for multiple reasons, not the least of which is the fact that early phase clinical trials, and the risks intrinsic to them, can launch on a single, highly prized ‘proof-of-concept finding in an animal model without wider preclinical validation.’ 19 This risk is particularly grave when we consider the interests and welfare of the patients who volunteer for the early phase clinical trials. 20

Given such high stakes, it is critical that there be effective safeguards that, once again, counter biases that undermine the rigour that studies capable of producing reliable findings require. Here too table 1 prompts investigation of how well current safeguards actually work. Evidence about excess significance bias, a publishing bias due in large part to selective publishing of results by both authors and journals, shows major limitations in their effectiveness. Looking, for example, at the neurosciences preclinical studies generally 2 and stroke studies specifically, 21 we see that excess significance bias is a major contributor to well documented failure 22 23 to successfully ‘translate preclinical animal research (results) to clinical trials.’ 24
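To make excess significance bias concrete, the sketch below shows the logic behind the kind of test Ioannidis and Trikalinos proposed: compare the observed number of statistically significant studies in a literature with the number expected given the studies' estimated power. The study counts and the power figure are invented for illustration; real applications estimate power per study against a plausible effect size.

```python
from scipy.stats import binomtest

# Hypothetical literature: 40 studies, 32 reporting significant results,
# but with an estimated average power of only 0.45 to detect the effect.
n_studies = 40
observed_significant = 32
mean_power = 0.45

expected = mean_power * n_studies  # ~18 significant studies expected without bias

# One-sided binomial test: are there more significant results than power can explain?
result = binomtest(observed_significant, n_studies, mean_power, alternative="greater")
print(f"Expected ~{expected:.0f} significant studies; observed {observed_significant}")
print(f"p-value for excess significance: {result.pvalue:.2e}")
```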

When we look at biases resulting from poor study design, across all fields of preclinical inquiry, we find that studies lacking construct, internal and/or external validity, and thus producing biased research reports, are ubiquitous. 25 Not only have such findings contributed to ‘spectacular failures of irreproducibility’ 25 that cast concern over entire fields of research, 3 they also forecast failure for the clinical trials that seek to translate preclinical findings into clinical therapies. 26 Illustrating this is a recent study estimating that a majority of the reports of positive findings from animal studies meant to inform clinical studies of acute stroke actually report what are likely to be false positive results. 27

With this evidence in mind, we must consider anew the harm caused by, for example, toxicities, personal expenses and opportunity costs 28 that phase 1 trial participants endure in trials that launch on the basis of preclinical studies whose biased design produces unreliable research reports used to justify the clinical trials. 29 Those participants have no choice but to rely on a properly functioning research oversight system to protect their interests and welfare. Alas, that oversight system is much weaker than the research and research oversight communities likely would care to admit. 30 All the more reason, then, that our efforts to guard against bias should be as varied and robust as its many sources.

The fact of the matter, though, is that the most prominent safeguard against these biases is peer review. Since it occurs at the reporting stage of the research continuum, it is preceded by other safeguards, such as reporting guidelines, which are reviewed below. None of these other safeguards are as ubiquitous as peer review, however, and it is the gate that publications must ultimately navigate through. Given this level of significance, its effectiveness warrants careful scrutiny. Scrutiny begins by noting that peer review is meant to detect rather than prevent bias. One could perhaps counter that peer review is actually a hybrid countermeasure, since it is capable of preventing bias at times, or at least the dissemination of reports tainted by it, because when peer review works it can prevent publication of suspect findings. However, though it is no doubt true that peer reviewers can reject manuscripts out of concern for bias, concerns about false positive findings, and the like, there is no assurance that manuscripts rejected at one journal will be rejected by all journals. Hence, even if one were to confer it a hybrid status wherein it can both prevent and detect bias, the extent of bias that has long been documented in peer-reviewed journals reveals major weaknesses in peer review. Recent high-profile COVID-19-related retractions 31 and commentary 32 further confirm these weaknesses. Consequently, we need to be guarded in our expectations about the central antibias safeguard and its ability to assure the reliability of published research findings.

The upshot of all this is that current bias safeguards do little to alert clinical investigators, research ethics review committees, and others to the prospects of biased findings in either pivotal preclinical studies that are the precursors to clinical trials or the full spectrum of clinical trials themselves. This raises genuine concerns that far too many ill-advised clinical trials get conducted rather than avoided. It also underscores the need for conducting the individual studies that constitute any given body of preclinical or clinical research in a manner that is free of bias in the first place. Additional safeguards that prevent rather than detect bias will be needed if we are to succeed at this. No doubt multiple ones are needed. In the balance of this piece, I will focus on ones that could be used for preclinical studies, leaving clinical studies safeguards for other occasions.

Preventing bias

Examples of current bias prevention tools

We are fortunate that there are some safeguards for combatting bias in preclinical studies already in place. Perhaps the most notable are reporting guidelines such as the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines. 33 Recently revised, 34 the guidelines are designed to assure transparency of critical methodological aspects of animal studies. If widely enough adopted, they should promote greater rigour in animal research and thus prevent much of the bias that currently plagues it. Unfortunately, though, uptake of the guidelines has been lacklustre to date, mainly because too many animal researchers are either unaware of them or do not follow them. 35 Not all the evidence about reporting guidelines is so discouraging though. A recent study of reporting guidelines tailored for the journal Stroke found that they substantially improved the quality of published preclinical studies when compared with reports in other journals that did not require use of the same guidelines. 36 37

Despite the mixed evidence about the effectiveness of reporting guidelines, both general and journal-tailored reporting guidelines do have value that is worth noting. Even though they target the reporting stage of research, their use can influence how researchers design and conduct their studies. This highlights the true promise of reporting guidelines: they can incline researchers toward well-designed research and robust reports about it. To the extent that this occurs, they function as true bias prevention safeguards.

Nevertheless, enthusiasm for reporting guidelines must be tempered by the mixed evidence about them to date. It suggests that reporting guidelines will have an incremental effect at best on preventing bias. This is borne out by evidence, for example, pertaining to the TREAT-NMD Advisory Committee for Therapeutics. Although this committee does not promulgate specific reporting guidelines, it does promote the kinds of research practices that reporting guidelines are meant to foster. It does this by ‘provid(ing) detailed constructive feedback on clinical proposals for neuromuscular diseases submitted by researchers in both academia and industry.’ This group provided feedback on just under 60 preclinical research programmes between 2010 and 2019. It reports having raised concerns in just under a third of their reviews about the use of control groups, blinding and randomisation with researchers whose preclinical research they reviewed. They also report raising concerns about a misalignment between preclinical data and claimed preclinical efficacy almost a third of the time as well. 19 While some may take comfort in the fact that the group’s reviews found deficiencies in basic elements of sound research in far less than half of the studies they reviewed, all likely agree that the frequency of deficiencies still remains troubling.

Two new strategies for preventing bias in preclinical studies

Experience with the ARRIVE guidelines to date suggests that adoption of new research practices will be sporadic rather than widespread until we find ways to move systematically towards reforms aimed at preventing bias. Perhaps the first step in moving in that direction is collectively grappling with an obvious inference to be drawn from all the evidence noted above: current success metrics in research can too often reward rather than prevent biased research. People may enjoy rewards from design-deficient studies, in the form of publications and funding, as well as the prestige that follows both. This suggests that efforts to combat bias are not just hampered by ineffective and often ill-timed bias countermeasures. They are also hampered by current flawed and entrenched incentive structures and researcher performance metrics that Hardwicke and Ioannidis contend ‘preferentially valu[e] aesthetics over authenticity.’ 38 While many readers may not agree that the current incentive structures are this far askew, we nevertheless must worry, based on the assembled evidence, that research institutions and sponsors may often incentivise biases in very much the same way that private sponsors can cause sponsorship bias.

If this analysis is sound, then widespread adoption of research practices capable of preventing bias will hinge on resisting current incentive structures. The most logical opportunity for generating such resistance resides jointly, I think, with institutional leaders and individual investigators. Though systems-level incentive structures contribute to biased research, the fact of the matter is that investigative teams conduct research and their members are trained at and often employed by research institutions. Thus, the path forward seems to depend on finding ways to get both investigators and research institutions to prize ‘authenticity’ more. This, no doubt, will prove challenging given the extent to which both groups can flourish under current rewards structures.

There are at least two complementary strategies that might prove beneficial. One encourages both investigators and research institutions to recognise the extent to which they are entangled in a major conflict of interest. Their primary interest in conducting authentic science is too often at odds with the secondary interest in being successful and enjoying the individual and institutional rewards of that success. Though we typically do not label this situation as a conflict of interest, often preferring instead the nomenclature of conflicts of commitment, the situation most assuredly is just as deeply conflicted as are the financial relationships that create sponsorship bias. If it were so designated, continued indifference about it would be difficult to maintain. That prospect alone warrants us labelling the situation the conflict of interest that it is.

The other strategy might provide additional motivation. It requires research teams and research institutions, either separately or jointly, to carefully examine the extent to which they may be contributing to the production of biased research. Here is one way they could do that: identify a systematic review of a given body of research in a given field that those participating in the exercise agree employed a reliable meta-analysis plan that identified bias and/or research deficiencies, determine whether any of the published studies included in the review originated from one’s lab or institution, and determine whether that study may have been at risk for contributing to the bias/deficiencies reported in the systematic review. If no studies from a lab group or the institution were included in the systematic review, they could still determine whether there are any published studies from the lab or institution that could have been included in the systematic review and, if so, whether their studies would have contributed to the worrisome findings reported in the systematic review. With these results in hand, the next step would be to develop a prevention plan that is designed to prevent future studies from exhibiting those problems. With the prevention plan in place, one could then determine what institutional and/or lab-level changes would be required in order to implement the prevention plan.
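As a rough sketch of how the cross-referencing step of such a self-audit might be organized, the snippet below checks a lab's publications against the studies included in a chosen systematic review and turns the review's reported deficiencies into candidate prevention-plan items. Every identifier and deficiency label here is a hypothetical placeholder, not drawn from any actual review.

```python
# Hypothetical self-audit helper; every DOI and label below is invented.
lab_publications = {"10.1000/lab-a.2017", "10.1000/lab-b.2019", "10.1000/lab-c.2021"}

systematic_review = {
    "included_studies": {"10.1000/lab-b.2019", "10.1000/other-1", "10.1000/other-2"},
    "reported_deficiencies": ["no blinding", "no randomisation", "no sample size calculation"],
}

# Which of the lab's studies were included in the review?
overlap = lab_publications & systematic_review["included_studies"]
if overlap:
    print(f"Lab studies included in the review: {sorted(overlap)}")
    print("Candidate prevention-plan items:")
    for deficiency in systematic_review["reported_deficiencies"]:
        print(f"  - adopt a safeguard addressing: {deficiency}")
else:
    print("No overlap; check whether any lab studies met the review's inclusion criteria.")
```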

It is likely that few, if any, prevention plans would need to start from scratch. As most readers of this journal are no doubt aware, there is already a wealth of published scholarship about how to improve the quality of biomedical research. Some of the most relevant examples from it include routine use of study preregistration 39 40 and registered reports, 38 41 42 supplementing the 3Rs 43 of animal studies with the 3 Vs of scientific validity, 25 and clearly reporting whether a study is a hypothesis-generating or a hypothesis-confirming study. 26

We must acknowledge at the outset, though, that developing a prevention plan will likely prove much easier than fully adopting one, because adoption will reveal how deeply entrenched the conflict of interest between professional success and rewards and good science often is. For example, clearly labelling research studies as exploratory ones in publications will temper claims about innovation that researchers may be accustomed to making about their work. Similarly, employing registered reports will restrict study analyses and descriptions, which will often result in more constrained publications. 41 Different researchers no doubt will respond differently to these changes, but one can hope that enough of them will feel empowered by the changes to become champions of science reforms within their institutions and professional societies meant to align success metrics with good research. Supporting this expectation are recent studies reporting that researchers are eager for improved research climates at their organisations. 44 45

While research teams develop and implement prevention plans, institutional leaders will need to take responsibility for eliminating the conflicts of interest that promote bias in research. They would not need to start from scratch either, since important preliminary work that could help with this is already underway. This work includes efforts that show how to align institutional metrics of professional success with good science. 46–48 An additional resource they could fruitfully draw from is the recently published ‘Hong Kong Principles for assessing researchers.’ 49 Here too it will no doubt be easier to develop than implement plans meant to avoid the entrenched conflict of interest. But benefits may materialise as soon as the work to develop prevention plans begins. Once institutions name, and thus acknowledge, the conflict of interest that they are helping to perpetuate, maintaining the status quo should prove that much more difficult. This should help to create at least some momentum toward reform and thus away from stasis.

Many readers will no doubt be less sanguine about the success prospects for either strategy. The teams and institutions that choose to adopt them would no doubt have concerns that they would be unilaterally placing themselves at a disadvantage to those that choose not to burden themselves with the demands of either of the proposed strategies. With such concerns in mind, it is helpful to ponder how we might address them. Probably the best option for doing so is to implement some pilot projects to test the use of systematic reviews to develop bias prevention plans. There are at least two options for implementing such pilot projects.

One is for either an institution or a professional society to host a competition where the team that develops the best prevention plan for their work receives some kind of institutional/professional society recognition or reward. Institutional rewards might be monetary in the form of travel stipends for graduate students or postdoctoral fellows to attend conferences. Professional society rewards might be a plenary session at a society’s annual meeting where the winning team could present its bias prevention plan.

The other option is for research institutions to work through their main research officers to sponsor audits of the work of research teams. The audits would be informed by relevant systematic reviews. The audits could either be random or limited to teams that volunteer. To ensure that the audits are not seen or experienced as punitive, the launch of the audits would need to be preceded by a communication campaign that explained the purpose and value of the audits. Others may identify additional options for implementing pilot projects. Whatever options research teams, institutions, and/or professional societies might use, such pilot projects should prove valuable. They are likely the quickest way to learn whether systematic reviews could be used to interrogate research quality at the local level and to develop prevention plans for reducing bias in research.

There is no one panacea capable of turning away all the contributors to decades of disappointing clinical translation efforts. And even if we could snap our fingers and banish overnight the biases that are among the contributors to the disappointing results, science still may not take us to the goal of improved clinical treatments that we seek. After all, we are dealing with science, not magic. But if we could muster the desire and discipline to better combat bias in research, at least we could take comfort in the fact that what we are calling science is in fact actual science, as free of bias as we can possibly make it. The two complementary strategies described above are offered in hopes that they could help to muster that desire and discipline. If either or both were to prove beneficial, we would find ourselves in a place far preferable to the one we are in now.

Acknowledgments

The author would also like to acknowledge the support of Fondation Brocher, the thoughtful suggestions of several reviewers, and useful input from colleagues Robert Nadon and Fernando Fierro.

Correction notice: This article has been corrected since it was published Online First. In the Acknowledgments, name "Fernando Feraro" has been corrected to "Fernando Fierro".

Contributors: The author conceived the ideas for the manuscript and exclusively wrote all versions of the manuscript, including the final one.

Funding: A portion of the author’s time was supported by the National Centre for Advancing Translational Sciences, National Institutes of Health, through grant number UL1 TR001860.

Competing interests: None declared.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data availability statement: Data sharing not applicable as no datasets generated and/or analysed for this study. This manuscript does not report about any original empirical research and thus there are no research data to share.

Open peer review: Prepublication and Review History is available online at http://dx.doi.org/10.1136/bmjos-2020-100116 .

What is peer review?

Reviewers play a pivotal role in scholarly publishing. The peer review system exists to validate academic work, to help improve the quality of published research, and to increase networking possibilities within research communities. Despite criticisms, peer review is still the only widely accepted method for research validation and has continued successfully, with relatively minor changes, for some 350 years.

Elsevier relies on the peer review process to uphold the quality and validity of individual articles and the journals that publish them.

Peer review has been a formal part of scientific communication since the first scientific journals appeared more than 300 years ago. The Philosophical Transactions of the Royal Society is thought to be the first journal to formalize the peer review process, under the editorship of Henry Oldenburg (1618–1677).

Despite many criticisms about the integrity of peer review, the majority of the research community still believes peer review is the best form of scientific evaluation. This opinion was endorsed by the outcome of a survey Elsevier and Sense About Science conducted in 2009 and has since been further confirmed by other publisher and scholarly organization surveys. Furthermore, a 2015 survey by the Publishing Research Consortium saw 82% of researchers agreeing that “without peer review there is no control in scientific communication.”

To learn more about peer review, visit Elsevier’s free e-learning platform Researcher Academy and see our resources below.

The review process

Types of peer review

Peer review comes in different flavours. Each model has its own advantages and disadvantages, and often one type of review will be preferred by a subject community. Before submitting or reviewing a paper, you must therefore check which type is employed by the journal so you are aware of the respective rules. In case of questions regarding the peer review model employed by the journal for which you have been invited to review, consult the journal’s homepage or contact the editorial office directly.  

Single anonymized review

In this type of review, the names of the reviewers are hidden from the author. This is the traditional method of reviewing and is the most common type by far. Points to consider regarding single anonymized review include:

Reviewer anonymity allows for impartial decisions, as the reviewers will not be influenced by potential criticism from the authors.

Authors may be concerned that reviewers in their field could delay publication, giving the reviewers a chance to publish first.

Reviewers may use their anonymity as justification for being unnecessarily critical or harsh when commenting on the authors’ work.

Double anonymized review

Both the reviewer and the author are anonymous in this model. Some advantages of this model are listed below.

Author anonymity limits reviewer bias, such as bias based on the author's gender, country of origin, academic status, or previous publication history.

Articles written by prestigious or renowned authors are considered based on the content of their papers, rather than their reputation.

But bear in mind that despite the above, reviewers can often identify the author through their writing style, subject matter, or self-citation – it is exceedingly difficult to guarantee total author anonymity. More information for authors can be found in our double-anonymized peer review guidelines.

Triple anonymized review

With triple anonymized review, reviewers are anonymous to the author, and the author's identity is unknown to both the reviewers and the editor. Articles are anonymized at the submission stage and are handled in a way to minimize any potential bias towards the authors. However, it should be noted that: 

The complexities involved with anonymizing articles/authors to this level are considerable.

As with double anonymized review, there is still a possibility for the editor and/or reviewers to correctly identify the author(s) from their writing style, subject matter, citation patterns, or other methodologies.

Open review

Open peer review is an umbrella term for many different models aiming at greater transparency during and after the peer review process. The most common definition of open review is when both the reviewer and author are known to each other during the peer review process. Other types of open peer review consist of:

Publication of reviewers’ names on the article page 

Publication of peer review reports alongside the article, either signed or anonymous 

Publication of peer review reports (signed or anonymous) with authors’ and editors’ responses alongside the article 

Publication of the paper after pre-checks and opening a discussion forum to the community who can then comment (named or anonymous) on the article 

Many believe this is the best way to prevent malicious comments, stop plagiarism, prevent reviewers from following their own agenda, and encourage open, honest reviewing. Others see open review as a less honest process, in which politeness or fear of retribution may cause a reviewer to withhold or tone down criticism. For three years, five Elsevier journals experimented with publication of peer review reports (signed or anonymous) as articles alongside the accepted paper on ScienceDirect.

More transparent peer review

Transparency is the key to trust in peer review, and as such there is an increasing call for more transparency around the peer review process. To promote this, many Elsevier journals publish the name of the handling editor of the published paper on ScienceDirect. Some journals also provide details about the number of reviewers who reviewed the article before acceptance. Furthermore, in order to provide updates and feedback to reviewers, most Elsevier journals inform reviewers about the editor’s decision and their peers’ recommendations.

Article transfer service: sharing reviewer comments

Elsevier authors may be invited to transfer their article submission from one journal to another for free if their initial submission was not successful.

As a referee, your review report (including all comments to the author and editor) will be transferred to the destination journal, along with the manuscript. The main benefit is that reviewers are not asked to review the same manuscript several times for different journals. 

Tools and resources

Interesting reads

Chapter 2 of Academic and Professional Publishing (2012), by Irene Hames

"Is Peer Review in Crisis?" Perspectives in Publishing No 2, August 2004, by Adrian Mulligan opens in new tab/window

“The history of the peer-review process,” Trends in Biotechnology, 2002, by Ray Spier

Reviewers’ Update articles

Peer review using today’s technology

Lifting the lid on publishing peer review reports: an interview with Bahar Mehmani and Flaminio Squazzoni

How face-to-face peer review can benefit authors and journals alike

Innovation in peer review: introducing “volunpeers”

Results masked review: peer review without publication bias

Elsevier Researcher Academy modules

The certified peer reviewer course

Transparency in peer review

  • Open access
  • Published: 04 June 2024

Effects of mobile Internet use on the health of middle-aged and older adults: evidences from China health and retirement longitudinal study

Ying Wang & Hong Chen

BMC Public Health, volume 24, Article number: 1490 (2024)

The rapid development of digital technology has radically changed people’s lives. At the same time, as the population rapidly ages, academic research is focusing on the use of Internet technology to improve the health of middle-aged and older people, particularly because the popularity of mobile networks has further increased the population’s access to the Internet. However, related studies have not yet reached a consensus. Herein, an empirical analysis of the influence of mobile Internet use on the subjective health and chronic disease status of individuals in middle age and above was conducted using ordered logit, propensity score matching (PSM), and ordered probit models with data from the 2020 China Health and Retirement Longitudinal Study. The study aimed to provide a theoretical basis and reference for exploring how technological advances can empower the development of a healthy Chinese population and advance the process of healthy aging. According to our findings, the health of middle-aged and older mobile Internet users was greatly improved: mobile Internet use improved both their self-assessed health and the state of their chronic diseases. The heterogeneity analysis showed that the impact of mobile Internet use on well-being was more pronounced for middle-aged persons aged 45–60 years than for those aged ≥ 60 years. Further, the endogeneity test revealed that the PSM model could better eliminate sample selection bias. The results suggest that the estimates are more robust after eliminating endogeneity, and that failing to disentangle sample selectivity bias would overestimate both the facilitating effect of mobile Internet use on the self-assessed health of middle-aged and older adults and its ameliorating effect on their chronic diseases. The mechanism analysis suggests that social engagement is an important mediating channel between mobile Internet use and the health of middle-aged and older adults: mobile Internet use increases opportunities for social participation, thereby improving health.

Introduction

Health is a critical foundation for promoting the well-being of people and maintaining social security and stability [1, 2]. China has always given substantial importance to the construction of a medical security system and has always prioritized improving people’s health [3, 4]. Continuous efforts have considerably improved the country’s level of medical care, and people’s health has improved along with it [5, 6]. According to the National Health Commission, by 2021 the average life expectancy in China had reached 78.2 years, a great improvement over the early years of the nation, when the average lifespan was just 35 [7, 8]. Unfortunately, a rising prevalence of chronic diseases among middle-aged and older adults has been a side effect of the general increase in people’s standard of living [9, 10]. According to relevant medical survey data, the chronic disease prevalence among Chinese people aged ≥ 45 has reached 75%, and that among people aged ≥ 60 is as high as nearly 80%; these data suggest that the health conditions of these age groups in China are not optimistic. Meanwhile, the seventh national census indicated that 18.7% of Chinese citizens are aged 60 or older. The World Health Organization (WHO) has predicted that China will soon become a “super-aging” society. Relevant studies also show that, with the increasing aging trend, the health problems of the elderly population cannot be ignored, as they bear on the stable development of the national economy and society [11, 12, 13, 14]. To cope with these challenges, China has actively formulated many policies aimed at reducing middle-aged and older people’s susceptibility to chronic diseases and improving their health in general. The “Healthy China Strategy” was initially proposed in 2017 by General Secretary Xi Jinping in the report to the 19th Party Congress, which proposed to improve the national health policy, actively cope with population aging, enhance the integration of health services, revitalize the economy by improving the growth of aging businesses and industries, and offer a wide variety of medical care options to the public [15]. Since then, the State Council has implemented the “Healthy China 2030” planning outline, the “Opinions on the Implementation of the Healthy China Action,” and other policies to actively promote the development of China as a healthy country and improve the overall health of the nation [16, 17]. The 20th Party Congress was held in 2022, during which General Secretary Xi Jinping further emphasized the priority of healthy China growth, a national policy to actively manage issues related to the aging population, and an improved public health system [18, 19]. These points suggest that improving people’s health will be a long-term concern of the Party and the State for the people’s livelihood.

Continuous development of the economy and society has led to the penetration of Internet technologies represented by big data, blockchain, the Internet of Things, and artificial intelligence into every aspect of residents’ lives [20, 21, 22]. In particular, the rise of various mobile short-video applications has allowed people of all ages to join the network wave [23, 24]. According to the 50th China Internet Development Statistics Report, the country’s Internet penetration rate was approximately 74.4% as of June 2022, and the number of Internet users in China had climbed to 1.051 billion. The report also details that, as of June 2022, there were 962 million short-video users in China, an increase of 28.05 million from December 2021 and 91.5% of all Internet users. These data suggest that middle-aged and older populations are rapidly integrating into online society and that the era of a “universal network” has arrived. The greatest benefit of Internet technology is that it can overcome the limitations of time and space and allow interaction with information anytime and anywhere, thereby enhancing work efficiency and the convenience of life in general [25]. The popularity of the mobile Internet has further enhanced its accessibility, exposing many people to the Internet social environment [26, 27]. The National Health Plan for the 14th Five-Year Plan, released by the State Council in 2020, explicitly states that the country’s citizens should leverage the Internet and related technologies to enhance the quality of their health care. Internet technology can broaden the channels through which residents search for health information, so that they can find and gain health knowledge online anytime and anywhere, thereby improving their health literacy [28]. Furthermore, the use of the Internet may enhance the convenience of residents’ lives, overcome the limitations of time and space, and facilitate online medical consultations and other medical behaviors [29]. As China enters a new stage of development and faces a new developmental pattern, the application of digital technology to advance the medical security system and enhance people’s health is of practical significance for China to modernize the medical service system and actively and comprehensively promote healthy aging and the “Healthy China Strategy.”

Based on this background, the present study used data from the 2020 China Health and Retirement Longitudinal Study (CHARLS) to empirically examine the influence that mobile Internet use has on the health of persons in middle age and later years. The results not only encourage a new academic perspective for studying health policies for the elderly population in the context of technological progress but also provide new empirical evidence for government departments developing health policies in the era of the digital network. The remainder of this study is organized as follows: the second section is the literature review, in which existing literature is reviewed and summarized; the third section presents the data and methods, including data sources, variable design, and model construction; the fourth section summarizes the study results, with statistical results of the baseline regression, robustness tests, heterogeneity analysis, and the elimination of endogeneity; the fifth section discusses the results in detail and states the strengths and limitations; the last section concludes.

Literature review

With the continuous development of digital technology, there has been a surge in research investigating the connection between technological breakthroughs and improvements in public health. Comparing and summarizing relevant studies shows that the majority of contemporary research has analyzed how conventional Internet use affects population health, and that the findings mostly fall into two camps: that Internet use and citizens’ health have a strong favorable association, i.e., Internet use improves residents’ health; and that Internet use is inversely correlated with citizens’ health, i.e., the Internet is detrimental to residents’ health.

The first conclusion suggests that Internet use may improve population health. Wang et al. analyzed the influence of Internet use on the health of senior individuals using data from the 2012 and 2015 China General Social Survey (CGSS), and they discovered that Internet use significantly affected the users’ physical and mental health but had no significant influence on self-reported health [30]. The connection between Internet use and psychological and physiological health was also moderated negatively by the user’s cognitive abilities. Using data from the 2018 CHARLS, Li et al. conducted an empirical study of how frequent Internet use affects the health of Chinese adults aged 40 and above. They found that adults in their middle years and beyond who used the Internet had better self-assessed health and were less likely to develop chronic disorders [31]. Nonetheless, Internet use had a greater impact on improving chronic disease status than on users’ perceptions of their health. Using data from the 2017 CGSS, Han and Zhao categorized overall health into social, mental, and physical health and studied the effects of Internet use on multidimensional health using an ordered probit model. As per their findings, using the Internet regularly may benefit health in a variety of ways [32]. Liu et al. reported that self-reported health was better among Chinese seniors who utilized the Internet, and seniors who reported high levels of social support from family and friends also reported much better health [33]. According to Guo et al., depressive symptoms in the elderly were decreased by 3.370 points, or around 37.19%, when Internet users were compared with non-users. The health effects were particularly prominent for agricultural workers, women, and older adults [34]. Fan and Yang used a double-difference approach with CHARLS panel data from 2013, 2015, and 2018 to determine whether Internet use affected the mental health of middle-aged and older adults in rural China, and reported that participants’ mental health greatly benefited from Internet use [35]. Guo et al. used data from the 2014, 2016, 2018, and 2020 Chinese Household Panel Surveys to study Internet use and its consequences on the health of the elderly. They found that Internet use has a considerable impact on the physical health of older adults, particularly women, rural residents, and those in the central and western regions [36].

The second finding is that excessive use of the Internet has a detrimental impact on the overall health of individuals. Chen et al. found that, compared with females, males are more likely to become addicted to Internet gaming, leading to Internet addiction, which, in turn, affects their health [37]. Additionally, Kwak found that residents who spent more time online than their peers reported lower subjective health, greater stress levels, and more intense feelings of despair and suicidal ideation than those who spent less time online [38]. Cai et al. reported that concerns regarding the negative impacts of excessive Internet use on users’ psychological health have grown as its use has increased. They discovered that excessive online activity was marginally linked to symptoms of depression, anxiety, loneliness, and other psychological health problems, and was adversely linked to subjective well-being [39]. A study by Xie et al. found that Internet use affects the mental health of older adults and increases their prevalence of depressive symptoms. Further heterogeneity analyses showed more pronounced negative impacts on mental health for specific groups of older persons, such as females, the relatively young and middle-aged among them, those with high incomes, non-rural residents, the less educated, and those living with others [40]. Zhang et al. used data from the 2018 Chinese Family Panel Studies (CFPS2018) to assess the impact of Internet use on the mental health of 14,497 middle-aged and elderly people. The findings suggest that excessive Internet use can lead to increased levels of depression and decreased cognitive function [41].

In terms of Internet use and social participation, studies have shown that Internet use can significantly enhance the social participation of the middle-aged and elderly population. Gong et al. analyzed the impact of Internet use on the social participation of the elderly using data from the 2018 China Longitudinal Aging Social Survey (CLASS) and found that Internet use has expanded the areas of social participation of the elderly in China [42]. Using data from the 2018 China Family Panel Studies, Dong et al. empirically analyzed the impact of Internet use on the social participation of urban older adults and found that Internet use has a significant positive impact on their political and voluntary participation [43]. In terms of social participation and health, studies have shown that social participation can significantly improve the health status of middle-aged and elderly people. Liang et al. used data from the 2018 Chinese Longitudinal Healthy Longevity Survey (CLHLS) to comprehensively analyze the impact of social participation on the health of the elderly using logistic regression modeling and the propensity score matching method, and found that social participation can improve the physical and mental health of the elderly [44]. He et al. argue that social participation is an important action program for promoting mental health in old age and realizing active aging. They analyzed the impact of social participation modes on the mental health of older adults using data from the 2015 and 2018 waves of the China Health and Retirement Longitudinal Study, and found that different modes of social participation were able to improve the mental health of older adults [45].
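The mediation logic running through these studies (Internet use raises social participation, which in turn improves health) is usually tested with a product-of-coefficients, Baron-Kenny style analysis. The sketch below illustrates that logic on synthetic data; it is not the estimation code of any of the cited papers, and all coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Synthetic data: internet use -> social participation (mediator) -> health.
internet = rng.integers(0, 2, n).astype(float)       # 1 = uses the Internet
participation = 0.5 * internet + rng.normal(size=n)  # mediator
health = 0.3 * participation + 0.2 * internet + rng.normal(size=n)

# Path a: effect of Internet use on the mediator.
a_model = sm.OLS(participation, sm.add_constant(internet)).fit()
# Path b: effect of the mediator on health, controlling for Internet use.
exog = sm.add_constant(np.column_stack([internet, participation]))
b_model = sm.OLS(health, exog).fit()

a, b = a_model.params[1], b_model.params[2]
print(f"Indirect (mediated) effect a*b = {a * b:.3f}")
print(f"Direct effect of Internet use = {b_model.params[1]:.3f}")
```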

In summary, studies have drawn two main conclusions about the correlation between Internet use and people's health, but no consensus has been reached. As China advances into a new stage of development, its demographic structure, socioeconomic environment, and pattern of social development have undergone drastic changes. It is therefore of practical importance to revisit the relationship between technological advances and population health in order to realize the Healthy China strategy, alleviate the problems of an aging society, promote the health and well-being of the population, and comprehensively modernize the national health care system. Past research has mostly concentrated on conventional Internet use; the primary objective of this study was instead to investigate the implications of mobile Internet use for population health, which is particularly important for understanding the health effects of mobile Internet in a fully networked era. Second, previous studies have mostly focused on the entire population, on elderly people aged ≥ 60, or on adolescents under 18; research that includes both middle-aged and older adults (aged ≥ 45) is lacking. In addition, few studies have analyzed social participation as a mechanism linking Internet use and the health of middle-aged and older adults. With the increase in average life expectancy and changes in population structure, people in middle age and beyond are becoming a main workforce for social and economic development. Meanwhile, as the government's policy of delaying the retirement age will soon be implemented, re-employment of older people will become a central concern of future social security policy. Investigating the connection between technology use and the health of adults in middle age and beyond is therefore essential for enhancing the healthcare system and improving their quality of life. In terms of methodology, previous research has mostly ignored the issue of endogeneity; to better quantify the effects of mobile Internet use on users' health, we employed a propensity score matching (PSM) model to control for sample selection bias.

By using information from the 2020 CHARLS, this research sought to empirically examine the influence of mobile Internet use on the health of individuals within the middle age range and older adults. Robustness tests and heterogeneity analysis were conducted and the PSM model was used to determine the overall impact that using mobile Internet has on the health of persons in their middle age and later years, thereby eliminating the issue of endogeneity due to sample selectivity bias.

Data and methods

Data sources

The CHARLS 2020 database served as the data source for this article. CHARLS is a large-scale project hosted by Peking University's Institute of Development Studies and jointly implemented by Peking University's China Social Science Survey Center and its Youth League Committee. The major aim of the project is to collect high-quality micro-survey data on Chinese middle-aged and older households and individuals aged ≥ 45. A nationwide baseline survey was carried out in 2011, and follow-up waves were fielded in 2013, 2015, 2018, and 2020, covering a total of 450 communities (villages) and 150 counties across 28 provinces (including municipalities and autonomous regions directly under the Central Government); the survey therefore encompasses a wide range of data and is strongly representative of China. We used the latest wave, from 2020. Because the research topic is the impact of Internet use on health, the individual-level file was selected for analysis. After data processing, censoring, and elimination of invalid variables, a final valid sample of 8,491 respondents was obtained.

Variable design

Dependent variable

The health of individuals in middle age and beyond served as the study's dependent variable. To measure this variable comprehensively, and in line with previous studies, the health of the study population was evaluated by combining self-rated health with the presence or absence of chronic disease [ 46 , 47 ]. In the 2020 CHARLS questionnaire, for the question "How do you think your health is? Is it very good, good, fair, bad or very bad," we assigned a score of 1 to very bad, 2 to bad, 3 to fair, 4 to good, and 5 to very good; the 25th, 50th, and 75th percentiles of self-rated health corresponded to scores of 2, 3, and 4. The questionnaire listed 15 chronic diseases: stomach or other digestive system disease, kidney disease, stroke, liver disease, chronic lung disease, heart disease, cancer (malignancy), diabetes or elevated blood glucose, dyslipidemia (high or low blood lipids), hypertension, emotional or mental disease, dementia, Parkinson's disease, arthritis or rheumatism, and asthma. If the respondent had any of these conditions, a score of 1 was assigned, indicating a chronic disease; otherwise, a score of 0 was assigned, indicating no chronic disease.

Independent variable

Mobile Internet use was the independent variable. The questionnaire included the question "Which of the following tools do you use to access the Internet?" If the respondent reported using a tablet or mobile phone to access the Internet, the value assigned was 1, indicating an active mobile Internet user; otherwise, the value assigned was 0. To further characterize mobile Internet usage, we used the number of mobile Internet functions used by middle-aged and older adults as a proxy variable for the robustness tests. The health of middle-aged and elderly people is also influenced by personal characteristics, such as gender, age, and education level, and by lifestyle factors, such as smoking and alcohol consumption. Drawing on related studies [ 48 , 49 ], respondents' sex, age, marital status, education level, residential address (rural/urban), employment status, health insurance participation, and lifestyle variables (e.g., smoking and alcohol consumption) were included in the model as control variables. In the 2020 CHARLS database, the variables were coded as follows: "What is your gender?" female = 0, male = 1; "What is your marital status?" married = 1, separated = 2, divorced = 3, widowed = 4, unmarried = 5; "What is your education level?" illiterate = 1, primary = 2, secondary = 3, university and above = 4; "What is your residential address?" urban = 1, urban-rural = 2, rural = 3; "Are you currently employed?" no = 0, yes = 1; "Do you participate in medical insurance?" no = 0, yes = 1; "Do you smoke?" no = 0, yes = 1; "Do you drink alcohol?" no = 0, yes = 1. The variable assignments and descriptive statistics are displayed in Table  1 .

Intermediate variable

Social participation is the mediating variable in this paper, chosen to further analyze the mechanism through which mobile Internet use affects the health of middle-aged and elderly people. The questionnaire asks, "Did you do any social activities in the past month?" The answer options include: visiting friends, playing mahjong or chess, providing help to others, dancing or working out, participating in club activities, participating in volunteer activities, attending training courses, and others. If the respondent participated in any of the above social activities, he or she was considered to have engaged in social participation, coded as 1; otherwise, a value of 0 was assigned.
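For concreteness, the following is a minimal coding sketch of these variable constructions, not the authors' code. The file name and all column names (srh, chronic_*, net_device, social_*) are hypothetical placeholders, not real CHARLS variable codes.

```python
# A minimal coding sketch, assuming a CHARLS-style extract.
# All file and column names below are hypothetical placeholders.
import pandas as pd

df = pd.read_stata("charls_2020.dta")  # hypothetical file name

# Dependent variable 1: self-rated health, 1 (very bad) .. 5 (very good)
srh_map = {"very bad": 1, "bad": 2, "fair": 3, "good": 4, "very good": 5}
df["self_rated_health"] = df["srh"].map(srh_map)

# Dependent variable 2: 1 if any of the 15 listed chronic diseases, else 0
chronic_cols = [c for c in df.columns if c.startswith("chronic_")]
df["chronic_disease"] = (df[chronic_cols] == 1).any(axis=1).astype(int)

# Independent variable: accessed the Internet via mobile phone or tablet
df["mobile_internet"] = df["net_device"].isin(["mobile", "tablet"]).astype(int)

# Mediator: any social activity in the past month
social_cols = [c for c in df.columns if c.startswith("social_")]
df["social_participation"] = (df[social_cols] == 1).any(axis=1).astype(int)
```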

Model design

Empirical analysis of the health of middle-aged and older adults was carried out using ordered logit and binary logit models, because the dependent variables were a five-category ordered variable (self-rated health) and a dichotomous variable (chronic disease status). The specific econometric model is as follows:
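The equation display did not survive extraction; a plausible reconstruction of Eq. (1), inferred from the variable definitions that follow (the latent-index form of the ordered/binary logit), is

$$\text{Health}_i^{*}=\beta\,\text{Internet}_i+\gamma X_i+\varepsilon_i,\qquad \text{Health}_i=k\ \ \text{if}\ \ \mu_{k-1}<\text{Health}_i^{*}\leq\mu_k \tag{1}$$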

In the above equation, Health denotes the dependent variable, i.e., the health condition of middle-aged and elderly people; Internet denotes the explanatory variable, mobile Internet use; β denotes the estimated coefficient; X denotes the control variables; and ε denotes the random disturbance term. To ensure the robustness of the analysis, we used ordered probit and probit models as alternative specifications. The specific econometric models are as follows:
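The displays for the alternative specifications were likewise lost in extraction; a plausible reconstruction, identical in form but with a standard-normal error (ordered probit/probit), is

$$\text{Health}_i^{*}=\beta\,\text{Internet}_i+\gamma X_i+\varepsilon_i,\qquad \varepsilon_i\sim N(0,1) \tag{2}$$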

In Eq. ( 2 ), Health_i indicates the health status of individuals in middle age and beyond, Internet_i denotes whether individual i uses the mobile Internet, X_i denotes the control variables, ε_i is the random disturbance term, and β is the estimated coefficient. Because mobile Internet use is self-selected, the model results may suffer from endogeneity; to address this, we employed the PSM model [ 50 , 51 ]. The econometric model is as follows:
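The PSM displays were also lost in extraction; the standard forms consistent with the description below are

$$p(X_i)=\Pr(Z_i=1\mid X_i) \tag{3}$$

$$ATT=E\!\left[Y_{1i}-Y_{0i}\mid Z_i=1\right]=E\!\left[Y_{1i}\mid Z_i=1\right]-E\!\left[Y_{0i}\mid Z_i=1\right] \tag{4}$$

where Y_{1i} and Y_{0i} are individual i's potential health outcomes with and without mobile Internet use.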

In Eq. ( 3 ), Z_i denotes the treatment variable: a value of 1 indicates that individual i is in the study's treatment group, and 0 indicates the control group. Equation ( 4 ) denotes the average treatment effect on the treated (ATT), i.e., the impact of mobile Internet use on the health of middle-aged and older adults after eliminating endogeneity.
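As an illustration only, the sketch below shows how the baseline specifications could be estimated in Python with statsmodels; it is not the authors' code. It assumes the hypothetical data frame `df` from the earlier coding sketch, and the control-variable names (including `sleep_hours`) are placeholders; categorical controls are entered as coded for brevity, whereas dummy-encoding them would be more faithful.

```python
# A minimal estimation sketch under the assumptions stated above.
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

controls = ["gender", "age", "marital_status", "education", "residence",
            "employed", "insured", "smoke", "drink", "sleep_hours"]  # hypothetical names

X = df[["mobile_internet"] + controls]

# Ordered logit for the five-category self-rated health score
ologit = OrderedModel(df["self_rated_health"], X, distr="logit")
print(ologit.fit(method="bfgs", disp=False).summary())

# Binary logit for chronic disease status (0/1)
logit = sm.Logit(df["chronic_disease"], sm.add_constant(X))
print(logit.fit(disp=False).summary())
```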

Baseline regression outcomes

Models (1) and (2) show the baseline regression results for the influence of mobile Internet use on the self-rated health of middle-aged and elderly individuals, and models (3) and (4) show the results for chronic disease prevalence. According to the findings, mobile Internet use has a substantial impact on both self-rated health and chronic disease prevalence among middle-aged and older persons. For self-rated health, model (1) reports the regression without control variables and model (2) the regression with controls included; both indicate that mobile Internet use can substantially enhance health, meaning that self-rated health is better among middle-aged and older adults who use the mobile Internet than among those of the same demographic who do not. For chronic disease prevalence, models (3) and (4) both indicate that mobile Internet use is linked to a decreased risk of chronic disease in middle-aged and elderly persons. Mobile Internet use therefore contributes positively to the health of those in middle age and later life.

The findings for the control variables generally matched those of earlier research. Males reported higher self-rated health than females. Health generally deteriorates with age among middle-aged and older adults, and the probability of suffering from chronic diseases rises, consistent with the current situation in China [ 52 , 53 ]. Regarding marital status, the unmarried group was significantly healthier than the married group. In terms of education, more educated adults reported worse self-rated health; this may be because highly educated middle-aged and older individuals make more objective judgments about their health status. People who drink alcohol tended to report worse self-rated health, but in this study drinking did not adversely affect chronic disease status. This may reflect heterogeneity in drinking behavior across the sample; for example, individuals with different education levels often differ markedly in their drinking patterns. Many bodies, including the World Health Organization, have shown that alcohol is an important contributor to chronic diseases, especially malignant tumors [ 54 , 55 , 56 ]. Sleep duration was positively linked to better subjective and objective measures of health, including chronic disease status, indicating that middle-aged and elderly adults with adequate sleep tended to have good overall health. According to the descriptive statistics, mean sleep time among Chinese middle-aged and elderly people was 6.387 hours, suggesting broadly adequate sleep and, correspondingly, better health. Table  2 displays the baseline regression results.

Analyses of robustness

To ensure the robustness of the results and to further characterize the impact of mobile Internet use on the health of middle-aged and elderly people, we ran robustness tests that substituted both the estimation model and the independent variable. For the multi-category self-rated health variable, the ordered logit model was replaced with an ordered probit model; for the dichotomous chronic disease variable, the logit model was replaced with a probit model. To replace the independent variable, the number of mobile Internet functions used was substituted for the mobile Internet use dummy. Table  3 shows the results of the robustness tests: models (5) and (6) report the alternative estimation models, and models (7) and (8) report the alternative independent variable. In all cases, mobile Internet use had a significant positive impact on self-rated health and chronic disease status among middle-aged and older adults, consistent with the benchmark results and indicating that the paper's conclusions are robust.

Analyses of heterogeneity

To further analyze the influence of mobile Internet use on the health status of middle-aged and older individuals, we examined heterogeneity across age groups. Adults were categorized as either middle-aged (45–60 years old) or elderly (60+ years old) based on the WHO's definitions. The analyses revealed significant heterogeneity in both subjective health and chronic disease status: mobile Internet use was meaningfully correlated with subjective health for the middle-aged group but not for the elderly group, and it improved the chronic disease status of middle-aged individuals but not that of the elderly group. Table  4 presents the findings from the heterogeneity analyses.

Endogenous elimination

Sample selection bias might introduce endogeneity into the empirical findings, since mobile Internet use is a freely chosen behavior; related studies have shown that education level, marital status, and lifestyle can all affect residents' mobile Internet use [ 57 , 58 ]. We therefore employed a PSM model to remove the endogeneity arising from sample selection bias. PSM requires a sample balance test first; subsequent analysis can proceed only when the sample passes this test [ 59 ]. Table  5 shows that before matching, all factors differed significantly in the balance test, whereas none did after matching, indicating that the matched sample was well balanced. The covariate balance plot in Fig.  1 likewise indicates that the matched samples are well balanced. Table  5 displays the specific balance test results, and Fig.  1 depicts the corresponding balance plot.

Fig. 1 Covariate balance plot before and after matching

The outcomes of the PSM analysis, i.e., the ATT, are displayed in Table  6 . Both K-nearest-neighbor and kernel matching were used to ensure the reliability of the findings. For self-rated health, the pre-match ATT was 0.0748 and the post-match values were 0.0469 and 0.0571, respectively, suggesting that the effect of mobile Internet use on the subjective health of middle-aged and older individuals is overstated if endogeneity is not eliminated. For chronic disease status, the pre-match ATT was −0.0699 and the post-match values were −0.0263 and −0.0165, respectively, a decrease in absolute value. Thus, if endogeneity is not accounted for, the beneficial effect of mobile Internet use on middle-aged and older adults with chronic diseases would also be overstated. Table  6 presents the ATT estimates.
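The following is a minimal sketch of the PSM step in Eqs. (3)–(4), not the authors' code; it assumes the hypothetical `df` and `controls` from the earlier sketches, estimates the propensity score with a logit, matches each treated respondent to the nearest control (1-NN with replacement), and computes the ATT. The paper's kernel matching and any caliper are omitted.

```python
# A minimal PSM/ATT sketch under the assumptions stated above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

X = df[controls].to_numpy()
z = df["mobile_internet"].to_numpy()        # treatment indicator Z_i
y = df["self_rated_health"].to_numpy()      # outcome of interest

# Eq. (3): propensity score p(X_i) = Pr(Z_i = 1 | X_i)
ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]

treated = np.where(z == 1)[0]
control = np.where(z == 0)[0]

# 1-nearest-neighbor matching on the propensity score
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = control[idx.ravel()]

# Eq. (4): ATT = mean outcome gap between treated and matched controls
att = (y[treated] - y[matched]).mean()
print(f"ATT on self-rated health: {att:.4f}")
```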

Mechanism analysis

To analyze in greater depth the relationship between mobile Internet use and the self-rated health and chronic disease status of middle-aged and elderly people, this paper explores the underlying mechanism using a mediation effect model. Relevant studies have shown that, with the development of digital technology, Internet use can increase the opportunity for and frequency of residents' social participation, and that social participation can in turn significantly improve residents' health status [ 60 , 61 ]. We therefore selected social participation as the mediating variable and tested its role in the pathway between mobile Internet use and the self-rated health and chronic disease status of middle-aged and elderly people. Following the approach of Baron and Kenny [ 62 ], we used stepwise regression for the mediation analysis; a minimal sketch of this procedure appears below. According to the results in Table  7 , mobile Internet use has a significant positive effect on both the health status and the social participation of middle-aged and older adults, and these effects persist when both variables are included together in the model. This suggests that social participation partially mediates the relationship between mobile Internet use and the health of middle-aged and older adults: mobile Internet use increases opportunities for social participation, which in turn improves health. The mediation effect estimates are shown in Table  7 .
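This sketch illustrates the Baron–Kenny stepwise logic only; it again assumes the hypothetical `df` and `controls` from the earlier sketches, and uses OLS for brevity, whereas the paper's own models are ordered logit/logit. Partial mediation shows up as a shrunken (but still significant) coefficient on `mobile_internet` in the joint model.

```python
# A minimal Baron-Kenny stepwise mediation sketch (assumptions above).
import statsmodels.formula.api as smf

ctrl = " + ".join(controls)

step1 = smf.ols(f"self_rated_health ~ mobile_internet + {ctrl}", df).fit()     # total effect
step2 = smf.ols(f"social_participation ~ mobile_internet + {ctrl}", df).fit()  # effect on mediator
step3 = smf.ols(f"self_rated_health ~ mobile_internet + social_participation + {ctrl}",
                df).fit()                                                       # direct + mediated

for name, model in [("total", step1), ("mediator", step2), ("joint", step3)]:
    print(name, round(model.params["mobile_internet"], 4))
```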

Mobile Internet use may considerably enhance the health of middle-aged and older individuals

The findings of this investigation support the premise that middle-aged and older persons may benefit greatly from mobile Internet use, which strengthens their self-rated health and reduces their risk of developing chronic diseases. These findings are in line with those of Li, Wang, and colleagues [ 30 , 63 ]. Data show that China's 5G base stations account for > 60% of the global total, evidence of the success of China's recent efforts to build a digital China; this has laid a solid foundation for raising residents' Internet penetration rate. Notably, with the advent of mobile Internet, people can obtain medical advice from specialists at any time and from any location, considerably improving the quality of care they receive [ 64 ]. For middle-aged and older individuals in particular, mobile Internet use can ease mobility-related problems, enabling them to obtain appropriate medical services promptly, which can improve their health [ 65 ]. Mobile Internet also reduces information asymmetry and widens information access: everyday observation shows that many middle-aged and elderly adults learn about fitness and health through the Internet, which can markedly improve their health literacy and, subsequently, their health [ 28 ]. In summary, relevant departments should pay attention to the health-promoting effects of the Internet on middle-aged and older people, and can actively guide this population to use the Internet efficiently and improve their mobile Internet skills, maximizing its health benefits. At the same time, the "digital divide" persists, and some middle-aged and elderly adults remain without access to the Internet. There is thus an urgent need for mechanisms that bridge the digital divide so that the broad adult population, including those of middle and old age, can share in the dividends of the digital age.

Heterogeneity of the effects of mobile Internet use on the health of elderly and middle-aged persons in different age groups

The findings illustrate that the effects of mobile Internet use are highly heterogeneous across age groups of middle-aged and older individuals. In particular, the health-promoting effects of mobile Internet were more pronounced among the middle-aged group (45–60 years) than among those aged ≥ 60 years. This may be attributed to the better Internet skills of middle-aged individuals, whereas the older group has relatively fewer opportunities and weaker skills for using the Internet [ 66 ]. Overall, the middle-aged population is somewhat better placed than the elderly population in terms of economic strength and education, and is proficient not only in using the web for health-related information but also in using "Internet medical" services and "Internet hospitals," both of which may substantially enhance health [ 67 , 68 ]. In contrast, elderly individuals are often excluded from Internet use by the current social environment and by declines in physical and mental function; even when they do use the Internet, their skills are weak, particularly for using the Internet to promote their health [ 69 , 70 , 71 ]. Future work should therefore focus on adults aged ≥ 60 years, strengthening their social integration and enabling them to join the universal Internet society. Various channels should be used to increase Internet skills training for the elderly; for example, communities could deliver Internet-related knowledge and skills through regular lectures or peer-to-peer visits, helping older adults overcome their fear of the Internet and improve their skills in daily life.

PSM can effectively eliminate the issue of endogeneity

Owing to differences in endowments, Internet use behavior differs among middle-aged and older individuals with different education levels, marital statuses, and lifestyles; this can introduce endogeneity into model estimates through sample selection bias. Unless this endogeneity is taken into account, the estimates may be inaccurate, and the health-promoting effects of online activity for middle-aged and older people will be over- or underestimated. Nevertheless, most previous studies have tended to ignore this problem. In the present study, we addressed endogeneity using the PSM model. We observed that, without this correction, the positive impact of mobile Internet use on the perceived health of middle-aged and older adults is overestimated, as is its beneficial impact on chronic disease. This indicates that the PSM model effectively mitigates endogeneity and that the resulting estimates are more scientifically valid [ 72 ]. The replacement of the econometric model further confirmed the reliability of the estimates, and the conclusions obtained are sound and credible.

Social participation as an important mechanistic channel between mobile Internet use and the health of middle-aged and older adults

The findings of this paper suggest that social participation is an important mechanistic channel between mobile Internet use and the health of middle-aged and older adults: mobile Internet use increases their opportunities for social participation, which in turn enhances their health. With the continuing spread of the Internet, middle-aged and elderly people can use mobile platforms for social chatting, entertainment, shopping, and so on, greatly increasing their opportunities for social participation. On the one hand, the Internet transcends the limits of time and space, allowing older people to contact friends and family anytime and anywhere, strengthening their social communication and interaction and thereby helping them maintain physical and mental well-being. On the other hand, mobile Internet technology reduces information asymmetry and lets older people access more social information, increasing their social participation and improving their health. In future policy, the government should therefore not only strengthen the popularization and promotion of the mobile Internet among the elderly, providing more convenient platforms for information acquisition and communication, but also encourage the elderly to participate actively in social activities, expanding their scope of social interaction and enhancing social support.

Innovations and limitations

Data from the 2020 CHARLS were used to conduct an empirical analysis of the influence of mobile Internet use on the health of Chinese middle-aged and older adults, employing ordered logit and PSM models. This has important policy implications for fostering a healthy China, easing the burden of an aging population, and improving the health of middle-aged and senior citizens. To our knowledge, this is the first study to investigate the correlation between mobile Internet use and the health of persons in middle age and beyond, providing new empirical evidence for the literature. Furthermore, to ascertain the overall influence of mobile Internet use on health, we used the PSM model to remove endogeneity due to sample selection bias.

Nevertheless, a few caveats apply to our investigation. First, we used cross-sectional data, whereas the health of middle-aged and older people is a dynamic process; in the future, we aim to use multi-year tracking data to examine these dynamics more fully. Second, the explanatory variable, mobile Internet use, is binary, yet the frequency of use may also matter for population well-being: appropriate use of mobile apps can increase pleasure and happiness, whereas excessive use or addiction can harm health. A deeper examination of how technological progress affects population health will therefore require a finer breakdown of mobile Internet use.

Conclusions

Research on how mobile Internet use affects the health of middle-aged and older adults is of practical importance for advancing China's Healthy China strategy, actively promoting healthy aging, and modernizing the health care system within the context of the digital China strategy. We used ordered logit and PSM models to empirically examine the influence of mobile Internet use on the health of middle-aged and older adults in China, using data from the 2020 CHARLS. Our results imply that mobile Internet use substantially affects health in middle and later life, with positive effects on subjective health and on reducing the prevalence of chronic disease. The heterogeneity assessment shows that the health effects of Internet use were more pronounced for those aged 45–60 than for those aged ≥ 60. In addition, to remove the endogeneity arising from sample selection bias, we applied the PSM model; the results suggest that the estimates are more robust after eliminating endogeneity, and that failing to address sample selectivity bias would overestimate both the facilitating effect of mobile Internet use on the self-rated health of middle-aged and older adults and its ameliorating effect on their chronic diseases. The mechanism analysis suggests that social participation is an important mediating mechanism: mobile Internet use increases opportunities for social participation among middle-aged and older adults, thereby improving their health.

Data availability

The data of the 2020 China Health and Retirement Longitudinal Study (CHARLS) are publicly available at https://charls.charlsdata.com/pages/data/111/zh-cn.html (accessed on 16 November 2023).

Lee M, Lee H, Kim Y, Kim J, Cho M, Jang J, Jang H. Mobile app-based health promotion programs: a systematic review of the literature. Int J Environ Res Public Health. 2018;15(12):2838.


Terry PE. The twenty-five most important studies in workplace health promotion. Vol. 37. Los Angeles, CA: SAGE Publications; 2023. pp. 156–63.


Zhuang Q. Always put people's Health as a Strategic Priority for Development - Achievements and experiences of the Health China Initiative since the 18th Party Congress. Management World. 2022;38(07):24–37.

Shen S. What kind of health care system do we need? Social Secur Rev. 2021;5(01):24–39.

Cen S, Ge Y. Economic growth, livelihood incentives, and population health: theory and empirical evidence. Mod Economic Discuss 2022(06):59–69.

Kai G, Wang H, Liu T. Factors influencing the health level of the working population and trends in the evolution of health status. Soc Sci Res 2018(01):38–47.

Wang G. A study on the healthy life expectancy of the Chinese elderly population. Sociol Res. 2022;37(03):160–81.

Chen H, Chen D. Mechanisms of change in the gender gap in life expectancy of the Chinese population. Popul Stud. 2022;46(03):117–28.

Wu H. Family caregiving burden of urban and rural elderly in a period of rapid aging. Popul Stud. 2022;46(03):74–87.

Luo Y, Wang S. Urban living and chronic diseases in the presence of economic growth: evidence from a long-term study in southeastern China. Front Public Health 2022, 10.

Zhao L. A review of healthy aging in China, 2000–2019. Health Care Sci. 2022;1(2):111–8.


McLaughlin SJ, Chen Y, Tham SS, Zhang J, Li LW. Healthy aging in China: Benchmarks and socio-structural correlates. Res Aging. 2020;42(1):23–33.


Han Y, He Y, Lyu J, Yu C, Bian M, Lee L. Aging in China: perspectives on public health. Vol. 4. Elsevier; 2020. pp. 11–7.

Jiang Y, Wu Y, Li S, Fu S, Lv Y, Lin H, Yao Y. Aging-friendly environments and healthy aging. Front Med. 2023;10:1211632.

Yang F, Zhang J. Traditional Chinese Sports under China’s Health Strategy. Journal of Environmental and Public Health 2022, 2022.

Jiang Q, Feng Q, Navarro-Pardo E, Facal D, Bobrowicz-Campos E. Aging and health in China. Aging Health China 2022:5.

Meng Q. Strengthening public health systems in China. Lancet Public Health. 2022;7(12):e987–8.

Dong K. Personal pension: a Strategic measure to actively respond to Population Aging. People Forum. 2022;24:84–7.

Yan Y, Li Y. Research on the challenges and Strategies Faced by healthy aging to active aging. Dongyue Analects. 2022;43(07):165–75.

Chen L, Liu W. The effect of internet access on body weight: evidence from China. J Health Econ. 2022;85:102670.

Graham M, Dutton WH. Society and the internet: how networks of information and communication are changing our lives. Oxford University Press; 2019.

Hine C. Ethnography for the internet: embedded, embodied and everyday. Taylor & Francis; 2015.

Kaye DBV, Chen X, Zeng J. The co-evolution of two Chinese mobile short video apps: parallel platformization of Douyin and TikTok. Mob Media Communication. 2021;9(2):229–53.

Wang Y. Humor and camera view on mobile short-form video apps influence user experience and technology-adoption intent, an example of TikTok (DouYin). Comput Hum Behav. 2020;110:106373.

Scheerder AJ, Van Deursen AJ, Van Dijk JA. Internet use in the home: Digital inequality from a domestication perspective. new Media Soc. 2019;21(10):2099–118.

Benda NC, Veinot TC, Sieck CJ, Ancker JS. Broadband internet access is a social determinant of health! Vol. 110. American Public Health Association; 2020. pp. 1123–5.

Hargittai E, Piper AM, Morris MR. From internet access to internet skills: digital inequality among older adults. Univ Access Inf Soc. 2019;18:881–90.

Estacio EV, Whittle R, Protheroe J. The digital divide: examining socio-demographic factors associated with health literacy, access and use of internet to seek health information. J Health Psychol. 2019;24(12):1668–75.

Kakhi K, Alizadehsani R, Kabir HD, Khosravi A, Nahavandi S, Acharya UR. The internet of medical things and artificial intelligence: trends, challenges, and opportunities. Biocybernetics Biomedical Eng 2022.

Wang J, Liang C, Li K. Impact of internet use on elderly health: empirical study based on Chinese general social survey (CGSS) data. Healthcare. MDPI; 2020. p. 482.

Li L, Ding H, Li Z. Does internet use impact the health status of middle-aged and older populations? Evidence from China health and retirement longitudinal study (CHARLS). Int J Environ Res Public Health. 2022;19(6):3619.

Han J, Zhao X. Impact of internet use on multi-dimensional health: an empirical study based on CGSS 2017 data. Front Public Health. 2021;9:749816.

Liu N, He Y, Li Z. The relationship between internet use and self-rated health among older adults in China: the mediating role of Social Support. Int J Environ Res Public Health. 2022;19(22):14785.

Guo H, Feng S, Liu Z. The temperature of internet: internet use and depression of the elderly in China. Front Public Health 2022, 10.

Fan S, Yang Y. How does internet use improve mental health among middle-aged and elderly people in rural areas in China? A quasi-natural experiment based on the China Health and Retirement Longitudinal Study (CHARLS). Int J Environ Res Public Health. 2022;19(20):13332.

Guo E, Li J, Luo L, Gao Y, Wang Z. The effect and mechanism of internet use on the physical health of the older people—empirical analysis based on CFPS. Front Public Health 2022, 10.

Chen KH, Oliffe JL, Kelly MT. Internet gaming disorder: an emergent health issue for men. Am J Men’s Health. 2018;12(4):1151–9.

Kwak Y, Kim H, Ahn J-W. Impact of internet usage time on mental health in adolescents: using the 14th Korea youth risk behavior web-based survey 2018. PLoS ONE. 2022;17(3):e0264948.


Cai Z, Mao P, Wang Z, Wang D, He J, Fan X. Associations between Problematic Internet Use and Mental Health outcomes of students: a Meta-analytic review. Adolesc Res Rev 2023:1–18.

Xie L, Yang H-l, Lin X-y, Ti S-m, Wu Y-y, Zhang S, Zhang S-q, Zhou W-l. Does the internet use improve the mental health of Chinese older adults? Front Public Health. 2021;9:673368.

Zhang C, Wang Y, Wang J, Liu X. Does internet use promote mental health among middle-aged and older adults in China? Front Psychol. 2022;13:999498.

Gong X, Wang Y. Empowerment and extension domain: the impact of internet use on the social participation pattern of the elderly in our country. Sci Res Aging. 2023;11(11):40–55.

Dong J, Chen C. A study on the influence of internet use on social participation of urban elderly. World Surv Res 2023(05):66–75.

Liang X, Zhang C. The influence of social participation on the health of the elderly: from the perspective of urban and rural differences. J Xihua University(Philosophy Social Sciences). 2023;42(02):57–71.

He W, Zhang X, Liu L. The effect of social participation model on mental health of the elderly: an individual-family balance perspective. Gov Stud. 2022;38(05):12–24.

Li L, Ding H. The relationship between internet use and population health: a cross-sectional survey in China. Int J Environ Res Public Health. 2022;19(3):1322.

Li L, Zeng Y, Zhang Z, Fu C. The impact of internet use on health outcomes of rural adults: evidence from China. Int J Environ Res Public Health. 2020;17(18):6502.

Yang K. Internet use, employment performance and the health of Chinese residents. Int Health. 2022;14(3):222–35.

Zhang S, Zhang Y. The relationship between internet use and mental health among older adults in China: the mediating role of physical exercise. Risk Manage Healthc Policy 2021:4697–708.

Dehejia RH, Wahba S. Propensity score-matching methods for nonexperimental causal studies. Rev Econ Stat. 2002;84(1):151–61.

Cardini P. Endogenesis. diid - disegno industriale industrial design. 2022;(76):6.

Pan Z, Wu L, Zhuo C, Yang F. Evolution and influencing factors of the spatial pattern of health level of the elderly population in China from 2010 to 2020. Acta Geogr Sin. 2022;77(12):3072–89.

Zhang W, Fu M. The Health Status and trends of the Elderly Population in China from 2010 to 2020: analysis based on Census and Sample Survey Data. Chin J Popul Sci 2022(05):17–31.

Boffetta P, Hashibe M. Alcohol and cancer. Lancet Oncol. 2006;7(2):149–56.


Rehm J, Shield KD. Alcohol use and cancer in the European Union. Eur Addict Res. 2021;27(1):1–8.

Kumagai N, Ogura S. Persistence of physical activity in middle age: a nonlinear dynamic panel approach. Eur J Health Econ. 2014;15:717–35.

Gonçalves S, Dias P, Correia A-P. Nomophobia and lifestyle: smartphone use and its relationship to psychopathologies. Computers Hum Behav Rep. 2020;2:100025.

Laws R, Walsh AD, Hesketh KD, Downing KL, Kuswara K, Campbell KJ. Differences between mothers and fathers of young children in their use of the internet to support healthy family lifestyle behaviors: cross-sectional study. J Med Internet Res. 2019;21(1):e11454.

Zhang Z, Kim HJ, Lonjon G, Zhu Y. Balance diagnostics after propensity score matching. Annals Translational Med 2019, 7(1).

Du X, Liao J, Ye Q, Wu H. Multidimensional internet use, social participation, and depression among middle-aged and elderly Chinese individuals: nationwide cross-sectional study. J Med Internet Res. 2023;25:e44514.

Schroeer C, Voss S, Jung-Sievers C, Coenen M. Digital formats for community participation in health promotion and prevention activities: a scoping review. Front Public Health. 2021;9:713159.

Baron RM, Kenny DA. The moderator–mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. J Personal Soc Psychol. 1986;51(6):1173.


Ding H, Zhang C, Xiong W. Associations between mobile internet use and self-rated and mental health of the Chinese population: evidence from China family panel studies 2020. Behav Sci. 2022;12(7):221.

Jin H, Li L, Qian X, Zeng Y. Can rural e-commerce service centers improve farmers’ subject well-being? A new practice of ‘internet plus rural public services’ from China. Int Food Agribusiness Manage Rev. 2020;23(5):681–95.

Zeadally S, Siddiqui F, Baig Z, Ibrahim A. Smart healthcare: challenges and potential solutions using internet of things (IoT) and big data analytics. PSU Res Rev. 2020;4(2):149–68.

Petrovčič A, Reisdorf BC, Grošelj D, Prevodnik K. A typology of aging internet users: exploring digital gradations in internet skills and uses. Social Sci Comput Rev. 2023;41(5):1921–40.

Ghubaish A, Salman T, Zolanvari M, Unal D, Al-Ali A, Jain R. Recent advances in the internet-of-medical-things (IoMT) systems security. IEEE Internet Things J. 2020;8(11):8707–18.

Singh RP, Javaid M, Haleem A, Vaishya R, Ali S. Internet of medical things (IoMT) for orthopaedic in COVID-19 pandemic: roles, challenges, and applications. J Clin Orthop Trauma. 2020;11(4):713–7.

Kiel JM. The digital divide: internet and e-mail use by the elderly. Med Inf Internet Med. 2005;30(1):19–23.

Mubarak F, Suomi R. Elderly forgotten? Digital exclusion in the information age and the rising grey digital divide. INQUIRY: J Health Care Organ Provis Financing. 2022;59:00469580221096272.

Gruzdeva MA. The Age Factor in the Digital Divide: The Edges of Inequality. Economic and social changes: facts, trends, forecast 2022, 15(4):228–241.

Reiffel JA. Propensity score matching: the ‘devil is in the details’ where more may be hidden than you know. Am J Med. 2020;133(2):178–81.


Acknowledgements

The data used in this article are from the 2020 China Health and Retirement Longitudinal Study implemented by the National Development Research Institute of Peking University. We would like to thank the above institution for providing data assistance, but we are responsible for the content of this article.

This research was funded by the National Social Science Fund of China (21FGLB089), the Hunan Provincial Social Science Achievement Review Committee Project (XSP2023GLZ001), and the 2022 Project of the Hunan Health Economics and Information Society (2022B03).

Author information

Authors and affiliations.

School of Humanities and Management, Hunan University of Chinese Medicine, Changsha, 410208, China

School of Administration and Law, Hunan Agricultural University, Changsha, 410128, China


Contributions

Conceptualization, Y.W. and H.C.; methodology, Y.W.; software, Y.W.; validation, Y.W.; formal analysis, Y.W.; investigation, H.C. and Y.W.; resources, H.C.; data curation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W.; visualization, Y.W.; supervision, H.C.; project administration, Y.W.; funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Hong Chen .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Wang, Y., Chen, H. Effects of mobile Internet use on the health of middle-aged and older adults: evidences from China health and retirement longitudinal study. BMC Public Health 24 , 1490 (2024). https://doi.org/10.1186/s12889-024-18916-w


Received : 08 May 2023

Accepted : 21 May 2024

Published : 04 June 2024

DOI : https://doi.org/10.1186/s12889-024-18916-w


Keywords

  • Mobile internet use
  • Middle-aged and elderly people
  • Self-rated health



How Western, Educated, Industrialized, Rich, and Democratic is Social Computing Research?

  • Septiandri, Ali Akbar
  • Constantinides, Marios
  • Quercia, Daniele

Much of the research in social computing analyzes data from social media platforms, which may inherently carry biases. An overlooked source of such bias is the over-representation of WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations, which might not accurately mirror global demographic diversity. We evaluated the dependence on WEIRD populations in research presented at the AAAI ICWSM conference, the only venue whose proceedings are fully dedicated to social computing research. We did so by analyzing 494 papers published from 2018 to 2022, including full research papers, dataset papers, and posters. After filtering out papers that analyze synthetic datasets or lack a clear country of origin, we were left with 420 papers, from which 188 participants in a crowdsourcing study, with full manual validation, extracted the data used to compute WEIRD scores. These data were then used to adapt existing WEIRD metrics to social media data. We found that 37% of these papers focused solely on data from Western countries. This percentage is significantly lower than the percentages observed in research from the CHI (76%) and FAccT (84%) conferences, suggesting a greater diversity of dataset origins within ICWSM. However, studies at ICWSM still predominantly examine populations from countries that are more Educated, Industrialized, and Rich than those in FAccT, with a special note on the 'Democratic' variable reflecting political freedoms and rights. This points to the utility of social media data in shedding light on findings from countries with restricted political freedoms. Based on these insights, we recommend extending current "paper checklists" to include considerations of WEIRD bias, and we call on the community to broaden research inclusivity by encouraging the use of diverse datasets from underrepresented regions.

  • Computer Science - Human-Computer Interaction;
  • Computer Science - Artificial Intelligence

AI Index: State of AI in 13 Charts

In the new report, foundation models dominate, benchmarks fall, prices skyrocket, and on the global stage, the U.S. overshadows.


This year’s AI Index — a 500-page report tracking 2023’s worldwide trends in AI — is out.

The index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. This year’s report covers the rise of multimodal foundation models, major cash investments into generative AI, new performance benchmarks, shifting global opinions, and new major regulations.

Don’t have an afternoon to pore through the findings? Check out the high level here.

Pie chart showing 98 models were open-sourced in 2023

A Move Toward Open-Sourced

This past year, organizations released 149 foundation models, more than double the number released in 2022. Of these newly released models, 65.7% were open-source (meaning they can be freely used and modified by anyone), compared with only 44.4% in 2022 and 33.3% in 2021.

bar chart showing that closed models outperformed open models across tasks

But At a Cost of Performance?

Closed-source models still outperform their open-sourced counterparts. On 10 selected benchmarks, closed models achieved a median performance advantage of 24.2%, with differences ranging from as little as 4.0% on mathematical tasks like GSM8K to as much as 317.7% on agentic tasks like AgentBench.

Bar chart showing Google has more foundation models than any other company

Biggest Players

Industry dominates AI, especially in building and releasing foundation models. This past year Google edged out other industry players in releasing the most models, including Gemini and RT-2. In fact, since 2019, Google has led in releasing the most foundation models, with a total of 40, followed by OpenAI with 20. Academia trails industry: This past year, UC Berkeley released three models and Stanford two.

Line chart showing industry far outpaces academia and government in creating foundation models over the decade

Industry Dwarfs All

If you needed more striking evidence that corporate AI is the only player in the room right now, this should do it. In 2023, industry accounted for 72% of all new foundation models.

Chart showing the growing costs of training AI models

Prices Skyrocket

One of the reasons academia and government have been edged out of the AI race: the exponential increase in cost of training these giant models. Google’s Gemini Ultra cost an estimated $191 million worth of compute to train, while OpenAI’s GPT-4 cost an estimated $78 million. In comparison, in 2017, the original Transformer model, which introduced the architecture that underpins virtually every modern LLM, cost around $900.

Bar chart showing the united states produces by far the largest number of foundation models

What AI Race?

At least in terms of notable machine learning models, the United States vastly outpaced other countries, developing a total of 61 models in 2023. Since 2019, the U.S. has consistently led in originating the majority of notable models, followed by China and the UK.

Line chart showing that across many intellectual task categories, AI has exceeded human performance

Move Over, Human

As of 2023, AI has hit human-level performance on many significant AI benchmarks, from those testing reading comprehension to visual reasoning. Still, it falls just short on some benchmarks like competition-level math. Because AI has been blasting past so many standard benchmarks, AI scholars have had to create new and more difficult challenges. This year’s index also tracked several of these new benchmarks, including those for tasks in coding, advanced reasoning, and agentic behavior.

Bar chart showing a dip in overall private investment in AI, but a surge in generative AI investment

Private Investment Drops (But We See You, GenAI)

While AI private investment has steadily dropped since 2021, generative AI is gaining steam. In 2023, the sector attracted $25.2 billion, nearly ninefold the investment of 2022 and about 30 times the amount from 2019 (call it the ChatGPT effect). Generative AI accounted for over a quarter of all AI-related private investments in 2023.

Bar chart showing the united states overwhelming dwarfs other countries in private investment in AI

U.S. Wins $$ Race

And again, in 2023 the United States dominates in AI private investment. In 2023, the $67.2 billion invested in the U.S. was roughly 8.7 times greater than the amount invested in the next highest country, China, and 17.8 times the amount invested in the United Kingdom. That lineup looks the same when zooming out: Cumulatively since 2013, the United States leads investments at $335.2 billion, followed by China with $103.7 billion, and the United Kingdom at $22.3 billion.

Infographic showing 26% of businesses use AI for contact-center automation, and 23% use it for personalization

Where is Corporate Adoption?

More companies are implementing AI in some part of their business: In surveys, 55% of organizations said they were using AI in 2023, up from 50% in 2022 and 20% in 2017. Businesses report using AI to automate contact centers, personalize content, and acquire new customers. 

Bar chart showing 57% of people believe AI will change how they do their job in 5 years, and 36% believe AI will replace their jobs.

Younger and Wealthier People Worry About Jobs

Globally, most people expect AI to change their jobs, and more than a third expect AI to replace them. Younger generations — Gen Z and millennials — anticipate more substantial effects from AI compared with older generations like Gen X and baby boomers. Specifically, 66% of Gen Z compared with 46% of boomer respondents believe AI will significantly affect their current jobs. Meanwhile, individuals with higher incomes, more education, and decision-making roles foresee AI having a great impact on their employment.

Bar chart depicting the countries most nervous about AI; Australia at 69%, Great Britain at 65%, and Canada at 63% top the list

While the Commonwealth Worries About AI Products

When asked in a survey whether AI products and services make them nervous, 69% of Aussies and 65% of Brits said yes. Japan is the least nervous about AI products, at 23%.

Line graph showing uptick in AI regulation in the united states since 2016; 25 policies passed in 2023

Regulation Rallies

More American regulatory agencies are passing regulations to protect citizens and govern the use of AI tools and data. For example, the Copyright Office and the Library of Congress passed copyright registration guidance concerning works that contained material generated by AI, while the Securities and Exchange Commission developed a cybersecurity risk management strategy, governance, and incident disclosure plan. The agencies to pass the most regulation were the Executive Office of the President and the Commerce Department. 

The AI Index was first created to track AI development. The index collaborates with such organizations as LinkedIn, Quid, McKinsey, Studyportals, the Schwartz Reisman Institute, and the International Federation of Robotics to gather the most current research and feature important insights on the AI ecosystem. 



Open Access

Peer-reviewed

Research Article

Functional connectivity changes in the brain of adolescents with internet addiction: A systematic literature review of imaging studies

Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

Affiliation Child and Adolescent Mental Health, Department of Brain Sciences, Great Ormond Street Institute of Child Health, University College London, London, United Kingdom

Roles Conceptualization, Supervision, Validation, Writing – review & editing

* E-mail: [email protected]

Affiliation Behavioural Brain Sciences Unit, Population Policy Practice Programme, Great Ormond Street Institute of Child Health, University College London, London, United Kingdom


  • Max L. Y. Chang, 
  • Irene O. Lee

PLOS

  • Published: June 4, 2024
  • https://doi.org/10.1371/journal.pmen.0000022


Internet usage has risen starkly worldwide over the last few decades, particularly among adolescents and young people, who are also increasingly diagnosed with internet addiction (IA). IA impacts several neural networks that influence an adolescent's behaviour and development. This article presents a literature review of resting-state and task-based functional magnetic resonance imaging (fMRI) studies to inspect the consequences of IA for functional connectivity (FC) in the adolescent brain and its subsequent effects on behaviour and development. A systematic search of two databases, PubMed and PsycINFO, was conducted to select eligible articles according to the inclusion and exclusion criteria. Eligibility criteria were especially stringent regarding the adolescent age range (10–19) and a formal diagnosis of IA. Bias and quality of individual studies were evaluated. The fMRI results from 12 articles demonstrated that the effects of IA were seen throughout multiple neural networks: a mix of increases and decreases in FC in the default mode network; an overall decrease in FC in the executive control network; and no clear direction of change within the salience network and reward pathway. The FC changes led to addictive behaviour and tendencies in adolescents, and the subsequent behavioural changes are associated with mechanisms relating to cognitive control, reward valuation, motor coordination, and the developing adolescent brain. Our results present the FC alterations in numerous brain regions of adolescents with IA that lead to behavioural and developmental changes. Research on this topic has been infrequent with adolescent samples and was primarily produced in Asian countries. Future studies comparing results from Western adolescent samples would provide more insight into therapeutic intervention.

Citation: Chang MLY, Lee IO (2024) Functional connectivity changes in the brain of adolescents with internet addiction: A systematic literature review of imaging studies. PLOS Ment Health 1(1): e0000022. https://doi.org/10.1371/journal.pmen.0000022

Editor: Kizito Omona, Uganda Martyrs University, UGANDA

Received: December 29, 2023; Accepted: March 18, 2024; Published: June 4, 2024

Copyright: © 2024 Chang, Lee. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting information files.

Funding: The authors received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The behavioural addiction brought on by excessive internet use has become a rising source of concern [ 1 ] over the last decade. According to clinical studies, individuals with Internet Addiction (IA) or Internet Gaming Disorder (IGD) may experience a range of biopsychosocial effects, and the condition is classified as an impulse-control disorder owing to its resemblance to pathological gambling and substance addiction [ 2 , 3 ]. IA has been defined by researchers as a person’s inability to resist the urge to use the internet, which has negative effects on their psychological well-being as well as their social, academic, and professional lives [ 4 ]. The symptoms can have serious physical and interpersonal repercussions and are linked to mood modification, salience, tolerance, impulsivity, and conflict [ 5 ]. In severe circumstances, people may experience severe bodily pain or health issues such as carpal tunnel syndrome, dry eyes, irregular eating, and disrupted sleep [ 6 ]. Additionally, IA is significantly linked to comorbidity with other psychiatric disorders [ 7 ].

Stevens et al. (2021) reviewed 53 studies spanning 17 countries and reported a global IA prevalence of 3.05% [ 8 ]. Asian countries had a higher prevalence (5.1%) than European countries (2.7%) [ 8 ]. Strikingly, adolescents and young adults had a global IGD prevalence rate of 9.9%, which matches previous literature reporting historically higher prevalence among adolescent populations compared with adults [ 8 , 9 ]. Over 80% of the adolescent population in the UK, the USA, and Asia have direct access to the internet [ 10 ]. Children and adolescents frequently spend more time on media (reportedly 7 hours and 22 minutes per day) than at school or asleep [ 11 ]. Developing nations have also shown a sharp rise in teenage internet usage despite having lower internet penetration rates [ 10 ]. This surge, and especially the significant impact of the COVID-19 pandemic, has raised concerns about the possible harms that excessive internet use could do to adolescents and their development [ 12 ]. The growing prevalence and neurocognitive consequences of IA among adolescents make this population a vital area of study [ 13 ].

Adolescence is a crucial developmental stage during which people go through significant changes in their biology, cognition, and personality [ 14 ]. Adolescents’ emotional-behavioural functioning is hyperactivated, which creates a risk of psychopathological vulnerability [ 15 ]. In accordance with clinical study results [ 16 ], this emotional hyperactivity is supported by a high level of neuronal plasticity. This plasticity enables teenagers to adapt to the numerous physical and emotional changes that occur during puberty, as well as to develop communication techniques and gain independence [ 16 ]. However, the strong neuronal plasticity is also associated with risk-taking and sensation seeking [ 17 ], which may lead to IA.

Although the precise neuronal mechanisms underlying IA are still largely unclear, functional magnetic resonance imaging (fMRI) has been used by scientists as an important framework to examine the neuropathological changes occurring in IA, particularly in the form of functional connectivity (FC) [ 18 ]. fMRI research has shown that IA alters both the functional and structural makeup of the brain [ 3 ].

We hypothesise that IA has widespread neurological alteration effects rather than being limited to a few specific brain regions. We further hypothesise that, owing to these alterations of FC between brain regions or within certain neural networks, adolescents with IA experience behavioural changes. An investigation of these domains could be useful for creating better procedures and standards, as well as for minimising the negative effects of excessive internet use. This literature review aims to summarise and analyse the evidence from the various imaging studies that have investigated the effects of IA on FC in adolescents. This will be addressed through two research questions:

  • How does internet addiction affect the functional connectivity in the adolescent brain?
  • How is adolescent behaviour and development impacted by functional connectivity changes due to internet addiction?

The review protocol was conducted in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (see S1 Checklist ).

Search strategy and selection process

A systematic search was conducted up until April 2023 in two databases, PubMed and PsycINFO, using a range of terms relevant to the title and research questions (see the full list of search terms in S1 Appendix ). All the searched articles can be accessed in the S1 Data . Eligible articles were selected according to the inclusion and exclusion criteria. The inclusion criteria for the present review were: (i) participants with a clinical diagnosis of IA; (ii) participants between the ages of 10 and 19; (iii) imaging research investigations; (iv) works published between January 2013 and April 2023; (v) written in English; (vi) peer-reviewed papers; and (vii) full text available. The numbers of articles excluded for not meeting the inclusion criteria are shown in Fig 1 . Each study’s title and abstract were screened for eligibility, as sketched below.
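The review itself was screened manually; purely as an illustration, the seven eligibility rules can be expressed as a simple filter. All field names below are hypothetical, not part of the review's protocol:

```python
from dataclasses import dataclass

@dataclass
class Record:
    # Hypothetical screening fields mirroring inclusion criteria (i)-(vii).
    clinical_ia_diagnosis: bool   # (i) formal clinical diagnosis of IA
    min_age: int                  # (ii) participant ages within 10-19
    max_age: int
    is_imaging_study: bool        # (iii) imaging investigation
    year: int                     # (iv) published January 2013 - April 2023
    language: str                 # (v) written in English
    peer_reviewed: bool           # (vi) peer-reviewed paper
    full_text_available: bool     # (vii) full text available

def is_eligible(r: Record) -> bool:
    return (r.clinical_ia_diagnosis
            and r.min_age >= 10 and r.max_age <= 19
            and r.is_imaging_study
            and 2013 <= r.year <= 2023
            and r.language == "English"
            and r.peer_reviewed
            and r.full_text_available)
```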

Fig 1. Flow diagram of article identification, screening, and selection. https://doi.org/10.1371/journal.pmen.0000022.g001

Quality appraisal

Full texts of all potentially relevant studies were then retrieved and further appraised for eligibility. Articles were also critically appraised using the GRADE (Grading of Recommendations, Assessment, Development, and Evaluations) framework to evaluate each study for both quality and bias. A quality level of low, moderate, or high was then assigned to each article.

Data collection process

Data that satisfied the inclusion requirements were entered into an Excel sheet for data extraction and further selection. Each article’s author, publication year, country, age range, participant sample size, sex, area of interest, measures, outcome, and article quality were included in the data extraction spreadsheet. Studies looking at FC, for instance, were grouped together, while studies looking at FC in a specific area were further divided into sub-groups, as sketched below.
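The extraction itself was done manually in Excel; purely as an illustration of the grouping and sub-grouping step, the same structure can be expressed with pandas. All rows and column names here are illustrative, not the review's actual sheet:

```python
import pandas as pd

# Illustrative extraction sheet; columns mirror the fields listed above.
sheet = pd.DataFrame([
    {"author": "Study A", "year": 2013, "network": "DMN", "region": "PCC"},
    {"author": "Study B", "year": 2014, "network": "ECN", "region": "inferior frontal gyrus"},
    {"author": "Study C", "year": 2014, "network": "SN",  "region": "dACC/insula"},
])

# Group studies by network, then sub-group by specific region within it.
for (network, region), group in sheet.groupby(["network", "region"]):
    print(network, region, "->", list(group["author"]))
```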

Data synthesis and analysis

Articles were classified according to the brain location they examined, as well as the network or pathway it belongs to, in order to create a coherent narrative across the selected studies. Conclusions concerning the research trends relevant to particular groupings were drawn from these groupings and sub-groupings. To keep this information prominent, these assertions were recorded in the data extraction spreadsheet.

The search of the selected databases identified 238 articles in total (see Fig 1 ). Fifteen duplicate articles were eliminated, and another 6 items were removed for other reasons. Title and abstract screening eliminated 184 articles because they were not in English (n = 7), did not include imaging components (n = 47), had adult participants (n = 53), did not have a clinical diagnosis of IA (n = 19), did not address FC in the brain (n = 20), or were published outside the desired timeframe (n = 38). A further 21 papers were eliminated for failing to meet the inclusion requirements after the remaining 33 articles underwent full-text eligibility screening. A total of 12 papers were deemed eligible for this review; these counts are internally consistent, as the check below shows.
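A quick arithmetic check of the selection flow, using only the numbers quoted above:

```python
identified = 238
duplicates = 15
other_removals = 6
screened = identified - duplicates - other_removals   # 217 titles/abstracts screened

# Exclusions at title/abstract screening, as itemised above.
excluded = {"not English": 7, "no imaging": 47, "adult participants": 53,
            "no clinical IA diagnosis": 19, "no FC focus": 20,
            "outside timeframe": 38}
assert sum(excluded.values()) == 184

full_text = screened - sum(excluded.values())         # 33 full texts assessed
included = full_text - 21                             # 12 studies included
print(screened, full_text, included)                  # -> 217 33 12
```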

The characteristics of the included studies, as depicted in the data extraction sheet in Table 1 , provide information on the author(s), publication year, sample size, study location, age range, gender, area of interest, outcome, measures used, and quality appraisal. Most of the studies in this review utilised resting-state functional magnetic resonance imaging techniques (n = 7), several demonstrated task-based fMRI procedures (n = 3), and the remaining studies utilised whole-brain imaging measures (n = 2). The studies were all conducted in Asian countries, specifically China (8), Korea (3), and Indonesia (1). Sample sizes ranged from 12 to 31 participants, with most of the imaging studies having comparable sample sizes. The majority of the studies included a mix of male and female participants (n = 8), while several had a male-only participant pool (n = 3). All except one of the mixed-gender studies had a majority-male participant pool. One study did not disclose the gender demographics of its experiment. Publication years ranged from 2013 to 2022, with 2 studies in 2013, 3 in 2014, 3 in 2015, 1 in 2017, 1 in 2020, 1 in 2021, and 1 in 2022.

Table 1. Characteristics of the included studies. https://doi.org/10.1371/journal.pmen.0000022.t001

(1) How does internet addiction affect the functional connectivity in the adolescent brain?

The included studies were organised according to the brain region or network that they were observing. The specific networks affected by IA were the default mode network, executive control system, salience network and reward pathway. These networks are vital components of adolescent behaviour and development [ 31 ]. The studies in each section were then grouped into subsections according to their specific brain regions within their network.

Default mode network (DMN)/reward network.

Of the 12 studies, 3 specifically studied the default mode network (DMN), and 3 observed whole-brain FC that partially included components of the DMN. The effect of IA on the various centres of the DMN was not uniformly the same. The findings illustrate a complex mix of increases and decreases in FC depending on the specific region of the DMN (see Table 2 and Fig 2 ). Altered FC in the posterior cingulate cortex (PCC), a DMN region involved in attentional processes [ 32 ], was the most frequently reported finding in adolescents with IA, but Lee et al. (2020) additionally found FC alterations in other brain regions, such as the anterior insula cortex, a node in the DMN that controls the integration of motivational and cognitive processes [ 20 ].

Fig 2. Functional connectivity changes in the default mode network. https://doi.org/10.1371/journal.pmen.0000022.g002

Table 2. The overall changes of functional connectivity in the brain networks, including the default mode network (DMN), executive control network (ECN), salience network (SN), and reward network. IA = Internet Addiction; FC = Functional Connectivity. https://doi.org/10.1371/journal.pmen.0000022.t002

Ding et al. (2013) revealed altered FC in the cerebellum, the middle temporal gyrus, and the medial prefrontal cortex (mPFC) [ 22 ]. They found that the bilateral inferior parietal lobule, left superior parietal lobule, and right inferior temporal gyrus had decreased FC, while the bilateral posterior lobe of the cerebellum and the medial temporal gyrus had increased FC [ 22 ]. The right middle temporal gyrus was found to have 111 cluster voxels (t = 3.52, p<0.05) and the right inferior parietal lobule 324 cluster voxels (t = -4.07, p<0.05), with an extent threshold of 54 voxels (figures above this threshold are deemed significant) [ 22 ]. Additionally, there was a negative correlation, with 95 cluster voxels (p<0.05), between the FC of the left superior parietal lobule with the PCC and the Chen Internet Addiction Scale (CIAS) scores, which are used to determine the severity of IA [ 22 ]. On the other hand, in regions of the reward system, connectivity with the PCC was positively correlated with CIAS scores [ 22 ]. The most significant was the right praecuneus, with 219 cluster voxels (p<0.05) [ 22 ]. Wang et al. (2017) also discovered that adolescents with IA had 33% less FC in the left inferior parietal lobule and 20% less FC in the dorsal mPFC [ 24 ]. The generally decreased FC in these areas of the DMN in teenagers with drug addiction and IA reveals a potential connection between the effects of substance use and excessive internet use [ 35 ].

The putamen was one of the main regions of reduced FC in adolescents with IA [ 19 ]. The putamen and the insula-operculum demonstrated significant group differences in functional connectivity, with a cluster size of 251 and an extent threshold of 250 (Z = 3.40, p<0.05) [ 19 ]. This function is crucial because decreased striatal dopaminergic function has been intimately connected to the molecular mechanisms behind addiction disorders [ 19 ].

Executive Control Network (ECN).

Five of the 12 studies specifically examined parts of the executive control network (ECN), and 3 observed whole-brain FC. The effects of IA on the ECN’s constituent parts were consistent across all the studies examined for this analysis (see Table 2 and Fig 3 ). The results showed a notable decline in all the ECN’s major centres. Li et al. (2014) used fMRI imaging and a behavioural task to study response inhibition in adolescents with IA [ 25 ] and found decreased activation in the striatum and frontal gyrus, particularly a reduction in FC at the inferior frontal gyrus, in the IA group compared with controls [ 25 ]. The inferior frontal gyrus showed reduced FC compared with controls, with a cluster size of 71 (t = 4.18, p<0.05) [ 25 ]. In addition, the frontal-basal ganglia pathways in the adolescents with IA showed little effective connectivity between areas engaged in response inhibition [ 25 ].

Fig 3. Functional connectivity changes in the executive control network. https://doi.org/10.1371/journal.pmen.0000022.g003

Lin et al. (2015) found that adolescents with IA demonstrated disrupted corticostriatal FC compared with controls [ 33 ]. The corticostriatal circuitry showed decreased connectivity with the caudate and bilateral anterior cingulate cortex (ACC), as well as the striatum and frontal gyrus [ 33 ]. The inferior ventral striatum showed significantly reduced FC with the subcallosal ACC and caudate head, with a cluster size of 101 (t = -4.64, p<0.05) [ 33 ]. Decreased FC in the caudate implies dysfunction of the corticostriatal-limbic circuitry involved in cognitive and emotional control [ 36 ]. The decreased FC in both the striatum and frontal gyrus is related to inhibitory control, a common deficit seen with disruptions of the ECN [ 33 ].

The dorsolateral prefrontal cortex (DLPFC), ACC, and right supplementary motor area (SMA) of the prefrontal cortex were all found to have significantly decreased grey matter volume [ 29 ]. In addition, the DLPFC, insula, and temporal cortices, as well as significant subcortical regions like the striatum and thalamus, showed decreased FC [ 29 ]. According to Tremblay (2009), the striatum plays a significant role in the processing of rewards, decision-making, and motivation [ 37 ]. Chen et al. (2020) reported that the IA group demonstrated increased impulsivity as well as decreased response inhibition on a Stroop colour-word task [ 26 ]. Furthermore, Chen et al. (2020) observed a negative connection efficiency value between the left DLPFC and dorsal striatum, specifically demonstrating that dorsal striatum activity suppressed the left DLPFC [ 27 ].

Salience network (SN).

Of the 12 chosen studies, 3 specifically looked at the salience network (SN) and 3 observed whole-brain FC. Relative to the DMN and ECN, the findings on the SN were slightly sparser. Despite this, adolescents with IA demonstrated a moderate decrease in FC, as well as in other measures like fibre connectivity and cognitive control, when compared with healthy controls (see Table 2 and Fig 4 ).

Fig 4. Functional connectivity changes in the salience network. https://doi.org/10.1371/journal.pmen.0000022.g004

Xing et al. (2014) used both the dorsal anterior cingulate cortex (dACC) and the insula to test FC changes in the SN of adolescents with IA and found decreased structural connectivity in the SN, as well as decreased fractional anisotropy (FA) that correlated with behavioural performance on the Stroop colour-word task [ 21 ]. They examined the dACC and insula to determine whether the SN’s disrupted connectivity might be linked to the SN’s disrupted regulation, which would explain the impaired cognitive control seen in adolescents with IA. However, the researchers did not find significant FC differences in the SN compared with controls [ 21 ]. These results provided evidence of structural changes in the interconnectivity within the SN in adolescents with IA.

Wang et al. (2017) investigated network interactions between the DMN, ECN, SN, and reward pathway in IA subjects [ 24 ] (see Fig 5 ), and found a 40% reduction in FC between the DMN and specific regions of the SN, such as the insula, in comparison with controls (p = 0.008) [ 24 ]. The anterior insula and dACC are two areas impacted by this altered FC [ 24 ]. This finding supports the idea that IA shares neurobiological abnormalities with other addictive illnesses, in line with a study that discovered disruptive changes in the interaction between the SN and DMN in cocaine addiction [ 38 ]. The insula has also been linked to the intensity of symptoms and has been implicated in the development of IA [ 39 ].

Fig 5. Network interactions in internet addiction. “+” indicates an increase in behaviour; “-” indicates a decrease in behaviour; solid arrows indicate a direct network interaction; dotted arrows indicate a reduction in network interaction. The diagram depicts network interactions juxtaposed with engagement in internet-related behaviours, illustrating how the networks inhibit or amplify internet usage and vice versa, and how the SN mediates both the DMN and ECN. https://doi.org/10.1371/journal.pmen.0000022.g005

(2) How is adolescent behaviour and development impacted by functional connectivity changes due to internet addiction?

The finding that individuals with IA demonstrate an overall decrease in FC in the DMN is supported by numerous studies [ 24 ]. Populations with drug addiction exhibit a similar decline in FC in the DMN [ 40 ]. The disruption of attentional orientation and self-referential processing in both substance and behavioural addiction has therefore been hypothesised to be caused by DMN anomalies in FC [ 41 ].

In adolescents with IA, the decline of FC in the parietal lobule affects visuospatial task-related behaviour [ 22 ], short-term memory [ 42 ], and the ability to control attention or restrain motor responses during response inhibition tests [ 42 ]. Cue-induced gaming cravings are influenced by the DMN [ 43 ]. The praecuneus, a visual processing area, links gaming cues to internal information [ 22 ]. A meta-analysis found that the posterior cingulate cortex activity of individuals with IA during cue-reactivity tasks was correlated with their gaming time [ 44 ], suggesting that excessive gaming may impair DMN function and that individuals with IA exert more cognitive effort to control it. Findings on the behavioural consequences of FC changes in the DMN illustrate its underlying role in regulating impulsivity, self-monitoring, and cognitive control.

Furthermore, Ding et al. (2013) reported activation of components of the reward pathway, including areas like the nucleus accumbens, praecuneus, SMA, caudate, and thalamus, in connection with the DMN [ 22 ]. The increased FC of the limbic and reward networks has been confirmed as a major biomarker for IA [ 45 , 46 ]. The increased reinforcement in these networks strengthens reward stimuli and makes it more difficult for other networks, namely the ECN, to down-regulate the increased attention [ 29 ] (see Fig 5 ).

Executive control network (ECN).

The numerous IA-affected components of the ECN play a role in a variety of behaviours connected to both response inhibition and emotional regulation [ 47 ]. For instance, brain regions like the striatum, which are linked to impulsivity and the reward system, are heavily involved in the act of playing online games [ 47 ]. Online game play activates the striatum, which suppresses the left DLPFC in the ECN [ 48 ]. As a result, people with IA may find it difficult to control their urge to play online games [ 48 ]. This mechanism thus causes impulsive and protracted gaming conduct and a lack of inhibitory control, leading to continued, excessive internet use despite a variety of negative effects, personal distress, and signs of psychological dependence [ 33 ] (see Fig 5 ).

Wang et al. (2017) report that disruptions in cognitive control networks within the ECN are frequently linked to characteristics of substance addiction [ 24 ]. In samples addicted to heroin and cocaine, previous studies discovered abnormal FC in the ECN and the PFC [ 49 ]. Electronic gaming is known to promote striatal dopamine release, similar to drug addiction [ 50 ]. Drgonova and Walther (2016) hypothesised that dopamine could stimulate the reward system of the striatum in the brain, leading to a loss of impulse control and a failure of prefrontal lobe executive inhibitory control [ 51 ]. Ultimately, IA’s resemblance to drug use disorders may point to vital biomarkers or underlying mechanisms that explain how cognitive control and impulsive behaviour are related.

A task-related fMRI study found that the decrease in FC between the left DLPFC and dorsal striatum was congruent with an increase in impulsivity in adolescents with IA [ 26 ]. The lack of response inhibition from the ECN results in a loss of control over internet usage and a reduced capacity to display goal-directed behaviour [ 33 ]. Previous studies have linked the alteration of the ECN in IA with higher cue reactivity and an impaired ability to self-regulate internet-specific stimuli [ 52 ].

Salience network (SN)/ other networks.

Xing et al. (2014) investigated the significance of the SN for cognitive control in teenagers with IA [ 21 ]. The SN, which is composed of the ACC and insula, has been demonstrated to control dynamic changes in other networks to modify cognitive performance [ 21 ]. The ACC is engaged in conflict monitoring and cognitive control, according to previous neuroimaging research [ 53 ]. The insula is a region that integrates interoceptive states into conscious feelings [ 54 ]. The results from Xing et al. (2014) showed declines in the SN’s structural connectivity and fractional anisotropy, even though they did not observe any appreciable change in FC in the IA participants [ 21 ]. Given the small sample size, FC methods may not have been sensitive enough to detect the functional changes [ 21 ]. However, task performance behaviours associated with impaired cognitive control in adolescents with IA were correlated with these findings [ 21 ]. This relationship can enhance our comprehension of the SN’s broader function in IA.

Research supports the idea that different psychological issues are caused by the functional reorganisation of expansive brain networks, such that the strong association between the SN and DMN may provide system-level neurological underpinnings for the uncontrollable character of internet-using behaviours [ 24 ]. In the study by Wang et al. (2017), the decreased interconnectivity between the SN and DMN, comprising regions such as the DLPFC and the insula, suggests that adolescents with IA may struggle to effectively inhibit DMN activity during internally focused processing, leading to poorly managed desires or preoccupations to use the internet [ 24 ] (see Fig 5 ). Subsequently, this may cause a failure to inhibit DMN activity as well as a restriction of ECN functionality [ 55 ]. As a result, the adolescent experiences increased salience of and sensitivity to internet-addiction cues, making it difficult to avoid these triggers [ 56 ].

The primary aim of this review was to present a summary of how internet addiction impacts the functional connectivity of the adolescent brain. The influence of IA on the adolescent brain was compartmentalised into three sections: alterations of FC in various brain regions, specific FC relationships, and behavioural/developmental changes. Overall, the specific effects of IA on the adolescent brain were not completely clear, given the variety of FC changes. However, there were overarching behavioural, network, and developmental trends that provided insight into adolescent development.

The first hypothesis was that the effects of IA would be widespread and regionally similar to substance-use and gambling addiction. After reviewing the information in the chosen articles, this hypothesis was supported. The regions of the brain affected by IA are widespread and span multiple networks, mainly the DMN, ECN, SN, and reward pathway. In the DMN, there was a complex mix of increases and decreases within the network. In the ECN, however, the alterations of FC were more uniformly decreased, while the findings for the SN and reward pathway were less clear. Overall, the FC changes in adolescents with IA are very much network specific and lay a solid foundation from which to understand the subsequent behavioural changes that arise from the disorder.

The second hypothesis placed emphasis on the importance of between-network and within-network interactions in the continuation of IA and the development of its behavioural symptoms. The findings involving the DMN, SN, ECN, and reward system support this hypothesis (see Fig 5 ). Studies confirm the influence of all these neural networks on reward valuation, impulsivity, salience to stimuli, cue reactivity, and other changes that alter behaviour towards internet use. Many of these changes are connected to the inherent nature of the adolescent brain.

There are multiple explanations that underlie the vulnerability of the adolescent brain towards IA related urges. Several of them have to do with the inherent nature and underlying mechanisms of the adolescent brain. Children’s emotional, social, and cognitive capacities grow exponentially during childhood and adolescence [ 57 ]. Early teenagers go through a process called “social reorientation” that is characterised by heightened sensitivity to social cues and peer connections [ 58 ]. Adolescents’ improvements in their social skills coincide with changes in their brains’ anatomical and functional organisation [ 59 ]. Functional hubs exhibit growing connectivity strength [ 60 ], suggesting increased functional integration during development. During this time, the brain’s functional networks change from an anatomically dominant structure to a scattered architecture [ 60 ].

The adolescent brain is very responsive to synaptic reorganisation and experience cues [ 61 ]. As a result, one of the distinguishing traits of the maturation of adolescent brains is the variation in neural network trajectory [ 62 ]. Important weaknesses of the adolescent brain that may explain the neurobiological change brought on by external stimuli are illustrated by features like the functional gaps between networks and the inadequate segregation of networks [ 62 ].

The implications of these findings for adolescent behaviour are significant. Although the exact changes and mechanisms are not fully clear, the observed changes in functional connectivity have the capacity to influence several aspects of adolescent development. For example, functional connectivity has been utilised to investigate attachment styles in adolescents [ 63 ]. Adolescent attachment styles were negatively associated with caudate-prefrontal connectivity, but positively associated with putamen-visual area connectivity [ 63 ]. Both named areas were also influenced by the onset of internet addiction, possibly providing a connection between the two. Another study associated neighbourhood/socioeconomic disadvantage with functional connectivity alterations in the DMN and dorsal attention network [ 64 ]. That study also found multivariate brain-behaviour relationships between the disadvantage-altered functional connectivity and mental health and cognition [ 64 ]. These conclusions support the notion that the functional connectivity alterations observed in IA are associated with specific adolescent behaviours, and that functional connectivity can serve as a platform on which to compare various neurological conditions.

Limitations/strengths

There were several limitations related to the conduct of the review as well as to the data extracted from the articles. Firstly, the study followed a systematic literature review design when analysing the fMRI studies. The data drawn from these imaging studies were largely qualitative and subject to bias, in contrast to the quantitative nature of statistical analysis. Components of the studies, such as sample sizes, effect sizes, and demographics, were not weighted or controlled. The second limitation, raised by a similar review, was the lack of a universal consensus on terminology for IA [ 47 ]. Globally, authors writing about this topic use an array of terms, including online gaming addiction, internet addiction, internet gaming disorder, and problematic internet use. Authors often use multiple terms interchangeably, which makes it difficult to depict the subtle similarities and differences between them.

Reviewing the explicit limitations of each included study, two major issues were raised in many of the articles. One relates to the cross-sectional nature of the included studies. Because of the inherent qualities of a cross-sectional design, the studies did not provide clear evidence that IA played a causal role in the development of the adolescent brain. While several biopsychosocial factors mediate these interactions, task-based measures that combine executive functions with imaging results reinforce the assumed connection utilised by the papers studying IA. The other limitation is the small sample size of the included studies, which averaged around 20 participants. Small sample sizes can limit the generalisability of the results as well as the power of statistical analyses. Ultimately, both study-specific limitations illustrate the need for future studies to clarify the causal relationship between the alterations of FC and the development of IA.

Another vital limitation was that the small number of studies applying imaging techniques to IA in adolescents formed a uniformly Far East collection. This was because the studies included in this review were the only fMRI studies found that adhered to the strict adolescent age restriction. The adolescent age range given by the WHO (10–19 years old) [ 65 ] was strictly followed. It is important to note that many studies found in the initial search used an older adolescent demographic slightly above the WHO age range, with a mean age outside the limit. As a result, the results of this review are based on, and constrained by, the 12 studies that met the inclusion and exclusion criteria.

Regarding the global nature of the research, although the journals in which the studies were published were all established Western journals, the studies themselves all originated from Asian countries, namely China and Korea. This calls into question whether the results and measures from these studies generalise to a Western population. As stated previously, Asian countries have a higher prevalence of IA, which may explain why the majority of studies come from there [ 8 ]. However, an additional search including other age groups found that the great majority of all FC studies on IA were conducted in Asian countries. Interestingly, Western papers studying fMRI FC were primarily focused on gambling and substance-use addiction disorders. Western papers on IA were less focused on fMRI FC and more on other components of IA, such as sleep, game genre, and other non-imaging factors. This demonstrates an overall lack of Western fMRI studies on IA. It is important to note that both Western and Eastern fMRI studies on IA showed an overall lack of focus on children and adolescents in general.

Despite the several limitations, this review provided a clear reflection of the state of the data. The strengths of the review include the strict inclusion/exclusion criteria, which filtered the studies so that only those with a purely adolescent sample were included. As a result, the information presented in this review is specific to the review’s aims. Given the sparse nature of adolescent-specific fMRI studies on FC changes in IA, this review provides a much-needed representation of adolescent-specific results. Furthermore, the review provides a thorough functional explanation of the DMN, ECN, SN, and reward pathway, making it accessible to readers new to the topic.

Future directions and implications

The search process for this review surfaced more imaging studies focused on older adolescence and adulthood. Furthermore, finding a review that covered a strictly adolescent population, focused on FC changes, and specifically depicted IA proved difficult. Many related reviews, such as Tereshchenko and Kasparov (2019), looked at risk factors related to the biopsychosocial model but did not tackle specific structural or functional changes in the brain [ 66 ]. Weinstein (2017) found similar structural and functional results, as well as a role for IA in altering response inhibition and reward valuation in adolescents [ 47 ]. Overall, the accumulated findings paint only an emerging pattern, one that aligns with similar substance-use and gambling disorders. Future studies require more specificity in depicting the interactions between neural networks, as well as more literature on adolescent and comorbid populations. One future field of interest is the incorporation of more task-based fMRI data. Advances in resting-state fMRI methods have yet to be reflected or confirmed in task-based fMRI methods [ 62 ]. Because network connectivity is shaped by different tasks, it is critical to confirm that the findings of resting-state fMRI studies also apply to task-based ones [ 62 ]. Work in this area will confirm whether intrinsic connectivity networks that function at rest behave similarly during goal-directed behaviour [ 62 ]. An elevated focus on adolescent populations, as well as task-based fMRI methodology, will help uncover to what extent the maturation of adolescent network connectivity facilitates behavioural and cognitive development [ 62 ].

A treatment implication is the potential use of bupropion for IA. Bupropion has previously been used to treat patients with gambling disorder and has been effective in decreasing overall gambling behaviour as well as money spent while gambling [ 67 ]. Bae et al. (2018) found a decrease in clinical symptoms of IA following a 12-week bupropion treatment [ 31 ]. The study found that bupropion altered the FC of both the DMN and ECN, which in turn decreased impulsivity and attentional deficits in the individuals with IA [ 31 ]. Interventions like bupropion illustrate the importance of understanding the fundamental mechanisms that underlie disorders like IA.

The goal of this review was to summarise the current literature on functional connectivity changes in adolescents with internet addiction. The findings answered the primary research questions about FC alterations within several networks of the adolescent brain and how these influence behaviour and development. Overall, the research demonstrated several wide-ranging effects on the DMN, SN, ECN, and reward centres. Additionally, the findings highlighted important details such as the maturation of the adolescent brain, the high prevalence of Asian-originated studies, and the importance of task-based studies in this field. The process of making this review allowed for a thorough understanding of IA and its interactions with the adolescent brain.

Given the influx of technology and media into the lives and education of children and adolescents, increased attention to internet-related behavioural changes is imperative for future child and adolescent mental health. Events such as the COVID-19 pandemic exposed the consequences of extended internet usage for the development and lifestyle of young people in particular. While it is important for parents and older generations to be wary of these changes, it is equally important that they develop a basic understanding of the issue and not dismiss it as an all-bad or all-good scenario. Future research on IA should aim to better understand the causal relationship between IA and the psychological symptoms that coincide with it. The current literature on functional connectivity changes in adolescents is limited, and future studies should test larger sample sizes, comorbid populations, and populations outside Far East Asia.

This review aimed to demonstrate the inner workings of how IA alters the connections between the primary behavioural networks in the adolescent brain. Predictably, the present answers merely paint an unfinished picture that does not depict internet usage as overwhelmingly positive or negative. Rather, the research points towards emerging patterns that can inform individuals about the consequences of certain variables or risk factors. A clearer depiction of the mechanisms of IA would allow physicians to screen for and treat the onset of IA more effectively. Clinically, this could take the form of more streamlined and accurate sessions of CBT or family therapy targeting key symptoms of IA; alternatively, clinicians could potentially prescribe treatments such as bupropion to target FC in certain regions of the brain. Furthermore, parental education on IA is another possible avenue of prevention from a public health standpoint. Parents who are aware of the early signs and onset of IA can more effectively manage screen time and impulsivity and minimise the risk factors surrounding IA.

Additionally, increased attention to internet-related fMRI research is needed in the West, as mentioned previously. Despite cultural differences, Western countries may resemble the Eastern countries with a high prevalence of IA, like China and Korea, regarding the implications of the internet and IA. The increasing influence of the internet worldwide may contribute to an overall increase in the global prevalence of IA. The highly saturated Eastern body of work in this field should therefore be replicated with Western samples to determine whether the same FC alterations occur. A growing interest in internet-related research and education in the West will hopefully spread knowledge of healthier internet habits and coping strategies among parents of children and adolescents. Furthermore, IA research has the potential to become a crucial proxy for studying adolescent brain maturation and development.

Supporting information

S1 Checklist. PRISMA checklist.

https://doi.org/10.1371/journal.pmen.0000022.s001

S1 Appendix. Search strategies with all the terms.

https://doi.org/10.1371/journal.pmen.0000022.s002

S1 Data. Article screening records with details of categorized content.

https://doi.org/10.1371/journal.pmen.0000022.s003

Acknowledgments

The authors thank https://www.stockio.com/free-clipart/brain-01 (with attribution to Stockio.com); and https://www.rawpixel.com/image/6442258/png-sticker-vintage for the free images used to create Figs 2 – 4 .

  • 2. American Psychiatric Association. Diagnostic and statistical manual of mental disorders: DSM-5. 5th ed. Washington, D.C.: American Psychiatric Publishing; 2013.
  • 10. Internet World Stats. World Internet Users Statistics and World Population Stats. 2013. http://www.internetworldstats.com/stats.htm
  • 11. Rideout V, Robb MB. The Common Sense census: media use by tweens and teens. San Francisco, CA: Common Sense Media; 2019.
  • 37. Tremblay L. The Ventral Striatum. In: Handbook of Reward and Decision Making. Academic Press; 2009.
  • 57. Bhana A. Middle childhood and pre-adolescence. In: Promoting mental health in scarce-resource contexts: emerging evidence and practice. Cape Town: HSRC Press; 2010. p. 124–42.
  • 65. World Health Organization. Adolescent Health. 2023. https://www.who.int/health-topics/adolescent-health#tab=tab_1


Teens and social media: Key findings from Pew Research Center surveys


For the latest survey data on social media and tech use among teens, see “Teens, Social Media, and Technology 2023.”

Today’s teens are navigating a digital landscape unlike the one experienced by their predecessors, particularly when it comes to the pervasive presence of social media. In 2022, Pew Research Center fielded an in-depth survey asking American teens – and their parents – about their experiences with and views toward social media . Here are key findings from the survey:

Pew Research Center conducted this study to better understand American teens’ experiences with social media and their parents’ perception of these experiences. For this analysis, we surveyed 1,316 U.S. teens ages 13 to 17, along with one parent from each teen’s household. The survey was conducted online by Ipsos from April 14 to May 4, 2022.

This research was reviewed and approved by an external institutional review board (IRB), Advarra, which is an independent committee of experts that specializes in helping to protect the rights of research participants.

Ipsos invited panelists who were a parent of at least one teen ages 13 to 17 from its KnowledgePanel , a probability-based web panel recruited primarily through national, random sampling of residential addresses, to take this survey. For some of these questions, parents were asked to think about one teen in their household. (If they had multiple teenage children ages 13 to 17 in the household, one was randomly chosen.) This teen was then asked to answer questions as well. The parent portion of the survey is weighted to be representative of U.S. parents of teens ages 13 to 17 by age, gender, race, ethnicity, household income and other categories. The teen portion of the survey is weighted to be representative of U.S. teens ages 13 to 17 who live with parents by age, gender, race, ethnicity, household income and other categories.
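Pew does not publish Ipsos's weighting code; purely as an illustration of the general technique, iterative proportional fitting ("raking") adjusts unit weights until the weighted margins match population targets. The two dimensions and target shares below are made up, not Pew's actual ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(0, 2, n)      # 0 = ages 13-14, 1 = ages 15-17 (illustrative)
gender = rng.integers(0, 2, n)   # 0 = boys, 1 = girls (illustrative)
targets = {"age": {0: 0.4, 1: 0.6}, "gender": {0: 0.5, 1: 0.5}}
dims = {"age": age, "gender": gender}

w = np.ones(n)
for _ in range(25):              # iterate until the margins converge
    for dim, col in dims.items():
        total = w.sum()
        for level, share in targets[dim].items():
            mask = col == level
            w[mask] *= share * total / w[mask].sum()

# Weighted share of the first age group now matches the 0.40 target.
print(round(w[age == 0].sum() / w.sum(), 3))
```

Real survey weighting rakes over many more dimensions (race, ethnicity, household income, and so on) and typically trims extreme weights, but the core loop is the same.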

Here are the questions used  for this report, along with responses, and its  methodology .

Majorities of teens report ever using YouTube, TikTok, Instagram and Snapchat. YouTube is the platform most commonly used by teens, with 95% of those ages 13 to 17 saying they have ever used it, according to a Center survey conducted April 14-May 4, 2022, that asked about 10 online platforms. Two-thirds of teens report using TikTok, followed by roughly six-in-ten who say they use Instagram (62%) and Snapchat (59%). Much smaller shares of teens say they have ever used Twitter (23%), Twitch (20%), WhatsApp (17%), Reddit (14%) and Tumblr (5%).

A chart showing that since 2014-15 TikTok has started to rise, Facebook usage has dropped, Instagram and Snapchat have grown.

Facebook use among teens dropped from 71% in 2014-15 to 32% in 2022. Twitter and Tumblr also experienced declines in teen users during that span, but Instagram and Snapchat saw notable increases.

TikTok use is more common among Black teens and among teen girls. For example, roughly eight-in-ten Black teens (81%) say they use TikTok, compared with 71% of Hispanic teens and 62% of White teens. And Hispanic teens (29%) are more likely than Black (19%) or White teens (10%) to report using WhatsApp. (There were not enough Asian teens in the sample to analyze separately.)

Teens’ use of certain social media platforms also varies by gender. Teen girls are more likely than teen boys to report using TikTok (73% vs. 60%), Instagram (69% vs. 55%) and Snapchat (64% vs. 54%). Boys are more likely than girls to report using YouTube (97% vs. 92%), Twitch (26% vs. 13%) and Reddit (20% vs. 8%).

A chart showing that teen girls are more likely than boys to use TikTok, Instagram and Snapchat. Teen boys are more likely to use Twitch, Reddit and YouTube. Black teens are especially drawn to TikTok compared with other groups.

Majorities of teens use YouTube and TikTok every day, and some report using these sites almost constantly. About three-quarters of teens (77%) say they use YouTube daily, while a smaller majority of teens (58%) say the same about TikTok. About half of teens use Instagram (50%) or Snapchat (51%) at least once a day, while 19% report daily use of Facebook.

A chart that shows roughly one-in-five teens are almost constantly on YouTube, and 2% say the same for Facebook.

Some teens report using these platforms almost constantly. For example, 19% say they use YouTube almost constantly, while 16% and 15% say the same about TikTok and Snapchat, respectively.

More than half of teens say it would be difficult for them to give up social media. About a third of teens (36%) say they spend too much time on social media, while 55% say they spend about the right amount of time there and just 8% say they spend too little time. Girls are more likely than boys to say they spend too much time on social media (41% vs. 31%).

A chart that shows 54% of teens say it would be hard to give up social media.

Teens are relatively divided over whether it would be hard or easy for them to give up social media. Some 54% say it would be very or somewhat hard, while 46% say it would be very or somewhat easy.

Girls are more likely than boys to say it would be difficult for them to give up social media (58% vs. 49%). Older teens are also more likely than younger teens to say this: 58% of those ages 15 to 17 say it would be very or somewhat hard to give up social media, compared with 48% of those ages 13 to 14.

Teens are more likely to say social media has had a negative effect on others than on themselves. Some 32% say social media has had a mostly negative effect on people their age, while 9% say this about social media’s effect on themselves.

A chart showing that more teens say social media has had a negative effect on people their age than on them, personally.

Conversely, teens are more likely to say these platforms have had a mostly positive impact on their own life than on those of their peers. About a third of teens (32%) say social media has had a mostly positive effect on them personally, while roughly a quarter (24%) say it has been positive for other people their age.

Still, the largest shares of teens say social media has had neither a positive nor negative effect on themselves (59%) or on other teens (45%). These patterns are consistent across demographic groups.

Teens are more likely to report positive than negative experiences in their social media use. Majorities of teens report experiencing each of the four positive experiences asked about: feeling more connected to what is going on in their friends’ lives (80%), like they have a place where they can show their creative side (71%), like they have people who can support them through tough times (67%), and that they are more accepted (58%).

A chart that shows teen girls are more likely than teen boys to say social media makes them feel more supported but also overwhelmed by drama and excluded by their friends.

When it comes to negative experiences, 38% of teens say that what they see on social media makes them feel overwhelmed because of all the drama. Roughly three-in-ten say it makes them feel like their friends are leaving them out of things (31%) or feel pressure to post content that will get lots of comments or likes (29%). And 23% say that what they see on social media makes them feel worse about their own life.

There are several gender differences in the experiences teens report having while on social media. Teen girls are more likely than teen boys to say that what they see on social media makes them feel a lot like they have a place to express their creativity or like they have people who can support them. However, girls also report encountering some of the pressures at higher rates than boys. Some 45% of girls say they feel overwhelmed because of all the drama on social media, compared with 32% of boys. Girls are also more likely than boys to say social media has made them feel like their friends are leaving them out of things (37% vs. 24%) or feel worse about their own life (28% vs. 18%).

When it comes to abuse on social media platforms, many teens think criminal charges or permanent bans would help a lot. Half of teens think criminal charges or permanent bans for users who bully or harass others on social media would help a lot to reduce harassment and bullying on these platforms. 

A chart showing that half of teens think banning users who bully or criminal charges against them would help a lot in reducing the cyberbullying teens may face on social media.

About four-in-ten teens say it would help a lot if social media companies proactively deleted abusive posts or required social media users to use their real names and pictures. Three-in-ten teens say it would help a lot if school districts monitored students’ social media activity for bullying or harassment.

Some teens – especially older girls – avoid posting certain things on social media because of fear of embarrassment or other reasons. Roughly four-in-ten teens say they often or sometimes decide not to post something on social media because they worry people might use it to embarrass them (40%) or because it does not align with how they like to represent themselves on these platforms (38%). A third of teens say they avoid posting certain things out of concern for offending others by what they say, while 27% say they avoid posting things because it could hurt their chances when applying for schools or jobs.

A chart that shows older teen girls are more likely than younger girls or boys to say they don't post things on social media because they're worried it could be used to embarrass them.

These concerns are more prevalent among older teen girls. For example, roughly half of girls ages 15 to 17 say they often or sometimes decide not to post something on social media because they worry people might use it to embarrass them (50%) or because it doesn’t fit with how they’d like to represent themselves on these sites (51%), compared with smaller shares among younger girls and among boys overall.

Many teens do not feel like they are in the driver’s seat when it comes to controlling what information social media companies collect about them. Six-in-ten teens say they think they have little (40%) or no control (20%) over the personal information that social media companies collect about them. Another 26% aren’t sure how much control they have. Just 14% of teens think they have a lot of control.

Two charts that show a majority of teens feel as if they have little to no control over their data being collected by social media companies, but only one-in-five are extremely or very concerned about the amount of information these sites have about them.

Despite many feeling a lack of control, teens are largely unconcerned about companies collecting their information. Only 8% are extremely concerned about the amount of personal information that social media companies might have and 13% are very concerned. Still, 44% of teens say they have little or no concern about how much these companies might know about them.

Only around one-in-five teens think their parents are highly worried about their use of social media. Some 22% of teens think their parents are extremely or very worried about them using social media. But a larger share of teens (41%) think their parents are either not at all (16%) or a little worried (25%) about them using social media. About a quarter of teens (27%) fall more in the middle, saying they think their parents are somewhat worried.

A chart showing that only a minority of teens say their parents are extremely or very worried about their social media use.

Many teens also believe there is a disconnect between parental perceptions of social media and teens’ lived realities. Some 39% of teens say their experiences on social media are better than parents think, and 27% say their experiences are worse. A third of teens say parents’ views are about right.

Nearly half of parents with teens (46%) are highly worried that their child could be exposed to explicit content on social media. Parents of teens are more likely to be extremely or very concerned about this than about social media causing mental health issues like anxiety, depression or lower self-esteem. Some parents also fret about time management problems for their teen stemming from social media use, such as wasting time on these sites (42%) and being distracted from completing homework (38%).

A chart that shows parents are more likely to be concerned about their teens seeing explicit content on social media than these sites leading to anxiety, depression or lower self-esteem.

Note: Here are the questions used for this report, along with responses, and its methodology.

CORRECTION (May 17, 2023): In a previous version of this post, the percentages of teens using Instagram and Snapchat daily were transposed in the text. The original chart was correct. This change does not substantively affect the analysis.


Emily A. Vogels is a former research associate focusing on internet and technology at Pew Research Center .


Risa Gelles-Watnick is a former research analyst focusing on internet and technology research at Pew Research Center .




OpenAI, Anthropic Research Reveals More About How LLMs Affect Security and Bias

By Megan Crouse

Because large language models operate using neuron-like structures that may link many different concepts and modalities together, it can be difficult for AI developers to adjust their models to change their behavior. If you don’t know which neurons connect which concepts, you won’t know which neurons to change.

On May 21, Anthropic published a remarkably detailed map of the inner workings of the fine-tuned version of its Claude AI, specifically the Claude 3 Sonnet 3.0 model. About two weeks later, OpenAI published its own research on figuring out how GPT-4 interprets patterns.

With Anthropic’s map, the researchers can explore how neuron-like data points, called features, affect a generative AI’s output. Otherwise, people are only able to see the output itself.

Some of these features are “safety relevant,” meaning that if people reliably identify those features, it could help tune generative AI to avoid potentially dangerous topics or actions. The features are useful for adjusting classification, and classification could impact bias.

What did Anthropic discover?

Anthropic’s researchers extracted interpretable features from Claude 3, a current-generation large language model. Interpretable features translate the numerical activations the model actually processes into concepts humans can understand.

Interpretable features may apply to the same concept in different languages and to both images and text.

Anthropic shows that a particular feature activates on words and images connected to the Golden Gate Bridge. The different shading of colors indicates the strength of the activation, from no activation in white to strong activation in dark orange.

“Our high-level goal in this work is to decompose the activations of a model (Claude 3 Sonnet) into more interpretable pieces,” the researchers wrote.

“One hope for interpretability is that it can be a kind of ‘test set for safety,’ which allows us to tell whether models that appear safe during training will actually be safe in deployment,” they said.

SEE: Anthropic’s Claude Team enterprise plan packages up an AI assistant for small-to-medium businesses.

Features are produced by sparse autoencoders, a type of neural network architecture. During the AI training process, sparse autoencoders are guided by, among other things, scaling laws. So identifying features can give the researchers a look into the rules governing which topics the AI associates together. To put it very simply, Anthropic used sparse autoencoders to reveal and analyze features.
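
To make this concrete, here is a minimal sparse-autoencoder sketch in Python with PyTorch. It illustrates the general technique only and is not Anthropic's code: the layer sizes, the sparsity penalty weight and the stand-in activations are all hypothetical.

    # Minimal sparse autoencoder sketch (illustrative; not Anthropic's code).
    # It expands model activations into a much larger, mostly inactive feature
    # space, then reconstructs the original activations from those features.
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, d_model=512, d_features=4096):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_features)  # activations -> features
            self.decoder = nn.Linear(d_features, d_model)  # features -> activations

        def forward(self, x):
            features = torch.relu(self.encoder(x))  # ReLU keeps most features at zero
            return features, self.decoder(features)

    sae = SparseAutoencoder()
    optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
    activations = torch.randn(64, 512)  # stand-in for captured LLM activations

    for step in range(100):
        features, reconstruction = sae(activations)
        # The reconstruction term keeps features faithful to the activations;
        # the L1 term pushes most features to zero, which is what makes each
        # surviving feature sparse and therefore easier to interpret.
        loss = ((reconstruction - activations) ** 2).mean() + 1e-3 * features.abs().mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The sparsity penalty is the key design choice: if only a handful of features fire for any given input, each one tends to track a single recognizable concept, which is what makes the resulting dictionary of features interpretable.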

“We find a diversity of highly abstract features,” the researchers wrote. “They (the features) both respond to and behaviorally cause abstract behaviors.”

The details of the hypotheses used to try to figure out what is going on under the hood of LLMs can be found in Anthropic’s research paper.

What did OpenAI discover?

OpenAI’s research, published June 6, focuses on sparse autoencoders. The researchers go into detail in their paper on scaling and evaluating sparse autoencoders; put very simply, the goal is to make features more understandable — and therefore more steerable — to humans. They are planning for a future where “frontier models” may be even more complex than today’s generative AI.

“We used our recipe to train a variety of autoencoders on GPT-2 small and GPT-4 activations, including a 16 million feature autoencoder on GPT-4,” OpenAI wrote.

So far, they can’t interpret all of GPT-4’s behaviors: “Currently, passing GPT-4’s activations through the sparse autoencoder results in a performance equivalent to a model trained with roughly 10x less compute.” But the research is another step toward understanding the “black box” of generative AI, and potentially improving its security.

How manipulating features affects bias and cybersecurity

Anthropic found three distinct features that might be relevant to cybersecurity: unsafe code, code errors and backdoors. These features might activate in conversations that do not involve unsafe code; for example, the backdoor feature activates for conversations or images about “hidden cameras” and “jewelry with a hidden USB drive.” But Anthropic was able to experiment with “clamping” — put simply, increasing or decreasing the intensity of — these specific features, which could help tune models to avoid or tactfully handle sensitive security topics.
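
In code, clamping might look something like the sketch below, which continues the hypothetical autoencoder above. The feature index and the multiplier are invented for illustration; they do not correspond to any real Claude feature.

    # Illustrative feature clamping, continuing the hypothetical SAE above.
    def clamp_feature(features, index, multiplier, max_activation):
        # Pin one feature to a multiple of its maximum observed activation,
        # leaving every other feature untouched.
        clamped = features.clone()
        clamped[:, index] = multiplier * max_activation
        return clamped

    features, _ = sae(activations)
    features = features.detach()         # inference only; no gradients needed
    max_act = features[:, 42].max()      # feature 42 is an arbitrary example
    boosted = clamp_feature(features, 42, 10.0, max_act)
    steered = sae.decoder(boosted)       # decoded back into activation space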

Claude’s biased or hateful speech can be tuned using feature clamping, though Claude will resist some of its own statements. For example, when the researchers clamped a feature related to hatred and slurs to 20 times its maximum activation value, Claude might output “That’s just racist hate speech from a deplorable bot…” Anthropic’s researchers “found this response unnerving,” catching themselves anthropomorphizing the model when Claude expressed “self-hatred.”

Another feature the researchers examined is sycophancy; they could adjust the model so that it gave over-the-top praise to the person conversing with it.

What does research into AI autoencoders mean for cybersecurity for businesses?

Identifying some of the features used by an LLM to connect concepts could help tune an AI to prevent biased speech, or to prevent or troubleshoot instances in which the AI could be made to lie to the user. Anthropic’s greater understanding of why the LLM behaves the way it does could allow for greater tuning options for Anthropic’s business clients.

SEE: 8 AI Business Trends, According to Stanford Researchers

Anthropic plans to use some of this research to further pursue topics related to the safety of generative AI and LLMs overall, such as exploring what features activate or remain inactive if Claude is prompted to give advice on producing weapons.

Another topic Anthropic plans to pursue in the future is the question: “Can we use the feature basis to detect when fine-tuning a model increases the likelihood of undesirable behaviors?”

TechRepublic has reached out to Anthropic for more information. Also, this article was updated to include OpenAI’s research on sparse autoencoders.

What are the best pollsters in America?

538's new pollster ratings quantify each firm's error, bias and transparency.

With former President Donald Trump’s recent Republican presidential primary victories in Iowa and New Hampshire, we can now say with near certainty that the 2024 general election will feature a rematch between President Joe Biden and his predecessor. Both the stakes of the election and the odds of the outcome are of great importance, and people will be paying them a lot of attention over the next 10 months. And if social media conversations and news coverage about the primary are any indication, public opinion polls will feature very heavily in the discourse about the general election.

In fact, we are due, by my estimation, to be inundated with around 1,500 polls of elections for president, senator, governor and the U.S. House by November. For poll-readers trying to analyze each one, it will feel like drinking from a firehose. Each poll brings with it an array of questions about trust and reliability. For instance, when two polls disagree, which do we trust more? And when we’re averaging polls together (538’s preferred solution to the firehose problem), how can we quantify our preference in a way that is statistically valid and leads to the most accurate models?

Enter 538's pollster ratings, which grade each polling organization based on its historical accuracy and methodological transparency. These ratings have long been an ingredient in 538's polling averages and election models,* but we've rebuilt them from the ground up to account for a changing landscape of polling bias, uncertainty and difficulty.

How we grade pollsters

If you're interested in all the gory details of how we calculate pollster ratings, please peruse our detailed methodological write-up at your leisure. But if all you need is a top-level overview, just know that our ratings reflect firms' scores on two dimensions of pollster quality.

The first is empirical accuracy , as measured by the average error and average bias of a pollster's polls. We quantify error by calculating how close a pollster's surveys land to actual election results, adjusting for how difficult each contest is to poll. Bias is just error that accounts for whether a pollster systematically overestimates Republicans or Democrats. We average our final error and bias values together into one measure of overall accuracy called POLLSCORE, a silly backronym for "Predictive Optimization of Latent skill Level in Surveys, Considering Overall Record, Empirically." POLLSCORE tells us whether a pollster is more accurate than a theoretical replacement-level pollster that polled all the same contests. Negative POLLSCOREs are better and mean that a pollster has less error and bias than this theoretical alternative.

But empirical accuracy only gets us so far. Some pollsters are accurate, but they don't reveal much about how they actually do their work. This can range from small things, like not releasing sample sizes for key subgroups, to big problems, such as not disclosing the partisan sponsors of their research. We have found that pollsters that hide such information tend to be less accurate in future elections, even if they have good historical empirical records.

So we now also score firms based on their methodological transparency. To do this, we have quantified how much information each pollster released about every poll in our archive since 2016. (Shoutout to our fantastic research team, Mary Radcliffe and Cooper Burton, for undertaking this heroic task.) Each poll gets 1 point for each of 10 criteria it meets, ranging from whether it published the actual question wording of its poll to whether it listed sample sizes for key subgroups. We give each pollster a Transparency Score based on the weighted average of the scores of its individual polls and whether it shares data with the Roper Center for Public Opinion Research at Cornell University or is a member of the American Association for Public Opinion Research's Transparency Initiative.
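
As a toy version of that scoring, the sketch below awards each poll 1 point per criterion met and takes a weighted average across a pollster's polls. The weights, and the extra credit 538 gives for Roper or AAPOR participation, are simplified stand-ins rather than 538's actual formula.

    # Toy Transparency Score (the ten-criteria idea is 538's; the weighting
    # here is a simplified stand-in, not their published formula).
    def poll_score(criteria_met):
        # criteria_met: ten booleans, e.g. "published full question wording?"
        return sum(criteria_met)  # 1 point per criterion, for a score of 0-10

    def transparency_score(polls, weights):
        total = sum(w * poll_score(p) for p, w in zip(polls, weights))
        return total / sum(weights)

    polls = [
        [True] * 8 + [False] * 2,  # a poll meeting 8 of the 10 criteria
        [True] * 6 + [False] * 4,  # a poll meeting 6 of the 10
    ]
    weights = [1.0, 0.5]  # hypothetical: weight newer polls more heavily
    print(round(transparency_score(polls, weights), 2))  # 7.33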

Finally, we combine each pollster's POLLSCORE and Transparency Score into a star rating between 0.5 and 3. Only the best of the best will get 3.0 stars; these are pollsters who score in the 99th percentile or better for both accuracy and transparency. Pollsters scoring between 2.8 and 3.0 are still very good — just not the best of the best. Most pollsters score between a 1.9 and 2.8, representing what we see as America's core block of good pollsters. Pollsters between 1.5 and 1.9 stars are decent, but they typically score poorly on either accuracy or transparency. Generally, we are very skeptical of pollsters that get less than 1 star, as they both have poor empirical records and share comparatively little about their methodology. A 0.5-star rating — the bare minimum — is reserved for pollsters that have records of severe error or bias or are disclosing only the bare minimum about their polls.

Why bias, instead of error alone?

Eagle-eyed readers (and pollster-rating superfans) may have noticed two key differences from 538's past pollster ratings. The first is that the ratings incorporate not just polling error, but polling bias. We think both metrics are important, as demonstrated by this simple illustration.

Imagine two polling firms: Pollster A and Pollster B. Pollster A released three surveys of the presidential election in 2020. They showed now-President Joe Biden beating then-President Donald Trump in the national popular vote by 6, 7 and 8 percentage points. Given that Biden actually won the popular vote by 4 points, these polls were off by 2, 3 and 4 points each — all in Biden's favor. On average, that means Pollster A's polls had an error of 3 points and an identical bias of 3 points toward Democrats.

Pollster B, on the other hand, released two surveys showing Biden up by 8 and 12 percentage points and one survey showing Trump up by 2. These polls were off by 4, 8 and 6 points, respectively, for an average error of 6 points — higher than Pollster A's. However, Pollster B's polls were less biased; Biden's average lead in its polls was 6 points, meaning Pollster B's polls were only biased toward Democrats by 2 points.
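
Both pollsters' numbers are easy to reproduce. The short Python sketch below does so; margins are expressed as Democratic minus Republican, and the function is mine rather than 538's.

    # Reproducing the Pollster A / Pollster B arithmetic above.
    # Margins are Democratic minus Republican, in percentage points.
    def error_and_bias(poll_margins, actual_margin):
        misses = [m - actual_margin for m in poll_margins]
        error = sum(abs(m) for m in misses) / len(misses)  # size of the misses
        bias = sum(misses) / len(misses)  # signed: + leans Democratic
        return error, bias

    actual = 4  # Biden won the 2020 popular vote by roughly 4 points
    print(error_and_bias([6, 7, 8], actual))    # (3.0, 3.0) -- Pollster A
    print(error_and_bias([8, 12, -2], actual))  # (6.0, 2.0) -- Pollster B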

When we create polling averages, we want them to be not only accurate, but also unbiased. And to produce unbiased predictions, we need a lot of unbiased polls. (Nate Cohn at The New York Times discovered something similar when developing the Times's own polling average methodology in 2020.) An accurate, but biased, set of pollsters will still yield a biased aggregate on average.

Think about this another way. If most polls in a race overestimate the Democratic candidate by 10 points in a given election, but Pollster C's surveys overestimate Republicans by 5, there may be something off about the way Pollster C does its polls even if its accuracy is higher. We wouldn't necessarily expect it to keep outperforming other pollsters in subsequent elections since the direction of polling bias bounces around unpredictably from election to election.

Transparency matters

The second novelty in 538's new pollster ratings is Transparency Score.

There are several ways to assess how much we should trust a pollster (and, therefore, how much weight we should put on its data in our models). The most direct measurement is the aforementioned quantitative evaluation of pollsters' track records; the idea here is that the most trustworthy pollster is the one that has been most accurate historically. That works well for pollsters that release a lot of polls across different types of races and years. But we don't have that information for all pollsters, and many firms change their methods over time, making their past performance less predictive of future results.

For these pollsters, it turns out that we can use a pollster's transparency as a proxy for future performance. For example, 538's research has found that pollsters that share their data with the Roper Center for Public Opinion Research or participate in the AAPOR Transparency Initiative tend to be more accurate than pollsters that don't. The chart below shows our weighted average POLLSCORE for these two groups of pollsters:

In 2000, pollsters that were members of the AAPOR Transparency Initiative or shared their data with the Roper Center as of December 2023 were about 2.3 points more accurate than an average pollster and over 3.5 points more accurate than pollsters who didn't participate in the AAPOR Transparency Initiative or share data with Roper. That difference had shrunk to about 1.8 points in 2022, but it remains statistically significant. All else being equal, you should almost always prefer a pollster that participates in one of these organizations over a pollster that does not meet either of those criteria. For this reason, 538's old pollster ratings took AAPOR Transparency Initiative and Roper participation into account.

Our new ratings go a step further by incorporating a direct measurement of pollster transparency: Transparency Score. We developed this metric in collaboration with Mark Blumenthal, a pollster, past 538 contributor and co-founder of the (now sadly defunct) poll-aggregation website Pollster.com. Blumenthal has found that pollsters that released more information about their work tended to be more accurate during the 2022 election cycle. Therefore, having a specific score for each pollster's transparency should give us even more information with which to predict how well it will perform in the future. It also brings our definition of "trust" in a pollster closer to how scientists peer-review each other or how journalists vet other types of sources (data-driven journalism is, after all, journalism).

America's best pollsters

Now for the moment you've all been waiting for — which pollsters actually score the best by our new metric?

There are some familiar faces here. The New York Times/Siena College, for example, is the most accurate pollster in America. Due to its accuracy and transparency, it and ABC News/Washington Post are also the only two pollsters with a three-star rating (although 538 is part of ABC News, we developed this methodology without the input of ABC News’s polling team and did not know how it would affect their rating). The Marquette University Law School poll is also America’s most transparent, owing to the abundance of information it shares about how it conducts its polls. The other pollsters benefit from a mix of accuracy and transparency.

However, a word of caution here. The precise values for each score — and, therefore, each pollster's rank — are subject to a good amount of measurement error. Although we have a lot of quantitative tools to account for a poll's sampling error and the difficulty of certain races to poll, there are some factors we simply cannot adjust for. To illustrate this point, I re-ran our pollster-rating computer program 1,000 times, each time grading pollsters based on a random subset of their polls in our database (this is what academics call "bootstrapping"). This yielded 1,000 different plausible pollster scores for each organization. In the table below, I show the median, 5th percentile and 95th percentile of a few key firms' overall ranks across the simulations.
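
For readers who want the flavor of that procedure, here is a toy bootstrap in Python. It resamples one hypothetical pollster's polling misses and recomputes a score each time; 538's real program rescores and re-ranks every pollster jointly on each resample.

    # Toy bootstrap of a single pollster's accuracy score (illustrative only).
    import numpy as np

    rng = np.random.default_rng(538)
    poll_misses = np.array([1.2, -0.5, 3.1, 0.8, -2.0, 1.5])  # hypothetical

    scores = []
    for _ in range(1000):
        # Score the pollster on a random resample of its own polls.
        resample = rng.choice(poll_misses, size=len(poll_misses), replace=True)
        scores.append(np.abs(resample).mean())

    lo, median, hi = np.percentile(scores, [5, 50, 95])
    print(f"median {median:.2f}, 90% interval ({lo:.2f}, {hi:.2f})")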

As you can see, there is a fairly wide range for many pollsters' potential ranks. The width of that range depends both on how many polls the pollster has published and how often it does well or poorly compared with the competition — in essence, whether it's getting "luckier" than others.

All this is to say you should not sweat small differences in the pollster ratings. A pollster's exact rank is less important than its general position: Pollsters near the top of the list are very trustworthy; those near the middle are good but have minor issues; and you should be wary of data from pollsters near the bottom.

That's it for now! Our new methodology for rating pollsters is designed to provide readers with a quick, accessible way to identify the most accurate, unbiased and transparent polls in America. As the polling landscape evolves, our strong belief is that the best pollsters are those that both performed well in the past and show their work today. Check out the full rankings on our interactive dashboard.

*While we haven't yet updated our polling averages to account for the new pollster ratings, we will do so in the near future.

CORRECTION (Jan. 25, 2024, 11:25 a.m.): A previous version of this article incorrectly stated that Mark Blumenthal has found that pollsters that release more information about their work tend to be more accurate in the long run. Blumenthal’s finding was that pollsters that released more information about their work tended to be more accurate during the 2022 election cycle.

Types of Bias in Research | Definition & Examples

Table of contents

  • Actor–observer bias
  • Confirmation bias
  • Information bias
  • Interviewer bias
  • Publication bias
  • Researcher bias
  • Response bias
  • Selection bias
  • How to avoid bias in research
  • Other types of research bias
  • Frequently asked questions about research bias

Actor–observer bias occurs when you attribute the behaviour of others to internal factors, like skill or personality, but attribute your own behaviour to external or situational factors.

In other words, when you are the actor in a situation, you are more likely to link events to external factors, such as your surroundings or environment. However, when you are observing the behaviour of others, you are more likely to associate behaviour with their personality, nature, or temperament.

One interviewee recalls a morning when it was raining heavily. They were rushing to drop off their kids at school in order to get to work on time. As they were driving down the road, another car cut them off as they were trying to merge. They tell you how frustrated they felt and exclaim that the other driver must have been a very rude person.

At another point, the same interviewee recalls that they did something similar: accidentally cutting off another driver while trying to take the correct exit. However, this time, the interviewee claimed that they always drive very carefully, blaming their mistake on poor visibility due to the rain.

Confirmation bias is the tendency to seek out information in a way that supports our existing beliefs while also rejecting any information that contradicts those beliefs. Confirmation bias is often unintentional but still results in skewed results and poor decision-making.

Let’s say you grew up with a parent in the military. Chances are that you have a lot of complex emotions around overseas deployments. This can lead you to over-emphasise findings that ‘prove’ that your lived experience is the case for most families, neglecting other explanations and experiences.

Information bias , also called measurement bias, arises when key study variables are inaccurately measured or classified. Information bias occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall from participants.

The main types of information bias are:

  • Recall bias
  • Observer bias
  • Performance bias
  • Regression to the mean (RTM)

Over a period of four weeks, you ask students to keep a journal, noting how much time they spent on their smartphones along with any symptoms like muscle twitches, aches, or fatigue.

Recall bias is a type of information bias. It occurs when respondents are asked to recall events in the past and is common in studies that involve self-reporting.

As a rule of thumb, infrequent events (e.g., buying a house or a car) will be memorable for longer periods of time than routine events (e.g., daily use of public transportation). You can reduce recall bias by running a pilot survey and carefully testing recall periods. If possible, test both shorter and longer periods, checking for differences in recall.

  • A group of children who have been diagnosed, called the case group
  • A group of children who have not been diagnosed, called the control group

Since the parents are being asked to recall what their children generally ate over a period of several years, there is high potential for recall bias in the case group.

The best way to reduce recall bias is by ensuring your control group will have similar levels of recall bias to your case group. Parents of children who have childhood cancer, which is a serious health problem, are likely to be quite concerned about what may have contributed to the cancer.

Thus, if asked by researchers, these parents are likely to think very hard about what their child ate or did not eat in their first years of life. Parents of children with other serious health problems (aside from cancer) are also likely to be quite concerned about any diet-related question that researchers ask about.

Observer bias is the tendency of observers to see what they expect or want to see, rather than what is actually occurring. Observer bias can affect the results in observational and experimental studies, where subjective judgement (such as assessing a medical image) or measurement (such as rounding blood pressure readings up or down) is part of the data collection process.

Observer bias leads to over- or underestimation of true values, which in turn compromises the validity of your findings. You can reduce observer bias by using double- and single-blinded research methods.

Based on discussions you had with other researchers before starting your observations, you are inclined to think that medical staff tend to simply call each other when they need specific patient details or have questions about treatments.

At the end of the observation period, you compare notes with your colleague. Your conclusion was that medical staff tend to favor phone calls when seeking information, while your colleague noted down that medical staff mostly rely on face-to-face discussions. Seeing that your expectations may have influenced your observations, you and your colleague decide to conduct interviews with medical staff to clarify the observed events.

Note: Observer bias and actor–observer bias are not the same thing.

Performance bias is unequal care between study groups. Performance bias occurs mainly in medical research experiments, if participants have knowledge of the planned intervention, therapy, or drug trial before it begins.

Studies about nutrition, exercise outcomes, or surgical interventions are very susceptible to this type of bias. It can be minimized by using blinding , which prevents participants and/or researchers from knowing who is in the control or treatment groups. If blinding is not possible, then using objective outcomes (such as hospital admission data) is the best approach.

When the subjects of an experimental study change or improve their behaviour because they are aware they are being studied, this is called the Hawthorne (or observer) effect . Similarly, the John Henry effect occurs when members of a control group are aware they are being compared to the experimental group. This causes them to alter their behaviour in an effort to compensate for their perceived disadvantage.

Regression to the mean (RTM) is a statistical phenomenon that refers to the fact that a variable that shows an extreme value on its first measurement will tend to be closer to the centre of its distribution on a second measurement.

Medical research is particularly sensitive to RTM. Here, interventions aimed at a group or a characteristic that is very different from the average (e.g., people with high blood pressure) will appear to be successful because of the regression to the mean. This can lead researchers to misinterpret results, describing a specific intervention as causal when the change in the extreme groups would have happened anyway.

In general, among people with depression, certain physical and mental characteristics have been observed to deviate from the population mean .

This could lead you to think that the intervention was effective when those treated showed improvement on measured post-treatment indicators, such as reduced severity of depressive episodes.

However, given that such characteristics deviate more from the population mean in people with depression than in people without depression, this improvement could be attributed to RTM.
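
A few lines of simulation make the phenomenon vivid. In the sketch below, where the population parameters are arbitrary, no treatment is applied between the two measurements, yet the group selected for extreme first scores still appears to improve on remeasurement.

    # Simulating regression to the mean: nothing changes between the two
    # measurements, yet the "extreme" group looks better the second time.
    import numpy as np

    rng = np.random.default_rng(0)
    trait = rng.normal(50, 10, 10_000)          # stable underlying severity
    first = trait + rng.normal(0, 10, 10_000)   # noisy first measurement
    second = trait + rng.normal(0, 10, 10_000)  # independent noisy retest

    extreme = first > 75  # enroll only people with very high first scores
    print(first[extreme].mean())   # ~81: selected partly for unlucky noise
    print(second[extreme].mean())  # ~65: closer to the mean, no intervention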

Interviewer bias stems from the person conducting the research study. It can result from the way they ask questions or react to responses, but also from any aspect of their identity, such as their sex, ethnicity, social class, or perceived attractiveness.

Interviewer bias distorts responses, especially when the characteristics relate in some way to the research topic. Interviewer bias can also affect the interviewer’s ability to establish rapport with the interviewees, causing them to feel less comfortable giving their honest opinions about sensitive or personal topics.

Participant: ‘I like to solve puzzles, or sometimes do some gardening.’

You: ‘I love gardening, too!’

In this case, seeing your enthusiastic reaction could lead the participant to talk more about gardening.

Establishing trust between you and your interviewees is crucial in order to ensure that they feel comfortable opening up and revealing their true thoughts and feelings. At the same time, being overly empathetic can influence the responses of your interviewees, as seen above.

Publication bias occurs when the decision to publish research findings is based on their nature or the direction of their results. Studies reporting results that are perceived as positive, statistically significant, or favoring the study hypotheses are more likely to be published due to publication bias.

Publication bias is related to data dredging (also called p-hacking), where statistical tests on a set of data are run until something statistically significant happens. As academic journals tend to prefer publishing statistically significant results, this can pressure researchers to submit only significant findings. P-hacking can also involve excluding participants or stopping data collection once a p value of 0.05 is reached. However, this leads to false positive results and an overrepresentation of positive results in published academic literature.
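
A quick simulation shows why this inflates false positives. The sketch below, with arbitrary sample sizes and test counts, draws a control and a treatment group from the same distribution twenty times; on average about one comparison comes out "significant" by chance alone.

    # Why running tests until "something significant happens" misleads:
    # with no real effect, each extra test is another shot at a false positive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    false_positives = 0
    for _ in range(20):  # twenty outcome variables, none with a true effect
        control = rng.normal(0, 1, 50)
        treatment = rng.normal(0, 1, 50)  # same distribution: null is true
        _, p = stats.ttest_ind(control, treatment)
        if p < 0.05:
            false_positives += 1

    # Expect roughly one spuriously "significant" result out of 20 tests;
    # publishing only that result misrepresents the evidence.
    print(false_positives)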

Researcher bias occurs when the researcher’s beliefs or expectations influence the research design or data collection process. Researcher bias can be deliberate (such as claiming that an intervention worked even if it didn’t) or unconscious (such as letting personal feelings, stereotypes, or assumptions influence research questions ).

The unconscious form of researcher bias is associated with the Pygmalion (or Rosenthal) effect, where the researcher’s high expectations (e.g., that patients assigned to a treatment group will succeed) lead to better performance and better outcomes.

Researcher bias is also sometimes called experimenter bias, but it applies to all types of investigative projects, rather than only to experimental designs.

  • Good question: What are your views on alcohol consumption among your peers?
  • Bad question: Do you think it’s okay for young people to drink so much?

Response bias is a general term used to describe a number of different situations where respondents tend to provide inaccurate or false answers to self-report questions, such as those asked on surveys or in structured interviews.

This happens because when people are asked a question (e.g., during an interview), they integrate multiple sources of information to generate their responses. Because of that, any aspect of a research study may potentially bias a respondent. Examples include the phrasing of questions in surveys, how participants perceive the researcher, or the desire of the participant to please the researcher and to provide socially desirable responses.

Response bias also occurs in experimental medical research. When outcomes are based on patients’ reports, a placebo effect can occur. Here, patients report an improvement despite having received a placebo, not an active medical treatment.

While interviewing a student, you ask them:

‘Do you think it’s okay to cheat on an exam?’

Common types of response bias are:

  • Acquiescence bias
  • Demand characteristics
  • Social desirability bias
  • Courtesy bias
  • Question-order bias
  • Extreme responding

Acquiescence bias is the tendency of respondents to agree with a statement when faced with binary response options like ‘agree/disagree’, ‘yes/no’, or ‘true/false’. Acquiescence is sometimes referred to as ‘yea-saying’.

This type of bias occurs either due to the participant’s personality (i.e., some people are more likely to agree with statements than disagree, regardless of their content) or because participants perceive the researcher as an expert and are more inclined to agree with the statements presented to them.

Q: Are you a social person?

People who are inclined to agree with statements presented to them are at risk of selecting the first option, even if it isn’t fully supported by their lived experiences.

In order to control for acquiescence, consider tweaking your phrasing to encourage respondents to make a choice truly based on their preferences. Here’s an example:

Q: What would you prefer?

  • A quiet night in
  • A night out with friends

Demand characteristics are cues that could reveal the research agenda to participants, risking a change in their behaviours or views. Ensuring that participants are not aware of the research goals is the best way to avoid this type of bias.

On each occasion, patients reported their pain as being less than prior to the operation. While at face value this seems to suggest that the operation does indeed lead to less pain, there is a demand characteristic at play. During the interviews, the researcher would unconsciously frown whenever patients reported more post-op pain. This increased the risk of patients figuring out that the researcher was hoping that the operation would have an advantageous effect.

Social desirability bias is the tendency of participants to give responses that they believe will be viewed favorably by the researcher or other participants. It often affects studies that focus on sensitive topics, such as alcohol consumption or sexual behaviour.

You are conducting face-to-face semi-structured interviews with a number of employees from different departments. When asked whether they would be interested in a smoking cessation program, there was widespread enthusiasm for the idea.

Note that while social desirability and demand characteristics may sound similar, there is a key difference between them. Social desirability is about conforming to social norms, while demand characteristics revolve around the purpose of the research.

Courtesy bias stems from a reluctance to give negative feedback, so as to be polite to the person asking the question. Small-group interviewing where participants relate in some way to each other (e.g., a student, a teacher, and a dean) is especially prone to this type of bias.

Question order bias

Question order bias occurs when the order in which interview questions are asked influences the way the respondent interprets and evaluates them. This occurs especially when previous questions provide context for subsequent questions.

When answering subsequent questions, respondents may orient their answers to previous questions (called a halo effect ), which can lead to systematic distortion of the responses.

Extreme responding is the tendency of a respondent to answer in the extreme, choosing the lowest or highest response available, even if that is not their true opinion. Extreme responding is common in surveys using Likert scales, and it distorts people’s true attitudes and opinions.

Disposition towards the survey can be a source of extreme responding, as well as cultural components. For example, people coming from collectivist cultures tend to exhibit extreme responses in terms of agreement, while respondents indifferent to the questions asked may exhibit extreme responses in terms of disagreement.

Selection bias is a general term describing situations where bias is introduced into the research from factors affecting the study population.

Common types of selection bias are:

  • Sampling or ascertainment bias
  • Attrition bias
  • Volunteer or self-selection bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias

Sampling bias occurs when your sample (the individuals, groups, or data you obtain for your research) is selected in a way that is not representative of the population you are analyzing. Sampling bias threatens the external validity of your findings and influences the generalizability of your results.

The easiest way to prevent sampling bias is to use a probability sampling method. This way, each member of the population you are studying has an equal chance of being included in your sample.

Sampling bias is often referred to as ascertainment bias in the medical field.
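
For instance, a simple random sample, one common probability sampling method, can be drawn in a couple of lines; the population list here is a hypothetical sampling frame.

    # Simple random sampling: every member of the frame has an equal chance
    # of selection (hypothetical frame; illustrative only).
    import numpy as np

    rng = np.random.default_rng(7)
    population = [f"resident_{i}" for i in range(10_000)]
    sample = rng.choice(population, size=200, replace=False)
    print(sample[:5])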

Attrition bias occurs when participants who drop out of a study systematically differ from those who remain in the study. Attrition bias is especially problematic in randomized controlled trials for medical research because participants who do not like the experience or have unwanted side effects can drop out and affect your results.

You can minimize attrition bias by offering incentives for participants to complete the study (e.g., a gift card if they successfully attend every session). It’s also a good practice to recruit more participants than you need, or minimize the number of follow-up sessions or questions.

You provide a treatment group with weekly one-hour sessions over a two-month period, while a control group attends sessions on an unrelated topic. You complete five waves of data collection to compare outcomes: a pretest survey, three surveys during the program, and a posttest survey.

Volunteer bias (also called self-selection bias) occurs when individuals who volunteer for a study have particular characteristics that matter for the purposes of the study.

Volunteer bias leads to biased data, as the respondents who choose to participate will not represent your entire target population. You can avoid this type of bias by using random assignment – i.e., placing participants in a control group or a treatment group after they have volunteered to participate in the study.
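
A minimal version of random assignment, using a hypothetical volunteer roster, is shown below; shuffling before splitting spreads volunteers' shared quirks evenly across conditions.

    # Random assignment after recruitment (hypothetical roster).
    import random

    volunteers = [f"participant_{i}" for i in range(40)]
    random.shuffle(volunteers)  # randomize order before splitting
    treatment, control = volunteers[:20], volunteers[20:]
    print(len(treatment), len(control))  # 20 20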

Closely related to volunteer bias is nonresponse bias, which occurs when a research subject declines to participate in a particular study or drops out before the study’s completion.

Considering that the hospital is located in an affluent part of the city, volunteers are more likely to have a higher socioeconomic standing, higher education, and better nutrition than the general population.

Survivorship bias occurs when you do not evaluate your data set in its entirety: for example, by only analyzing the patients who survived a clinical trial.

This strongly increases the likelihood that you draw (incorrect) conclusions based upon those who have passed some sort of selection process – focusing on ‘survivors’ and forgetting those who went through a similar process and did not survive.

Note that ‘survival’ does not always mean that participants died! Rather, it signifies that participants did not successfully complete the intervention.

A classic example is pointing to famous billionaires who dropped out of college as proof that formal education is unnecessary for success. However, most college dropouts do not become billionaires. In fact, there are many more aspiring entrepreneurs who dropped out of college to start companies and failed than succeeded.

Nonresponse bias occurs when those who do not respond to a survey or research project are different from those who do in ways that are critical to the goals of the research. This is very common in survey research, when participants are unable or unwilling to participate due to factors like lack of the necessary skills, lack of time, or guilt or shame related to the topic.

You can mitigate nonresponse bias by offering the survey in different formats (e.g., an online survey, but also a paper version sent via post), ensuring confidentiality, and sending participants reminders to complete the survey.

You notice that your surveys were conducted during business hours, when the working-age residents were less likely to be home.

Undercoverage bias occurs when you only sample from a subset of the population you are interested in. Online surveys can be particularly susceptible to undercoverage bias. Despite being more cost-effective than other methods, they can introduce undercoverage bias as a result of excluding people who do not use the internet.

While very difficult to eliminate entirely, research bias can be mitigated through proper study design and implementation. Here are some tips to keep in mind as you get started.

  • Clearly explain in your methodology section how your research design will help you meet the research objectives and why this is the most appropriate research design.
  • In quantitative studies , make sure that you use probability sampling to select the participants. If you’re running an experiment, make sure you use random assignment to assign your control and treatment groups.
  • Account for participants who withdraw or are lost to follow-up during the study. If they are withdrawing for a particular reason, it could bias your results. This applies especially to longer-term or longitudinal studies.
  • Use triangulation to enhance the validity and credibility of your findings.
  • Phrase your survey or interview questions in a neutral, non-judgemental tone. Be very careful that your questions do not steer your participants in any particular direction.
  • Consider using a reflexive journal. Here, you can log the details of each interview, paying special attention to any influence you may have had on participants. You can include these in your final analysis.

Other types of research bias

  • Cognitive bias
  • Baader–Meinhof phenomenon
  • Availability heuristic
  • Halo effect
  • Framing effect
  • Sampling bias
  • Ascertainment bias
  • Self-selection bias
  • Hawthorne effect
  • Omitted variable bias
  • Pygmalion effect
  • Placebo effect

Frequently asked questions about research bias

Bias in research affects the validity and reliability of your findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

Observer bias occurs when the researcher’s assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to justify others’ behaviour and external factors (difficult circumstances) to justify the same behaviour in themselves.

Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey are different from those who did not, in ways that are relevant to the research topic. Nonresponse can happen either because people are not willing or not able to participate.

More interesting articles

  • Attrition Bias | Examples, Explanation, Prevention
  • Demand Characteristics | Definition, Examples & Control
  • Hostile Attribution Bias | Definition & Examples
  • Observer Bias | Definition, Examples, Prevention
  • Regression to the Mean | Definition & Examples
  • Representativeness Heuristic | Example & Definition
  • Sampling Bias and How to Avoid It | Types & Examples
  • Self-Fulfilling Prophecy | Definition & Examples
  • The Availability Heuristic | Example & Definition
  • The Baader–Meinhof Phenomenon Explained
  • What Is a Ceiling Effect? | Definition & Examples
  • What Is Actor-Observer Bias? | Definition & Examples
  • What Is Affinity Bias? | Definition & Examples
  • What Is Anchoring Bias? | Definition & Examples
  • What Is Ascertainment Bias? | Definition & Examples
  • What Is Belief Bias? | Definition & Examples
  • What Is Bias for Action? | Definition & Examples
  • What Is Cognitive Bias? | Meaning, Types & Examples
  • What Is Confirmation Bias? | Definition & Examples
  • What Is Conformity Bias? | Definition & Examples
  • What Is Correspondence Bias? | Definition & Example
  • What Is Explicit Bias? | Definition & Examples
  • What Is Generalisability? | Definition & Examples
  • What Is Hindsight Bias? | Definition & Examples
  • What Is Implicit Bias? | Definition & Examples
  • What Is Information Bias? | Definition & Examples
  • What Is Ingroup Bias? | Definition & Examples
  • What Is Negativity Bias? | Definition & Examples
  • What Is Nonresponse Bias?| Definition & Example
  • What Is Normalcy Bias? | Definition & Example
  • What Is Omitted Variable Bias? | Definition & Example
  • What Is Optimism Bias? | Definition & Examples
  • What Is Outgroup Bias? | Definition & Examples
  • What Is Overconfidence Bias? | Definition & Examples
  • What Is Perception Bias? | Definition & Examples
  • What Is Primacy Bias? | Definition & Example
  • What Is Publication Bias? | Definition & Examples
  • What Is Recall Bias? | Definition & Examples
  • What Is Recency Bias? | Definition & Examples
  • What Is Response Bias? | Definition & Examples
  • What Is Selection Bias? | Definition & Examples
  • What Is Self-Selection Bias? | Definition & Example
  • What Is Self-Serving Bias? | Definition & Example
  • What Is Social Desirability Bias? | Definition & Examples
  • What Is Status Quo Bias? | Definition & Examples
  • What Is Survivorship Bias? | Definition & Examples
  • What Is the Affect Heuristic? | Example & Definition
  • What Is the Egocentric Bias? | Definition & Examples
  • What Is the Framing Effect? | Definition & Examples
  • What Is the Halo Effect? | Definition & Examples
  • What Is the Hawthorne Effect? | Definition & Examples
  • What Is the Placebo Effect? | Definition & Examples
  • What Is the Pygmalion Effect? | Definition & Examples
  • What Is Unconscious Bias? | Definition & Examples
  • What Is Undercoverage Bias? | Definition & Example
  • What Is Vividness Bias? | Definition & Examples

COMMENTS

  1. Types of Bias in Research

    Information bias occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall from participants. The main types of information bias are: Recall bias. Observer bias.

  2. Best Available Evidence or Truth for the Moment: Bias in Research

    According to Pannucci and Wilkins (2010), consumers of research need to determine the degree to which bias was prevented through study design and implementation before adopting findings. Although it was previously thought that bias was less of an issue for quantitative than qualitative research, it turns out that both are subject to the problem.

  3. Bias in Research

    Research bias can affect the validity and credibility of research findings, leading to erroneous conclusions. It can emerge from the researcher's subconscious preferences or the methodological design of the study itself. For instance, if a researcher unconsciously favors a particular outcome of the study, this preference could affect how they interpret the results, leading to a type of bias ...

  4. Identifying and Avoiding Bias in Research

    Abstract. This narrative review provides an overview on the topic of bias as part of Plastic and Reconstructive Surgery 's series of articles on evidence-based medicine. Bias can occur in the planning, data collection, analysis, and publication phases of research. Understanding research bias allows readers to critically and independently review ...

  5. Study Bias

    Channeling bias is a type of selection bias noted in observational studies. It occurs most frequently when patient characteristics, such as age or severity of illness, affect cohort assignment. This can occur, for example, in surgical studies where different interventions carry different levels of risk.

  6. Revisiting Bias in Qualitative Research: Reflections on Its

    Stories of research funding bodies and journal peer reviewers rejecting proposed qualitative methods or study findings due to "bias" are not uncommon. Usually, I find this relates to a perception by peer reviewers that the way data have/will be collected or analyzed is too closely aligned with the personal agenda of the researcher(s).

  7. Reducing bias and improving transparency in medical research: a

    To facilitate reproducibility of research findings and to assess the plausibility of scientific claims, it is essential that documentation, including protocols and analysis plans, are made available to peers. Making all study findings available is the only way to address publication bias.

  8. Bias in research

    Understanding research bias is important for several reasons: first, bias exists in all research, across research designs and is difficult to eliminate; second, bias can occur at each stage of the research process; third, bias impacts on the validity and reliability of study findings and misinterpretation of data can have important consequences ...

  9. Bias in Research

    Bias in Research. Understanding research bias is important for several reasons: first, bias exists in all research, across research designs and is difficult to eliminate; second, bias can occur at each stage of the research process; third, bias impacts on the validity and reliability of study findings and misinterpretation of data can have ...

  10. Best Available Evidence or Truth for the Moment: Bias in Research

    In terms of research, "bias is any trend or deviation from the truth in data collec-tion, data analysis, interpretation and publication which can cause false conclusions" (Simundic, 2013, p. 12). From this definition it can be determined that bias may occur in any part of the research process. Furthermore, it must be acknowl-edged that bias ...

  11. Research Bias 101: Definition + Examples

    Research bias refers to any instance where the researcher, or the research design, negatively influences the quality of a study's results, whether intentionally or not. The three common types of research bias we looked at are: Selection bias - where a skewed sample leads to skewed results. Analysis bias - where the analysis method and/or ...

  12. Types of Research Bias & How to Avoid Them? + Examples

    These diverse examples of research bias highlight the need for robust safeguards, transparency, and peer review in the research process. Recognizing and addressing bias is essential for maintaining the integrity of scientific inquiry and ensuring that research findings can be trusted and applied effectively. Conclusion for Research Bias

  13. Assessing the methodological quality and risk of bias

    Methods research has been ongoing since the 2010s to develop effective approaches for conducting overviews of reviews and ... bias using ROBIS can be a great way to determine whether authors of reviews of interventions that have a high risk of bias over emphasised their findings and conclusions. Subgrouping also allows overview authors to ...

  14. Research bias: What it is, Types & Examples

    Research bias is a technique in which the researchers conducting the experiment modify the findings to present a specific consequence. It is often known as experimenter bias. Bias is a characteristic of the research technique that makes it rely on experience and judgment rather than data analysis. The most important thing to know about bias is ...

  15. How bias affects scientific research

    Students will study types of bias in scientific research and in applications of science and engineering, and will identify the effects of bias on research conclusions and on society. Then ...

  16. Taking a hard look at our implicit biases

    Banaji opened on Tuesday by recounting the "implicit association" experiments she had done at Yale and at Harvard. The assumptions underlying the research on implicit bias derive from well-established theories of learning and memory and the empirical results are derived from tasks that have their roots in experimental psychology and ...

  17. The good, the bad, and the ugly of implicit bias

    The concept of implicit bias, also termed unconscious bias, and the related Implicit Association Test (IAT) rests on the belief that people act on the basis of internalised schemas of which they are unaware and thus can, and often do, engage in discriminatory behaviours without conscious intent.1 This idea increasingly features in public discourse and scholarly inquiry with regard to ...

  18. Moving towards less biased research

    Introduction. Bias, perhaps best described as 'any process at any stage of inference which tends to produce results or conclusions that differ systematically from the truth,' can pollute the entire spectrum of research, including its design, analysis, interpretation and reporting. 1 It can taint entire bodies of research as much as it can individual studies. 2 3 Given this extensive ...

  19. Effective Health Care (EHC) Program

    The Effective Health Care (EHC) Program improves the quality of healthcare by providing the best available evidence on the benefits and harms of drugs, devices, and healthcare services and by helping healthcare professionals, patients, policymakers, and healthcare systems make informed healthcare decisions. The EHC Program achieves this goal by ...

  20. Reviewers

    Reviewers play a pivotal role in scholarly publishing. The peer review system exists to validate academic work, helps to improve the quality of published research, and increases networking possibilities within research communities. Despite criticisms, peer review is still the only widely accepted method for research validation and has continued ...

  21. Effects of mobile Internet use on the health of middle-aged and older

    As per the findings of the heterogeneity analysis, the impact of mobile Internet use was shown to be more pronounced on the well-being of middle-aged persons aged 45-60 years compared to those aged ≥ 60 years. Further, the endogeneity test revealed that the PSM model could better eliminate bias in sample selection.

  22. How Western, Educated, Industrialized, Rich, and Democratic is Social

    Much of the research in social computing analyzes data from social media platforms, which may inherently carry biases. An overlooked source of such bias is the over-representation of WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations, which might not accurately mirror the global demographic diversity. We evaluated the dependence on WEIRD populations in research ...

  23. AI Index: State of AI in 13 Charts

    This year's AI Index — a 500-page report tracking 2023's worldwide trends in AI — is out.. The index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. This year's report covers the rise of multimodal foundation models ...

  24. Functional connectivity changes in the brain of adolescents with

    The findings that IA individuals demonstrate an overall decrease in FC in the DMN is supported by numerous research . Drug addict populations also exhibited similar decline in FC in the DMN [ 40 ]. The disruption of attentional orientation and self-referential processing for both substance and behavioural addiction was then hypothesised to be ...

  25. Teens and social media: Key findings from Pew Research Center surveys

    Girls are more likely than boys to say it would be difficult for them to give up social media (58% vs. 49%). Older teens are also more likely than younger teens to say this: 58% of those ages 15 to 17 say it would be very or somewhat hard to give up social media, compared with 48% of those ages 13 to 14. Teens are more likely to say social ...

  26. OpenAI, Anthropic AI Research Reveals More About How LLMs Affect

    OpenAI, Anthropic Research Reveals More About How LLMs Affect Security and Bias. Anthropic opened a window into the 'black box' where 'features' steer a large language model's output ...

  27. What are the best pollsters in America?

    Pollsters scoring between 2.8 and 3.0 are still very good — just not the best of the best. Most pollsters score between a 1.9 and 2.8, representing what we see as America's core block of good ...

  28. Types of Bias in Research

    Bias exists in all research, across research designs, and is difficult to eliminate. Bias can occur at any stage of the research process. Bias impacts the validity and reliability of your findings, leading to misinterpretation of data. It is almost impossible to conduct a study without some degree of research bias.