How to Take the Bias Out of Interviews

  • Iris Bohnet


It’s easier to improve processes than people.

If you’re a hiring manager, you’re probably happiest getting a sense of a candidate through unstructured interviews, which allow you to randomly explore details you think are interesting and relevant. (What does the applicant think of her past employer? Does she like Chicago? What does she do in her downtime?) After all, isn’t your job to get to know the candidate? But while unstructured interviews consistently receive the highest ratings for perceived effectiveness from hiring managers, dozens of studies have found them to be among the worst predictors of actual on-the-job performance — far less reliable than general mental ability tests, aptitude tests, or personality tests.


  • Iris Bohnet is the Albert Pratt Professor of Business and Government, co-director of the Women and Public Policy Program, and Academic Dean at Harvard Kennedy School. She is the author of the award-winning book What Works: Gender Equality by Design.


Interviewer Bias In User Research & Steps To Conquer It

19th November 2019

Context is powerful. If you’ve lined up an interview with someone because they match your participant definition, you’ve already made the decision that on some level, you know that person.

Whether it’s for research, journalism or a job interview, that pretext is the lens and construct through which you’ve chosen to understand and empathise (or not) with someone. Interview bias is that lens, and everyone has it.

But even without such pretext, and even with a strong ability to put bias aside when starting an interview, a first impression will do most of the work on its own.

In truth, by seeing or speaking to someone you are already fighting an uphill battle against bias within seconds. Bias that you as a researcher can’t afford. Bias that as a journalist might diminish the truth in your article. Or bias that as a manager could cause an imbalance in the workplace.

In this article, we’re going to talk about interviewer bias in both its technical and non-technical forms. We’ll also discuss tactics for professional interviewers to circumvent some of the damage that their own bias could be causing.

What Is Interviewer Bias, Actually?

Let’s first look at the technical description of interviewer bias. The Oxford Reference database (operated by Oxford University Press) defines interviewer bias as follows:

[Interviewer Bias] is a distortion of response related to the person questioning informants in research. The interviewer’s expectations or opinions may interfere with their objectivity or interviewees may react differently to their personality or social background. Both mistrust and over-rapport can affect outcomes.  Interviewer bias – Oxford Reference Database

From a practical point of view, the definition points to two broad sources of interference with the interview process: the interviewer’s own expectations and opinions, and the interviewee’s reactions to the interviewer. In practice, that interference shows up in ways such as:

  • Outwardly expressing your own beliefs or answers to questions
  • Prefacing questions with implicative phrases, such as “I’m sure you know this already but…”
  • Dressing or provoking intrigue through appearance or presentation (gender and racial bias are significant factors here)
  • Highlighting or finding implied patterns and insights in data that align with your point of view, even if they are not explicitly in the data
  • Not reviewing interview data at all and instead relying on your memory or impressions from an interview

These practical forms of interference, and the distortion of outcomes they produce, imply a long list of potential bias types. I’ll talk more about the different types later in the article, but the overarching construct of implied social bias is perhaps best captured by the term “halo effect”.

To continue to quote the Oxford Reference database…

Halo Effect refers to a common bias, in the impression people form of others, by which attributes are often generalised. Implicitly nice people, for example, are assumed to have all nice attributes. This can lead to misleading judgements: for example, clever people may falsely be assumed to be knowledgeable about everything. Halo effect – Oxford Reference Database

As if we didn’t already know, the halo effect and the studies around it confirm that regardless of who you are, first impressions are an ever-present and ever-powerful influence in our lives.

It’s impossible to say exactly how long a first impression takes to form, with study findings ranging from 5–30 minutes for a job interview to 1/10 of a second for a social interaction.

The potentially dangerous part of these discoveries is that we have no control over the process. Humans have, generally speaking, failed to build a social mechanism that restrains our instinctual biases. And because humans consciously reflect on the past in a way other animals do not, it takes much longer for us to reprogram a first impression.

So if we literally cannot stop ourselves from bringing bias into an interview, what can we do?

How To Identify & Recognise Interviewer Bias In Yourself

The first step in identifying interviewer bias in yourself is accepting that you have interviewer bias at all. 

To be able to work with your interviewer bias, you’ll need to fast-track through the five stages of grief: through denial and anger, past bargaining and depression, and right to the end: acceptance.

If you can accept that you yourself are ageist, racist, gender-biased, homophobic or heterophobic, and hold any other worldly combination of socially biased perspectives, then you have a good platform to build from.

The sad truth is that we all have these biases; we all use stereotypes and generalisations to understand, judge or empathise with people when we interact with them. But if we don’t accept that we have them, whether through ignorance or by claiming to stand above them, we become most prone to suggestions and actions that are, in the best case, mildly biased and, in the worst case, racist or bigoted.

As an interviewer, and particularly a researcher, your role is to ask questions and listen to your interviewees without judgement; something that you have to accept is nigh on impossible before you can learn to work with it.

How To Identify Interviewer Bias In Others

For this article, when we refer to identifying bias in “others”, we’re really talking about your colleagues: other people who conduct interviews, editors and clients, people who listen and take notes, and even people responsible for transcribing or coding data during analysis.

In most research work environments, identifying bias in others can often be very difficult. In fact, you’d often hope it’s difficult, as overt bias early on will have other implications.

For example, if a research project is designed around, influenced by, and makes no effort to challenge the overt bias of a client, the results of that project will often only confirm what that client assumed to know anyway. The client may then disregard the work as a waste of time/money, and refuse to acknowledge research as an important part of their process. These situations are common in industry research; a kind of self-fulfilling prophecy for clients who feel this work is unnecessary.

But the bias of people involved in the project may not even appear until the very end of a project, wherein the only real outcome is what one person has been saying all along. If this has happened and the data to support the claim is strong, then while the work may have missed things, it at the very least confirmed an assumption. However, if the data to support the initial claim is weak, but remains the only real insight from the work, it’s likely the project was off the rails from the beginning.

In point of fact, as researchers we are often trying to suppress our own bias – partially, fully, overtly or not. The respect most people have for the scientific process means that they attempt not to judge too much. And this type of bias, this subtle influence of our assumptions, is extremely difficult to see in colleagues until it’s too late, if you ever see it at all.

So what can you do about it?

Working With The Bias Of Colleagues, Friends And Clients

There are a number of ways to insulate qualitative work from the influence of your colleagues.

A tactic that some researchers use, and that is now a standard process for the research ops team at Yleos, is a simple extension of what you will be doing anyway as a researcher: you interview people.

Whether it’s your colleague with a client, a team in a company, or you with an overseeing professor, the first interview you design for a project is one for you and your colleagues: an interview to identify, note and hold accountable your own biases and assumptions about the project.

It doesn’t take much: even if you’re just writing questions for yourself and getting a roommate to ask them, the best way to counter your own biases is to use your interviewing skills to weed them out.

At Yleos, any biases and patterns we find in ourselves at the start of a project are recorded, noted and openly discussed with everyone involved. We then go back to those notes as a team at each stage of the project, from script development to analysis and reporting.

Sometimes it’s as simple (or perhaps as difficult…) as saying: “We know this is a bias in the team. Is this question informing an assumption, or leading towards confirmation instead?”

Working With Your Accepted Biases During An Actual Interview

If you’ve accepted that you have biases, there are three acts of self awareness you can use to improve and monitor your interviewer bias.

1. Maintain a state of active self review.

While ‘reflective’ self review is, well, reflective, and therefore after-the-fact, ‘active’ self review is a state of constant consciousness and self evaluation.

Although this is arguably the most effective tactic for improving your biases in life, this is very difficult to do at all, much less halfway through an interview…

As an interviewer you’re juggling a lot: the schedule, time and duration, the comfort of the interviewee and of any colleagues in the room, your script, the notes you are taking, and so on.

As part of that dance, researchers tend to be, on average, quite self-aware. But can you do more? Can you do better? Can you consciously analyse and control your own haptics, your voice inflection, your facial reactions, even the wording of your questions?

This is not an attempt to change who you are, but to identify traits, words and phrases in yourself that may (intentionally or not) trigger specific reactions in your interviewee. The ability to do this well is a power for both good and evil: it is as much a route to manipulation as it is a route to empathy.

Regardless of your starting point, there are limits to anyone’s ability to practise and improve this set of skills, so it’s good to have a vehicle for it to work within, which brings us to our second act…

2. Quite literally ‘play a role’.

Playing a role means to create a character or persona for your interview that you adopt and embody for its duration.

It is, in short, to be an actor on a very small and intimate stage, wherein your goal is not to make the audience laugh or cry, but instead to allow them to be entirely and completely who they are and experience how they feel about things without your interruption or interference.

Now, this character could be designed in your current self image, or reflect what you feel to be your best properties. But we find it works best when you can use the first act (active self review), to identify your biases and then act the part of someone (you) who has no such biases.

Now many of you may think, “I do that anyway!” And if so, that’s great! I would hope so.

I want to emphasise it here because in our evaluation of investigative conversations between mentors and subjects (i.e. interviewers and interviewees), we’ve seen that making the conscious decision to play a role often means avoiding the compulsion to represent your own truth (out of ego or habit). It also means you don’t feel like you’re lying, because you are playing a role for someone else’s benefit, not yours.

The trick in this work is not to lie about who you are, but to not let who you are prevent someone else from representing their truth. So making the conscious decision to play that role allows you to put aside your instincts and feelings and be fully present for your interviewee.

3. Add an additional layer to your analysis process wherein you start by redefining your hypothesis.

Unlike points one and two, the third is a very quick framework for researchers to use pre-analysis or even during one-off interview debriefs.

The concept is simple. Before starting the analysis of interview data, particularly the tagging and coding of raw audio data, take 30 minutes to do the following with anyone involved in the analysis process.

  • Review your original hypothesis for the study or interview(s)
  • Write down 1-3 assumptions on what you’ll find/discover about each hypothesis
  • Write down what you think you absolutely won’t find about each hypothesis
  • Write down a list of things you know you learned from interviews that you didn’t previously know about the topic
  • Write down the number of things you hope to find in the data that would be new insights for you
  • Formulate additional drafted hypotheses of things you don’t expect to find but answers to which could, maybe, possibly, be hidden in the data
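For teams that keep research artifacts alongside code, the checklist above could be captured as a simple record that travels with the project. This is only a sketch, not part of any described tooling; the field names and example content are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class HypothesisReview:
    """Pre-analysis record for one hypothesis (all field names are illustrative)."""
    hypothesis: str
    expected_findings: list = field(default_factory=list)   # 1-3 assumptions about what you'll find
    expected_absences: list = field(default_factory=list)   # what you think you absolutely won't find
    new_learnings: list = field(default_factory=list)       # things you already know you learned
    hoped_for_insights: int = 0                             # how many genuinely new insights you hope for
    extra_hypotheses: list = field(default_factory=list)    # long-shot hypotheses drafted before coding

review = HypothesisReview(
    hypothesis="Users abandon checkout because of form length",
    expected_findings=["Most friction mentions will involve the address form"],
    expected_absences=["Complaints about payment security"],
    hoped_for_insights=2,
)

# Keep the record with the coded data so the team can revisit it during analysis.
print(review.hypothesis)
```

Writing the record down before tagging any audio keeps the exercise inside the 30-minute window the checklist describes.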

Now I’m going to call myself out here…

Many researchers would argue that the above process could potentially lead to a close-minded view of the data. Instead of removing bias, perhaps this process enforces it?

In short, this checklist consolidates the risk of bias into a short exercise at the start of analysis, instead of leaving the entire analysis process exposed to that risk throughout.

I’m sure a number of researchers will have comments and arguments about this concept, and I hope you’ll all contribute to the comments and help evaluate, test and adjust this process!

The Crux Of Interviewer Bias: Reflections On Yourself

“Am I creating bias in my own research?” is a question we get all the time at Yleos.

In fact, in a study we did with 60 researchers, at least 10% of researchers brought this up as a concern they had in their work, independent of the questions asked (which were not on this topic at all).

Indeed, I personally believe that if you haven’t asked yourself this question, you may be missing entire areas of discovery in your work!

However, this concern we have for our own bias is something we should learn to address constructively. Researchers who experience this doubt and struggle to justify the results of their research need to be sure that their results were not primarily driven by their own ideals.

And that’s ultimately why we wrote this article. The point is not to sell a fixed ideal of bias-resistant research, but rather to present options that may help some of you overcome the barriers to achieving confidently unbiased outcomes.

Regardless of whether or not you agree with the suggestions we’ve made, I hope you’ll contribute to the comments and help us find the best way forward.

And make sure to sign up for our email short course on conducting great interviews below!

Thanks go out to the entire Yleos team, our beta community, and the teams at Growth Mechanics and Silicon Rhino for their knowledge contributions to this article.


Research Design Review

A discussion of qualitative & quantitative research design, interviewer bias & reflexivity in qualitative research.

Interviewer bias & reflexivity in qualitative research

Reflexivity is an important concept because it is directed at the greatest underlying threat to the validity of our qualitative research outcomes – that is, the social interaction component of the interviewer-interviewee relationship, or what Steinar Kvale called “the asymmetrical power relations of the research interviewer and the interviewed subject” (see “Dialogue as Oppression and Interview Research,” 2002). The act of reflection enables the interviewer to thoughtfully consider this asymmetrical relationship and speculate on the ways the interviewer-interviewee interaction may have been exacerbated by presumptions arising from obvious sources, such as certain demographics (e.g., age, gender, and race), or more subtle cues such as socio-economic status, cultural background, or political orientation. Linda Finlay (2002) identifies five ways to go about reflexivity – introspection, inter-subjective reflection, mutual collaboration, social critique, and discursive deconstruction – and discusses utilizing these techniques in order to understand the interviewer’s role in the interview context and how to use this knowledge to “enhance the trustworthiness, transparency, and accountability of their research” (pp. 211–212). An awareness of misperceptions through reflexivity enables the interviewer to design specific questions for the interviewee that help inform and clarify the interviewer’s understanding of the outcomes.

It is for this reason that a reflexive journal, where the interviewer logs the details of how they may have influenced the results of each interview, should be part of a qualitative research design.  This journal or diary sensitizes the interviewer to their prejudices and subjectivities, while more fully informing the researcher on the impact of these influences on the credibility of the research outcomes.  The reflexive journal not only serves as a key contributor to the final analyses but also enriches the overall study design by providing a documented first-hand account of interviewer bias and the preconceptions that may have negatively influenced the findings.  In this manner, the reader of the final research report can assess any concerns about objectivity and interpretations of outcomes.
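For researchers who keep study records digitally, one possible shape for a reflexive-journal entry is sketched below; the field names and example content are invented for illustration and are not drawn from any cited source:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ReflexiveEntry:
    """One reflexive-journal entry per interview (field names are illustrative)."""
    interview_id: str
    entry_date: date
    possible_influences: list       # ways the interviewer may have shaped responses
    demographic_presumptions: list  # assumptions tied to age, gender, background, etc.
    follow_up_questions: list       # clarifying questions to carry into later interviews

entry = ReflexiveEntry(
    interview_id="P07",
    entry_date=date(2019, 11, 19),
    possible_influences=["Nodded approvingly when the participant criticised the old design"],
    demographic_presumptions=["Assumed product familiarity based on the participant's age"],
    follow_up_questions=["Ask the next participant a neutral version of the design question"],
)
print(entry.interview_id)
```

Logging an entry immediately after each interview, rather than at the end of fieldwork, keeps the first-hand account the journal is meant to provide.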

Reflexivity, along with the reflexive journal , is just one way that our qualitative research designs can address the bias that most assuredly permeates the socially-dependent nature of qualitative research.  Introspective reflexivity – along with peer debriefing and triangulation – add considerably to the credibility and usefulness of our qualitative research.

Finlay, L. (2002). Negotiating the swamp: The opportunity and challenge of reflexivity in research practice. Qualitative Research, 2(2), 209–230.



What is interview bias and how to avoid it when hiring?


Diversity in the workplace drives innovation, produces better leaders, and avoids groupthink. But recruiting diverse hires can be more challenging than it seems, and unconscious bias or implicit bias is one reason why.

Everyone carries implicit biases. It’s part of being human. But in a hiring context, your partiality impacts the way you perceive an applicant’s abilities or cultural fit, especially during the interview process. Those prejudgements make it difficult to identify and hire the best candidates.

Understanding and managing interview bias will help reduce its impact on hiring decisions, allowing you to bring the best and brightest recruits on board and create a more diverse team.

What is interview bias?

Interview bias happens when a recruiter judges a candidate’s ability based on stereotypes or non-work-related ideas about a person. It interferes with a fair, merit-based assessment of a candidate’s suitability, often leading to poor hiring decisions.

As an interviewer, your unconscious response to trivial characteristics, like a person’s body language or hobbies, can unfairly affect their chance of landing the job. You could end up rejecting one candidate who would have done well or recruiting another for the wrong reasons.

This phenomenon is more common than it might seem. When discussing hires that didn’t perform as expected, 42% of recruitment specialists blamed bias for getting in the way of choosing the right candidate. In fact, 32% of hiring mistakes happened because those responsible “took a chance on a nice person.”

And allowing your prejudices to bias the hiring process is expensive. A bad hiring decision can cost companies upwards of 30% of the employee’s first-year earnings, and it can also harm employee morale and team productivity.

Making the right hiring decisions requires educating yourself on different types of bias. Once you can recognize them, you’ll be able to take practical steps to counter your prejudgements and see candidates for who they are. 

9 types of interview bias

While you can never eliminate unconscious bias entirely, being aware of it can help mitigate its effects. Here are a few common types of biases and how they could affect the interview process:

1. Stereotyping bias

Judging a candidate based on group characteristics instead of their individual qualities is stereotype bias. Usually, these conclusions are rooted in prejudice of one form or another. This can lead to gender inequality and other kinds of inequality in the workplace. Types of stereotype bias include:

  • Gender bias
  • Socioeconomic bias
  • Ability bias
  • Racial bias

EXAMPLE: You reject a candidate for a programming job because their socioeconomic background makes you question their intelligence, even though they have years of experience.

2. Halo/horn bias

These biases form when a single characteristic or physical trait overshadows an applicant’s other qualities. When that trait is positive and you hire them for that reason, this bias becomes the halo effect. If it’s negative and you reject them, it becomes the horn effect.

EXAMPLE: You hire a candidate because they were easy to talk to, but their hard skills are lacking and they don’t end up being a high performer.

3. Recency bias

With recency bias, you might favor more recent candidates over earlier ones. Chances are you better remember people you interviewed more recently, so their positive traits are fresher in your mind. Information about candidates you interviewed a few days or weeks ago might be murkier, making you less likely to hire them.

EXAMPLE: During the interview, a candidate starts confidently but doesn’t give a good answer to your last question. The final error sticks with you, and you choose a different candidate even though they aren’t as strong overall.

4. First impression bias

It takes one-tenth of a second to form a judgment based on someone’s appearance. An opinion formed in that first moment can persist throughout the interview process, even if it has nothing to do with the candidate’s merit.

EXAMPLE: You unconsciously favor a mediocre candidate who attended the interview in a blazer instead of the more competent applicant who wore a t-shirt.

5. Non-verbal bias

Rejecting applicants based on physical mannerisms instead of skills is called non-verbal bias. An introverted candidate who doesn’t make eye contact could still be a strong hire, but you might feel like you didn’t connect with them because of their mannerisms.

Different types of neurodiversity, like Tourette syndrome and autism spectrum disorders (ASD), can also affect a person’s body language and lead you not to hire them despite their strengths.

EXAMPLE: The candidate you’re interviewing twists their hair around their finger when answering a difficult question. You disqualify the applicant because you judge them as uninterested in the position.

6. Similarity bias

Whether consciously or not, people tend to favor others with interests similar to their own. This tendency is known as similarity or affinity bias: if you have something in common with a candidate, you’ll likely connect with them and favor them over other applicants.

EXAMPLE: After asking them to tell you about themselves, the candidate discloses that they completed the same internship as you. You bond over the experience, leading you to view their application more favorably than others.

7. Central tendency

When using a scale measurement, humans tend to rate things closer to the middle, even if that rating isn’t accurate. This is known as central tendency, and in a hiring situation it can remove nuance and render one applicant indistinguishable from another, making it harder to decide whom to hire and discounting each applicant’s individual strengths and weaknesses.

EXAMPLE: You’re rating a series of candidates based on their work experience. No one seems to stand out, so you mark them all as average.
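If ratings are collected on a numeric scale, a quick spread check can flag score sets that cluster at the midpoint. This is a minimal sketch under assumed conventions (a 1–5 scale and an arbitrary spread threshold), not an established metric:

```python
from statistics import pstdev

def flags_central_tendency(ratings, scale_min=1, scale_max=5, spread_threshold=0.5):
    """Return True when every rating hugs the scale midpoint and overall spread is low.

    The scale and threshold values are illustrative assumptions, not standards.
    """
    midpoint = (scale_min + scale_max) / 2
    near_mid = all(abs(r - midpoint) <= 1 for r in ratings)   # everything close to the middle
    low_spread = pstdev(ratings) < spread_threshold           # little differentiation overall
    return near_mid and low_spread

print(flags_central_tendency([3, 3, 3, 4, 3]))  # clustered at the middle -> True
print(flags_central_tendency([1, 5, 2, 4, 3]))  # differentiated ratings -> False
```

A flagged set of scores isn’t proof of bias; the check is only a prompt to revisit the ratings and ask whether the candidates really were indistinguishable.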

8. Inconsistency in questioning

Instead of following a list of standardized questions during the interview, you might adjust your process after meeting a candidate. You could unnecessarily ask them about a gap in their resume or choose not to question their work experience based on their personality. This prevents you from having a holistic view of each interviewee. 

EXAMPLE: The candidate you’re interviewing is a recent graduate of an Ivy League college. You don’t ask about their understanding of a significant business fundamental because you assume they know, but after hiring them you learn they aren’t as competent as you thought. 

9. Confirmation bias

Confirmation bias operates hand-in-hand with the other biases on this list. When you judge an interviewee based on your preconceptions, it can lead you to ask questions or pay attention to information that validates the things you already think.

EXAMPLE: During an interview, you ask your top candidate questions highlighting their strengths while ignoring any red flags and weaknesses.

How to avoid bias in interviews

Removing all common biases from the interview process isn’t realistic, but putting in the effort does help. Try implementing one or more of these suggestions to help preserve your objectivity and lead you to the best possible candidates.

1. Provide interviewers with diversity and recruitment training

Create a balanced recruitment process by formally training everyone involved. Bring in a diversity and inclusion coach to talk to your team and explore ways to better your hiring process. These conversations should help you recognize prejudice, maintain an objective view, and write interview questions that prevent bias.

2. Use standardized questions and scoring criteria for all candidates

Applying a standardized set of questions and rubric for every candidate interview helps create a consistent experience. Standardized scoring highlights the information you’ve gathered based on the job criteria, not biases, and it weighs everyone as equally as possible.
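One way to make this concrete is to keep the shared questions and scoring weights in a single definition that every interviewer uses. The sketch below is illustrative only; the questions, criteria and weights are invented, not drawn from any real rubric:

```python
# Shared across all interviews so every candidate faces the same rubric.
QUESTIONS = [
    "Describe a project where you had to learn a new tool quickly.",
    "Walk us through how you handled a disagreement with a teammate.",
]
CRITERIA = {"problem_solving": 0.4, "communication": 0.3, "role_knowledge": 0.3}

def weighted_score(ratings):
    """Combine per-criterion ratings (1-5) using the shared weights."""
    missing = set(CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"Rate every criterion; missing: {sorted(missing)}")
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

score = weighted_score({"problem_solving": 4, "communication": 3, "role_knowledge": 5})
print(round(score, 2))
```

Because every interviewer fills in the same criteria, candidates can be compared on the job-relevant dimensions rather than on whoever happened to interview them.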

3. Rely on an interview guide

Using a formal interview guide builds structure in the interview process. A standard document can organize a list of skill-based and behavioral questions so you don’t go off-track. You can keep it in a shared space to let the whole team take notes and document the process.

4. Keep some candidate data anonymous

Research has shown that something as small as a name can affect the hiring process. One study found that applicants with Chinese and Indian names were 20–40% less likely to receive callbacks than those from other backgrounds.

Blind hiring practices let you evaluate candidates without referencing factors that could lead to the formation of a bias, like:

  • Date of birth
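As a sketch of how blind screening might be applied to candidate records, the snippet below strips identifying fields before reviewers see the data. The field list is an assumption for illustration, not a standard:

```python
# Fields hidden from reviewers during blind screening (an illustrative list).
BLIND_FIELDS = {"name", "date_of_birth", "photo_url", "home_address"}

def blind_copy(candidate: dict) -> dict:
    """Return a screening copy of a candidate record without identifying fields."""
    return {k: v for k, v in candidate.items() if k not in BLIND_FIELDS}

applicant = {
    "name": "A. Example",
    "date_of_birth": "1990-01-01",
    "skills": ["Python", "SQL"],
    "years_experience": 6,
}
print(blind_copy(applicant))  # only skills and experience remain
```

Redacting at the data layer, before records reach reviewers, means no one has to rely on willpower alone to ignore a name or a birth date.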

5. Recruit broadly

If your company’s headquarters are in a big city, you might miss out on talented applicants who prefer rural living or can’t afford to relocate. If you offer hybrid or remote work opportunities, look beyond your immediate geographic area and expand your candidate sourcing practices. Recruit based on merit, not convenience.

6. Include different interviewers in the process

Establish a collaborative hiring process by including diverse staff members in screening, interviewing, and decision-making. Different perspectives on your hiring panel will help reduce the effect of individual bias and create a more level playing field.


Job interviews help you assess a candidate’s personality and learn more about their background and experience. But sometimes, an interview isn’t the best predictor of whether someone is the right person for the job. Here are a few reasons why you should include other evaluation factors in your hiring process:

Speed: Phone screenings, interviews, and in-person assessments require a significant time investment from both the hiring team and the interviewee. To save time, you might be tempted to eliminate candidates too quickly, before they have the chance to prove themselves.

Relevant information: If you’re hiring for a technical position, it might be a better choice to prioritize skills assessments, working interviews, and experience over a person’s interview answers.

Interview anxiety: Some candidates may feel nervous during interviews, even if they’re qualified for the job. Their unease could affect your ability to assess their skills and potential as an employee.

It’s impossible to completely remove bias from the hiring process — unconscious or otherwise. But you can still limit the effects of interview bias by recognizing potential prejudice, diversifying the hiring panel, and creating a standardized process for every candidate.

That way, you can establish a fair and equitable assessment and create a diverse team that positions you for long-term success.


Allaya Cooks-Campbell

With over 15 years of content experience, Allaya Cooks-Campbell has written for outlets such as ScaryMommy, HRzone, and HuffPost. She holds a B.A. in Psychology and is a certified yoga instructor as well as a certified Integrative Wellness & Life Coach. Allaya is passionate about whole-person wellness, yoga, and mental health.




J Multidiscip Healthc

Information bias in health research: definition, pitfalls, and adjustment methods

Alaa Althubaiti

Department of Basic Medical Sciences, College of Medicine, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia

As with other fields, medical sciences are subject to different sources of bias. While understanding sources of bias is a key element for drawing valid conclusions, bias in health research continues to be a very sensitive issue that can affect the focus and outcome of investigations. Information bias, otherwise known as misclassification, is one of the most common sources of bias that affects the validity of health research. It originates from the approach that is utilized to obtain or confirm study measurements. This paper seeks to raise awareness of information bias in observational and experimental research study designs as well as to enrich discussions concerning bias problems. Specifying the types of bias can be essential to limit its effects, and the use of adjustment methods might serve to improve clinical evaluation and health care practice.

Introduction

Bias can be defined as any systematic error in the design, conduct, or analysis of a study. In health studies, bias can arise from two different sources: the approach adopted for selecting subjects for a study, or the approach adopted for collecting or measuring data from a study. These are, respectively, termed selection bias and information bias. 1 Bias can have different effects on the validity of medical research findings. In epidemiological studies, bias can lead to inaccurate estimates of association, or over- or underestimation of risk parameters. Identifying the sources of bias and their impact on final results is a key element for drawing valid conclusions. Information bias, otherwise known as misclassification, is one of the most common sources of bias that affects the validity of health research. It originates from the approach that is utilized to obtain or confirm study measurements. These measurements can be obtained by experimentation (eg, bioassays) or observation (eg, questionnaires or surveys).

Medical practitioners are conscious of the fact that the results of their investigation can be deemed invalid if they do not account for major sources of bias. While a number of studies have discussed different types of bias, 2 – 4 the problem of bias is still frequently ignored in practice. Often bias is unintentionally introduced into a study by researchers, making it difficult to recognize, but it can also be introduced intentionally. Thus, bias remains a very sensitive issue to address and discuss openly. The aim of this paper is to raise awareness of three specific forms of information bias in observational and experimental medical research study designs: self-reporting bias, the often-marginalized measurement error bias, and confirmation bias. We present clear and simple strategies to improve the decision-making process. As will be seen, specifying the type of bias can be essential for limiting its implications. The “Self-reporting bias” section discusses the problem of bias in self-reporting data and presents two examples of self-reporting bias, social desirability bias and recall bias. The “Measurement error bias” section describes the problem of measurement error bias, while the “Confirmation bias” section discusses the problem of confirmation bias.

Self-reporting bias

Self-reporting is a common approach for gathering data in epidemiologic and medical research. This method requires participants to respond to the researcher’s questions without interference from the researcher. Examples of self-reporting include questionnaires, surveys, or interviews. However, relative to other sources of information, such as medical records or laboratory measurements, self-reported data are often argued to be unreliable and threatened by self-reporting bias.

The issue of self-reporting bias represents a key problem in the assessment of most observational (such as cross-sectional or comparative, eg, case–control or cohort) research study designs, although it can still affect experimental studies. Nevertheless, when self-reporting data are correctly utilized, they can help to provide a wider range of responses than many other data collection instruments. 5 For example, self-reporting data can be valuable in obtaining subjects’ perspectives, views, and opinions.

There are a number of aspects of bias that accompany self-reported data and these should be taken into account during the early stages of the study, particularly when designing the self-reporting instrument. Bias can arise from social desirability, recall period, sampling approach, or selective recall. Here, two examples of self-reporting bias are discussed: social desirability and recall bias.

Social desirability bias

When researchers use a survey, questionnaire, or interview to collect data, in practice, the questions asked may concern private or sensitive topics, such as self-report of dietary intake, drug use, income, and violence. Thus, self-reporting data can be affected by an external bias caused by social desirability or approval, especially in cases where anonymity and confidentiality cannot be guaranteed at the time of data collection. For instance, when determining drug usage among a sample of individuals, the results could underestimate the exact usage. The bias in this case can be referred to as social desirability bias.

Overcoming social desirability bias

The main strategy to prevent social desirability bias is to validate the self-reporting instrument before implementing it for data collection. 6 – 11 Such validation can be either internal or external. In internal validation, the responses collected from the self-reporting instrument are compared with other data collection methods, such as laboratory measurements. For example, urine, blood, and hair analysis are some of the most commonly used validation approaches for drug testing. 12 – 14 However, when laboratory measurements are not available or it is not possible to analyze samples in a laboratory for reasons such as cost and time, external validation is often used. There are different methods, including medical record checks or reports from family or friends to examine externally the validity of the self-reporting instrument. 12 , 15

Note that several factors must be accounted for in the design and planning of the validation studies, and in some cases, this can be very challenging. For example, the characteristics of the sample enrolled in the validation study should be carefully investigated. It is important to have a random selection of individuals so that results from the validation can be generalized to any group of participants. When the sampling approach is not random and subjective, the results from the validation study can only apply to the same group of individuals, and the differences between the results from validation studies and self-reporting instruments cannot be used to adjust for differences in any group of individuals. 12 , 16 Hence, when choosing a predesigned and validated self-reporting instrument, information on the group of participants enrolled in the validation process should be obtained. This information should be provided as part of the research paper and if not, further communication is needed with the authors of the work in order to obtain them. For example, if the target of the study is to examine drug use among the general population with no specific background, then a self-reporting instrument that has been validated on a sample of the population having general characteristics should be used. In addition, combining more than one validation technique or the use of multiple data sources may increase the validity of the results.

Moreover, the possible effects of social desirability on study outcomes should be identified during the design phase of the data collection method. As such, measurement scales such as Marlowe–Crowne Social Desirability Scale 17 or Martin–Larsen Approval Motivation score 18 would be useful to identify and measure the social desirability aspect of the self-reported information.

Recall bias

Occasionally, study participants can erroneously provide responses that depend on their ability to recall past events. The bias in this case can be referred to as recall bias, as it is the result of recall error. This type of bias often occurs in case–control or retrospective cohort study designs, where participants are required to evaluate exposure variables retrospectively using a self-reporting method, such as self-administered questionnaires. 19 – 21

While the problems posed by recall bias are no less than those caused by social desirability, recall bias is more common in epidemiologic and medical research. The effect of recall bias has been investigated extensively in the literature, with particular focus on survey methods for measuring dietary or food intake. 22 – 25 If not given proper consideration, it can either underestimate or overestimate the true effect or association. For example, a recall error in a dietary survey may result in underestimates of the association between dietary intake and disease risk. 24

Overcoming recall bias

To overcome recall bias, it is important to recognize cases where recall errors are more likely to occur. Recall bias was found to be related to a number of factors, including length of the recall period (ie, short or long times of clinical assessment), characteristics of the disease under investigation (eg, acute, chronic), patient/sample characteristics (eg, age, accessibility), and study design (eg, duration of study). 26 – 30 For example, in a case–control study, cases are often more likely to recall exposure to risk factors than healthy controls. As such, true exposure might be underreported in healthy controls and overreported in the cases. The size of the difference between the observed rates of exposure to risk factors in cases and controls will consequently be inflated, and, in turn, the observed odds ratio would also increase.
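A small numerical sketch (with hypothetical 2×2 counts) makes the inflation concrete: if cases over-report and controls under-report the same true exposure, the observed odds ratio drifts upward.

```python
# Hypothetical case-control counts: true exposure is 40/100 in cases
# and 30/100 in controls.
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

true_or = odds_ratio(40, 60, 30, 70)
# Differential recall: cases recall 45 exposures, healthy controls only 25.
observed_or = odds_ratio(45, 55, 25, 75)

print(round(true_or, 2), round(observed_or, 2))  # → 1.56 2.45
```

Here a modest recall difference of five subjects in each group is enough to inflate the estimated association by more than half.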

Many solutions have proven to be useful for minimizing and, in some cases, eliminating recall bias. For example, to select the appropriate recall period, all the above-mentioned factors should be considered in relation to recall bias. Previous literature showed that a short recall period is preferable to a long one, particularly when asking participants about routine or frequent events. In addition, the recall period can be stratified according to participant demographics and the frequency of events they experienced. For example, when participants are expected to have a number of events to recall, they can be asked to describe a shorter period than those who would have fewer events to recall. Other methods to facilitate participant’s recall include the use of memory aids, diaries, and interviewing of participants prior to initiating the study. 31

However, when it is not possible to eliminate recall errors, it is important to obtain information on the error characteristics and distribution. Such information can be obtained from previous or pilot studies and is useful when adjusting the subsequent analyses and choosing a suitable statistical approach for data analysis. It must be borne in mind that there are fundamental differences between statistical approaches to make adjustments that address different assumptions about the errors. 22 , 32 – 36 When conducting a pilot study to examine error properties, a high level of accuracy and careful planning are needed, as validation largely depends on biological testing or laboratory measurements, which, besides being costly to conduct, are often subject to measurement errors. For example, in a validation study to estimate sodium intake using a 24-hour urinary excretion method, the estimated sodium intake tended to be lower than the true amount. 25 Despite these potential shortcomings, the use of biological testing or laboratory measurements is one of the most credible approaches to validate self-reported data. More information on measurement errors is provided in the next section.

It is important to point out that overcoming recall bias can be difficult in practice. In particular, bias often accompanies results from case–control studies. Hence, case–control studies can be conducted in order to generate a research hypothesis, but not to evaluate prognoses or treatment effects. Finally, more research is needed to assess the impact of recall bias. Studies to evaluate the agreements between responses from self-reporting instruments and gold-standard data sources should be conducted. Such studies can provide medical researchers with information concerning the validity of the self-reporting instrument before utilizing it in a study or for a disease under investigation. Other demographic factors associated with recall bias can also be identified. For instance, a high agreement was found between self-reported questionnaires and medical record diagnoses of diseases such as diabetes, hypertension, myocardial infarction, and stroke but not for heart failure. 37

Measurement error bias

Device inaccuracy, environmental conditions in the laboratory, or self-reported measurements are all sources of errors. If these errors occur, observed measurements will differ from the actual values, and this is often referred to as measurement error, instrumental error, measurement imprecision, or measurement bias. These errors are encountered in both observational (such as cohort studies) and experimental (such as laboratory tests) study designs. For example, in an observational study of cardiovascular disease, measurements of blood cholesterol levels (as a risk factor) often included errors.

An analysis that ignores the effect of measurement error on the results can be referred to as a naïve analysis. 22 Results obtained from using naïve analysis can be potentially biased and misleading. Such results can include inconsistent (or biased) and/or inefficient estimators of regression parameters, which may yield poor inferences about confidence intervals and the hypothesis testing of parameters. 22 , 34

Sampling variability, however, should not be confused with measurement error variability. Commonly used statistical methods can address sampling variability during data analysis, but they do not account for uncertainty due to measurement error.

Measurement error bias has rarely been discussed or adjusted for in the medical research literature, except in the field of forensic medicine, where forensic toxicologists have undoubtedly the most theoretical understanding of measurement bias as it is particularly relevant for their type of research. 38 Known examples of measurement error bias have also been reported for blood alcohol content analyses. 38 , 39

Systematic and random error

Errors could occur in a random or systematic manner. When errors are systematic, the observed measurements deviate from true values in a consistent manner, that is, they are either consistently higher or lower than the true values. For example, a device could be calibrated improperly and subtract a certain amount from each measurement. By not accounting for this deviation in the measurement, the results will contain systematic errors and in this case, true measurements would be underestimated.

For random errors, the deviation of the observed from true values is not consistent, causing errors to occur in an unpredictable manner. Such errors will follow a distribution, in the simplest case a Gaussian (also called normal or bell-shaped) distribution, and will have a mean and standard deviation. When the mean is zero, the measured value should be reported as an interval around the observed value, with an estimated amount of deviation from the actual value. When the target value is reported to fall within a range or interval of minimum and maximum levels, the size of the interval depends mainly on the size of the measurement errors: the larger the errors, the larger the uncertainty and hence the wider the intervals, which could affect the precision level.

Random errors could also be proportional to the measured amount. In this case, errors can be referred to as multiplicative or non-Gaussian errors. 36 These random errors occur due to uncontrollable and possibly unknown experimental factors, such as laboratory environment conditions that affect concentrations in biological experiments. Examples of non-Gaussian errors can be found in breath alcohol measurements, in which the variability around the measurement increases with increasing alcohol concentrations. 40 – 42
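The distinction can be seen in a short simulation (a hypothetical device and values): a systematic error shifts every reading by the same amount no matter how often we measure, while zero-mean random errors average out over repeated measurements.

```python
import random

random.seed(0)
true_value = 100.0
n = 1000

# Systematic error: a miscalibrated device reads 2.5 units low every time.
systematic = [true_value - 2.5 for _ in range(n)]

# Random (Gaussian) error: zero-mean noise with standard deviation 2.5.
noisy = [true_value + random.gauss(0.0, 2.5) for _ in range(n)]

mean = lambda xs: sum(xs) / len(xs)
print(mean(systematic))  # exactly 97.5: the bias persists under replication
print(mean(noisy))       # close to 100.0: random errors largely cancel out
```

This is why replication alone helps against random error but is powerless against a systematic offset, which instead needs calibration.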

Adjusting for measurement error bias

The type and distribution of measurement errors determine the type of adjusting method. 34 When errors are systematic, calibration methods can be used to reduce their effects on the results. These methods are based on a reference measurement that can be obtained from a previous or pilot study and used as the correct quantity to calibrate the study measurements. As such, simple mathematical tools can be used if the errors are estimated. The adjustment methods for systematic errors are simpler to apply than those for random errors.
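For systematic errors the arithmetic really is simple. Assuming a certified reference standard is available (all values below are hypothetical), the offset estimated from the reference can be subtracted from every study measurement:

```python
# Hypothetical calibration against a certified reference standard.
reference_true = 50.0       # known value of the reference material
reference_measured = 47.6   # what our instrument reads for it

offset = reference_measured - reference_true   # estimated systematic error (-2.4)

def calibrate(measured: float) -> float:
    """Remove the estimated systematic offset from a raw reading."""
    return measured - offset

print(round(calibrate(47.6), 6))  # → 50.0: the reference value is recovered
print(round(calibrate(95.1), 6))  # → 97.5 after removing the -2.4 offset
```

In practice the offset would be estimated from repeated reference measurements rather than a single reading, but the correction step is the same.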

Significant efforts have been made to develop sophisticated statistical approaches that adjust for the effect of random measurement errors. 34 Commonly available and popular statistical software packages, such as R Software Package ( http://www.r-project.org ) and the Stata (Stata Corporation, College Station, TX, USA) include features that allow adjustments to be made for random measurement errors. Some of the bias adjustment methods include simulation–extrapolation, regression calibration, and the instrumental variable approach. 34 In order to select the best adjustment approach, knowledge of the error properties is essential. For example, the amount of standard deviation and the shape of error distribution should be identified through a previous or pilot study. Therefore, evaluation of the measuring technique is recommended to identify the error properties before starting the actual measuring procedure. Error properties should also be identified for survey measurement errors, in which methods for examining the reliability and validity of the survey can be used such as test–retest and record checks.

A simpler approach used by practitioners to minimize errors in epidemiologic studies is replication; in this method, replicates of the risk factor (eg, long-term average nutrients) are available and the mean of these values is calculated and used to present an approximate value relative to the actual value. 43 These replicates can also be used to estimate the measurement error variance and apply an adjusted statistical approach.
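As a sketch of how replicates can feed an adjustment, the classical attenuation (regression-calibration) correction divides the naive regression slope by a reliability ratio estimated from the within-subject variance of the replicates. All data and the naive slope below are hypothetical.

```python
from statistics import mean, variance

# Hypothetical replicate measurements of a risk factor for six subjects.
rep1 = [5.1, 6.8, 4.9, 7.2, 5.5, 6.1]
rep2 = [5.5, 6.4, 5.3, 6.8, 5.1, 6.5]

# Within-subject squared differences estimate the error variance sigma^2.
err_var = mean([(a - b) ** 2 / 2 for a, b in zip(rep1, rep2)])

# Use the subject means as the working exposure measurement.
avg = [(a + b) / 2 for a, b in zip(rep1, rep2)]
obs_var = variance(avg)

# Reliability: share of observed variance that is true signal
# (averaging two replicates halves the error variance).
reliability = (obs_var - err_var / 2) / obs_var

naive_slope = 0.30                      # slope from a naive regression (hypothetical)
corrected = naive_slope / reliability   # attenuation-corrected slope
print(round(reliability, 3), round(corrected, 3))  # → 0.938 0.32
```

With fairly precise replicates the correction is small; the larger the estimated error variance relative to the observed variance, the more the naive slope is scaled up.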

Confirmation bias

Placing emphasis on one hypothesis because it does not contradict investigator beliefs is called confirmation bias, otherwise known as confirmatory, ascertainment, or observer bias. Confirmation bias is a type of psychological bias in which a decision is made according to the subject’s preconceptions, beliefs, or preferences. Such bias results from human errors, including imprecision and misconception. Confirmation bias can also emerge owing to overconfidence, which results in contradictory evidence being ignored or overlooked. 44 In medicine, confirmation bias is one of the main reasons for diagnostic errors and may cause inaccurate diagnosis and improper treatment management. 45 – 47

An understanding of how the results of a medical investigation are affected by confirmation bias is important. Many studies have demonstrated that any aspect of investigation that requires human judgment is subject to confirmation bias, 48 – 50 which was also found to influence the inclusion and exclusion criteria of randomized controlled trial study designs. 51 There are many examples of confirmation bias in the medical literature, some of which are even illustrated in DNA matching. 16

Overcoming confirmation bias

Researchers have shown that not accounting for confirmation bias could affect the reliability of the investigation. Several studies in the literature also suggest a number of approaches for dealing with this type of bias. An approach that is often used is to conduct multiple and independent checks on study subjects across different laboratories or through consultation with other researchers who may have differing opinions. Through this approach, scientists can seek independent feedback and confirmation. 52 The use of blinding or masking procedures, whether single- or double-blinded, is important for enhancing the reliability of scientific investigations. These approaches have proven to be very useful in clinical trials, as they protect final conclusions from confirmation bias. Blinding may involve the participant, the treating clinician, the recruiter, and/or the assessor.

In addition, researchers should be encouraged to evaluate evidence objectively, taking into account contradictory evidence, and alter perspectives through specific education and training programs, 53 , 54 with no overcorrection or change in the researcher’s decision making. 55

However, the problem with the above suggestions is that they become ineffective if specific factors of bias are not accounted for. For example, researchers could reach conclusions in haste due to external pressure to obtain results, which can be particularly true in highly sensitive clinical trials. Bias in such cases is a very sensitive issue, as it might affect the validity of the investigation. We can, however, avoid the possibility of such bias by developing and following well-designed study protocols.

Finally, in order to overcome confirmation bias and enhance the reliability of investigations, it is important to accept that bias is a part of investigations. Quantifying this inevitable bias and its potential sources must be part of well-developed conclusions.

Bias in epidemiologic and medical research is a major problem. Understanding the possible types of bias and how they affect research conclusions is important to ensure the validity of findings. This work discussed some of the most common types of information bias, namely self-reporting bias, measurement error bias, and confirmation bias. Approaches for overcoming bias through the use of adjustment methods were also presented. A summary of study types with common data collection methods, types of information bias, and adjusting or preventing strategies is presented in Table 1. The framework described in this work provides epidemiologists and medical researchers with useful tools to manage information bias in their scientific investigations. The consequences of ignoring this bias on the validity of the results were also described.

Type of study designs, common data collection methods, type of bias, and adjusting strategies

| Study design | Data collection method | Type of bias | Overcoming strategy |
|---|---|---|---|
| Observational | Self-administered questionnaires, surveys, or interviews | Social desirability | Conduct an internal or external validation study; apply the Marlowe–Crowne Social Desirability Scale or Martin–Larsen Approval Motivation score |
| Observational | Self-administered questionnaires, surveys, or interviews | Recall | Use memory aids or diaries; interview a subsample of participants prior to initiating the study (validated subsample) |
| Observational/experimental | Laboratory tests | Systematic errors | Conduct a calibration study |
| Observational/experimental | Laboratory tests | Random errors | Apply a statistical adjusting method (eg, simulation–extrapolation, regression calibration, Bayesian approaches); replicate measurements |
| Observational/experimental | Clinical examination/diagnostic tests | Confirmation | Make multiple and independent checks; introduce training and education programs |

Bias is often not accounted for in practice. Even though a number of adjustment and prevention methods to mitigate bias are available, applying them can be rather challenging due to limited time and resources. For example, measurement error bias properties might be difficult to detect, particularly if there is a lack of information about the measuring instrument. Such information can be tedious to obtain as it requires the use of validation studies and, as mentioned before, these studies can be expensive and require careful planning and management. Although conducting the usual analysis and ignoring measurement error bias may be tempting, researchers should always follow the practice of reporting any evidence of bias in their results.

In order to minimize or eliminate bias, careful planning is needed in each step of the research design. For example, several rules and procedures should be followed when designing self-reporting instruments. Training of interviewers is important in minimizing this type of bias. On the other hand, the effect of measurement error can be difficult to eliminate, since measuring devices and algorithms are often imperfect. A general rule is to review the level of accuracy of the measuring instrument before utilizing it for data collection. Such adjustments should greatly reduce any possible defects. Finally, confirmation bias can be eliminated from the results if investigators take into account the different factors that can affect human judgment.

Researchers should be familiar with sources of bias in their results, and additional effort is needed to minimize the possibility and effects of bias. Increasing the awareness of the possible shortcomings and pitfalls of decision making that can result in bias should begin at the medical undergraduate level and students should be provided with examples to demonstrate how bias can occur. Moreover, adjusting for bias or any deficiency in the analysis is necessary when bias cannot be avoided. Finally, when presenting the results of a medical research study, it is important to recognize and acknowledge any possible source of bias.

The author reports no conflicts of interest in this work.

Types of Interviews in Research | Guide & Examples

Published on March 10, 2022 by Tegan George . Revised on June 22, 2023.

An interview is a qualitative research method that relies on asking questions in order to collect data . Interviews involve two or more people, one of whom is the interviewer asking the questions.

There are several types of interviews, often differentiated by their level of structure.

  • Structured interviews have predetermined questions asked in a predetermined order.
  • Unstructured interviews are more free-flowing.
  • Semi-structured interviews fall in between.

Interviews are commonly used in market research, social science, and ethnographic research .

Table of contents

  • What is a structured interview?
  • What is a semi-structured interview?
  • What is an unstructured interview?
  • What is a focus group?
  • Examples of interview questions
  • Advantages and disadvantages of interviews
  • Other interesting articles
  • Frequently asked questions about types of interviews

Structured interviews have predetermined questions in a set order. They are often closed-ended, featuring dichotomous (yes/no) or multiple-choice questions. While open-ended structured interviews exist, they are much less common. The types of questions asked make structured interviews a predominantly quantitative tool.

Asking set questions in a set order can help you see patterns among responses, and it allows you to easily compare responses between participants while keeping other factors constant. This can mitigate   research biases and lead to higher reliability and validity. However, structured interviews can be overly formal, as well as limited in scope and flexibility.

A structured interview is a good choice when:

  • You feel very comfortable with your topic. This will help you formulate your questions most effectively.
  • You have limited time or resources. Structured interviews are a bit more straightforward to analyze because of their closed-ended nature, and can be a doable undertaking for an individual.
  • Your research question depends on holding environmental conditions between participants constant.


Semi-structured interviews are a blend of structured and unstructured interviews. While the interviewer has a general plan for what they want to ask, the questions do not have to follow a particular phrasing or order.

Semi-structured interviews are often open-ended, allowing for flexibility, but follow a predetermined thematic framework, giving a sense of order. For this reason, they are often considered “the best of both worlds.”

However, if the questions differ substantially between participants, it can be challenging to look for patterns, lessening the generalizability and validity of your results.

A semi-structured interview is a good choice when:

  • You have prior interview experience. It’s easier than you think to accidentally ask a leading question when coming up with questions on the fly. Overall, spontaneous questions are much more difficult than they may seem.
  • Your research question is exploratory in nature. The answers you receive can help guide your future research.

An unstructured interview is the most flexible type of interview. The questions and the order in which they are asked are not set. Instead, the interview can proceed more spontaneously, based on the participant’s previous answers.

Unstructured interviews are by definition open-ended. This flexibility can help you gather detailed information on your topic, while still allowing you to observe patterns between participants.

However, so much flexibility means that they can be very challenging to conduct properly. You must be very careful not to ask leading questions, as biased responses can lead to lower reliability or even invalidate your research.

An unstructured interview is a good choice when:

  • You have a solid background in your research topic and have conducted interviews before.
  • Your research question is exploratory in nature, and you are seeking descriptive data that will deepen and contextualize your initial hypotheses.
  • Your research necessitates forming a deeper connection with your participants, encouraging them to feel comfortable revealing their true opinions and emotions.

A focus group brings together a group of participants to answer questions on a topic of interest in a moderated setting. Focus groups are qualitative in nature and often study the group’s dynamic and body language in addition to their answers. Responses can guide future research on consumer products and services, human behavior, or controversial topics.

Focus groups can provide more nuanced and unfiltered feedback than individual interviews and are easier to organize than experiments or large surveys . However, their small size leads to low external validity and the temptation as a researcher to “cherry-pick” responses that fit your hypotheses.

A focus group is a good choice when:

  • Your research focuses on the dynamics of group discussion or real-time responses to your topic.
  • Your questions are complex and rooted in feelings, opinions, and perceptions that cannot be answered with a “yes” or “no.”
  • Your topic is exploratory in nature, and you are seeking information that will help you uncover new questions or future research ideas.


Depending on the type of interview you are conducting, your questions will differ in style, phrasing, and intention. Structured interview questions are set and precise, while the other types of interviews allow for more open-endedness and flexibility.

Here are some examples.

Structured:

  • Do you like dogs? Yes/No
  • Do you associate dogs with feeling: happy; somewhat happy; neutral; somewhat unhappy; unhappy

Semi-structured:

  • If yes, name one attribute of dogs that you like.
  • If no, name one attribute of dogs that you don’t like.

Unstructured:

  • What feelings do dogs bring out in you?

Focus group:

  • When you think more deeply about this, what experiences would you say your feelings are rooted in?

Interviews are a great research tool. They allow you to gather rich information and draw more detailed conclusions than other research methods, taking into consideration nonverbal cues, off-the-cuff reactions, and emotional responses.

However, they can also be time-consuming and deceptively challenging to conduct properly. Smaller sample sizes can cause their validity and reliability to suffer, and there is an inherent risk of interviewer effect arising from accidentally leading questions.

Weighing these advantages and disadvantages for each type of interview can help you decide whether this research method is right for your study.


If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing high-quality, neutrally phrased interview questions.

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of 4 types of interviews .

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Cite this Scribbr article


George, T. (2023, June 22). Types of Interviews in Research | Guide & Examples. Scribbr. Retrieved September 11, 2024, from https://www.scribbr.com/methodology/interviews-research/



Types of Bias in Research | Definition & Examples

Research bias results from any deviation from the truth, causing distorted results and wrong conclusions. Bias can occur at any phase of your research, including during data collection , data analysis , interpretation, or publication. Research bias can occur in both qualitative and quantitative research .

Understanding research bias is important for several reasons.

  • Bias exists in all research, across research designs , and is difficult to eliminate.
  • Bias can occur at any stage of the research process.
  • Bias impacts the validity and reliability of your findings, leading to misinterpretation of data.

It is almost impossible to conduct a study without some degree of research bias. It’s crucial for you to be aware of the potential types of bias, so you can minimise them.

For example, suppose you are studying the effectiveness of a weight-loss program. The success rate of the program will likely be affected if participants start to drop out. Participants who become disillusioned due to not losing weight may drop out, while those who succeed in losing weight are more likely to continue. This in turn may bias the findings towards more favorable results.

Table of contents

  • Actor–observer bias
  • Confirmation bias
  • Information bias
  • Interviewer bias
  • Publication bias
  • Researcher bias
  • Response bias
  • Selection bias
  • How to avoid bias in research
  • Other types of research bias
  • Frequently asked questions about research bias

Actor–observer bias occurs when you attribute the behaviour of others to internal factors, like skill or personality, but attribute your own behaviour to external or situational factors.

In other words, when you are the actor in a situation, you are more likely to link events to external factors, such as your surroundings or environment. However, when you are observing the behaviour of others, you are more likely to associate behaviour with their personality, nature, or temperament.

Suppose you are interviewing a number of people about their driving experiences. One interviewee recalls a morning when it was raining heavily. They were rushing to drop off their kids at school in order to get to work on time. As they were driving down the road, another car cut them off as they were trying to merge. They tell you how frustrated they felt and exclaim that the other driver must have been a very rude person.

At another point, the same interviewee recalls that they did something similar: accidentally cutting off another driver while trying to take the correct exit. However, this time, the interviewee claimed that they always drive very carefully, blaming their mistake on poor visibility due to the rain.

Confirmation bias is the tendency to seek out information in a way that supports our existing beliefs while also rejecting any information that contradicts those beliefs. Confirmation bias is often unintentional but still results in skewed results and poor decision-making.

Let’s say you grew up with a parent in the military. Chances are that you have a lot of complex emotions around overseas deployments. This can lead you to over-emphasise findings that ‘prove’ that your lived experience is the case for most families, neglecting other explanations and experiences.

Information bias , also called measurement bias, arises when key study variables are inaccurately measured or classified. Information bias occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall from participants.

The main types of information bias are:

  • Recall bias
  • Observer bias
  • Performance bias
  • Regression to the mean (RTM)

For example, suppose you are studying whether smartphone use is related to physical symptoms in students. Over a period of four weeks, you ask students to keep a journal, noting how much time they spent on their smartphones along with any symptoms like muscle twitches, aches, or fatigue.

Recall bias is a type of information bias. It occurs when respondents are asked to recall events in the past and is common in studies that involve self-reporting.

As a rule of thumb, infrequent events (e.g., buying a house or a car) will be memorable for longer periods of time than routine events (e.g., daily use of public transportation). You can reduce recall bias by running a pilot survey and carefully testing recall periods. If possible, test both shorter and longer periods, checking for differences in recall.

For example, suppose you are studying whether a particular diet is linked to childhood cancer. You ask the parents of two groups of children to recall what their children ate:

  • A group of children who have been diagnosed, called the case group
  • A group of children who have not been diagnosed, called the control group

Since the parents are being asked to recall what their children generally ate over a period of several years, there is high potential for recall bias in the case group.

The best way to reduce recall bias is by ensuring your control group will have similar levels of recall bias to your case group. Parents of children who have childhood cancer, which is a serious health problem, are likely to be quite concerned about what may have contributed to the cancer.

Thus, if asked by researchers, these parents are likely to think very hard about what their child ate or did not eat in their first years of life. Parents of children with other serious health problems (aside from cancer) are also likely to be quite concerned about any diet-related question that researchers ask about.

Observer bias is the tendency of researchers to see what they expect or want to see, rather than what is actually occurring. Observer bias can affect the results in observational and experimental studies, where subjective judgement (such as assessing a medical image) or measurement (such as rounding blood pressure readings up or down) is part of the data collection process.

Observer bias leads to over- or underestimation of true values, which in turn compromises the validity of your findings. You can reduce observer bias by using double- and single-blinded research methods.

Suppose you and a colleague are separately observing how medical staff in a hospital communicate. Based on discussions you had with other researchers before starting your observations, you are inclined to think that medical staff tend to simply call each other when they need specific patient details or have questions about treatments.

At the end of the observation period, you compare notes with your colleague. Your conclusion was that medical staff tend to favor phone calls when seeking information, while your colleague noted down that medical staff mostly rely on face-to-face discussions. Seeing that your expectations may have influenced your observations, you and your colleague decide to conduct interviews with medical staff to clarify the observed events. Note: Observer bias and actor–observer bias are not the same thing.

Performance bias is unequal care between study groups. Performance bias occurs mainly in medical research experiments, if participants have knowledge of the planned intervention, therapy, or drug trial before it begins.

Studies about nutrition, exercise outcomes, or surgical interventions are very susceptible to this type of bias. It can be minimized by using blinding , which prevents participants and/or researchers from knowing who is in the control or treatment groups. If blinding is not possible, then using objective outcomes (such as hospital admission data) is the best approach.

When the subjects of an experimental study change or improve their behaviour because they are aware they are being studied, this is called the Hawthorne (or observer) effect . Similarly, the John Henry effect occurs when members of a control group are aware they are being compared to the experimental group. This causes them to alter their behaviour in an effort to compensate for their perceived disadvantage.

Regression to the mean (RTM) is a statistical phenomenon that refers to the fact that a variable that shows an extreme value on its first measurement will tend to be closer to the centre of its distribution on a second measurement.

Medical research is particularly sensitive to RTM. Here, interventions aimed at a group or a characteristic that is very different from the average (e.g., people with high blood pressure) will appear to be successful because of the regression to the mean. This can lead researchers to misinterpret results, describing a specific intervention as causal when the change in the extreme groups would have happened anyway.

Suppose you are studying the effectiveness of an intervention for depression. In general, among people with depression, certain physical and mental characteristics have been observed to deviate from the population mean.

This could lead you to think that the intervention was effective when those treated showed improvement on measured post-treatment indicators, such as reduced severity of depressive episodes.

However, given that such characteristics deviate more from the population mean in people with depression than in people without depression, this improvement could be attributed to RTM.
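The mechanism can be illustrated with a short simulation (a sketch with hypothetical numbers: each person has a stable underlying level, and every measurement adds random noise). Selecting the most extreme scorers on a first measurement and simply measuring them again moves their average back toward the population mean, with no intervention at all:

```python
import random
import statistics

random.seed(0)

# Each person has a stable underlying level; every measurement adds noise.
true_levels = [random.gauss(100, 10) for _ in range(10_000)]

def measure(level):
    return level + random.gauss(0, 10)

first = [measure(t) for t in true_levels]
second = [measure(t) for t in true_levels]

# Select the extreme group: the top 5% of scores on the first measurement.
cutoff = sorted(first)[int(0.95 * len(first))]
extreme = [i for i, score in enumerate(first) if score >= cutoff]

mean_first = statistics.mean(first[i] for i in extreme)
mean_second = statistics.mean(second[i] for i in extreme)

print(f"extreme group, 1st measurement: {mean_first:.1f}")
print(f"extreme group, 2nd measurement: {mean_second:.1f}")
```

Because part of what made the selected group extreme was measurement noise, its second score is systematically closer to the mean; an intervention given only to this group would look effective even if it did nothing.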

Interviewer bias stems from the person conducting the research study. It can result from the way they ask questions or react to responses, but also from any aspect of their identity, such as their sex, ethnicity, social class, or perceived attractiveness.

Interviewer bias distorts responses, especially when the characteristics relate in some way to the research topic. Interviewer bias can also affect the interviewer’s ability to establish rapport with the interviewees, causing them to feel less comfortable giving their honest opinions about sensitive or personal topics.

For example, suppose you ask a participant what they do in their free time:

Participant: ‘I like to solve puzzles, or sometimes do some gardening.’

You: ‘I love gardening, too!’

In this case, seeing your enthusiastic reaction could lead the participant to talk more about gardening.

Establishing trust between you and your interviewees is crucial in order to ensure that they feel comfortable opening up and revealing their true thoughts and feelings. At the same time, being overly empathetic can influence the responses of your interviewees, as seen above.

Publication bias occurs when the decision to publish research findings is based on their nature or the direction of their results. Studies reporting results that are perceived as positive, statistically significant , or favoring the study hypotheses are more likely to be published due to publication bias.

Publication bias is related to data dredging (also called p -hacking ), where statistical tests on a set of data are run until something statistically significant happens. As academic journals tend to prefer publishing statistically significant results, this can pressure researchers to only submit statistically significant results. P -hacking can also involve excluding participants or stopping data collection once a p value of 0.05 is reached. However, this leads to false positive results and an overrepresentation of positive results in published academic literature.
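How repeated testing inflates false positives can be demonstrated with a simulation. In this hypothetical sketch there is no true effect in either group; a single test at the planned sample size is falsely significant about 5% of the time, but "peeking" after every batch of new data and stopping at the first significant result inflates that rate several-fold:

```python
import random
import statistics

random.seed(42)

def t_stat(a, b):
    # Welch-style two-sample t statistic.
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

def run_study(peek, n_max=100, batch=10, crit=1.96):
    # Both groups are drawn from the SAME distribution, so any
    # "significant" result is a false positive.
    a, b = [], []
    while len(a) < n_max:
        a += [random.gauss(0, 1) for _ in range(batch)]
        b += [random.gauss(0, 1) for _ in range(batch)]
        if peek and abs(t_stat(a, b)) > crit:
            return True  # stop early and declare significance
    return abs(t_stat(a, b)) > crit

trials = 1000
fixed = sum(run_study(peek=False) for _ in range(trials)) / trials
peeking = sum(run_study(peek=True) for _ in range(trials)) / trials

print(f"test once at the planned n: {fixed:.1%} false positives")
print(f"test after every batch:     {peeking:.1%} false positives")
```

This is why preregistering the sample size and analysis plan, rather than testing until something "works", protects against p-hacking.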

Researcher bias occurs when the researcher’s beliefs or expectations influence the research design or data collection process. Researcher bias can be deliberate (such as claiming that an intervention worked even if it didn’t) or unconscious (such as letting personal feelings, stereotypes, or assumptions influence research questions ).

The unconscious form of researcher bias is associated with the Pygmalion (or Rosenthal) effect, where the researcher’s high expectations (e.g., that patients assigned to a treatment group will succeed) lead to better performance and better outcomes.

Researcher bias is also sometimes called experimenter bias, but it applies to all types of investigative projects, rather than only to experimental designs .

  • Good question: What are your views on alcohol consumption among your peers?
  • Bad question: Do you think it’s okay for young people to drink so much?

Response bias is a general term used to describe a number of different situations where respondents tend to provide inaccurate or false answers to self-report questions, such as those asked on surveys or in structured interviews .

This happens because when people are asked a question (e.g., during an interview ), they integrate multiple sources of information to generate their responses. Because of that, any aspect of a research study may potentially bias a respondent. Examples include the phrasing of questions in surveys, how participants perceive the researcher, or the desire of the participant to please the researcher and to provide socially desirable responses.

Response bias also occurs in experimental medical research. When outcomes are based on patients’ reports, a placebo effect can occur. Here, patients report an improvement despite having received a placebo, not an active medical treatment.

While interviewing a student, you ask them:

‘Do you think it’s okay to cheat on an exam?’

Regardless of what they actually do, most students are likely to answer no, because cheating is widely disapproved of.

Common types of response bias are:

  • Acquiescence bias
  • Demand characteristics
  • Social desirability bias
  • Courtesy bias
  • Question-order bias
  • Extreme responding

Acquiescence bias is the tendency of respondents to agree with a statement when faced with binary response options like ‘agree/disagree’, ‘yes/no’, or ‘true/false’. Acquiescence is sometimes referred to as ‘yea-saying’.

This type of bias occurs either due to the participant’s personality (i.e., some people are more likely to agree with statements than disagree, regardless of their content) or because participants perceive the researcher as an expert and are more inclined to agree with the statements presented to them.

Q: Are you a social person?

  • Yes
  • No

People who are inclined to agree with statements presented to them are at risk of selecting the first option, even if it isn’t fully supported by their lived experiences.

In order to control for acquiescence, consider tweaking your phrasing to encourage respondents to make a choice truly based on their preferences. Here’s an example:

Q: What would you prefer?

  • A quiet night in
  • A night out with friends

Demand characteristics are cues that could reveal the research agenda to participants, risking a change in their behaviours or views. Ensuring that participants are not aware of the research goals is the best way to avoid this type of bias.

Suppose patients are interviewed several times after an operation about their pain levels. On each occasion, patients reported their pain as being less than prior to the operation. While at face value this seems to suggest that the operation does indeed lead to less pain, there is a demand characteristic at play. During the interviews, the researcher would unconsciously frown whenever patients reported more post-op pain. This increased the risk of patients figuring out that the researcher was hoping that the operation would have an advantageous effect.

Social desirability bias is the tendency of participants to give responses that they believe will be viewed favorably by the researcher or other participants. It often affects studies that focus on sensitive topics, such as alcohol consumption or sexual behaviour.

You are conducting face-to-face semi-structured interviews with a number of employees from different departments. When asked whether they would be interested in a smoking cessation program, there was widespread enthusiasm for the idea. However, this apparent enthusiasm may simply reflect what employees believe is the socially acceptable answer, rather than genuine interest.

Note that while social desirability and demand characteristics may sound similar, there is a key difference between them. Social desirability is about conforming to social norms, while demand characteristics revolve around the purpose of the research.

Courtesy bias stems from a reluctance to give negative feedback, so as to be polite to the person asking the question. Small-group interviewing where participants relate in some way to each other (e.g., a student, a teacher, and a dean) is especially prone to this type of bias.

Question order bias

Question order bias occurs when the order in which interview questions are asked influences the way the respondent interprets and evaluates them. This occurs especially when previous questions provide context for subsequent questions.

When answering subsequent questions, respondents may orient their answers to previous questions (called a halo effect ), which can lead to systematic distortion of the responses.

Extreme responding is the tendency of a respondent to answer in the extreme, choosing the lowest or highest response available, even if that is not their true opinion. Extreme responding is common in surveys using Likert scales , and it distorts people’s true attitudes and opinions.

Disposition towards the survey can be a source of extreme responding, as well as cultural components. For example, people coming from collectivist cultures tend to exhibit extreme responses in terms of agreement, while respondents indifferent to the questions asked may exhibit extreme responses in terms of disagreement.

Selection bias is a general term describing situations where bias is introduced into the research from factors affecting the study population.

Common types of selection bias are:

  • Sampling (or ascertainment) bias
  • Attrition bias
  • Volunteer (or self-selection) bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias

Sampling bias occurs when your sample (the individuals, groups, or data you obtain for your research) is selected in a way that is not representative of the population you are analyzing. Sampling bias threatens the external validity of your findings and influences the generalizability of your results.

The easiest way to prevent sampling bias is to use a probability sampling method . This way, each member of the population you are studying has an equal chance of being included in your sample.

Sampling bias is often referred to as ascertainment bias in the medical field.
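The contrast between a convenience sample and a probability sample can be sketched in a few lines of code (all numbers hypothetical). Here, income happens to correlate with position on a door-to-door canvassing route, so sampling the first households reached is systematically biased, while a simple random sample, in which every member of the population has an equal chance of selection, lands close to the true mean:

```python
import random
import statistics

random.seed(0)

# Hypothetical population of 10,000 residents. Income happens to
# correlate with position on a door-to-door canvassing route.
population = [20_000 + i * 8 + random.gauss(0, 5_000) for i in range(10_000)]
true_mean = statistics.mean(population)

# Convenience sample: the first 500 households reached on the route.
convenience = population[:500]

# Probability (simple random) sample: every resident has an equal
# chance of being selected.
probability = random.sample(population, 500)

print(f"population mean:    {true_mean:,.0f}")
print(f"convenience sample: {statistics.mean(convenience):,.0f}")
print(f"probability sample: {statistics.mean(probability):,.0f}")
```

No amount of extra convenience sampling fixes the bias, because the selection mechanism itself, not the sample size, is what skews the estimate.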

Attrition bias occurs when participants who drop out of a study systematically differ from those who remain in the study. Attrition bias is especially problematic in randomized controlled trials for medical research because participants who do not like the experience or have unwanted side effects can drop out and affect your results.

You can minimize attrition bias by offering incentives for participants to complete the study (e.g., a gift card if they successfully attend every session). It’s also a good practice to recruit more participants than you need, or minimize the number of follow-up sessions or questions.

You provide a treatment group with weekly one-hour sessions over a two-month period, while a control group attends sessions on an unrelated topic. You complete five waves of data collection to compare outcomes: a pretest survey , three surveys during the program, and a posttest survey.

Volunteer bias (also called self-selection bias ) occurs when individuals who volunteer for a study have particular characteristics that matter for the purposes of the study.

Volunteer bias leads to biased data, as the respondents who choose to participate will not represent your entire target population. You can avoid this type of bias by using random assignment – i.e., placing participants in a control group or a treatment group after they have volunteered to participate in the study.
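In code, random assignment after recruitment can be as simple as shuffling the volunteer pool and splitting it in half, so that any self-selected characteristics are spread evenly across groups (a minimal sketch with hypothetical participant IDs):

```python
import random

random.seed(1)

# Hypothetical pool of people who volunteered for the study.
volunteers = [f"participant_{i:02d}" for i in range(20)]

# Shuffle, then split: each volunteer has an equal chance of ending up
# in either the treatment or the control group.
random.shuffle(volunteers)
treatment = volunteers[: len(volunteers) // 2]
control = volunteers[len(volunteers) // 2:]

print(len(treatment), len(control))  # 10 10
```

Note that this equalizes groups only on average; for very small pools, stratified or blocked randomization distributes key characteristics more evenly.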

Closely related to volunteer bias is nonresponse bias , which occurs when a research subject declines to participate in a particular study or drops out before the study’s completion.

Suppose you recruit volunteers for a study at a local hospital. Considering that the hospital is located in an affluent part of the city, volunteers are more likely to have a higher socioeconomic standing, higher education, and better nutrition than the general population.

Survivorship bias occurs when you do not evaluate your data set in its entirety: for example, by only analyzing the patients who survived a clinical trial.

This strongly increases the likelihood that you draw (incorrect) conclusions based upon those who have passed some sort of selection process – focusing on ‘survivors’ and forgetting those who went through a similar process and did not survive.

Note that ‘survival’ does not always mean that participants died! Rather, it signifies that participants did not successfully complete the intervention.

A well-known example is the idea that dropping out of college leads to entrepreneurial success, because famous founders such as Bill Gates and Mark Zuckerberg were dropouts. However, most college dropouts do not become billionaires. In fact, there are many more aspiring entrepreneurs who dropped out of college to start companies and failed than succeeded.

Nonresponse bias occurs when those who do not respond to a survey or research project are different from those who do in ways that are critical to the goals of the research. This is very common in survey research, when participants are unable or unwilling to participate due to factors like lack of the necessary skills, lack of time, or guilt or shame related to the topic.

You can mitigate nonresponse bias by offering the survey in different formats (e.g., an online survey, but also a paper version sent via post), ensuring confidentiality , and sending them reminders to complete the survey.

Suppose you survey the residents of a neighborhood by visiting them at home. You notice that your surveys were conducted during business hours, when the working-age residents were less likely to be home.

Undercoverage bias occurs when you only sample from a subset of the population you are interested in. Online surveys can be particularly susceptible to undercoverage bias. Despite being more cost-effective than other methods, they can introduce undercoverage bias as a result of excluding people who do not use the internet.

While very difficult to eliminate entirely, research bias can be mitigated through proper study design and implementation. Here are some tips to keep in mind as you get started.

  • Clearly explain in your methodology section how your research design will help you meet the research objectives and why this is the most appropriate research design.
  • In quantitative studies , make sure that you use probability sampling to select the participants. If you’re running an experiment, make sure you use random assignment to assign your control and treatment groups.
  • Account for participants who withdraw or are lost to follow-up during the study. If they are withdrawing for a particular reason, it could bias your results. This applies especially to longer-term or longitudinal studies .
  • Use triangulation to enhance the validity and credibility of your findings.
  • Phrase your survey or interview questions in a neutral, non-judgemental tone. Be very careful that your questions do not steer your participants in any particular direction.
  • Consider using a reflexive journal. Here, you can log the details of each interview , paying special attention to any influence you may have had on participants. You can include these in your final analysis.

Cognitive bias

  • Baader–Meinhof phenomenon
  • Availability heuristic
  • Halo effect
  • Framing effect
  • Sampling bias
  • Ascertainment bias
  • Self-selection bias
  • Hawthorne effect
  • Omitted variable bias
  • Pygmalion effect
  • Placebo effect

Bias in research affects the validity and reliability of your findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

Observer bias occurs when the researcher's assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to justify others' behaviour and external factors (e.g., difficult circumstances) to justify the same behaviour in themselves.

Response bias is a general term for a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer's perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey are different from those who did not, in ways that are relevant to the research topic. Nonresponse can happen either because people are not willing or not able to participate.


Best Practices for Reducing Bias in the Interview Process

  • Education (G Badalato and E Margolin, Section Editors)
  • Published: 12 October 2022
  • Volume 23, pages 319–325 (2022)


  • Ilana Bergelson 1 ,
  • Chad Tracy 1 &
  • Elizabeth Takacs 1  


Purpose of Review

Objective measures of residency applicants do not correlate with success within residency. While industry and business utilize standardized interviews with blinding and structured questions, residency programs have yet to uniformly incorporate these techniques. This review provides an in-depth evaluation of these practices and how they affect interview formatting and resident selection.

Recent Findings

Structured interviews use standardized questions that are behaviorally or situationally anchored. This requires careful creation of a scoring rubric and interviewer training, which ultimately lead to improved interrater agreement and reduced bias compared with traditional interviews. Blinding interviewers eliminates further biases, such as halo, horn, and affinity bias, as does the use of multiple interviewers, as in the multiple mini-interview format, which also contributes to increased diversity in programs. These structured formats can be adapted to virtual interviews as well.

Summary

There is growing literature showing that structured interviews reduce bias, increase diversity, and recruit successful residents. Further research is needed to measure how widely this method is incorporated into residency interviews.


Introduction

Optimizing the criteria to rank residency applicants is a difficult task. The National Residency Matching Program (NRMP) is designed to be applicant-centric, with the overarching goal to provide favorable outcomes to the applicant while providing opportunity for programs to match high-quality candidates. From a program’s perspective, the NRMP is composed of three phases: the screening of applicants, the interview, and the creation of the rank list. While it is easy to compare candidates based on objective measures, these do not always reflect qualities required to be a successful resident or physician. Prior studies have demonstrated that objective measures such as Alpha Omega Alpha status, United States Medical Licensing Exams (USMLE), and class rank do not correlate with residency performance measures [ 1 ]. Due to the variability of these factors to predict success and recognition of the importance of the non-cognitive traits, most programs place increased emphasis on candidate interviews to assess fit [ 2 ].

Unfortunately, the interview process lacks standardization across residency programs. Industry and business have more standardized interviews and utilize best practices that include blinded interviewers, use of structured questions (situational and/or behavioral anchored questions), and skills testing. Due to residency interview heterogeneity, studies evaluating the interview as a predictor of success have failed to reliably predict who will perform well during residency. Additionally, resident success has many components, such that isolating any one factor, such as the interview, may be problematic and argues for a more holistic approach to resident selection [ 3 ]. Nevertheless, there are multiple ways the application review and interview can be standardized to promote transparency and improve resident selection.

Residency programs have begun adopting best practices from business models for interviewing, which include standardized questions, situational and/or behavioral anchored questions, blinded interviewers, and use of the multiple mini-interview (MMI) model. The focus of this review is to take a more in-depth look at practices that have become standard in business and to review the available data on the impact of these practices in resident selection.

Unstructured Versus Structured Interviews

Unstructured interviews are those in which questions are not set in advance and represent a free-flowing discussion that is conversational in nature. The course of an unstructured interview often depends on the candidate’s replies and may offer opportunities to divert away from topics that are important to applicant selection. While unstructured interviews may involve specific questions such as “tell me about a recent book you read” or “tell me about your research,” the questions do not seek to determine specific applicant attributes and may vary significantly between applicants. Due to their free-form nature, unstructured interviews may be prone to biased or illegal questions. Additionally, due to a lack of a specific scoring rubric, unstructured interviews are open to multiple biases in answer interpretation and as such generally show limited validity [ 4 ]. For the applicant, unstructured interviews allow more freedom to choose a response, with some studies reporting higher interviewee satisfaction with these questions [ 5 ].

In contrast to the unstructured interview, structured interviews use standardized questions that are written prior to an interview, are asked of every candidate, and are scored using an established rubric. Standardized questions may be behaviorally or situationally anchored [ 5 ]. Due to their uniformity, standardized interviews have higher interrater reliability and are less prone to biased or illegal questions.

Behavioral questions ask the candidate to discuss a specific response to a prior experience, which can provide insight into how an applicant may behave in the future [ 5 ]. Not only does the candidate’s response reflect a possible prediction of future behavior, it can also demonstrate the knowledge, priorities, and values of the candidate [ 5 ]. Questions are specifically targeted to reflect qualities the program is searching for (Table 1 ) [ 5 , 6 , 7 ].

Situational questions require an applicant to predict how they would act in a hypothetical situation and are intended to reflect a realistic scenario the applicant may encounter during residency; this can provide insight into priorities and values [ 5 ]. For example, asking what an applicant would do when receiving sole credit for something they worked on with a colleague can provide insight into the integrity of a candidate [ 4 ]. These types of questions can be especially helpful for fellowships, as applicants would already have the clinical experience of residency to draw from [ 5 ].

Using standardized questions provides a method to recruit candidates with characteristics that ultimately correlate with resident success and good performance. Indeed, structured interview scores have demonstrated an ability to predict which students perform better with regard to communication skills, patient care, and professionalism in surgical and non-surgical specialties [ 8 •]. In fields such as radiology, non-cognitive abilities that can be evaluated with behavioral questions, such as conscientiousness or confidence, are thought to critically influence success in residency and even influence cognitive performance [ 1 ]. This has also been demonstrated in obstetrics and gynecology, where resident clinical performance after 1 year showed a positive correlation with the rank-list percentile generated using a structured interview process [ 9 ].

Creating Effective Structured Interviews

To be effective, standardized interview questions should be designed in a methodical manner. The first step in standardizing the interview process is determining which core values predict resident success in a particular program. To that end, educational leaders and faculty within the department should come to a consensus on the main qualities they seek in a resident. From there, questions can be formatted to elicit those traits during the interview process. Some programs have used personality assessment inventories to establish these qualities. Examples include openness to experience, humility, conscientiousness, and honesty. Further program-specific additions can be included, such as potential for success in an urban versus rural environment [ 10 ].

Once key attributes have been chosen and questions have been selected, a scoring rubric can be created. The scoring of each question is important as it helps define what makes a high-performing versus low-performing answer. Once a scoring system is determined, interviewers can be trained to review the questions, score applicant responses, and ensure they do not revise the questions during the interview [ 11 ]. Questions and the grading rubric should be further scrutinized through mock interviews with current residents, including discussing responses of the mock interviewee and modifying the questions and rubric prior to formal implementation [ 12 ]. Interviewer training itself is critical, as adequate training leads to improved interrater agreements [ 13 ]. Figure  1 demonstrates the steps to develop a behavioral interview question.

Fig. 1 Example of standardized question to evaluate communication with scoring criteria
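One way a program might check whether interviewer training and the rubric are working is to measure chance-corrected agreement between raters who scored the same mock interviews. The metric choice below (Cohen's kappa) is our assumption; the review only states that training improves interrater agreement.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same candidates.

    Inputs are parallel lists of categorical rubric scores (e.g., 1-5).
    Kappa = 1 means perfect agreement; 0 means agreement no better than
    chance given each rater's own score distribution.
    """
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("raters must score the same nonempty candidate list")
    n = len(rater_a)
    # Observed agreement: fraction of candidates scored identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal counts
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

A program could compute this after each training round on the same set of mock-interview responses; rising kappa would suggest the rubric and training are converging interviewers toward consistent scoring.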

Rating applicants' responses can introduce errors that ultimately reduce validity. For example, central tendency error involves interviewers rating no applicants at the extremes of the scale, placing everyone in the middle; leniency versus severity refers to interviewers who give all applicants high marks or all applicants low marks; contrast effects involve comparing one applicant to another rather than scoring each interviewee solely against the rubric. These rating errors underscore the importance of training interviewers and giving them feedback [ 4 ].
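Leniency and severity errors, in particular, lend themselves to a simple screen: compare each interviewer's average rubric score with the panel average. A hypothetical sketch (the function name and the one-standard-deviation threshold are illustrative assumptions):

```python
from statistics import mean, pstdev

def flag_rating_errors(scores_by_interviewer, z_threshold=1.0):
    """Flag interviewers whose mean score drifts far from the panel mean.

    A large positive deviation suggests leniency (everyone rated high);
    a large negative one suggests severity. Flagged interviewers could
    then receive targeted feedback, per the training loop described above.
    """
    all_scores = [s for scores in scores_by_interviewer.values() for s in scores]
    panel_mean, panel_sd = mean(all_scores), pstdev(all_scores)
    flags = {}
    for name, scores in scores_by_interviewer.items():
        z = (mean(scores) - panel_mean) / panel_sd if panel_sd else 0.0
        if z > z_threshold:
            flags[name] = "lenient"
        elif z < -z_threshold:
            flags[name] = "severe"
    return flags
```

Central tendency error would need a different check, such as flagging an interviewer whose own score spread is unusually narrow relative to the panel's.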

Blinded Interviewers

Blinding the interviewers to the application prior to meeting with a candidate is intended to eliminate various biases within the interview process (Table 2 ) [ 14 , 15 ]. In addition to grades and test scores, aspects of the application that can either introduce or exacerbate bias include photographs, demographics, letters of recommendation, selection to medical honor societies, and even hobbies. Impressions of candidates can be formed prematurely, with the interview then serving to simply confirm (or contradict) those impressions [ 16 •]. Importantly, application blinding may also decrease implicit bias against applicants who identify as underrepresented in medicine [ 17 ].

Despite the proven success of these various interview tactics, their use in resident selection remains limited, with only 5% of general surgery programs using standardized interview questions and less than 20% using even a limited amount of blinding (e.g., blinding of photograph) [ 2 ]. Some programs have continued to rely on unblinded interviews and prioritize USMLE scores and course grades in ranking [ 18 ]. Due to their potential benefits and ability to standardize the interview process, it is critical that programs become familiar with the various interview practices so that they can select the best applicants while minimizing the significant bias in traditional interview formats.

Multiple Mini-interview (MMI)

The use of multiple interviews by multiple interviewers provides an opportunity to ask the applicant more varied questions and also allows potential interviewer bias to average out, leading to more consistent applicant scoring and a better ability to predict applicant success [ 7 ]. Training interviewers in interviewing techniques, scoring, and avoiding bias is also likely to decrease scoring variability. Similarly, using the same group of interviewers for all candidates should be encouraged in order to limit variance in scoring among faculty [ 19 ].
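The averaging-out effect is easy to demonstrate with a toy simulation (all magnitudes are illustrative assumptions, not data from the review): each interviewer carries a fixed personal bias, and the average of a panel's scores lands closer to an applicant's "true" quality than any single interviewer's score does.

```python
import random
from statistics import mean

def panel_vs_single(n_applicants=200, n_interviewers=8, panel_size=4, seed=1):
    """Compare rating error for one interviewer versus a panel average.

    Each interviewer has a fixed personal bias; each applicant is seen
    by a randomly drawn panel. Returns the mean absolute error of a
    single interviewer's score and of the panel average. Illustrative only.
    """
    rng = random.Random(seed)
    biases = [rng.gauss(0, 1.0) for _ in range(n_interviewers)]
    single_err, panel_err = [], []
    for _ in range(n_applicants):
        true_quality = rng.gauss(5, 1.0)
        panel = rng.sample(range(n_interviewers), panel_size)
        ratings = [true_quality + biases[i] + rng.gauss(0, 0.3) for i in panel]
        single_err.append(abs(ratings[0] - true_quality))      # one interviewer
        panel_err.append(abs(mean(ratings) - true_quality))    # panel average
    return mean(single_err), mean(panel_err)
```

Under these assumptions the panel average tracks true quality noticeably better than any one interviewer, which is the statistical intuition behind the MMI format discussed next.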

One interview method that incorporates multiple interviewers and has been used with growing frequency in both medical school and residency interviews is the MMI model. This system provides multiple interviews in the form of 6–12 stations, each of which poses a non-medical question designed to assess specific non-academic applicant qualities [ 20 ]. While the MMI format can intimidate some candidates, others find that it provides an opportunity to demonstrate traits that would not be observed in an unstructured interview, such as multitasking, efficiency, flexibility, interpersonal skills, and ethical decision-making [ 21 ]. Furthermore, the MMI has shown increased reliability: in a study of five California medical schools, inter-interviewer consistency was higher for MMIs than for traditional interviews, which were unstructured and had a 1:1 interviewer-to-applicant ratio [ 22 ].

The MMI format is also versatile enough to incorporate technical competencies even through a virtual platform. In general surgery interviews, MMI platforms have been designed to test traits such as communication and empathy but also clinical knowledge and surgical aptitude through anatomy questions and surgical skills (knot tying and suturing). Thus, MMIs are not only versatile, but also have an ability to evaluate cognitive traits and practical skills [ 23 ].

The MMI also has the potential to reduce resident attrition. For example, in students applying to midwifery programs in Australia, attrition rates and grades were compared before and after MMIs were incorporated into the selection program, which had previously relied on academic rank alone. The authors found that with MMIs, enrolled students had higher grades and significantly lower attrition rates; the MMI was better suited to revealing applicants' passion and commitment, fostering a shared mindset among accepted applicants as well as a support network [ 24 ]. Furthermore, attrition rates have been found to be higher in female residents in general surgery programs [ 25 ]. Because standardized interviews are associated with greater diversity, they may help increase the number of women in surgical specialties and thus reduce attrition in this setting as well.

Impact of Interview Best Practices on Bias and Diversity

An imperative of all training programs is to produce a cohort of physicians with broad and diverse experiences representative of the patient populations they treat. To better address diversity within surgical residencies, particularly regarding women and those who are underrepresented in medicine, it is important that interviews be designed to minimize bias against any one portion of the applicant pool. Diverse backgrounds and cultures within a program enhance research, innovation, and collaboration as well as benefit patients [ 26 ]. Patients have shown greater satisfaction and reception when they share ethnicity or background with their provider, and underrepresented minorities in medicine often go on to work in underserved communities [ 27 ].

All interviewers undoubtedly have elements of implicit bias; Table 2 describes the common subtypes [ 14 ]. While it is difficult to eliminate bias in the interview process, unstructured or “traditional” interviews are more likely than structured interviews to introduce bias against candidates. Studies have demonstrated that Hispanic and Black applicants receive scores one quarter of a standard deviation lower than Caucasian applicants [ 28 ]. “Like me” bias is just one example of the increased subjectivity of unstructured interviews: interviewers prefer candidates who look like, speak like, or share personal experiences with them [ 29 ].

Furthermore, unstructured interviews provide opportunities to ask inappropriate or illegal questions, including those that center on religion, child planning, and sexual orientation [ 30 ]. Inappropriate questions tend to be disproportionately directed toward certain groups, with women more likely than their male counterparts to be asked about marital status and to be questioned and interrupted [ 28 , 31 ].

Structured interviews, conversely, have been shown to decrease bias in the application process. In fellowship applications, faculty trained in behavior-based interviewing showed reduced racial bias in candidate evaluations, attributable to scoring rubrics [ 12 ]. Furthermore, because structured questions are determined before the interview and interviewers are trained, structured interviews are less prone to illegal and inappropriate questions [ 32 ]. Interviewers can ask follow-up questions such as “could you be more specific?”, with the caveat that probing should be minimized and kept consistent between applicants, reducing the risk of prompting the applicant toward a particular response [ 4 ].

Implementing Interview Types During the Virtual Interview Process

An added complexity to creating standardized interviews is incorporating a virtual platform. Even prior to the move toward virtual interviews instituted during the COVID-19 pandemic, studies showed that virtual interviews provide several advantages over in-person interviews, including decreased cost, less time away from commitments for applicants and staff, and the ability to interview at more programs. A significant limitation, for applicants and for programs, is the loss of informal interaction, which would otherwise allow applicants to evaluate the environment of the hospital and the surrounding community [ 33 •]. Following their abrupt implementation in 2020, virtual interviews have remained in place and will likely persist in some form because of their significant benefits in reducing applicant cost and improving interview efficiency. Although these interviews are in their relative infancy in the resident selection process, studies have found that standardized questions and scoring rubrics used in person can be applied in a virtual setting without degrading interview quality [ 34 ].

The virtual format may also allow for further interview innovation in the form of standardized video interviews. For medical student applicants, the Association of American Medical Colleges (AAMC) trialed a standardized video interview (SVI) in which applicant responses were recorded, scored, and released with the Electronic Residency Application Service (ERAS) application. Though early data from the pilot were promising, the program was discontinued after the 2020 cycle due to lack of interest [ 35 ]. There is limited evidence supporting the utility of this type of interview in residency training: one study found that SVIs did not add significant benefit, as scores did not associate with other candidate attributes such as professionalism [ 32 ]. Similarly, a separate study found no correlation between standardized video interviews and faculty scores on traits such as communication and professionalism, though there was no standardization in what the faculty asked and they were not blinded to the applicants' academic performance [ 36 ]. While an evaluation of six emergency medicine programs demonstrated a positive linear correlation between the SVI score and the traditional interview score, the correlation coefficient was very low; thus the authors concluded that the SVI was not adequate to replace the interview itself [ 37 ].

Conclusions: Future Steps in Urology and Beyond

The shift to structured interviews in urology has been slow. Within the last decade, studies consistent with other specialties demonstrated that urology program directors prioritized USMLE scores, reference letters, and away rotations at the program director’s institution as the key factors in choosing applicants [ 38 ]. More recently, a survey of urology programs found < 10% blinded the recruitment team at the screening step, with < 20% blinding the recruitment team during the interview itself [ 39 ]. In 2020 our program began using structured interview questions and blinded interviewers to all but the personal statement and letters of recommendation. After querying faculty and interviewees, we have found that most interviewers do not miss the additional information, and applicants feel that they are able to have more eye contact with faculty who are not looking down at the application during the interview. Structured behavioral interview questions have allowed us to focus on the key attributes important to our program. With time we hope to see that inclusion of these metrics helps diversify our resident cohort, improve resident satisfaction with the training program, and produce successful future urologists.

Despite the slow transition in urology and other fields, there is a growing body of literature in support of standardized interviews for evaluating key candidate traits that ultimately lead to resident success and reducing bias while increasing diversity. With time, the hope is that programs will continue incorporating these types of interviews in the resident selection process.

References

Papers of particular interest, published recently, have been highlighted as: • Of importance

Altmaier EM, et al. The predictive utility of behavior-based interviewing compared with traditional interviewing in the selection of radiology residents. Invest Radiol. 1992;27(5):385–9.

Kim RH, et al. General surgery residency interviews: are we following best practices? Am J Surg. 2016;211(2):476-481.e3.

Stephenson-Famy A, et al. Use of the interview in resident candidate selection: a review of the literature. J Grad Med Educ. 2015;7(4):539–48.

Best practices for conducting residency interviews. Association of American Medical Colleges; 2016.

Black C, Budner H, Motta AL. Enhancing the residency interview process with the inclusion of standardised questions. Postgrad Med J. 2018;94(1110):244–6.

Hartwell CJ, Johnson CD, Posthuma RA. Are we asking the right questions? Predictive validity comparison of four structured interview question types. J Bus Res. 2019;100:122–9.

Beran B, et al. An analysis of obstetrics-gynecology residency interview methods in a single institution. J Surg Educ. 2019;76(2):414–9.

• Marcus-Blank B, et al. Predicting performance of first-year residents: correlations between structured interview, licensure exam, and competency scores in a multi-institutional study. Acad Med. 2019;94(3):378–87.  Authors administered 18 behavioral structured interview questions (SI) to measure key noncognitive competencies across 14 programs (13 residency, 1 fellowship) from 6 institutions to determine correlation first-year resident milestone performance in the ACGME's core competency domains and overall scores. They found SI scores predicted midyear and year-end overall performance and year-end performance on patient care, interpersonal and communication skills, and professionalism competencies and that SI scores contributed incremental validity over USMLE scores in predicting year-end performance on patient care, interpersonal and communication skills, and professionalism.

Olawaiye A, Yeh J, Withiam-Leitch M. Resident selection process and prediction of clinical performance in an obstetrics and gynecology program. Teach Learn Med. 2006;18(4):310–5.

Prystowsky MB, et al. Prioritizing the interview in selecting resident applicants: behavioral interviews to determine goodness of fit. Academic pathology. 2021;8:23742895211052884–23742895211052884.

Breitkopf DM, Vaughan LE, Hopkins MR. Correlation of behavioral interviewing performance with obstetrics and gynecology residency applicant characteristics. J Surg Educ. 2016;73(6):954–8.

Langhan ML, Goldman MP, Tiyyagura G. Can behavior-based interviews reduce bias in fellowship applicant assessment? Acad Pediatr. 2022;22(3):478–85.

Gardner AK, D’Onofrio BC, Dunkin BJ. Can we get faculty interviewers on the same page? An examination of a structured interview course for surgeons. J Surg Educ. 2018;75(1):72–7.

Oberai H, Ila Mehrotra A. Unconscious bias: thinking without thinking. Hum Resour Manag Int Dig. 2018;26(6):14–7.

Hull L, Sevdalis N. Advances in teaching and assessing nontechnical skills. Surg Clin North Am. 2015;95(4):869–85.

• Balhara KS, et al. Navigating bias on interview day: strategies for charting an inclusive and equitable course. J Grad Med Educ. 2021;13(4):466–70. Strategies for decreasing bias in the interview process based on best practices from medical and corporate literature, cognitive psychology theory, and the authors' experiences. Provides simple, actionable and accessible strategies for navigating and mitigating the pitfalls of bias during residency interview

Haag J, et al. Impact of blinding interviewers to written applications on ranking of Gynecologic Oncology fellowship applicants from groups underrepresented in medicine. Gynecol Oncol Rep. 2022;39: 100935.

Kasales C, Peterson C, Gagnon E. Interview techniques utilized in radiology resident selection-a survey of the APDR. Acad Radiol. 2019;26(7):989–98.

Levashina J, et al. The structured employment interview: narrative and quantitative review of the research literature. Pers Psychol. 2014;67(1):241–93.

Al Abri R, Mathew J, Jeyaseelan L. Multiple mini-interview consistency and satisfactoriness for residency program recruitment: Oman evidence. Oman Med J 2019;34(3):218–223.

Boysen-Osborn M, et al. A multiple-mini interview (MMI) for emergency medicine residency admissions: a brief report and qualitative analysis. J Adv Med Educ Prof. 2018;6(4):176–80.

Jerant A, et al. Reliability of multiple mini-interviews and traditional interviews within and between institutions: a study of five California medical schools. BMC Med Educ. 2017;17(1):190.

Lund S, et al. Conducting virtual simulated skills multiple mini-interviews for general surgery residency interviews. J Surg Educ. 2021;78(6):1786–90.

Sheehan A, et al. The impact of multiple mini interviews on the attrition and academic outcomes of midwifery students. Women Birth. 2022;35(4):e318–27.



About this article

Bergelson, I., Tracy, C. & Takacs, E. (Department of Urology, University of Iowa, Iowa City, USA). Best Practices for Reducing Bias in the Interview Process. Curr Urol Rep 23, 319–325 (2022). https://doi.org/10.1007/s11934-022-01116-7. Accepted 19 July 2022; published 12 October 2022.



How to reduce bias in interviews

Hiring the best people for your organization requires removing bias from your interviews. Find out what these biases are, why reducing them matters, and how to address them at every touchpoint.

At every step of the candidate lifecycle, there are opportunities to promote inclusive hiring practices. And yet, despite your talent acquisition team’s efforts to proactively recruit and select a pool of diverse candidates, interviewer bias can derail the entire process – ultimately hindering your organization’s ability to hire employees with myriad backgrounds and experiences.

To help you overcome the challenges interviewer bias presents, we’ve taken a closer look at the different types of bias, why you should aim to avoid bias, and shared our tips for how to reduce bias during the interview process.

What are the types of bias?

There are a number of ways biases present themselves in an interview setting.

Unconscious bias, or implicit bias

This is one of the most common types of biases. It refers to the opinions you form about a person or situation – in this case, a candidate interviewing for your organization – without knowing you’re doing so.

Your bias towards the candidate is formed by your experiences and your knowledge (or opinions) of social norms, stereotypes, cultures, attitudes, and more. While your experiences sometimes serve you well in making decisions, unconscious bias also distorts how you perceive people who aren’t like you, often skewing your judgment toward your expectations and preferences instead of keeping you open-minded.

There are a number of other biases that impact your ability to interview a candidate with an open mind.

Affinity bias

This is the inclination to favor a candidate who is most like you, which impairs your ability to see the value in those who aren’t.

Recency bias

This means you’re more likely to favor the candidate you most recently interviewed.

Halo effect

This occurs when you focus on one particularly great feature about a person and neglect others – including those that are negative.

The horn effect

This is just the opposite: allowing a single negative detail to overshadow a candidate’s positive qualities.

Gender bias

A preference for one gender over another can cause you to unconsciously favor a candidate based on their gender and the qualities you associate with it.

Attribution bias

This describes how you perceive your own actions as well as those of others. It stems from our brain’s flawed ability to assess the reasons for certain behaviors, particularly those that lead to success and failure. In general, we attribute our own accomplishments to our skills and abilities, and our failures to external factors.

The reverse is true for others, especially people we don’t know, such as job candidates. We tend to minimize their accomplishments or attribute them to luck, but attribute career misses to skill deficits.

Confirmation bias

This refers to how we often search for evidence that aligns with our own opinions, rather than considering the whole picture or person. This often leads to overlooking other information and instead focusing on things that fit your view of a candidate.

Get the HR leader’s guide: Applying diversity, equity, and inclusion to your employee experience program

Why reduce bias?

When we spoke to people as part of our global study of more than 11,800 participants at the end of 2020, a sense of belonging emerged as the strongest driver of employee engagement – ahead of typical drivers like trust in leadership and ability for career growth. Belonging is a core element of inclusion, along with feeling as though you can be yourself at work, and that your organization is a place where everyone can succeed to their full potential, no matter who they are.

We know a culture of equity and inclusion is not only critical to the success of diversity efforts, but creating an equitable and inclusive workplace also creates a positive employee experience .

Organizations that have had diversity, equity, and inclusion (DEI) strategies in place for an extended period of time have reported positive business outcomes, such as:

+ Diverse teams are more innovative and capable of solving complex problems

+ Companies with gender diverse boards have superior financial outcomes

+ Inclusive managers and psychological safety support team effectiveness

+ DEI is highly connected to employee engagement , job satisfaction, and retention

+ Diversity and inclusion impact company reputation and risk management

There is not only strong moral value in building a DEI program – working to eliminate bias and systemic equity issues around gender and race – there’s measurable business value, as well.

Read more: 8 expert tips for fostering equity in the workplace

10 ways to reduce bias in interviews

Get started addressing the biases that hinder your organization’s ability to foster a workplace where everyone belongs.

Here are several ways to reduce bias in your interview process:

  • Educate yourself about bias at work by seeking out resources such as books and articles authored by members of underrepresented communities – and include myriad perspectives and experiences within those communities, as well.
  • Understand and talk about the benefits of hiring diverse candidates and fostering an inclusive workplace environment – many of which we mentioned above.
  • Consider culture add over culture fit. This means interviewing and hiring employees that not only align with your company’s values but also bring diverse experiences and backgrounds to the table, too.
  • Source more intentionally. Be strategic about where you’re posting open positions. Go beyond the homogeneous networks to tap into diverse talent pipelines you might have previously ignored and/or didn’t realize existed.
  • Conduct panel interviews to mitigate any one individual interviewer’s biases and allow for various perspectives within the interview process.
  • Create structured interview guides with the same questions, asked in the same order, for each candidate.
  • Ask open-ended behavior-based questions that give you more context, as well as insights to actions, critical thinking skills, and results.
  • Use a scorecard to rate candidates consistently and document their abilities and competencies to do the job.
  • Dedicate time for candidates to ask their own questions during the interview. This helps establish your organization as a place where everyone is encouraged to contribute – a foundational component of belonging.
  • Ask for candidate feedback . Candidates’ experience interviewing at your company will reveal potential gaps and opportunities for improving the process.
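Several of the tips above, such as structured questions, consistent scorecards and panel interviews, can be combined into a simple scoring routine. The sketch below is a minimal illustration in Python; the criteria names and the 1–5 rating scale are hypothetical examples, not a prescribed rubric:

```python
from statistics import mean

# Hypothetical criteria -- replace with the competencies for your role.
CRITERIA = ["problem_solving", "communication", "role_knowledge"]

def score_candidate(ratings: dict) -> float:
    """Average each panel member's 1-5 ratings per criterion,
    then average across criteria for one comparable score."""
    per_criterion = [mean(ratings[c]) for c in CRITERIA]
    return round(mean(per_criterion), 2)

# Two panel interviewers rated the same candidate on the same criteria,
# asked in the same order, using the same scorecard.
candidate = {
    "problem_solving": [4, 5],
    "communication": [3, 4],
    "role_knowledge": [5, 4],
}
print(score_candidate(candidate))  # -> 4.17
```

Because every candidate is rated on the same criteria by the same panel, scores can be compared directly, rather than relying on each interviewer's overall impression.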


What is Interview Bias and How to Avoid It


When it comes to hiring the right person, there can be a lot of stress involved in the recruitment process, from sifting through CVs to working through assessments and reports to track down the ideal candidate. This is compounded by the fact that in the past you might have missed out on the perfect candidate because of something called interview bias.

We all carry some biases in our subconscious, and interviewers are no exception. Interview bias can make hiring the right candidate more difficult because it interferes with objectivity and clouds the judgment of the person conducting the interview.

In this guide we take a closer look at interview bias: what it is, the different types, and how to reduce it when making hiring decisions.

What is interview bias?

Interviewer bias is where the expectations or opinions of the interviewer interfere with their judgment of the interviewee. This can affect the outcome positively or negatively, and these preconceptions can influence judgment both consciously and unconsciously.

For example, an interviewer may decide that the candidate wasn’t a good fit for the organization because their handshake wasn’t strong enough at the start of the interview, or because not enough eye contact was made when answering questions. These are extreme but common examples of interview bias leading to negative outcomes.

Another form of bias may be that the interviewer feels some sort of affinity towards the interviewee because they like the same football team or share a similar point of view. It’s important to note that some interviewees will answer questions in a way intended to please the interviewer, which compounds the bias further.

Understanding interviewer bias

Interview bias can manifest in different ways, not least when the interviewer phrases questions in language that exhibits bias, or raises topics geared towards personal preferences rather than the role itself. For example, the interviewer may bring up what has recently happened in the news; unless you’re interviewing for a news organization, this is loaded with bias, because the answers invite personal opinions that have nothing to do with the job.

Of course, interviewer bias can also stem from reading body language, facial expressions and so on. These are preconceived notions that have developed over years in the interviewer, and whilst many would consider themselves to have little bias, that is rarely the case. After all, we are all human; we all develop these ideas over time, and we are all subject to them.

The biggest point to note is that interview bias refers to how responses from participants are affected by aspects of the interviewer. From a handshake to opinions on a particular area of concern, how the interviewer perceives these things can make or break a person’s chances of being hired, all because the interviewer sees them differently.

How can bias affect a job interview?

Interview bias can work against or in favor of a particular candidate over another, and this is where bias can play a significant role in both the interview and the resulting selection. You must take measures, such as those discussed in this guide, to limit bias and remove it from the decision-making process.

What are the types of bias that can affect interviews? Here are some of the most common forms of bias. 

Stereotyping

These are generalized opinions formed over time about how people from a given gender, religion or race think, act, feel, or respond. Example: Presuming that a woman would prefer a desk job over working in engineering is a form of stereotyping bias.

Inconsistent questioning

This is where different questions are asked of different candidates. Example: You may ask a Caucasian male candidate to describe his university experience, while asking a candidate who is a person of color only about work experience.

Negative emphasis

Based on a small amount of negative information, you reject the candidate. Interviewers tend to weigh negative information roughly twice as heavily as favorable information. Negative emphasis generally arises from subjective factors such as dress or nonverbal communication, which can taint the interviewer’s judgment.

Halo effect

The halo effect is where the interviewer allows one point they personally view as strong to overshadow all the other information presented in the interview. When it works in favor of the candidate, it is known as the halo effect; when it works in the opposite direction (the interviewer judges the candidate unfavorably in all areas on the basis of one trait), it is called the horn effect.

Cultural noise

Cultural noise is the failure to recognize that a candidate’s responses are socially acceptable rather than factual. For example, the candidate may give responses that are "politically correct" but not very revealing. An employer may comment, "I note that you are applying for a role that has more working hours. How do you feel about that?" The applicant might say this is fine even when it is not.

Types of interview bias

There is not just one type of interview bias, there are plenty and whilst we have covered just a few in the previous sections, it is a good idea to understand just what those types of biases are.

We are going to take a closer look at the different types of interview bias and uncover more in each one.

Stereotype bias

This is a generalized belief about a group of people, where the interviewer’s judgment of the candidate is clouded by their social category rather than the skills or competencies of the interviewee. It could be that the position requires longer working hours than normal and, if there are childcare commitments, a female candidate may be excluded before the interview stage. Other examples of stereotype bias include:

  • Elderly people
  • Disabled people
  • Rich people
  • Gender for the role (e.g. assuming a man can’t be a receptionist)
  • Poor people

Confirmation bias

Confirmation bias is something we are all guilty of. In recruitment, it occurs when the interviewer asks questions designed to elicit responses that support preconceived notions about a particular candidate. This ultimately means the interviewer is only concerned with confirming an idea of the candidate they already hold, whether formed from the CV or application, or from the moment the interviewer first meets the candidate, when another form of bias may have crept in.

Social desirability bias

Social desirability bias, or cultural noise bias as it is otherwise called, is when the interviewee changes their answers to be more desirable from a cultural perspective rather than expressing their own true thoughts.

Recency bias

Recency bias can occur when the interviewer bases their assessment on recent events rather than a wider period of time, because memories of the most recent interview candidates are stronger. It is sometimes called contrast-effect bias, wherein interviewers compare candidates with the preceding interviewee.

Gender and racial bias

Gender and racial bias are self-explanatory. In short, this is when the interviewer holds a general view about a certain gender or race, or believes the role is not suitable for either because of preconceived ideas and notions. It is critical that no interviewer be influenced by prejudice, from both a moral standpoint and a legal one.


Similarity bias

Similarity bias arises when interviewers and candidates discuss hobbies they share or display similar traits in an interview. Hiring decisions based on these similarities, rather than a candidate’s qualifications, may be the result of similarity bias.

Nonverbal bias

Both interviewers and interviewees communicate non-verbally, through body language and eye contact, as well as verbally. When an interviewer focuses more on the nonverbal aspects than on the skills of the interviewee, this is known as nonverbal bias.

Halo bias

This refers to when one single characteristic overshadows all the others for the interviewer. For example, it could be where the candidate went to school or, in some cases, how good-looking they are. This gives the interviewer a positive impression of all the candidate’s skillsets rather than just one area.

Horn bias

This is in effect the opposite of halo bias. For example, a candidate may spell something incorrectly on their CV, giving the interviewer a negative impression of all the candidate’s skillsets rather than just one area.

What is an example of interview bias?

There are plenty of examples where we can dive deeper into interview bias. Here is one such example that explains affinity bias. 

Affinity bias is one of the most common types of interview bias. This is where there is an affinity between the interviewer and the candidate, and therefore the candidate is viewed in a better light than they should be. Traditionally this could stem from something on the CV, such as having attended the same university, or having had the same manager many years apart.

However, affinity bias can also arise in a moment within the interview itself. Let’s say a candidate is interviewed on a Monday and the interviewer begins with, "How was your weekend?" The response is something like, "Good, thank you. I went for a bike ride and did some trails." The hiring manager is equally a keen cyclist, and so they get along. Whether intentional or not, the hiring manager now has a favorable view of the candidate, despite the fact that no work-related evidence has been presented yet.

Red flags in the rest of the interview are quickly dismissed, and the candidate’s positive characteristics are emphasized even further.

With something like affinity bias, it is better to use blind CVs, which limit the amount of personal information one can gather, and equally not to ask personal questions before, during or after the interview.

Another good example of interview bias is the horn effect.

As previously mentioned, the horn effect is the opposite of the halo effect. So how does it manifest in an interview scenario? The horn effect means that the candidate has appeared "bad" once, perhaps briefly, on one item of the interview, and the interviewer has made up their mind. Anything else, however accurate the answer may be, will be dismissed or downplayed.

Finally, another common type of bias is attribution bias. Similar to confirmation bias but with a twist, it is another form of cognitive bias: the interviewer makes up reasons for facts and for things that happen to the candidate, instead of looking at the facts objectively. They try to craft explanations, or more accurately, to invent them.

An example would be the following: the candidate shows up late to an interview, and the interviewer decides the candidate came late because they do not care about the role. The truth, however, is that the interviewer has no way to know for certain whether the candidate cares, because they cannot read minds. Attribution bias nevertheless leads the interviewer to treat the behavior as if it implied a specific cause.

This scenario is dangerous because the interviewer paints a picture of the candidate based on their own beliefs (attribution) and then frames the rest of the interview so that the candidate lives up to that faulty image (confirmation).

How to reduce bias in interviews

There are many types of bias, as we have discussed; however, it is equally important to understand that removing bias from the interview process helps interviewers correctly identify the best candidates and remain objective.

How can interviewers remove bias from the interview process? Here are some suggestions:

Use an interview guide

This is a document put together to provide structure for the interviewing process. It helps both the interviewer and the organization interview in a consistent and compliant manner, which should ensure all candidates get the same treatment at interview.

Furthermore, this helps interviewers know what to ask and in what order, providing the same candidate experience for all applicants. Whilst the questions may change based on the industry or the requirements of the job, an interview guide helps ensure candidates for a role are treated equally.

Use standardized questions and scoring criteria

Using standardized questions and scoring criteria removes many different kinds of bias from the interview process. You can develop a scoring system, but it needs to be standardized across every interview. By doing so, you bring clarity to the decision-making process, basing it solely on the information gathered in the interview rather than on other potential influences.

Provide interviewer training

Interviewers must have training on equality and diversity, including how to avoid their own unconscious biases. Not only will this help to minimize the impact of hidden intolerances and prejudices but it will also provide a fairer system for all the candidates being interviewed. 

Training should include things such as:

  • How to avoid asking irrelevant questions that can lead to a bias being made on the character of the person
  • Recognizing how assumptions can be made about applicants
  • Keeping an impartial and open mind and not focusing on things such as looks or body language to affect the evaluation of the candidate

Use anonymized skills based testing

Where possible, you can keep many aspects of the candidate selection anonymous. For example in a skills assessment, you can remove the potential for bias and aid recruitment in decision making by removing things such as name, date of birth and even ethnicity from the records. Keeping this information anonymous allows you to thoroughly assess the skills of the candidate without coloring any judgments.
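As a concrete illustration of the idea, identifying details can be stripped from candidate records before they reach reviewers. This is a minimal sketch; the field names below are hypothetical and would need to match your own applicant data:

```python
# Fields to withhold from reviewers -- hypothetical examples.
PII_FIELDS = {"name", "date_of_birth", "ethnicity", "gender"}

def anonymize(record: dict) -> dict:
    """Return a copy of the record with identifying fields removed,
    so the skills assessment can be scored blind."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

record = {
    "name": "A. Candidate",
    "date_of_birth": "1990-01-01",
    "ethnicity": "not disclosed",
    "skills_test_score": 87,
}
print(anonymize(record))  # -> {'skills_test_score': 87}
```

Reviewers then see only the assessment results, not who produced them.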

Use multiple, diverse interviewers

We are of course all naturally biased. This means we need to remove bias at different stages of the interview process, and in some cases use more than one method to do so. One way is to use multiple interviewers, so that the potential for any single interviewer’s bias to affect the process is reduced.

One interviewer will have less bias in one area than another, and vice versa. By opening up the pool of interviewers, you allow more of the candidate’s skills to shine through.


Take notes throughout interviews

Real-time note-taking helps minimize the chance of bias. Why? Because notes written after the meeting can be tinged with opinions or ideas about the candidate, which have no place in the decision-making process. Keeping accurate records throughout the interview helps identify the skills and competencies of the candidate in a clear and concise manner, removing bias along the way.

Minimize unrelated discussion

Whilst there is always room for a bit of small talk to help the candidate feel at ease with the process, letting that small talk become the topic of conversation can contribute to bias in the interview.

Keeping the small talk small is essential. This is where sticking to a script and using a marking method can help structure the interview process and limit the level of potential bias.

Use an assessment matrix

You can use an assessment matrix to assess candidates against a number of different criteria, without any form of bias creeping in. Drawing on the job description, the person specification and the agreed weight given to each criterion, an interviewer can ensure that all applicants are assessed objectively, and solely on their ability to do the job satisfactorily.

This helps ensure that every hiring decision is based on reason and evidence, rather than opinion and potentially discriminatory bias.
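The arithmetic behind such a matrix is straightforward: each criterion’s score is multiplied by its agreed weight and the results are summed. A minimal sketch, assuming hypothetical criteria, weights and a 1–5 scale:

```python
# Agreed weights per criterion -- hypothetical; they should sum to 1.0.
WEIGHTS = {"technical_skills": 0.4, "communication": 0.3, "teamwork": 0.3}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (1-5) into one weighted total."""
    return round(sum(WEIGHTS[c] * s for c, s in scores.items()), 2)

candidate_a = {"technical_skills": 4, "communication": 5, "teamwork": 3}
candidate_b = {"technical_skills": 5, "communication": 3, "teamwork": 4}
print(weighted_score(candidate_a))  # -> 4.0
print(weighted_score(candidate_b))  # -> 4.1
```

Because the weights are agreed before any interviews take place, two candidates with different strengths are compared on the same evidence rather than on an interviewer’s gut feel.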

Recruit broadly

Where you source your candidates from can easily influence the kinds of bias at play. For example, if you’re only getting candidates from one job board, it may be worthwhile opening up the application process through job fairs or by recruiting at higher education establishments.

This opens up the variety of candidates from different areas, helping your interviewers see the broadest possible range of candidates, which in turn supports a less biased evaluation of each interviewee.

When are interviews not the best option?

Sometimes interviews are not the best option, and it’s easy to forget that they are not the only way of gathering information, depending on the role you are hiring for. As an example, large-scale phone interviews can be time-consuming and expensive, whilst mailed questionnaires may be the best option when it comes to getting information from a large number of people.

Here are some other reasons why interviews are not always the best option:

  • Phone interviews/screenings for a large candidate base are time-consuming, and good candidates could be ruled out in the early stages due to bias; questionnaires and assessments can help.
  • Times when only numeric data needs to be collected; forms can be used instead.
  • Some candidates may not perform well in interviews, and interviewer bias could prevent you from seeing a candidate’s true potential.
  • If respondents are unwilling to cooperate, interviews will not be suitable.
  • If a candidate has something against your organization, they will not give you the answers you want and may even skew your results. Try to become aware of a candidate’s inclinations early so you can make a sound judgment before interviewing them.
  • Equally, when candidates struggle to talk or communicate effectively, setting up an interview is a waste of time and resources. You should then look for a less direct way of gathering the information you need.

There are also times when, after an interview is over, the interviewee wants to go over the notes and change or edit them to address any concerns the interviewer may have. That’s not always a good option; in fact, it shouldn’t really happen, but sometimes bias can step in and allow it. If the subject you’re addressing involves technical information, you may have the interviewee check the final result, purely for accuracy.

Finally, there is something that isn’t always considered when deciding whether an interview is the best option: do you have a standardized process in place? Research by Schmidt and Hunter has consistently shown that the interview method explains, on average, around 8% of the variation in employee performance. This means its predictive power is limited when it comes to predicting how an employee will perform once in the job.

There are a number of reasons why the interview process has such low predictive power, but the main one is that most interviews are unstructured and unstandardized. Candidates who have applied for the same job are sometimes asked questions that have nothing to do with it, making it difficult to assess their suitability. Another reason is that interviewers are often poorly trained and rarely arrive prepared. Changing this would bring greater consistency to the interview process and, in return, bring out the best from candidates.

There is, however, a solution to many of the problems interviews raise, one that works against the biases mentioned above: conduct interviews and assessments together, so that the process is standardized and better equipped to deal with bias in the first place.
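To make the idea of standardization concrete, here is a minimal sketch of a structured scorecard: every candidate is rated on the same job-relevant dimensions by every interviewer, and scores are averaged so no single interviewer's bias dominates. The rubric dimensions and scoring scale are hypothetical, not a feature of any particular platform:

```python
from statistics import mean

# Hypothetical rubric: each interviewer rates the candidate 1-5
# on the same job-relevant dimensions.
RUBRIC = ["problem_solving", "communication", "role_knowledge"]

def score_candidate(ratings):
    """ratings: one dict per interviewer, keyed by rubric dimension.

    Returns (per-dimension averages, overall average). Averaging across
    raters dampens any one interviewer's idiosyncratic bias."""
    per_dimension = {dim: mean(r[dim] for r in ratings) for dim in RUBRIC}
    overall = round(mean(per_dimension.values()), 2)
    return per_dimension, overall

ratings = [
    {"problem_solving": 4, "communication": 3, "role_knowledge": 5},
    {"problem_solving": 5, "communication": 4, "role_knowledge": 4},
]
dims, overall = score_candidate(ratings)
print(overall)  # → 4.17
```

Because every candidate is scored against the same dimensions, results become comparable across candidates and across interviewers, which is precisely what an unstructured conversation cannot offer.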

Conclusion 

Interview bias refers to the ways in which participants' responses are affected by characteristics of the interviewer; left unchecked, it can lead interviewers to make a bad decision about who should or shouldn't be hired.

We are all biased, and bias takes many different forms at the interview stage, each of which can affect the selection of a great candidate. Limiting these biases in the interview process is essential if the organization is to keep attracting a better crop of candidates for long-term success.

The Thomas Recruitment Platform allows candidates to be evaluated fairly, freely, and without bias from the information provided interfering in their application. It can also help your organization develop value-based questions and analyze results to make hiring decisions easier. Specifically, the interview guide is dynamically generated for each individual who takes our assessments, giving you suggested questions to ask based on the traits and aptitudes you define as most important for the role. Using these questions in an interview, you can get to the heart of each interviewee and understand more about their personality and behavior than you would from a standard interview.

If you would like to know more, please speak to one of our team.
