Interview a subsample of participants prior to initiating the study (validated subsample)
Bias is often not accounted for in practice. Even though a number of adjustment and prevention methods to mitigate bias are available, applying them can be rather challenging due to limited time and resources. For example, measurement error bias properties might be difficult to detect, particularly if there is a lack of information about the measuring instrument. Such information can be tedious to obtain as it requires the use of validation studies and, as mentioned before, these studies can be expensive and require careful planning and management. Although conducting the usual analysis and ignoring measurement error bias may be tempting, researchers should always follow the practice of reporting any evidence of bias in their results.
In order to minimize or eliminate bias, careful planning is needed at each step of the research design. For example, several rules and procedures should be followed when designing self-reporting instruments, and training interviewers is important in minimizing this type of bias. The effect of measurement error, on the other hand, can be difficult to eliminate, since measuring devices and algorithms are often imperfect. A general rule is to verify the accuracy of the measuring instrument before using it for data collection; such checks should greatly reduce any possible defects. Finally, confirmation bias can be reduced if investigators take into account the different factors that can affect human judgment.
Researchers should be familiar with sources of bias in their results, and additional effort is needed to minimize the possibility and effects of bias. Increasing the awareness of the possible shortcomings and pitfalls of decision making that can result in bias should begin at the medical undergraduate level and students should be provided with examples to demonstrate how bias can occur. Moreover, adjusting for bias or any deficiency in the analysis is necessary when bias cannot be avoided. Finally, when presenting the results of a medical research study, it is important to recognize and acknowledge any possible source of bias.
The author reports no conflicts of interest in this work.
Published on March 10, 2022 by Tegan George. Revised on June 22, 2023.
An interview is a qualitative research method that relies on asking questions in order to collect data. Interviews involve two or more people, one of whom is the interviewer asking the questions.
There are several types of interviews, often differentiated by their level of structure.
Interviews are commonly used in market research, social science, and ethnographic research.
Structured interviews have predetermined questions in a set order. They are often closed-ended, featuring dichotomous (yes/no) or multiple-choice questions. While open-ended structured interviews exist, they are much less common. The types of questions asked make structured interviews a predominantly quantitative tool.
Asking set questions in a set order can help you see patterns among responses, and it allows you to easily compare responses between participants while keeping other factors constant. This can mitigate research biases and lead to higher reliability and validity. However, structured interviews can be overly formal, as well as limited in scope and flexibility.
Semi-structured interviews are a blend of structured and unstructured interviews. While the interviewer has a general plan for what they want to ask, the questions do not have to follow a particular phrasing or order.
Semi-structured interviews are often open-ended, allowing for flexibility, but follow a predetermined thematic framework, giving a sense of order. For this reason, they are often considered “the best of both worlds.”
However, if the questions differ substantially between participants, it can be challenging to look for patterns, lessening the generalizability and validity of your results.
An unstructured interview is the most flexible type of interview. The questions and the order in which they are asked are not set. Instead, the interview can proceed more spontaneously, based on the participant’s previous answers.
Unstructured interviews are by definition open-ended. This flexibility can help you gather detailed information on your topic, while still allowing you to observe patterns between participants.
However, so much flexibility means that they can be very challenging to conduct properly. You must be very careful not to ask leading questions, as biased responses can lead to lower reliability or even invalidate your research.
A focus group brings together a group of participants to answer questions on a topic of interest in a moderated setting. Focus groups are qualitative in nature and often study the group’s dynamic and body language in addition to their answers. Responses can guide future research on consumer products and services, human behavior, or controversial topics.
Focus groups can provide more nuanced and unfiltered feedback than individual interviews and are easier to organize than experiments or large surveys. However, their small size leads to low external validity and the temptation as a researcher to “cherry-pick” responses that fit your hypotheses.
Depending on the type of interview you are conducting, your questions will differ in style, phrasing, and intention. Structured interview questions are set and precise, while the other types of interviews allow for more open-endedness and flexibility.
Here are some examples.
Interviews are a great research tool. They allow you to gather rich information and draw more detailed conclusions than other research methods, taking into consideration nonverbal cues, off-the-cuff reactions, and emotional responses.
However, they can also be time-consuming and deceptively challenging to conduct properly. Smaller sample sizes can cause their validity and reliability to suffer, and there is an inherent risk of interviewer effect arising from accidentally leading questions.
Weighing the advantages and disadvantages of each type of interview can help you decide whether you’d like to utilize this research method.
The four most common types of interviews are structured interviews, semi-structured interviews, unstructured interviews, and focus groups.
The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.
There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing high-quality interview questions.
Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys, but is most common in semi-structured interviews, unstructured interviews, and focus groups.
Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.
This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.
A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of the four types of interviews.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.
George, T. (2023, June 22). Types of Interviews in Research | Guide & Examples. Scribbr. Retrieved September 11, 2024, from https://www.scribbr.com/methodology/interviews-research/
Research bias results from any deviation from the truth, causing distorted results and wrong conclusions. Bias can occur at any phase of your research, including during data collection, data analysis, interpretation, or publication. Research bias can occur in both qualitative and quantitative research.
Understanding research bias is important for several reasons.
It is almost impossible to conduct a study without some degree of research bias. It’s crucial for you to be aware of the potential types of bias, so you can minimise them.
For example, the success rate of the program will likely be affected if participants start to drop out. Participants who become disillusioned due to not losing weight may drop out, while those who succeed in losing weight are more likely to continue. This in turn may bias the findings towards more favorable results.
Common types of research bias include actor–observer bias, interviewer bias, response bias, and selection bias.
Actor–observer bias occurs when you attribute the behaviour of others to internal factors, like skill or personality, but attribute your own behaviour to external or situational factors.
In other words, when you are the actor in a situation, you are more likely to link events to external factors, such as your surroundings or environment. However, when you are observing the behaviour of others, you are more likely to associate behaviour with their personality, nature, or temperament.
One interviewee recalls a morning when it was raining heavily. They were rushing to drop off their kids at school in order to get to work on time. As they were driving down the road, another car cut them off as they were trying to merge. They tell you how frustrated they felt and exclaim that the other driver must have been a very rude person.
At another point, the same interviewee recalls that they did something similar: accidentally cutting off another driver while trying to take the correct exit. However, this time, the interviewee claimed that they always drive very carefully, blaming their mistake on poor visibility due to the rain.
Confirmation bias is the tendency to seek out information in a way that supports our existing beliefs while also rejecting any information that contradicts those beliefs. Confirmation bias is often unintentional but still results in skewed results and poor decision-making.
Let’s say you grew up with a parent in the military. Chances are that you have a lot of complex emotions around overseas deployments. This can lead you to over-emphasise findings that ‘prove’ that your lived experience is the case for most families, neglecting other explanations and experiences.
Information bias, also called measurement bias, arises when key study variables are inaccurately measured or classified. Information bias occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall from participants.
The main types of information bias are recall bias, observer bias, performance bias, and regression to the mean (RTM).
Over a period of four weeks, you ask students to keep a journal, noting how much time they spent on their smartphones along with any symptoms like muscle twitches, aches, or fatigue.
Recall bias is a type of information bias. It occurs when respondents are asked to recall events in the past and is common in studies that involve self-reporting.
As a rule of thumb, infrequent events (e.g., buying a house or a car) will be memorable for longer periods of time than routine events (e.g., daily use of public transportation). You can reduce recall bias by running a pilot survey and carefully testing recall periods. If possible, test both shorter and longer periods, checking for differences in recall.
Since the parents are being asked to recall what their children generally ate over a period of several years, there is high potential for recall bias in the case group.
The best way to reduce recall bias is by ensuring your control group will have similar levels of recall bias to your case group. Parents of children who have childhood cancer, which is a serious health problem, are likely to be quite concerned about what may have contributed to the cancer.
Thus, if asked by researchers, these parents are likely to think very hard about what their child ate or did not eat in their first years of life. Parents of children with other serious health problems (aside from cancer) are also likely to be quite concerned about any diet-related question that researchers ask about.
Observer bias is the tendency of research participants to see what they expect or want to see, rather than what is actually occurring. Observer bias can affect the results in observational and experimental studies, where subjective judgement (such as assessing a medical image) or measurement (such as rounding blood pressure readings up or down) is part of the data collection process.
Observer bias leads to over- or underestimation of true values, which in turn compromises the validity of your findings. You can reduce observer bias by using double- and single-blinded research methods.
Based on discussions you had with other researchers before starting your observations, you are inclined to think that medical staff tend to simply call each other when they need specific patient details or have questions about treatments.
At the end of the observation period, you compare notes with your colleague. Your conclusion was that medical staff tend to favor phone calls when seeking information, while your colleague noted down that medical staff mostly rely on face-to-face discussions. Seeing that your expectations may have influenced your observations, you and your colleague decide to conduct interviews with medical staff to clarify the observed events. Note: Observer bias and actor–observer bias are not the same thing.
Performance bias is unequal care between study groups. Performance bias occurs mainly in medical research experiments, if participants have knowledge of the planned intervention, therapy, or drug trial before it begins.
Studies about nutrition, exercise outcomes, or surgical interventions are very susceptible to this type of bias. It can be minimized by using blinding , which prevents participants and/or researchers from knowing who is in the control or treatment groups. If blinding is not possible, then using objective outcomes (such as hospital admission data) is the best approach.
When the subjects of an experimental study change or improve their behaviour because they are aware they are being studied, this is called the Hawthorne (or observer) effect . Similarly, the John Henry effect occurs when members of a control group are aware they are being compared to the experimental group. This causes them to alter their behaviour in an effort to compensate for their perceived disadvantage.
Regression to the mean (RTM) is a statistical phenomenon that refers to the fact that a variable that shows an extreme value on its first measurement will tend to be closer to the centre of its distribution on a second measurement.
Medical research is particularly sensitive to RTM. Here, interventions aimed at a group or a characteristic that is very different from the average (e.g., people with high blood pressure) will appear to be successful because of the regression to the mean. This can lead researchers to misinterpret results, describing a specific intervention as causal when the change in the extreme groups would have happened anyway.
In general, among people with depression, certain physical and mental characteristics have been observed to deviate from the population mean .
This could lead you to think that the intervention was effective when those treated showed improvement on measured post-treatment indicators, such as reduced severity of depressive episodes.
However, given that such characteristics deviate more from the population mean in people with depression than in people without depression, this improvement could be attributed to RTM.
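RTM is easy to demonstrate with a small simulation. In the hypothetical sketch below (plain Python; the model and all names are illustrative assumptions, not taken from the text), each “participant” has a stable true level plus independent measurement noise. Selecting the most extreme decile on a first noisy measurement makes that group look markedly less extreme on a second measurement, with no intervention at all.

```python
import random

random.seed(42)

# Illustrative model: each participant has a stable "true" level, and
# each measurement adds independent noise.
n = 10_000
true_level = [random.gauss(0, 1) for _ in range(n)]
first = [t + random.gauss(0, 1) for t in true_level]
second = [t + random.gauss(0, 1) for t in true_level]

# Select the "extreme" group: the top 10% on the first measurement.
cutoff = sorted(first)[int(0.9 * n)]
extreme = [i for i in range(n) if first[i] >= cutoff]

mean_first = sum(first[i] for i in extreme) / len(extreme)
mean_second = sum(second[i] for i in extreme) / len(extreme)

# With no intervention, the extreme group's second measurement drifts
# back toward the population mean of 0.
print(f"extreme group, first measurement:  {mean_first:.2f}")
print(f"extreme group, second measurement: {mean_second:.2f}")
```

Any real improvement from an intervention would have to be measured against this expected drift, which is why controlled comparison groups matter for extreme-group designs.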
Interviewer bias stems from the person conducting the research study. It can result from the way they ask questions or react to responses, but also from any aspect of their identity, such as their sex, ethnicity, social class, or perceived attractiveness.
Interviewer bias distorts responses, especially when the characteristics relate in some way to the research topic. Interviewer bias can also affect the interviewer’s ability to establish rapport with the interviewees, causing them to feel less comfortable giving their honest opinions about sensitive or personal topics.
Participant: ‘I like to solve puzzles, or sometimes do some gardening.’
You: ‘I love gardening, too!’
In this case, seeing your enthusiastic reaction could lead the participant to talk more about gardening.
Establishing trust between you and your interviewees is crucial in order to ensure that they feel comfortable opening up and revealing their true thoughts and feelings. At the same time, being overly empathetic can influence the responses of your interviewees, as seen above.
Publication bias occurs when the decision to publish research findings is based on their nature or the direction of their results. Studies reporting results that are perceived as positive, statistically significant, or favoring the study hypotheses are more likely to be published due to publication bias.
Publication bias is related to data dredging (also called p-hacking), where statistical tests on a set of data are run until something statistically significant happens. As academic journals tend to prefer publishing statistically significant results, this can pressure researchers to only submit statistically significant results. P-hacking can also involve excluding participants or stopping data collection once a p value of 0.05 is reached. However, this leads to false positive results and an overrepresentation of positive results in published academic literature.
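The optional-stopping form of p-hacking can be simulated. The sketch below (plain Python with a normal approximation to a fair-coin test; all names and parameters are illustrative) repeatedly peeks at accumulating data and stops at the first p < 0.05, inflating the false positive rate even though the null hypothesis is true throughout.

```python
import math
import random

random.seed(0)

def p_value(heads: int, n: int) -> float:
    # Two-sided test of "the coin is fair" via a normal approximation.
    z = abs(heads - n / 2) / math.sqrt(n / 4)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2))))

def peeking_experiment(batches: int = 10, batch_size: int = 20) -> bool:
    # Optional stopping: flip a fair coin in batches and test after every
    # batch, stopping as soon as p < 0.05 -- even though the null is true.
    heads = flips = 0
    for _ in range(batches):
        heads += sum(random.random() < 0.5 for _ in range(batch_size))
        flips += batch_size
        if p_value(heads, flips) < 0.05:
            return True  # a spurious "significant" result
    return False

trials = 2_000
false_positives = sum(peeking_experiment() for _ in range(trials))
# Peeking after every batch inflates the false positive rate well above
# the nominal 5% of a single fixed-size test.
print(f"false positive rate: {false_positives / trials:.1%}")
```

Pre-registering the sample size (or using sequential-analysis corrections) removes this inflation, which is one reason pre-registration is increasingly expected in medical research.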
Researcher bias occurs when the researcher’s beliefs or expectations influence the research design or data collection process. Researcher bias can be deliberate (such as claiming that an intervention worked even if it didn’t) or unconscious (such as letting personal feelings, stereotypes, or assumptions influence research questions ).
The unconscious form of researcher bias is associated with the Pygmalion (or Rosenthal) effect, where the researcher’s high expectations (e.g., that patients assigned to a treatment group will succeed) lead to better performance and better outcomes.
Researcher bias is also sometimes called experimenter bias, but it applies to all types of investigative projects, rather than only to experimental designs .
Response bias is a general term used to describe a number of different situations where respondents tend to provide inaccurate or false answers to self-report questions, such as those asked on surveys or in structured interviews.
This happens because when people are asked a question (e.g., during an interview ), they integrate multiple sources of information to generate their responses. Because of that, any aspect of a research study may potentially bias a respondent. Examples include the phrasing of questions in surveys, how participants perceive the researcher, or the desire of the participant to please the researcher and to provide socially desirable responses.
Response bias also occurs in experimental medical research. When outcomes are based on patients’ reports, a placebo effect can occur. Here, patients report an improvement despite having received a placebo, not an active medical treatment.
While interviewing a student, you ask them:
‘Do you think it’s okay to cheat on an exam?’
Common types of response bias are acquiescence bias, demand characteristics, social desirability bias, courtesy bias, question order bias, and extreme responding.
Acquiescence bias is the tendency of respondents to agree with a statement when faced with binary response options like ‘agree/disagree’, ‘yes/no’, or ‘true/false’. Acquiescence is sometimes referred to as ‘yea-saying’.
This type of bias occurs either due to the participant’s personality (i.e., some people are more likely to agree with statements than disagree, regardless of their content) or because participants perceive the researcher as an expert and are more inclined to agree with the statements presented to them.
Q: Are you a social person?
People who are inclined to agree with statements presented to them are at risk of selecting the first option, even if it isn’t fully supported by their lived experiences.
In order to control for acquiescence, consider tweaking your phrasing to encourage respondents to make a choice truly based on their preferences. Here’s an example:
Q: What would you prefer?
Demand characteristics are cues that could reveal the research agenda to participants, risking a change in their behaviours or views. Ensuring that participants are not aware of the research goals is the best way to avoid this type of bias.
On each occasion, patients reported their pain as being less than prior to the operation. While at face value this seems to suggest that the operation does indeed lead to less pain, there is a demand characteristic at play. During the interviews, the researcher would unconsciously frown whenever patients reported more post-op pain. This increased the risk of patients figuring out that the researcher was hoping that the operation would have an advantageous effect.
Social desirability bias is the tendency of participants to give responses that they believe will be viewed favorably by the researcher or other participants. It often affects studies that focus on sensitive topics, such as alcohol consumption or sexual behaviour.
You are conducting face-to-face semi-structured interviews with a number of employees from different departments. When asked whether they would be interested in a smoking cessation program, there was widespread enthusiasm for the idea.
Note that while social desirability and demand characteristics may sound similar, there is a key difference between them. Social desirability is about conforming to social norms, while demand characteristics revolve around the purpose of the research.
Courtesy bias stems from a reluctance to give negative feedback, so as to be polite to the person asking the question. Small-group interviewing where participants relate in some way to each other (e.g., a student, a teacher, and a dean) is especially prone to this type of bias.
Question order bias occurs when the order in which interview questions are asked influences the way the respondent interprets and evaluates them. This occurs especially when previous questions provide context for subsequent questions.
When answering subsequent questions, respondents may orient their answers to previous questions (called a halo effect ), which can lead to systematic distortion of the responses.
Extreme responding is the tendency of a respondent to answer in the extreme, choosing the lowest or highest response available, even if that is not their true opinion. Extreme responding is common in surveys using Likert scales, and it distorts people’s true attitudes and opinions.
Disposition towards the survey can be a source of extreme responding, as well as cultural components. For example, people coming from collectivist cultures tend to exhibit extreme responses in terms of agreement, while respondents indifferent to the questions asked may exhibit extreme responses in terms of disagreement.
Selection bias is a general term describing situations where bias is introduced into the research from factors affecting the study population.
Common types of selection bias are sampling bias, attrition bias, volunteer (self-selection) bias, survivorship bias, nonresponse bias, and undercoverage bias.
Sampling bias occurs when your sample (the individuals, groups, or data you obtain for your research) is selected in a way that is not representative of the population you are analyzing. Sampling bias threatens the external validity of your findings and influences the generalizability of your results.
The easiest way to prevent sampling bias is to use a probability sampling method. This way, each member of the population you are studying has an equal chance of being included in your sample.
Sampling bias is often referred to as ascertainment bias in the medical field.
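As an illustration of one probability sampling method, simple random sampling, the short sketch below draws a sample in which every member of a hypothetical sampling frame has an equal chance of selection (the population and participant labels are invented for the example).

```python
import random

random.seed(7)

# Hypothetical sampling frame: every member of the study population has
# an equal chance of being drawn (simple random sampling, without
# replacement, so no participant can appear twice).
population = [f"participant_{i:03d}" for i in range(500)]
sample = random.sample(population, k=50)

print(len(sample), sample[:3])
```

In practice, the hard part is building a frame that actually covers the whole target population; a perfectly random draw from an incomplete frame still produces undercoverage bias.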
Attrition bias occurs when participants who drop out of a study systematically differ from those who remain in the study. Attrition bias is especially problematic in randomized controlled trials for medical research because participants who do not like the experience or have unwanted side effects can drop out and affect your results.
You can minimize attrition bias by offering incentives for participants to complete the study (e.g., a gift card if they successfully attend every session). It’s also a good practice to recruit more participants than you need, or minimize the number of follow-up sessions or questions.
You provide a treatment group with weekly one-hour sessions over a two-month period, while a control group attends sessions on an unrelated topic. You complete five waves of data collection to compare outcomes: a pretest survey , three surveys during the program, and a posttest survey.
Volunteer bias (also called self-selection bias ) occurs when individuals who volunteer for a study have particular characteristics that matter for the purposes of the study.
Volunteer bias leads to biased data, as the respondents who choose to participate will not represent your entire target population. You can avoid this type of bias by using random assignment – i.e., placing participants in a control group or a treatment group after they have volunteered to participate in the study.
Closely related to volunteer bias is nonresponse bias , which occurs when a research subject declines to participate in a particular study or drops out before the study’s completion.
Considering that the hospital is located in an affluent part of the city, volunteers are more likely to have a higher socioeconomic standing, higher education, and better nutrition than the general population.
Survivorship bias occurs when you do not evaluate your data set in its entirety: for example, by only analyzing the patients who survived a clinical trial.
This strongly increases the likelihood that you draw (incorrect) conclusions based upon those who have passed some sort of selection process – focusing on ‘survivors’ and forgetting those who went through a similar process and did not survive.
Note that ‘survival’ does not always mean that participants died! Rather, it signifies that participants did not successfully complete the intervention.
However, most college dropouts do not become billionaires. In fact, there are many more aspiring entrepreneurs who dropped out of college to start companies and failed than succeeded.
Nonresponse bias occurs when those who do not respond to a survey or research project are different from those who do in ways that are critical to the goals of the research. This is very common in survey research, when participants are unable or unwilling to participate due to factors like lack of the necessary skills, lack of time, or guilt or shame related to the topic.
You can mitigate nonresponse bias by offering the survey in different formats (e.g., an online survey, but also a paper version sent via post), ensuring confidentiality, and sending participants reminders to complete the survey.
You notice that your surveys were conducted during business hours, when the working-age residents were less likely to be home.
Undercoverage bias occurs when you only sample from a subset of the population you are interested in. Online surveys can be particularly susceptible to undercoverage bias. Despite being more cost-effective than other methods, they can introduce undercoverage bias as a result of excluding people who do not use the internet.
While very difficult to eliminate entirely, research bias can be mitigated through proper study design and implementation. Here are some tips to keep in mind as you get started.
Bias in research affects the validity and reliability of your findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.
Observer bias occurs when the researcher’s assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to justify others’ behaviour and external factors (e.g., difficult circumstances) to justify the same behaviour in themselves.
Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.
Nonresponse bias occurs when the people who complete a survey are different from those who did not, in ways that are relevant to the research topic. Nonresponse can happen either because people are not willing or not able to participate.
Objective measures of residency applicants do not correlate with success within residency. While industry and business utilize standardized interviews with blinding and structured questions, residency programs have yet to uniformly incorporate these techniques. This review focuses on an in-depth evaluation of these practices and how they impact interview formatting and resident selection.
Structured interviews use standardized questions that are behaviorally or situationally anchored. This requires careful creation of a scoring rubric and interviewer training, ultimately leading to improved interrater agreement and reduced bias compared to traditional interviews. Blinded interviews eliminate still further biases, such as halo, horn, and affinity bias. Similar benefits have been seen with the use of multiple interviewers, as in the multiple mini-interview format, which also contributes to increased diversity in programs. These structured formats can be adapted to virtual interviews as well.
There is growing literature showing that structured interviews reduce bias, increase diversity, and recruit successful residents. Further research will be needed to measure the impact of incorporating this method into residency interviews.
Optimizing the criteria to rank residency applicants is a difficult task. The National Residency Matching Program (NRMP) is designed to be applicant-centric, with the overarching goal of providing favorable outcomes to applicants while giving programs the opportunity to match high-quality candidates. From a program’s perspective, the NRMP is composed of three phases: the screening of applicants, the interview, and the creation of the rank list. While it is easy to compare candidates based on objective measures, these do not always reflect the qualities required to be a successful resident or physician. Prior studies have demonstrated that objective measures such as Alpha Omega Alpha status, United States Medical Licensing Exams (USMLE), and class rank do not correlate with residency performance measures [ 1 ]. Because these factors predict success inconsistently, and in recognition of the importance of non-cognitive traits, most programs place increased emphasis on candidate interviews to assess fit [ 2 ].
Unfortunately, the interview process lacks standardization across residency programs. Industry and business have more standardized interviews and utilize best practices that include blinded interviewers, use of structured questions (situational and/or behavioral anchored questions), and skills testing. Due to residency interview heterogeneity, studies evaluating the interview as a predictor of success have failed to reliably predict who will perform well during residency. Additionally, resident success has many components, such that isolating any one factor, such as the interview, may be problematic and argues for a more holistic approach to resident selection [ 3 ]. Nevertheless, there are multiple ways the application review and interview can be standardized to promote transparency and improve resident selection.
Residency programs have begun adopting best practices from business models for interviewing, which include standardized questions, situational and/or behavioral anchored questions, blinded interviewers, and use of the multiple mini-interview (MMI) model. The focus of this review is to take a more in-depth look at practices that have become standard in business and to review the available data on the impact of these practices in resident selection.
Unstructured interviews are those in which questions are not set in advance and represent a free-flowing discussion that is conversational in nature. The course of an unstructured interview often depends on the candidate’s replies and may offer opportunities to divert away from topics that are important to applicant selection. While unstructured interviews may involve specific questions such as “tell me about a recent book you read” or “tell me about your research,” the questions do not seek to determine specific applicant attributes and may vary significantly between applicants. Due to their free-form nature, unstructured interviews may be prone to biased or illegal questions. Additionally, due to a lack of a specific scoring rubric, unstructured interviews are open to multiple biases in answer interpretation and as such generally show limited validity [ 4 ]. For the applicant, unstructured interviews allow more freedom to choose a response, with some studies reporting higher interviewee satisfaction with these questions [ 5 ].
In contrast to the unstructured interview, structured interviews use standardized questions that are written prior to an interview, are asked of every candidate, and are scored using an established rubric. Standardized questions may be behaviorally or situationally anchored [ 5 ]. Due to their uniformity, standardized interviews have higher interrater reliability and are less prone to biased or illegal questions.
Behavioral questions ask the candidate to discuss a specific response to a prior experience, which can provide insight into how an applicant may behave in the future [ 5 ]. Not only does the candidate’s response reflect a possible prediction of future behavior, it can also demonstrate the knowledge, priorities, and values of the candidate [ 5 ]. Questions are specifically targeted to reflect qualities the program is searching for (Table 1 ) [ 5 , 6 , 7 ].
Situational questions require an applicant to predict how they would act in a hypothetical situation and are intended to reflect a realistic scenario the applicant may encounter during residency; this can provide insight into priorities and values [ 5 ]. For example, asking what an applicant would do when receiving sole credit for something they worked on with a colleague can provide insight into the integrity of a candidate [ 4 ]. These types of questions can be especially helpful for fellowships, as applicants would already have the clinical experience of residency to draw from [ 5 ].
Using standardized questions provides a method to recruit candidates with characteristics that ultimately correlate to resident success and good performance. Indeed, structured interview scores have demonstrated an ability to predict which students perform better with regard to communication skills, patient care, and professionalism in surgical and non-surgical specialties [ 8 •]. In fields such as radiology, non-cognitive abilities that can be evaluated in behavioral questions, such as conscientiousness or confidence, are thought to critically influence success in residency and even influence cognitive performance [ 1 ]. This has also been demonstrated in obstetrics and gynecology, where studies have shown that resident clinical performance after 1 year had a positive correlation with the rank list percentile that was generated using a structured interview process [ 9 ].
To be effective, standardized interview questions should be designed in a methodical manner. The first step in standardizing the interview process is determining which core values predict resident success in a particular program. To that end, educational leaders and faculty within the department should come to a consensus on the main qualities they seek in a resident. From there, questions can be formatted to elicit those traits during the interview process. Some programs have used personality assessment inventories to establish these qualities. Examples include openness to experience, humility, conscientiousness, and honesty. Further program-specific additions can be included, such as potential for success in an urban versus rural environment [ 10 ].
Once key attributes have been chosen and questions have been selected, a scoring rubric can be created. The scoring of each question is important as it helps define what makes a high-performing versus low-performing answer. Once a scoring system is determined, interviewers can be trained to review the questions, score applicant responses, and ensure they do not revise the questions during the interview [ 11 ]. Questions and the grading rubric should be further scrutinized through mock interviews with current residents, including discussing the mock interviewee’s responses and modifying the questions and rubric prior to formal implementation [ 12 ]. Interviewer training itself is critical, as adequate training leads to improved interrater agreement [ 13 ]. Figure 1 demonstrates the steps to develop a behavioral interview question.
Fig. 1 Example of a standardized question to evaluate communication, with scoring criteria
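The rubric-based approach described above can be sketched as a small data structure that pairs each target attribute with anchored score levels. This is an illustrative sketch only: the attributes, anchor descriptions, and the 1–5 scale are hypothetical, not taken from any program’s actual rubric.

```python
# Illustrative sketch of a behaviorally anchored scoring rubric.
# All attribute names and anchor descriptions are hypothetical.

RUBRIC = {
    "communication": {
        1: "Response was vague; no concrete example given",
        3: "Gave a specific example but did not reflect on the outcome",
        5: "Specific example, clear actions, and reflection on lessons learned",
    },
    "teamwork": {
        1: "Focused only on individual contribution",
        3: "Acknowledged the team but own role was unclear",
        5: "Clear personal role within a well-described team effort",
    },
}

def score_applicant(ratings: dict) -> float:
    """Validate each rating against the rubric's anchored levels and
    return the mean score across attributes."""
    for attribute, level in ratings.items():
        if attribute not in RUBRIC:
            raise ValueError(f"Unknown attribute: {attribute}")
        if level not in RUBRIC[attribute]:
            raise ValueError(f"{attribute}: {level} is not an anchored level")
    return sum(ratings.values()) / len(ratings)

print(score_applicant({"communication": 5, "teamwork": 3}))  # 4.0
```

Encoding the rubric this way also makes it easy to version and revise after mock interviews, since the anchors themselves are data rather than informal notes.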
Rating the responses of the applicants can come with errors that ultimately reduce validity. For example, central tendency error involves interviewers not rating students at the extremes of a scale but rather placing all applicants in the middle; leniency versus severity refers to interviewers who either give all applicants high marks or give everyone low marks; contrast effects involve comparing one applicant to another rather than solely focusing on the rubric for each interviewee. These rating errors reflect the importance of training and providing feedback to interviewers [ 4 ].
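The rating errors described above lend themselves to simple screening heuristics. The sketch below flags possible central tendency, leniency, or severity in one interviewer's scores on a 1–5 scale; the thresholds are hypothetical, and a real program would calibrate them against its own score distributions before using them for interviewer feedback.

```python
# Illustrative sketch: heuristic flags for common rating errors in
# interview scores on a 1-5 scale. Thresholds are hypothetical.
from statistics import mean, pstdev

def flag_rating_patterns(scores, low=1, high=5):
    """Return heuristic flags for central tendency, leniency, or severity."""
    flags = []
    mid = (low + high) / 2
    # Central tendency: nearly identical scores clustered at the midpoint.
    if pstdev(scores) < 0.5 and abs(mean(scores) - mid) < 0.5:
        flags.append("central tendency")
    # Leniency / severity: average score pinned near the top or bottom.
    if mean(scores) > high - 0.5:
        flags.append("leniency")
    if mean(scores) < low + 0.5:
        flags.append("severity")
    return flags

print(flag_rating_patterns([3, 3, 3, 3, 4]))  # ['central tendency']
print(flag_rating_patterns([5, 5, 5, 4, 5]))  # ['leniency']
```

Contrast effects are harder to detect from scores alone, since they depend on interview order; logging the interview sequence alongside scores would be needed to screen for them.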
Blinding the interviewers to the application prior to meeting with a candidate is intended to eliminate various biases within the interview process (Table 2 ) [ 14 , 15 ]. In addition to grades and test scores, aspects of the application that can either introduce or exacerbate bias include photographs, demographics, letters of recommendation, selection to medical honor societies, and even hobbies. Impressions of candidates can be formed prematurely, with the interview then serving to simply confirm (or contradict) those impressions [ 16 •]. Importantly, application blinding may also decrease implicit bias against applicants who identify as underrepresented in medicine [ 17 ].
Despite the proven success of these various interview tactics, their use in resident selection remains limited, with only 5% of general surgery programs using standardized interview questions and less than 20% using even a limited amount of blinding (e.g., blinding of photograph) [ 2 ]. Some programs have continued to rely on unblinded interviews and prioritize USMLE scores and course grades in ranking [ 18 ]. Due to their potential benefits and ability to standardize the interview process, it is critical that programs become familiar with the various interview practices so that they can select the best applicants while minimizing the significant bias in traditional interview formats.
The use of multiple interviews by multiple interviewers provides an opportunity to ask the applicant more varied questions and also allows for the averaging out of potential interviewer bias leading to more consistent applicant scoring and ability to predict applicant success [ 7 ]. Training of the interviewers in interviewing techniques, scoring, and avoiding bias is also likely to decrease scoring variability. Similarly, the use of the same group of interviewers for all candidates should be encouraged in order to limit variance in scoring amongst certain faculty [ 19 ].
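The "averaging out" of interviewer bias described above can be illustrated with synthetic scores. In this hedged sketch, each hypothetical interviewer carries a fixed leniency or severity offset; averaging across all interviewers cancels those offsets, so averaged scores vary less than scores from a single randomly assigned interviewer. The quality value and offsets are invented for illustration.

```python
# Illustrative sketch: averaging scores from multiple interviewers damps
# individual interviewer bias. All values are synthetic; the per-interviewer
# offsets stand in for hypothetical leniency/severity biases.
import random
from statistics import pstdev

random.seed(0)
TRUE_QUALITY = 3.0
BIASES = [-1.0, -0.5, 0.0, 0.5, 1.0]  # hypothetical per-interviewer offsets

def single_scores(n=200):
    """Each applicant is rated by one randomly assigned interviewer."""
    return [TRUE_QUALITY + random.choice(BIASES) for _ in range(n)]

def averaged_scores(n=200):
    """Each applicant is rated by all interviewers; scores are averaged."""
    return [sum(TRUE_QUALITY + b for b in BIASES) / len(BIASES)
            for _ in range(n)]

# Spread of scores shrinks when every applicant sees the same panel.
print(pstdev(single_scores()) > pstdev(averaged_scores()))  # True
```

The same logic motivates using one consistent panel for all candidates: if different applicants face different subsets of interviewers, the offsets no longer cancel uniformly.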
One interview method that incorporates multiple interviewers and has been used with growing frequency in both medical school and residency interviews is the MMI model. This system provides multiple interviews in the form of 6–12 stations, each of which poses a non-medical question designed to assess specific non-academic applicant qualities [ 20 ]. While the MMI format can intimidate some candidates, others find that it provides an opportunity to demonstrate traits that would not be observed in an unstructured interview, such as multitasking, efficiency, flexibility, interpersonal skills, and ethical decision-making [ 21 ]. Furthermore, the MMI’s increased reliability was demonstrated in a study of five California medical schools, where inter-interviewer consistency was higher for MMIs than for traditional interviews, which were unstructured and had a 1:1 ratio of interviewer to applicant [ 22 ].
The MMI format is also versatile enough to incorporate technical competencies even through a virtual platform. In general surgery interviews, MMI platforms have been designed to test traits such as communication and empathy but also clinical knowledge and surgical aptitude through anatomy questions and surgical skills (knot tying and suturing). Thus, MMIs are not only versatile, but also have an ability to evaluate cognitive traits and practical skills [ 23 ].
MMI also has the potential to reduce resident attrition. For example, in evaluating students applying to midwifery programs in Australia, attrition rates and grades of admitted students were compared before and after the incorporation of MMIs into a selection program that had previously relied on academic rank. The authors found that when using MMIs, enrolled students had not only higher grades but significantly lower attrition rates. The MMI was better suited to reveal applicants’ passion and commitment, which fostered a shared mindset among accepted students as well as a support network [ 24 ]. Furthermore, attrition rates have been found to be higher in female residents in general surgery programs [ 25 ]. Perhaps the greater diversity associated with the use of standardized interviews can increase the number of women in surgical specialties and thus reduce attrition in this setting as well.
An imperative of all training programs is to produce a cohort of physicians with broad and diverse experiences representative of the patient populations they treat. To better address diversity within surgical residencies, particularly regarding women and those who are underrepresented in medicine, it is important that interviews be designed to minimize bias against any one portion of the applicant pool. Diverse backgrounds and cultures within a program enhance research, innovation, and collaboration as well as benefit patients [ 26 ]. Patients have shown greater satisfaction and reception when they share ethnicity or background with their provider, and underrepresented minorities in medicine often go on to work in underserved communities [ 27 ].
All interviewers undoubtedly have elements of implicit bias; Table 2 describes the common subtypes of implicit bias [ 14 ]. While it is difficult to eliminate bias from the interview process, unstructured or “traditional” interviews are more likely than structured interviews to introduce bias against candidates. Studies have demonstrated that Hispanic and Black applicants receive scores one quarter of a standard deviation lower than Caucasian applicants [ 28 ]. “Like me” bias is just one example of the increased subjectivity of unstructured interviews, where interviewers prefer candidates who look like, speak like, or share personal experiences with the interviewer [ 29 ].
Furthermore, unstructured interviews provide opportunities to ask inappropriate or illegal questions, including those that center on religion, child planning, and sexual orientation [ 30 ]. Inappropriate questions tend to be disproportionately directed toward certain groups, with women more likely than their male counterparts to be asked about marital status and to be questioned and interrupted [ 28 , 31 ].
Structured interviews, conversely, have been shown to decrease bias in the application process. Faculty trained in behavior-based interviews for fellowship applications demonstrated reduced racial bias in candidate evaluations due to scoring rubrics [ 12 ]. Furthermore, because structured questions are determined prior to the interview and interviewers are trained, structured interviews are less prone to illegal and inappropriate questions [ 32 ]. Interviewers can ask follow-up questions such as “could you be more specific?” with the caveat that probing should be minimized and kept consistent between applicants; this reduces the risk of prompting the applicant toward a particular response [ 4 ].
An added complexity to creating standardized interviews is incorporating a virtual platform. Even prior to the move toward virtual interviews instituted during the COVID-19 pandemic, studies on virtual interviews showed that they provided several advantages over in-person interviews, including decreased cost, reduction in time away from commitments for applicants and staff, and ability to interview at more programs. A significant limitation, for applicants and for programs, is the inability to interact informally, which allows applicants to evaluate the environment of the hospital and the surrounding community [ 33 •]. Following their abrupt implementation in 2020 during the COVID-19 pandemic, virtual interviews have remained in place and likely will remain in place in some form into the future due to their significant benefits in reducing applicant cost and improving interview efficiency. Although these types of interviews are in their relative infancy in the resident selection process, studies have found that standardized questions and scoring rubrics that have been used in person can still be applied to a virtual interview setting without degrading interview quality [ 34 ].
The virtual format may also allow for further interview innovation in the form of standardized video interviews. For medical student applicants, the Association of American Medical Colleges (AAMC) has trialed a standardized video interview (SVI) that includes recording of applicant responses, scoring, and subsequent release to the Electronic Residency Application Service (ERAS) application. Though early data from the pilot were promising, the program was not continued after the 2020 cycle due to lack of interest [ 35 ]. There is limited evidence supporting the utility of this type of interview in residency training, and one study found that these interviews did not add significant benefit, as the scores did not associate with other candidate attributes such as professionalism [ 32 ]. Similarly, a separate study found no correlation between standardized video interviews and faculty scores on traits such as communication and professionalism; granted, there was no standardization in what the faculty asked, and they were not blinded to the academic performance of the applicants [ 36 ]. While an evaluation of six emergency medicine programs demonstrated a positive linear correlation between the SVI score and the traditional interview score, the correlation coefficient was very low; thus the authors concluded that the SVI was not adequate to replace the interview itself [ 37 ].
The shift to structured interviews in urology has been slow. Within the last decade, studies consistent with other specialties demonstrated that urology program directors prioritized USMLE scores, reference letters, and away rotations at the program director’s institution as the key factors in choosing applicants [ 38 ]. More recently, a survey of urology programs found < 10% blinded the recruitment team at the screening step, with < 20% blinding the recruitment team during the interview itself [ 39 ]. In 2020 our program began using structured interview questions and blinded interviewers to all but the personal statement and letters of recommendation. After querying faculty and interviewees, we have found that most interviewers do not miss the additional information, and applicants feel that they are able to have more eye contact with faculty who are not looking down at the application during the interview. Structured behavioral interview questions have allowed us to focus on the key attributes important to our program. With time we hope to see that inclusion of these metrics helps diversify our resident cohort, improve resident satisfaction with the training program, and produce successful future urologists.
Despite the slow transition in urology and other fields, there is a growing body of literature in support of standardized interviews for evaluating key candidate traits that ultimately lead to resident success and reducing bias while increasing diversity. With time, the hope is that programs will continue incorporating these types of interviews in the resident selection process.
Altmaier EM, et al. The predictive utility of behavior-based interviewing compared with traditional interviewing in the selection of radiology residents. Invest Radiol. 1992;27(5):385–9.
Kim RH, et al. General surgery residency interviews: are we following best practices? Am J Surg. 2016;211(2):476-481.e3.
Stephenson-Famy A, et al. Use of the interview in resident candidate selection: a review of the literature. J Grad Med Educ. 2015;7(4):539–48.
Association of American Medical Colleges. Best practices for conducting residency interviews. 2016.
Black C, Budner H, Motta AL. Enhancing the residency interview process with the inclusion of standardised questions. Postgrad Med J. 2018;94(1110):244–6.
Hartwell CJ, Johnson CD, Posthuma RA. Are we asking the right questions? Predictive validity comparison of four structured interview question types. J Bus Res. 2019;100:122–9.
Beran B, et al. An analysis of obstetrics-gynecology residency interview methods in a single institution. J Surg Educ. 2019;76(2):414–9.
• Marcus-Blank B, et al. Predicting performance of first-year residents: correlations between structured interview, licensure exam, and competency scores in a multi-institutional study. Acad Med. 2019;94(3):378–87. Authors administered 18 behavioral structured interview questions (SI) to measure key noncognitive competencies across 14 programs (13 residency, 1 fellowship) from 6 institutions to determine correlation first-year resident milestone performance in the ACGME's core competency domains and overall scores. They found SI scores predicted midyear and year-end overall performance and year-end performance on patient care, interpersonal and communication skills, and professionalism competencies and that SI scores contributed incremental validity over USMLE scores in predicting year-end performance on patient care, interpersonal and communication skills, and professionalism.
Olawaiye A, Yeh J, Withiam-Leitch M. Resident selection process and prediction of clinical performance in an obstetrics and gynecology program. Teach Learn Med. 2006;18(4):310–5.
Prystowsky MB, et al. Prioritizing the interview in selecting resident applicants: behavioral interviews to determine goodness of fit. Academic pathology. 2021;8:23742895211052884–23742895211052884.
Breitkopf DM, Vaughan LE, Hopkins MR. Correlation of behavioral interviewing performance with obstetrics and gynecology residency applicant characteristics. J Surg Educ. 2016;73(6):954–8.
Langhan ML, Goldman MP, Tiyyagura G. Can behavior-based interviews reduce bias in fellowship applicant assessment? Acad Pediatr. 2022;22(3):478–85.
Gardner AK, D’Onofrio BC, Dunkin BJ. Can we get faculty interviewers on the same page? An examination of a structured interview course for surgeons. J Surg Educ. 2018;75(1):72–7.
Oberai H, Ila Mehrotra A. Unconscious bias: thinking without thinking. Hum Resour Manag Int Dig. 2018;26(6):14–7.
Hull L, Sevdalis N. Advances in teaching and assessing nontechnical skills. Surg Clin North Am. 2015;95(4):869–85.
• Balhara KS, et al. Navigating bias on interview day: strategies for charting an inclusive and equitable course. J Grad Med Educ. 2021;13(4):466–70. Strategies for decreasing bias in the interview process based on best practices from medical and corporate literature, cognitive psychology theory, and the authors' experiences. Provides simple, actionable and accessible strategies for navigating and mitigating the pitfalls of bias during residency interview
Haag J, et al. Impact of blinding interviewers to written applications on ranking of Gynecologic Oncology fellowship applicants from groups underrepresented in medicine. Gynecol Oncol Rep. 2022;39: 100935.
Kasales C, Peterson C, Gagnon E. Interview techniques utilized in radiology resident selection-a survey of the APDR. Acad Radiol. 2019;26(7):989–98.
Levashina J, et al. The structured employment interview: narrative and quantitative review of the research literature. Pers Psychol. 2014;67(1):241–93.
Al Abri R, Mathew J, Jeyaseelan L. Multiple mini-interview consistency and satisfactoriness for residency program recruitment: Oman evidence. Oman Med J 2019;34(3):218–223.
Boysen-Osborn M, et al. A multiple-mini interview (MMI) for emergency medicine residency admissions: a brief report and qualitative analysis. J Adv Med Educ Prof. 2018;6(4):176–80.
Jerant A, et al. Reliability of multiple mini-interviews and traditional interviews within and between institutions: a study of five California medical schools. BMC Med Educ. 2017;17(1):190.
Lund S, et al. Conducting virtual simulated skills multiple mini-interviews for general surgery residency interviews. J Surg Educ. 2021;78(6):1786–90.
Sheehan A, et al. The impact of multiple mini interviews on the attrition and academic outcomes of midwifery students. Women Birth. 2022;35(4):e318–27.
Khoushhal Z, et al. Prevalence and causes of attrition among surgical residents: a systematic review and meta-analysis. JAMA Surg. 2017;152(3):265–72.
DeBenedectis CM, et al. A program director’s guide to cultivating diversity and inclusion in radiology residency recruitment. Acad Radiol. 2020;27(6):864–7.
Figueroa O. The significance of recruiting underrepresented minorities in medicine: an examination of the need for effective approaches used in admissions by higher education institutions. Med Educ Online. 2014;19:24891–24891.
Costa PC, Gardner AK. Strategies to increase diversity in surgical residency. Current Surgery Reports. 2021;9(5):11.
Gardner AK. How can best practices in recruitment and selection improve diversity in surgery? Ann Surg. 2018;267(1).
Resident Match process policy and guidelines. 2022; Available from: https://sauweb.org/match-program/resident-match-process.aspx .
Otugo O, et al. Bias in recruitment: a focus on virtual interviews and holistic review to advance diversity. AEM Education and Training. 2021;5(S1):S135–9.
Hughes RH, Kleinschmidt S, Sheng AY. Using structured interviews to reduce bias in emergency medicine residency recruitment: worth a second look. AEM Educ Train. 2021;5(Suppl 1):S130-s134.
• Huppert LA, et al. Virtual interviews at graduate medical education training programs: determining evidence-based best practices. Acad Med. 2021;96(8):1137–45. Review of existing literature regarding virtual interviews that summarizes best practices for interviews the advantages and disadvantages of the virtual interview format. The authors make the following recommendations: develop a detailed plan for the interview process, consider using standardized interview questions, recognize and respond to potential biases that may be amplified with the virtual interview format, prepare your own trainees for virtual interviews, develop electronic materials and virtual social events to approximate the interview day, and collect data about virtual interviews at your own institution
Chou DW, et al. Otolaryngology residency interviews in a socially distanced world: strategies to recruit and assess applicants. Otolaryngol Head Neck Surg. 2021;164(5):903–8.
AAMC Standardized Video Interview Evaluation Summary. 2022.
Schnapp BH, et al. Assessing residency applicants’ communication and professionalism: standardized video interview scores compared to faculty gestalt. West J Emerg Med. 2019;20(1):132–7.
Chung AS, et al. How well does the standardized video interview score correlate with traditional interview performance? Western Journal of Emergency Medicine. 2019;20(5):726–30.
Weissbart SJ, Stock JA, Wein AJ. Program directors’ criteria for selection into urology residency. Urology. 2015;85(4):731–6.
Chantal Ghanney Simons E, et al. MP19-05 Landscape analysis of the use of holistic review in the urology residency match process. J Urol. 2022;207:e308.
Authors and affiliations.
Department of Urology, University of Iowa, Iowa City, USA
Ilana Bergelson, Chad Tracy & Elizabeth Takacs
Correspondence to Ilana Bergelson .
Conflict of interest.
The authors have no financial or non-financial interests to disclose.
This article does not contain any studies with human or animal subjects performed by any of the authors.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is part of Topical Collection on Education
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Bergelson, I., Tracy, C. & Takacs, E. Best Practices for Reducing Bias in the Interview Process. Curr Urol Rep 23 , 319–325 (2022). https://doi.org/10.1007/s11934-022-01116-7
Accepted : 19 July 2022
Published : 12 October 2022
Issue Date : November 2022
Hiring the best people for your organization requires removing bias from your interviews. Find out what these biases are, why avoiding them is important, and how to do so at every touchpoint.
At every step of the candidate lifecycle, there are opportunities to promote inclusive hiring practices. And yet, despite your talent acquisition team’s efforts to proactively recruit and select a pool of diverse candidates, interviewer bias can derail the entire process – ultimately hindering your organization’s ability to hire employees with myriad backgrounds and experiences.
To help you overcome the challenges interviewer bias presents, we’ve taken a closer look at the different types of bias, why you should aim to avoid bias, and shared our tips for how to reduce bias during the interview process.
There are a number of ways biases present themselves in an interview setting.
Unconscious bias is one of the most common types. It refers to the opinions you form about a person or situation – in this case, a candidate interviewing for your organization – without knowing you’re doing so.
Your bias towards the candidate is formed by your experiences and knowledge (or opinions) of social norms, stereotypes, cultures, attitudes, and more. While your experiences sometimes serve you in making decisions, unconscious bias can also distort your perception of people who aren’t like you, skewing your judgment toward your expectations and preferences instead of keeping you open-minded.
There are a number of other biases that impact your ability to interview a candidate with an open mind.
The inclination to favor a candidate who is most like you – impacts your ability to see the value in those who aren’t like you.
This means you’re more likely to favor the candidate you most recently interviewed.
This occurs when you focus on one particularly great feature about a person and neglect others – including those that are negative.
This is just the opposite: allowing a weak fact to overshadow positive qualities in a candidate.
A bias towards one gender over the other – can cause you to unconsciously prefer a candidate based on his or her gender and the qualities you associate with it.
Or how you perceive your actions as well as those of others, stems from our brain’s flawed ability to assess the reasons for certain behaviors – particularly those that lead to success and failure. In general, we attribute our own accomplishments to our skills and abilities, and our failures to external factors.
The reverse is true for others, especially people we don’t know, such as job candidates. We tend to minimize their accomplishments or attribute them to luck, but attribute career misses to skill deficits.
Confirmation bias refers to how we search for evidence that aligns with our existing opinions rather than considering the whole picture or person. It often leads to overlooking information that doesn’t fit your view of a candidate.
When we spoke to people as part of our global study of more than 11,800 participants at the end of 2020, a sense of belonging emerged as the strongest driver of employee engagement – ahead of typical drivers like trust in leadership and opportunities for career growth. Belonging is a core element of inclusion, along with feeling you can be yourself at work and that your organization is a place where everyone can succeed to their full potential, no matter who they are.
We know a culture of equity and inclusion is not only critical to the success of diversity efforts; an equitable and inclusive workplace also creates a positive employee experience.
Organizations that have had diversity, equity, and inclusion (DEI) strategies in place for an extended period of time have reported positive business outcomes, such as:
+ Diverse teams are more innovative and capable of solving complex problems
+ Companies with gender diverse boards have superior financial outcomes
+ Inclusive managers and psychological safety support team effectiveness
+ DEI is highly connected to employee engagement, job satisfaction, and retention
+ Diversity and inclusion impact company reputation and risk management
There is not only strong moral value in building a DEI program – working to eliminate bias and systemic equity issues around gender and race – there’s measurable business value, as well.
Get started addressing the biases that hinder your organization’s ability to foster a workplace where everyone belongs – the sections that follow outline several ways to reduce bias in your interview process.
When it comes to hiring the right person, there can be a lot of stress involved in the recruitment process, from sifting through CVs to working through assessments and reports to track down the ideal candidate. This is compounded by the fact that, in the past, you might have missed out on the perfect candidate because of something called interview bias.
We all carry some biases in our subconscious, and interview bias is no different. It can make hiring the right candidate more difficult because it interferes with objectivity and clouds the judgment of the person conducting the interview.
In this guide we take a closer look at interview bias: what it is, the different types, and how to reduce it when making hiring decisions.
Interviewer bias occurs when the expectations or opinions of the interviewer interfere with their judgment of the interviewee. These preconceptions can influence the outcome positively or negatively, and they can operate both consciously and unconsciously.
For example, an interviewer may decide that a candidate wasn’t a good fit for the organization because their handshake wasn’t strong enough at the start of the interview, or because they didn’t make enough eye contact when answering questions. These are extreme but surprisingly common examples of interview bias producing a negative outcome.
Another form of bias arises when the interviewer feels some sort of affinity towards the interviewee because they like the same football team or share a similar point of view. It’s important to note that some interviewees will answer questions in a way intended to please the interviewer, which compounds the bias further.
Interview bias can show up in different ways, not least when the interviewer’s questions use biased language or stray into subjects geared towards personal preferences rather than the role itself. For example, the interviewer may bring up what has recently happened in the news – unless you’re interviewing for a news organization, this question is loaded with bias, because the answer can prompt a wide variety of reactions.
Of course, interviewer bias can also stem from body language, facial expressions, and so on. These are preconceived notions that have built up over years, and whilst many interviewers consider themselves to have little bias, that is rarely the case. After all, we are all human; we all develop these ideas over time and we are all subject to them.
The biggest point to note is that interview bias covers how candidates’ responses are affected by aspects of the interviewer, and how the interviewer’s perceptions – from a handshake to opinions on a particular topic – can make or break a person’s chances of being hired.
Interview bias can work against or in favor of a particular candidate over another, which is how it can play a significant role in both the interview and the resulting selection. Deliberate measures, such as those discussed later in this guide, are needed to limit bias and remove it from the decision-making process.
What are the types of bias that can affect interviews? Here are some of the most common forms of bias.
Stereotypes are generalized opinions formed over time about how people of a given gender, religion, or race think, act, feel, or respond. Example: presuming that a woman would prefer a desk job over working in engineering is a form of stereotyping bias.
Inconsistent questioning is where different questions are asked of different candidates. Example: you might ask a Caucasian male candidate to describe his university experience, while asking a candidate who is a person of color only about their work experience.
Negative emphasis is rejecting a candidate based on a small amount of negative information. Interviewers tend to weigh negative information about twice as heavily as favorable information, and negative emphasis generally arises from subjective factors such as dress or nonverbal communication, which can taint the interviewer’s judgment.
The halo effect is where the interviewer allows one point – one they personally view as strong – to overshadow all the other information presented in the interview. When it works in favor of the candidate, it is known as the halo effect; when it works in the opposite direction (the interviewer judges a candidate unfavorably in all areas on the basis of one trait), it is called the horn effect.
Cultural noise is the failure to recognize that a candidate’s responses are socially acceptable rather than factual – the candidate gives answers that are “politically correct” but not very revealing. Example: an employer comments, “I note that you are applying for a role that has more working hours. How do you feel about that?” The applicant might say this is fine even though it is not.
There is not just one type of interview bias – there are many, and whilst we have covered a few in the previous sections, it is worth examining each in more detail. Let’s take a closer look at the different types of interview bias in turn.
Stereotype bias is a generalized belief about a group of people, where the interviewer’s judgment of the candidate is clouded by their social category rather than the skills or competencies of the interviewee. For example, if a position requires longer working hours than normal, a female candidate might be excluded before the interview stage on the assumption that she has childcare commitments.
Confirmation bias is something we are all guilty of. In recruitment, it occurs when the interviewee is asked questions designed to confirm, or elicit responses that support, preconceived notions about them. The interviewer is ultimately only concerned with confirming an idea they already hold – one formed from the CV or application, or from the moment they met the candidate, where another form of bias may already have crept in.
Social desirability bias – or cultural noise bias, as it is otherwise called – is when the interviewee changes their answers so they are more desirable from a cultural perspective rather than expressing their own true thoughts.
Recency bias can occur when the interviewer bases their assessment on recent events rather than on a wider period of time, so memories of the most recent candidates are stronger. It is sometimes called contrast-effect bias, wherein interviewers compare candidates with the preceding interviewee.
Gender and racial bias are largely self-explanatory. In short, this is when the interviewer holds a general view about a certain gender or race, or believes the role is not suitable for them, because of preconceived ideas and notions. No interviewer should be influenced by prejudice, from both a moral standpoint and a legal one.
Similarity bias can arise when interviewers and candidates discover shared hobbies or display similar traits in an interview. Hiring decisions based on these similarities, rather than on a candidate’s qualifications, are the result of similarity bias.
Both interviewers and interviewees communicate non-verbally, through body language and eye contact, as well as through verbal communication. When an interviewer focuses more on the nonverbal aspects than on the skills of the interviewee, this is known as nonverbal bias.
Halo bias refers to when one single characteristic overshadows all the others. For example, it could be where the candidate went to school, or in some cases how good-looking they are. This gives the interviewer a positive impression of all the candidate’s skillsets rather than just one area.
Horn bias is, in effect, the opposite of halo bias. For example, the candidate may spell something incorrectly on their CV, giving the interviewer a negative impression of all the candidate’s skillsets rather than just one area.
There are plenty of examples we can use to dive deeper into interview bias. Here is one that illustrates affinity bias.
Affinity bias is one of the most common types of interview bias. It occurs when there is an affinity between the interviewer and the candidate, so the candidate is viewed in a better light than they should be. Traditionally this could come from something on the CV, such as having attended the same university, or having had the same manager many years apart.
However, affinity bias can also take hold in a moment within the interview itself. Say a candidate is interviewed on a Monday and the interviewer begins with, “How was your weekend?” The response is something like, “Good, thank you. I went for a bike ride and did some trails.” The hiring manager happens to be a keen cyclist too, so they get along. Whether intentional or not, the hiring manager now has a favorable view of the candidate, despite the fact that no work-related evidence has been presented yet.
Red flags in the rest of the interview are quickly dismissed, and the positive characteristics of the candidate are emphasized even further.
With something like affinity bias, it is better to use blind CVs, which limit the amount of personal information one can gather, and equally to avoid asking personal questions before, during, or after the interview.
Another good example of interview bias is the horn effect.
As previously mentioned, the horn effect is the opposite of the halo effect. So how does it manifest in an interview scenario? The horn effect means the candidate came across badly once, perhaps only briefly, on one item of the interview, and the interviewer has now made up their mind. Everything else, however accurate the candidate’s answers may be, will be dismissed or downplayed.
Finally, another common type is attribution bias. Similar to confirmation bias but with a twist, it is a form of cognitive bias in which the interviewer invents reasons for facts and events concerning the candidate instead of looking at those facts objectively.
An example: the candidate shows up late to an interview, and the interviewer decides they are late because they do not care about the role. In truth, the interviewer has no way of knowing whether the candidate cares – they cannot read minds, and lateness alone does not imply a specific cause. Attribution bias fills that gap with an invented explanation.
This scenario is dangerous because the interviewer paints a picture of the candidate based on their own beliefs (attribution) and then frames the rest of the interview so that the candidate lives up to that faulty image (confirmation).
We have discussed many types of bias; it is equally important to understand that removing bias from the interview process helps interviewers correctly identify the best candidates and remain objective.
How can interviewers remove bias from the interview process? Here are some suggestions:
An interview guide is a document put together to provide structure for the interviewing process. It keeps both the interviewer and the organization consistent and compliant, which helps ensure all candidates get the same treatment at interview.
Furthermore, it helps interviewers know what to ask and in what order, providing the same candidate experience for all applicants. Whilst the questions may change based on the industry or the requirements of the job, an interview guide helps ensure candidates for a role are treated equally.
Using standardized questions and scoring criteria removes many kinds of bias from the interview process. You can develop your own scoring system, but it must then be applied consistently across every interview. Doing so brings clarity to the decision-making process, basing it solely on the information gathered in the interview rather than on other potential influences.
Interviewers must have training on equality and diversity, including how to avoid their own unconscious biases. Not only will this help minimize the impact of hidden intolerances and prejudices, it will also provide a fairer system for all the candidates being interviewed.
Where possible, keep aspects of the candidate selection anonymous. In a skills assessment, for example, you can remove the potential for bias and aid decision-making by removing details such as name, date of birth, and even ethnicity from the records. Keeping this information anonymous allows you to thoroughly assess the skills of the candidate without coloring your judgment.
We are, of course, all naturally biased, which means bias needs to be removed at different stages of the interview process, sometimes using more than one method. One approach is to use multiple interviewers, reducing the potential for any one person’s bias to affect the outcome.
One interviewer will have less bias in some areas than another, and vice versa. By widening the pool of interviewers, you allow more of a candidate’s skills to shine through.
Real-time note-taking helps minimize the chance of bias. Why? Because notes written after the meeting can be tinged with opinions or ideas about the candidate which have no place in the decision-making process. Keeping accurate records throughout the interview helps identify the skills and competencies of the candidate in a clear and concise manner, removing bias along the way.
Whilst there is always room for a bit of small talk to help the candidate feel at ease and at home with the process, making that small talk the topic of conversation can contribute towards bias in the interview.
Keeping the small talk small is essential. This is where sticking to a script and using a marking method can help structure the interview and limit the level of potential bias.
You can use an assessment matrix to evaluate a candidate against a number of different criteria without any form of bias creeping in. Built from the job description, the person specification, and the agreed weight given to each criterion, it helps the interviewer ensure that all applicants are assessed objectively, solely on their ability to do the job satisfactorily.
This will help to ensure that every hiring decision is based on reason and evidence, rather than opinion and potential discriminatory bias.
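To make the weighting idea concrete, here is a minimal sketch of how an assessment matrix can be scored. The criteria, weights, and ratings are hypothetical illustrations, not part of any specific assessment methodology:

```python
# Minimal sketch of a weighted assessment matrix.
# Criteria, weights, and ratings below are hypothetical examples.

CRITERIA_WEIGHTS = {        # agreed before any interviews take place
    "technical_skills": 0.4,
    "communication": 0.3,
    "problem_solving": 0.3,
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings (e.g. 1-5) using the agreed weights."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Every candidate is rated on the same criteria, so the comparison
# rests on recorded evidence rather than overall impressions.
candidate_a = {"technical_skills": 4, "communication": 3, "problem_solving": 5}
candidate_b = {"technical_skills": 5, "communication": 2, "problem_solving": 3}

print(round(weighted_score(candidate_a), 2))  # 4.0
print(round(weighted_score(candidate_b), 2))  # 3.5
```

Because the weights are fixed before interviewing begins, a strong first impression in one area cannot quietly inflate the other scores – which is precisely the halo effect the matrix is meant to guard against.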
Where you source your candidates from can easily influence the kind of bias at play. For example, if you only get candidates from a job board, it may be worthwhile opening up the application process through job fairs or recruiting directly from higher education establishments.
This broadens the variety of candidates your interviewers see, which in turn supports a less biased evaluation of each interviewee.
Sometimes, interviews are not the best option, and it’s easy to forget that they are not the only way of gathering information – depending on the role you are hiring for. As an example, large-scale phone interviews can be time-consuming and expensive, whilst mailed questionnaires may be the best option for getting information from a large number of people.
There are also times when, after an interview is over, the interviewee wants to go over the notes, changing or editing them to address any concerns the interviewer may have. That is not always a good option – in fact, it shouldn’t really happen, but sometimes bias can step in and allow it to. If the subject you’re addressing involves technical information, you may have the interviewee check the final result purely for accuracy.
Finally, there is something that isn’t always considered when deciding whether an interview is the best option: do you have a standardized process in place? Research by Schmidt and Hunter has consistently shown that the interview method on average explains about 8% of the variation in employee performance. This means its power to predict which employee will perform well on the job is limited.
There are a number of reasons why the interview process has such low predictive power, but the main one is that most interviews are unstructured and unstandardized. Candidates who have applied for the same job are sometimes asked questions that have nothing to do with the job, making it difficult to assess their suitability. Another reason is that interviewers are often poorly trained and rarely come to interviews prepared. This needs to change in order to bring greater consistency to the interview process and, in return, draw out the best from candidates.
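To put the 8% figure in perspective: if we take it at face value and assume it refers to variance explained (i.e. the squared validity coefficient, r²), the implied correlation between interview ratings and later job performance is only its square root – a quick back-of-the-envelope check under that assumption:

```python
import math

# Assumption: the cited 8% is variance explained (r^2), so the implied
# validity coefficient r (correlation between interview ratings and
# job performance) is its square root.
variance_explained = 0.08
validity = math.sqrt(variance_explained)

print(round(validity, 2))  # 0.28 – a weak-to-moderate correlation
```

A correlation of roughly 0.28 illustrates why unstructured interviews alone are a shaky basis for hiring decisions.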
There is, however, a solution to many of the problems interviews raise, one that works against the biases mentioned above: conduct interviews and assessments together, so that the process is standardized and better equipped to deal with bias in the first place.
Interview bias refers to how aspects of the interviewer affect participants’ responses and the interviewer’s own judgment; left unchecked, it can lead to bad decisions over who should or shouldn’t be hired.
We are all biased, and at the interview stage bias takes many different forms that can affect the selection of a great candidate. Limiting these biases in the interview process is essential to keeping the organization open to a better crop of candidates for long-term success.
The Thomas Recruitment Platform allows candidates to be evaluated fairly, freely, and without bias from the information provided interfering with their application. It can also aid your organization in developing value-based questions and analyzing results to make hiring decisions easier. Specifically, the interview guide is dynamically generated for each individual who takes our assessments, giving you suggested questions to ask in interviews based on the traits and aptitudes you define as most important for the role. Using these questions in an interview, you can really get to the heart of each interviewee and understand more about their personality and behavior than you would from a standard interview.
If you would like to know more, please speak to one of our team.