
Chapter 11. Interviewing

Introduction

Interviewing people is at the heart of qualitative research. It is not merely a way to collect data but an intrinsically rewarding activity—an interaction between two people that holds the potential for greater understanding and interpersonal development. Unlike many of our daily interactions with others that are fairly shallow and mundane, sitting down with a person for an hour or two and really listening to what they have to say is a profound and deep enterprise, one that can provide not only “data” for you, the interviewer, but also self-understanding and a feeling of being heard for the interviewee. I always approach interviewing with a deep appreciation for the opportunity it gives me to understand how other people experience the world. That said, there is not one kind of interview but many, and some of these are shallower than others. This chapter will provide you with an overview of interview techniques but with a special focus on the in-depth semistructured interview guide approach, which is the approach most widely used in social science research.

An interview can be variously defined as “a conversation with a purpose” ( Lune and Berg 2018 ) and an attempt to understand the world from the point of view of the person being interviewed: “to unfold the meaning of peoples’ experiences, to uncover their lived world prior to scientific explanations” ( Kvale 2007 ). It is a form of active listening in which the interviewer steers the conversation to subjects and topics of interest to their research but also manages to leave enough space for those interviewed to say surprising things. Achieving that balance is a tricky thing, which is why most practitioners believe interviewing is both an art and a science. In my experience as a teacher, there are some students who are “natural” interviewers (often they are introverts), but anyone can learn to conduct interviews, and everyone, even those of us who have been doing this for years, can improve their interviewing skills. This might be a good time to highlight the fact that the interview is a product between interviewer and interviewee and that this product is only as good as the rapport established between the two participants. Active listening is the key to establishing this necessary rapport.

Patton ( 2002 ) makes the argument that we use interviews because there are certain things that are not observable. In particular, “we cannot observe feelings, thoughts, and intentions. We cannot observe behaviors that took place at some previous point in time. We cannot observe situations that preclude the presence of an observer. We cannot observe how people have organized the world and the meanings they attach to what goes on in the world. We have to ask people questions about those things” ( 341 ).

Types of Interviews

There are several distinct types of interviews. Imagine a continuum (figure 11.1). On one side are unstructured conversations—the kind you have with your friends. No one is in control of those conversations, and what you talk about is often random—whatever pops into your head. There is no secret, underlying purpose to your talking—if anything, the purpose is to talk to and engage with each other, and the words you use and the things you talk about are a little beside the point. An unstructured interview is a little like this informal conversation, except that one of the parties to the conversation (you, the researcher) does have an underlying purpose, and that is to understand the other person. You are not friends speaking for no purpose, but it might feel just as unstructured to the “interviewee” in this scenario. That is one side of the continuum. On the other side are fully structured and standardized survey-type questions asked face-to-face. Here it is very clear who is asking the questions and who is answering them. This doesn’t feel like a conversation at all! A lot of people new to interviewing have this (erroneously!) in mind when they think about interviews as data collection. Somewhere in the middle of these two extreme cases is the “semistructured” interview, in which the researcher uses an “interview guide” to gently move the conversation to certain topics and issues. This is the primary form of interviewing for qualitative social scientists and will be what I refer to as interviewing for the rest of this chapter, unless otherwise specified.

Figure 11.1. Types of interviewing questions: unstructured conversations, semistructured interview, structured interview, survey questions.

Informal (unstructured conversations). This is the most “open-ended” approach to interviewing. It is particularly useful in conjunction with observational methods (see chapters 13 and 14). There are no predetermined questions. Each interview will be different. Imagine you are researching the Oregon Country Fair, an annual event in Veneta, Oregon, that includes live music, artisan craft booths, face painting, and a lot of people walking through forest paths. It’s unlikely that you will be able to get a person to sit down with you and talk intensely about a set of questions for an hour and a half. But you might be able to sidle up to several people and engage with them about their experiences at the fair. You might have a general interest in what attracts people to these events, so you could start a conversation by asking strangers why they are here or why they come back every year. That’s it. Then you have a conversation that may lead you anywhere. Maybe one person tells a long story about how their parents brought them here when they were a kid. A second person talks about how this is better than Burning Man. A third person shares their favorite traveling band. And yet another enthuses about the public library in the woods. During your conversations, you also talk about a lot of other things—the weather, the utilikilts for sale, the fact that a favorite food booth has disappeared. It’s all good. You may not be able to record these conversations. Instead, you might jot down notes on the spot and then, when you have the time, write down as much as you can remember about the conversations in long fieldnotes. Later, you will have to sit down with these fieldnotes and try to make sense of all the information (see chapters 18 and 19).

Interview guide ( semistructured interview ). This is the primary type employed by social science qualitative researchers. The researcher creates an “interview guide” in advance, which she uses in every interview. In theory, every person interviewed is asked the same questions. In practice, every person interviewed is asked mostly the same topics but not always the same questions, as the whole point of a “guide” is that it guides the direction of the conversation but does not command it. The guide is typically between five and ten questions or question areas, sometimes with suggested follow-ups or prompts . For example, one question might be “What was it like growing up in Eastern Oregon?” with prompts such as “Did you live in a rural area? What kind of high school did you attend?” to help the conversation develop. These interviews generally take place in a quiet place (not a busy walkway during a festival) and are recorded. The recordings are transcribed, and those transcriptions then become the “data” that is analyzed (see chapters 18 and 19). The conventional length of one of these types of interviews is between one hour and two hours, optimally ninety minutes. Less than one hour doesn’t allow for much development of questions and thoughts, and two hours (or more) is a lot of time to ask someone to sit still and answer questions. If you have a lot of ground to cover, and the person is willing, I highly recommend two separate interview sessions, with the second session being slightly shorter than the first (e.g., ninety minutes the first day, sixty minutes the second). There are lots of good reasons for this, but the most compelling one is that this allows you to listen to the first day’s recording and catch anything interesting you might have missed in the moment and so develop follow-up questions that can probe further. This also allows the person being interviewed to have some time to think about the issues raised in the interview and go a little deeper with their answers.

Standardized questionnaire with open responses ( structured interview ). This is the type of interview a lot of people have in mind when they hear “interview”: a researcher comes to your door with a clipboard and proceeds to ask you a series of questions. These questions are all the same whoever answers the door; they are “standardized.” Both the wording and the exact order are important, as people’s responses may vary depending on how and when a question is asked. These are qualitative only in that the questions allow for “open-ended responses”: people can say whatever they want rather than select from a predetermined menu of responses. For example, a survey I collaborated on included this open-ended response question: “How does class affect one’s career success in sociology?” Some of the answers were simply one word long (e.g., “debt”), and others were long statements with stories and personal anecdotes. It is possible to be surprised by the responses. Although it’s a stretch to call this kind of questioning a conversation, it does allow the person answering the question some degree of freedom in how they answer.

Survey questionnaire with closed responses (not an interview!). Standardized survey questions with specific answer options (e.g., closed responses) are not really interviews at all, and they do not generate qualitative data. For example, if we included five options for the question “How does class affect one’s career success in sociology?”—(1) debt, (2) social networks, (3) alienation, (4) family doesn’t understand, (5) type of grad program—we leave no room for surprises at all. Instead, we would most likely look at patterns around these responses, thinking quantitatively rather than qualitatively (e.g., using regression analysis techniques, we might find that working-class sociologists were twice as likely to bring up alienation). It can sometimes be confusing for new students because the very same survey can include both closed-ended and open-ended questions. The key is to think about how these will be analyzed and to what level surprises are possible. If your plan is to turn all responses into a number and make predictions about correlations and relationships, you are no longer conducting qualitative research. This is true even if you are conducting this survey face-to-face with a real live human. Closed-response questions are not conversations of any kind, purposeful or not.
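
To make the contrast concrete, here is a minimal sketch of what treating closed responses quantitatively looks like. It is only an illustration: the data, the column names (class_background, career_obstacle), and the choice of Python with the pandas library are all hypothetical, not part of the survey described above.

```python
import pandas as pd

# Hypothetical closed-response data: one row per survey respondent.
# The five answer options mirror the example choices in the text.
responses = pd.DataFrame({
    "class_background": ["working class", "middle class", "working class",
                         "middle class", "working class", "middle class"],
    "career_obstacle": ["alienation", "social networks", "debt",
                        "debt", "alienation", "type of grad program"],
})

# The analysis looks for patterns across categories, not meaning:
# a simple cross-tabulation of who selected which option.
print(pd.crosstab(responses["class_background"], responses["career_obstacle"]))
```

Nothing in that table can surprise you in the qualitative sense; it can only tell you how often each predetermined option was chosen, which is exactly the kind of pattern-finding described above.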

In summary, the semistructured interview guide approach is the predominant form of interviewing for social science qualitative researchers because it allows a high degree of freedom of responses from those interviewed (thus allowing for novel discoveries) while still maintaining some connection to a research question area or topic of interest. The rest of the chapter assumes the employment of this form.

Creating an Interview Guide

Your interview guide is the instrument used to bridge your research question(s) and what the people you are interviewing want to tell you. Unlike a standardized questionnaire, the questions actually asked do not need to be exactly what you have written down in your guide. The guide is meant to create space for those you are interviewing to talk about the phenomenon of interest, but sometimes you are not even sure what that phenomenon is until you start asking questions. A priority in creating an interview guide is to ensure it offers space. One of the worst mistakes is to create questions that are so specific that the person answering them will not stray. Relatedly, questions that sound “academic” will shut down a lot of respondents. A good interview guide invites respondents to talk about what is important to them rather than making them feel like they are performing or being evaluated by you.

Good interview questions should not sound like your “research question” at all. For example, let’s say your research question is “How do patriarchal assumptions influence men’s understanding of climate change and responses to climate change?” It would be worse than unhelpful to ask a respondent, “How do your assumptions about the role of men affect your understanding of climate change?” You need to unpack this into manageable nuggets that pull your respondent into the area of interest without leading him anywhere. You could start by asking him what he thinks about climate change in general. Or, even better, whether he has any concerns about heatwaves or increased tornadoes or polar icecaps melting. Once he starts talking about that, you can ask follow-up questions that bring in issues around gendered roles, perhaps asking if he is married (to a woman) and whether his wife shares his thoughts and, if not, how they negotiate that difference. The fact is, you won’t really know the right questions to ask until he starts talking.

There are several distinct types of questions that can be used in your interview guide, either as main questions or as follow-up probes. If you remember that the point is to leave space for the respondent, you will craft a much more effective interview guide! You will also want to think about the place of time in both the questions themselves (past, present, future orientations) and the sequencing of the questions.

Researcher Note

Suggestion : As you read the next three sections (types of questions, temporality, question sequence), have in mind a particular research question, and try to draft questions and sequence them in a way that opens space for a discussion that helps you answer your research question.

Type of Questions

Experience and behavior questions ask about what a respondent does regularly (their behavior) or has done (their experience). These are relatively easy questions for people to answer because they appear more “factual” and less subjective. This makes them good opening questions. For the study on climate change above, you might ask, “Have you ever experienced an unusual weather event? What happened?” Or “You said you work outside? What is a typical summer workday like for you? How do you protect yourself from the heat?”

Opinion and values questions , in contrast, ask questions that get inside the minds of those you are interviewing. “Do you think climate change is real? Who or what is responsible for it?” are two such questions. Note that you don’t have to literally ask, “What is your opinion of X?” but you can find a way to ask the specific question relevant to the conversation you are having. These questions are a bit trickier to ask because the answers you get may depend in part on how your respondent perceives you and whether they want to please you or not. We’ve talked a fair amount about being reflective. Here is another place where this comes into play. You need to be aware of the effect your presence might have on the answers you are receiving and adjust accordingly. If you are a woman who is perceived as liberal asking a man who identifies as conservative about climate change, there is a lot of subtext that can be going on in the interview. There is no one right way to resolve this, but you must at least be aware of it.

Feeling questions are questions that ask respondents to draw on their emotional responses. It’s pretty common for academic researchers to forget that we have bodies and emotions, but people’s understandings of the world often operate at this affective level, sometimes unconsciously or barely consciously. It is a good idea to include questions that leave space for respondents to remember, imagine, or relive emotional responses to particular phenomena. “What was it like when you heard your cousin’s house burned down in that wildfire?” doesn’t explicitly use any emotion words, but it allows your respondent to remember what was probably a pretty emotional day. And if they respond in an emotionally neutral way, that is pretty interesting data too. Note that asking someone “How do you feel about X” is not always going to evoke an emotional response, as they might simply turn around and respond with “I think that…” It is better to craft a question that actually pushes the respondent into the affective category. This might be a specific follow-up to an experience and behavior question—for example, “You just told me about your daily routine during the summer heat. Do you worry it is going to get worse?” or “Have you ever been afraid it will be too hot to get your work accomplished?”

Knowledge questions ask respondents what they actually know about something factual. We have to be careful when we ask these types of questions so that respondents do not feel like we are evaluating them (which would shut them down), but, for example, it is helpful to know when you are having a conversation about climate change that your respondent does in fact know that unusual weather events have increased and that these have been attributed to climate change! Asking these questions can set the stage for deeper questions and can ensure that the conversation makes the same kind of sense to both participants. For example, a conversation about political polarization can be put back on track once you realize that the respondent doesn’t really have a clear understanding that there are two parties in the US. Instead of asking a series of questions about Republicans and Democrats, you might shift your questions to talk more generally about political disagreements (e.g., “people against abortion”). And sometimes what you do want to know is the level of knowledge about a particular program or event (e.g., “Are you aware you can discharge your student loans through the Public Service Loan Forgiveness program?”).

Sensory questions call on all senses of the respondent to capture deeper responses. These are particularly helpful in sparking memory. “Think back to your childhood in Eastern Oregon. Describe the smells, the sounds…” Or you could use these questions to help a person access the full experience of a setting they customarily inhabit: “When you walk through the doors to your office building, what do you see? Hear? Smell?” As with feeling questions , these questions often supplement experience and behavior questions . They are another way of allowing your respondent to report fully and deeply rather than remain on the surface.

Creative questions employ illustrative examples, suggested scenarios, or simulations to get respondents to think more deeply about an issue, topic, or experience. There are many options here. In The Trouble with Passion , Erin Cech ( 2021 ) provides a scenario in which “Joe” is trying to decide whether to stay at his decent but boring computer job or follow his passion by opening a restaurant. She asks respondents, “What should Joe do?” Their answers illuminate the attraction of “passion” in job selection. In my own work, I have used a news story about an upwardly mobile young man who no longer has time to see his mother and sisters to probe respondents’ feelings about the costs of social mobility. Jessi Streib and Betsy Leondar-Wright have used single-page cartoon “scenes” to elicit evaluations of potential racial discrimination, sexual harassment, and classism. Barbara Sutton ( 2010 ) has employed lists of words (“strong,” “mother,” “victim”) on notecards she fans out and asks her female respondents to select and discuss.

Background/Demographic Questions

You most definitely will want to know more about the person you are interviewing in terms of conventional demographic information, such as age, race, gender identity, occupation, and educational attainment. These are not questions that normally open up inquiry. [1] For this reason, my practice has been to include a separate “demographic questionnaire” sheet that I ask each respondent to fill out at the conclusion of the interview. Only include those aspects that are relevant to your study. For example, if you are not exploring religion or religious affiliation, do not include questions about a person’s religion on the demographic sheet. See the example provided at the end of this chapter.

Temporality

Any type of question can have a past, present, or future orientation. For example, if you are asking a behavior question about workplace routine, you might ask the respondent to talk about past work, present work, and ideal (future) work. Similarly, if you want to understand how people cope with natural disasters, you might ask your respondent how they felt then during the wildfire and now in retrospect and whether and to what extent they have concerns for future wildfire disasters. It’s a relatively simple suggestion—don’t forget to ask about past, present, and future—but it can have a big impact on the quality of the responses you receive.

Question Sequence

Having a list of good questions or good question areas is not enough to make a good interview guide. You will want to pay attention to the order in which you ask your questions. Even though any one respondent can derail this order (perhaps by jumping to answer a question you haven’t yet asked), a good advance plan is always helpful. When thinking about sequence, remember that your goal is to get your respondent to open up to you and to say things that might surprise you. To establish rapport, it is best to start with nonthreatening questions. Asking about the present is often the safest place to begin, followed by the past (they have to know you a little bit to get there), and lastly, the future (talking about hopes and fears requires the most rapport). To allow for surprises, it is best to move from very general questions to more particular questions only later in the interview. This ensures that respondents have the freedom to bring up the topics that are relevant to them rather than feel like they are constrained to answer you narrowly. For example, refrain from asking about particular emotions until these have come up previously—don’t lead with them. Often, your more particular questions will emerge only during the course of the interview, tailored to what is emerging in conversation.

Once you have a set of questions, read through them aloud and imagine you are being asked the same questions. Does the set of questions have a natural flow? Would you be willing to answer the very first question to a total stranger? Does your sequence establish facts and experiences before moving on to opinions and values? Did you include prefatory statements, where necessary; transitions; and other announcements? These can be as simple as “Hey, we talked a lot about your experiences as a barista while in college.… Now I am turning to something completely different: how you managed friendships in college.” That is an abrupt transition, but it has been softened by your acknowledgment of that.

Probes and Flexibility

Once you have the interview guide, you will also want to leave room for probes and follow-up questions. As in the sample probe included here, you can write out the obvious probes and follow-up questions in advance. You might not need them, as your respondent might anticipate them and include full responses to the original question. Or you might need to tailor them to how your respondent answered the question. Some common probes and follow-up questions include asking for more details (When did that happen? Who else was there?), asking for elaboration (Could you say more about that?), asking for clarification (Does that mean what I think it means or something else? I understand what you mean, but someone else reading the transcript might not), and asking for contrast or comparison (How did this experience compare with last year’s event?). “Probing is a skill that comes from knowing what to look for in the interview, listening carefully to what is being said and what is not said, and being sensitive to the feedback needs of the person being interviewed” ( Patton 2002:374 ). It takes work! And energy. I and many other interviewers I know report feeling emotionally and even physically drained after conducting an interview. You are tasked with active listening and rearranging your interview guide as needed on the fly. If you only ask the questions written down in your interview guide with no deviations, you are doing it wrong. [2]

The Final Question

Every interview guide should include a very open-ended final question that allows for the respondent to say whatever it is they have been dying to tell you but you’ve forgotten to ask. About half the time they are tired too and will tell you they have nothing else to say. But incredibly, some of the most honest and complete responses take place here, at the end of a long interview. You have to realize that the person being interviewed is often discovering things about themselves as they talk to you and that this process of discovery can lead to new insights for them. Making space at the end is therefore crucial. Be sure you convey that you actually do want them to tell you more, so that the offer of “anything else?” is not read as an empty convention where the polite response is no. Here is where you can pull from that active listening and tailor the final question to the particular person. For example, “I’ve asked you a lot of questions about what it was like to live through that wildfire. I’m wondering if there is anything I’ve forgotten to ask, especially because I haven’t had that experience myself” is a much more inviting final question than “Great. Anything you want to add?” It’s also helpful to convey to the person that you have the time to listen to their full answer, even if the allotted time is nearly at an end. After all, there are no more questions to ask, so the respondent knows exactly how much time is left. Do them the courtesy of listening to them!

Conducting the Interview

Once you have your interview guide, you are on your way to conducting your first interview. I always practice my interview guide with a friend or family member. I do this even when the questions don’t make perfect sense for them, as it still helps me realize which questions make no sense, are poorly worded (too academic), or don’t follow sequentially. I also practice the routine I will use for interviewing, which goes something like this:

  • Introduce myself and reintroduce the study
  • Provide consent form and ask them to sign and retain/return copy
  • Ask if they have any questions about the study before we begin
  • Ask if I can begin recording
  • Ask questions (from interview guide)
  • Turn off the recording device
  • Ask if they are willing to fill out my demographic questionnaire
  • Collect questionnaire and, without looking at the answers, place in same folder as signed consent form
  • Thank them and depart

A note on remote interviewing: Interviews have traditionally been conducted face-to-face in a private or quiet public setting. You don’t want a lot of background noise, as this will make transcriptions difficult. During the recent global pandemic, many interviewers, myself included, learned the benefits of interviewing remotely. Although face-to-face is still preferable for many reasons, Zoom interviewing is not a bad alternative, and it does allow more interviews across great distances. Zoom also includes automatic transcription, which significantly cuts down on the time it normally takes to convert our conversations into “data” to be analyzed. These automatic transcriptions are not perfect, however, and you will still need to listen to the recording and clarify and clean up the transcription. Nor do automatic transcriptions include notations of body language or change of tone, which you may want to include. When interviewing remotely, you will want to collect the consent form before you meet: ask them to read, sign, and return it as an email attachment. I think it is better to ask for the demographic questionnaire after the interview, but because some respondents may never return it then, it is probably best to ask for this at the same time as the consent form, in advance of the interview.

What should you bring to the interview? I would recommend bringing two copies of the consent form (one for you and one for the respondent), a demographic questionnaire, a manila folder in which to place the signed consent form and filled-out demographic questionnaire, a printed copy of your interview guide (I print with three-inch right margins so I can jot down notes on the page next to relevant questions), a pen, a recording device, and water.

After the interview, you will want to secure the signed consent form in a locked filing cabinet (if in print) or a password-protected folder on your computer. Using Excel or a similar program that allows tables/spreadsheets, create an identifying number for your interview that links to the consent form without using the name of your respondent. For example, let’s say that I conduct interviews with US politicians, and the first person I meet with is George W. Bush. I will assign the transcription the number “INT#001” and add it to the signed consent form. [3] The signed consent form goes into a locked filing cabinet, and I never use the name “George W. Bush” again. I take the information from the demographic sheet, open my Excel spreadsheet, and add the relevant information in separate columns for the row INT#001: White, male, Republican. When I interview Bill Clinton as my second interview, I include a second row: INT#002: White, male, Democrat. And so on. The only link to the actual name of the respondent and this information is the fact that the consent form (unavailable to anyone but me) has stamped on it the interview number.
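
As a minimal sketch of this kind of record keeping (an illustration only, not the author’s actual workflow; the file name and column headers are hypothetical), the same linkage can be kept in a small CSV file, with the interview number as the only identifier that appears alongside the demographic information:

```python
import csv

# Hypothetical demographic log: the interview ID is the only identifier here.
# Respondent names never appear in this file; they exist only on the signed
# consent forms, which are stored separately and securely.
rows = [
    {"interview_id": "INT#001", "race": "White", "gender": "male", "party": "Republican"},
    {"interview_id": "INT#002", "race": "White", "gender": "male", "party": "Democrat"},
]

with open("demographics_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["interview_id", "race", "gender", "party"])
    writer.writeheader()      # column headers
    writer.writerows(rows)    # one row per completed interview
```

Whether you keep this log in Excel, Google Sheets, or a plain CSV matters less than the design choice it illustrates: the only link between a name and a row is the interview number stamped on the consent form.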

Many students get very nervous before their first interview. Actually, many of us are always nervous before the interview! But do not worry—this is normal, and it does pass. Chances are, you will be pleasantly surprised at how comfortable it begins to feel. These “purposeful conversations” are often a delight for both participants. This is not to say that things never go wrong. I often have my students practice several “bad scenarios” (e.g., a respondent that you cannot get to open up; a respondent who is too talkative and dominates the conversation, steering it away from the topics you are interested in; emotions that completely take over; or shocking disclosures you are ill-prepared to handle), but most of the time, things go quite well. Be prepared for the unexpected, but know that the reason interviews are so popular as a technique of data collection is that they are usually richly rewarding for both participants.

One thing that I stress to my methods students and remind myself about is that interviews are still conversations between people. If there’s something you might feel uncomfortable asking someone about in a “normal” conversation, you will likely also feel a bit of discomfort asking it in an interview. Maybe more importantly, your respondent may feel uncomfortable. Social research—especially about inequality—can be uncomfortable. And it’s easy to slip into an abstract, intellectualized, or removed perspective as an interviewer. This is one reason trying out interview questions is important. Another is that sometimes the question sounds good in your head but doesn’t work as well out loud in practice. I learned this the hard way when a respondent asked me how I would answer the question I had just posed, and I realized that not only did I not really know how I would answer it, but I also wasn’t quite as sure I knew what I was asking as I had thought.

—Elizabeth M. Lee, Associate Professor of Sociology at Saint Joseph’s University, author of Class and Campus Life , and co-author of Geographies of Campus Inequality

How Many Interviews?

Your research design has included a targeted number of interviews and a recruitment plan (see chapter 5). Follow your plan, but remember that “ saturation ” is your goal. You interview as many people as you can until you reach a point at which you are no longer surprised by what they tell you. This means not that no one after your first twenty interviews will have surprising, interesting stories to tell you but rather that the picture you are forming about the phenomenon of interest to you from a research perspective has come into focus, and none of the interviews are substantially refocusing that picture. That is when you should stop collecting interviews. Note that to know when you have reached this, you will need to read your transcripts as you go. More about this in chapters 18 and 19.

Your Final Product: The Ideal Interview Transcript

A good interview transcript will demonstrate a subtly controlled conversation by the skillful interviewer. In general, you want to see replies that are about one paragraph long, not short sentences and not running on for several pages. Although it is sometimes necessary to follow respondents down tangents, it is also often necessary to pull them back to the questions that form the basis of your research study. This is not really a free conversation, although it may feel like that to the person you are interviewing.

Final Tips from an Interview Master

Annette Lareau is arguably one of the masters of the trade. In Listening to People, she provides several guidelines for good interviews and then offers a detailed example of an interview gone wrong and how it could be addressed (please see the “Further Readings” at the end of this chapter). Here is an abbreviated version of her set of guidelines:

  • Interview respondents who are experts on the subjects of most interest to you (as a corollary, don’t ask people about things they don’t know).
  • Listen carefully and talk as little as possible.
  • Keep in mind what you want to know and why you want to know it.
  • Be a proactive interviewer (subtly guide the conversation).
  • Assure respondents that there aren’t any right or wrong answers.
  • Use the respondent’s own words to probe further (this both allows you to accurately identify what you heard and pushes the respondent to explain further).
  • Reuse effective probes (don’t reinvent the wheel as you go—if repeating the words back works, do it again and again).
  • Focus on learning the subjective meanings that events or experiences have for a respondent.
  • Don’t be afraid to ask a question that draws on your own knowledge (unlike trial lawyers, who are trained never to ask a question for which they don’t already know the answer, sometimes it’s worth it to ask risky questions based on your hypotheses or just plain hunches).
  • Keep thinking while you are listening (so difficult…and important).
  • Return to a theme raised by a respondent if you want further information.
  • Be mindful of power inequalities (and never ever coerce a respondent to continue the interview if they want out).
  • Take control with overly talkative respondents.
  • Expect overly succinct responses, and develop strategies for probing further.
  • Balance digging deep and moving on.
  • Develop a plan to deflect questions (e.g., let them know you are happy to answer any questions at the end of the interview, but you don’t want to take time away from them now).
  • At the end, check to see whether you have asked all your questions. You don’t always have to ask everyone the same set of questions, but if there is a big area you have forgotten to cover, now is the time to recover ( Lareau 2021:93–103 ).

Sample: Demographic Questionnaire

ASA Taskforce on First-Generation and Working-Class Persons in Sociology – Class Effects on Career Success

Supplementary Demographic Questionnaire

Thank you for your participation in this interview project. We would like to collect a few pieces of key demographic information from you to supplement our analyses. Your answers to these questions will be kept confidential and stored by ID number. All of your responses here are entirely voluntary!

What best captures your race/ethnicity? (please check any/all that apply)

  • White (Non Hispanic/Latina/o/x)
  • Black or African American
  • Hispanic, Latino/a/x, or Spanish
  • Asian or Asian American
  • American Indian or Alaska Native
  • Middle Eastern or North African
  • Native Hawaiian or Pacific Islander
  • Other : (Please write in: ________________)

What is your current position?

  • Grad Student
  • Full Professor

Please check any and all of the following that apply to you:

  • I identify as a working-class academic
  • I was the first in my family to graduate from college
  • I grew up poor

What best reflects your gender?

  • Transgender female/Transgender woman
  • Transgender male/Transgender man
  • Gender queer/ Gender nonconforming

Anything else you would like us to know about you?

Example: Interview Guide

In this example, follow-up prompts are italicized. Note the sequence of questions. That second question often elicits an entire life history, answering several later questions in advance.

Introduction Script/Question

Thank you for participating in our survey of ASA members who identify as first-generation or working-class.  As you may have heard, ASA has sponsored a taskforce on first-generation and working-class persons in sociology and we are interested in hearing from those who so identify.  Your participation in this interview will help advance our knowledge in this area.

  • The first thing we would like to ask you is why you have volunteered to be part of this study. What does it mean to you to be first-gen or working class? Why were you willing to be interviewed?
  • How did you decide to become a sociologist?
  • Can you tell me a little bit about where you grew up? (prompts: What did your parent(s) do for a living? What kind of high school did you attend?)
  • Has this identity been salient to your experience? (how? How much?)
  • How welcoming was your grad program? Your first academic employer?
  • Why did you decide to pursue sociology at the graduate level?
  • Did you experience culture shock in college? In graduate school?
  • Has your FGWC status shaped how you’ve thought about where you went to school? debt? etc?
  • Were you mentored? How did this work (not work)?  How might it?
  • What did you consider when deciding where to go to grad school? Where to apply for your first position?
  • What, to you, is a mark of career success? Have you achieved that success?  What has helped or hindered your pursuit of success?
  • Do you think sociology, as a field, cares about prestige?
  • Let’s talk a little bit about intersectionality. How does being first-gen/working class work alongside other identities that are important to you?
  • What do your friends and family think about your career? Have you had any difficulty relating to family members or past friends since becoming highly educated?
  • Do you have any debt from college/grad school? Are you concerned about this?  Could you explain more about how you paid for college/grad school?  (here, include assistance from family, fellowships, scholarships, etc.)
  • (You’ve mentioned issues or obstacles you had because of your background.) What could have helped?  Or, who or what did? Can you think of fortuitous moments in your career?
  • Do you have any regrets about the path you took?
  • Is there anything else you would like to add? Anything that the Taskforce should take note of, that we did not ask you about here?

Further Readings

Britten, Nicky. 1995. “Qualitative Interviews in Medical Research.” BMJ: British Medical Journal 311(6999):251–253. A good basic overview of interviewing, particularly useful for students of public health and medical research generally.

Corbin, Juliet, and Janice M. Morse. 2003. “The Unstructured Interactive Interview: Issues of Reciprocity and Risks When Dealing with Sensitive Topics.” Qualitative Inquiry 9(3):335–354. Weighs the potential benefits and harms of conducting interviews on topics that may cause emotional distress. Argues that the researcher’s skills and code of ethics should ensure that the interviewing process provides more of a benefit to both participant and researcher than a harm to the former.

Gerson, Kathleen, and Sarah Damaske. 2020. The Science and Art of Interviewing . New York: Oxford University Press. A useful guidebook/textbook for both undergraduates and graduate students, written by sociologists.

Kvale, Steinar. 2007. Doing Interviews. London: SAGE. An easy-to-follow guide to conducting and analyzing interviews, written by a psychologist.

Lamont, Michèle, and Ann Swidler. 2014. “Methodological Pluralism and the Possibilities and Limits of Interviewing.” Qualitative Sociology 37(2):153–171. Written as a response to various debates surrounding the relative value of interview-based studies and ethnographic studies defending the particular strengths of interviewing. This is a must-read article for anyone seriously engaging in qualitative research!

Pugh, Allison J. 2013. “What Good Are Interviews for Thinking about Culture? Demystifying Interpretive Analysis.” American Journal of Cultural Sociology 1(1):42–68. Another defense of interviewing written against those who champion ethnographic methods as superior, particularly in the area of studying culture. A classic.

Rapley, Timothy John. 2001. “The ‘Artfulness’ of Open-Ended Interviewing: Some Considerations in Analyzing Interviews.” Qualitative Research 1(3):303–323. Argues for the importance of “local context” of data production (the relationship built between interviewer and interviewee, for example) in properly analyzing interview data.

Weiss, Robert S. 1995. Learning from Strangers: The Art and Method of Qualitative Interview Studies . New York: Simon and Schuster. A classic and well-regarded textbook on interviewing. Because Weiss has extensive experience conducting surveys, he contrasts the qualitative interview with the survey questionnaire well; particularly useful for those trained in the latter.

  • I say “normally” because how people understand their various identities can itself be an expansive topic of inquiry. Here, I am merely talking about collecting otherwise unexamined demographic data, similar to how we ask people to check boxes on surveys. ↵
  • Again, this applies to “semistructured in-depth interviewing.” When conducting standardized questionnaires, you will want to ask each question exactly as written, without deviations! ↵
  • I always include “INT” in the number because I sometimes have other kinds of data with their own numbering: FG#001 would mean the first focus group, for example. I also always include three-digit spaces, as this allows for up to 999 interviews (or, more realistically, allows for me to interview up to one hundred persons without having to reset my numbering system). ↵

A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews; see also semistructured interview, structured interview, and unstructured interview.

A document listing key questions and question areas for use during an interview.  It is used most often for semi-structured interviews.  A good interview guide may have no more than ten primary questions for two hours of interviewing, but these ten questions will be supplemented by probes and relevant follow-ups throughout the interview.  Most IRBs require the inclusion of the interview guide in applications for review.  See also interview and  semi-structured interview .

A data-collection method that relies on casual, conversational, and informal interviewing.  Despite its apparent conversational nature, the researcher usually has a set of particular questions or question areas in mind but allows the interview to unfold spontaneously.  This is a common data-collection technique among ethnographers.  Compare to the semi-structured or in-depth interview .

A form of interview that follows a standard guide of questions asked, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview.  The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences.  It is sometimes referred to as an “in-depth” interview.  See also interview and  interview guide .

The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context. Observational methods are the key tools employed by ethnographers and Grounded Theory researchers.

Follow-up questions used in a semi-structured interview  to elicit further elaboration.  Suggested prompts can be included in the interview guide  to be used/deployed depending on how the initial question was answered or if the topic of the prompt does not emerge spontaneously.

A form of interview that follows a strict set of questions, asked in a particular order, for all interview subjects. The questions are also the kind that elicit short answers, and the data is more “informative” than probing. This is often used in mixed-methods studies, accompanying a survey instrument. Because there is no room for nuance or the exploration of meaning in structured interviews, qualitative researchers tend to employ semi-structured interviews instead. See also interview.

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.

An interview variant in which a person’s life story is elicited in a narrative form.  Turning points and key themes are established by the researcher and used as data points for further analysis.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

How to carry out great interviews in qualitative research.

11 min read An interview is one of the most versatile methods used in qualitative research. Here’s what you need to know about conducting great qualitative interviews.

What is a qualitative research interview?

Qualitative research interviews are a mainstay among q ualitative research techniques, and have been in use for decades either as a primary data collection method or as an adjunct to a wider research process. A qualitative research interview is a one-to-one data collection session between a researcher and a participant. Interviews may be carried out face-to-face, over the phone or via video call using a service like Skype or Zoom.

There are three main types of qualitative research interview – structured, unstructured or semi-structured.

  • Structured interviews Structured interviews are based around a schedule of predetermined questions and talking points that the researcher has developed. At their most rigid, structured interviews may have a precise wording and question order, meaning that they can be replicated across many different interviewers and participants with relatively consistent results.
  • Unstructured interviews Unstructured interviews have no predetermined format, although that doesn’t mean they’re ad hoc or unplanned. An unstructured interview may outwardly resemble a normal conversation, but the interviewer will in fact be working carefully to make sure the right topics are addressed during the interaction while putting the participant at ease with a natural manner.
  • Semi-structured interviews Semi-structured interviews are the most common type of qualitative research interview, combining the informality and rapport of an unstructured interview with the consistency and replicability of a structured interview. The researcher will come prepared with questions and topics, but will not need to stick to precise wording. This blended approach can work well for in-depth interviews.

Free eBook: The qualitative research design handbook

What are the pros and cons of interviews in qualitative research?

As a qualitative research method interviewing is hard to beat, with applications in social research, market research, and even basic and clinical pharmacy. But like any aspect of the research process, it’s not without its limitations. Before choosing qualitative interviewing as your research method, it’s worth weighing up the pros and cons.

Pros of qualitative interviews:

  • provide in-depth information and context
  • can be used effectively when their are low numbers of participants
  • provide an opportunity to discuss and explain questions
  • useful for complex topics
  • rich in data – in the case of in-person or video interviews , the researcher can observe body language and facial expression as well as the answers to questions

Cons of qualitative interviews:

  • can be time-consuming to carry out
  • costly when compared to some other research methods
  • because of time and cost constraints, they often limit you to a small number of participants
  • difficult to standardize your data across different researchers and participants unless the interviews are very tightly structured
  • As the Open University of Hong Kong notes, qualitative interviews may take an emotional toll on interviewers

Qualitative interview guides

Semi-structured interviews are based on a qualitative interview guide, which acts as a road map for the researcher. While conducting interviews, the researcher can use the interview guide to help them stay focused on their research questions and make sure they cover all the topics they intend to.

An interview guide may include a list of questions written out in full, or it may be a set of bullet points grouped around particular topics. It can prompt the interviewer to dig deeper and ask probing questions during the interview if appropriate.

Consider writing out the project’s research question at the top of your interview guide, ahead of the interview questions. This may help you steer the interview in the right direction if it threatens to head off on a tangent.

what is interview schedule in qualitative research

Avoid bias in qualitative research interviews

According to Duke University , bias can create significant problems in your qualitative interview.

  • Acquiescence bias is common to many qualitative methods, including focus groups. It occurs when the participant feels obliged to say what they think the researcher wants to hear. This can be especially problematic when there is a perceived power imbalance between participant and interviewer. To counteract this, Duke University’s experts recommend emphasizing the participant’s expertise in the subject being discussed, and the value of their contributions.
  • Interviewer bias is when the interviewer’s own feelings about the topic come to light through hand gestures, facial expressions or turns of phrase. Duke’s recommendation is to stick to scripted phrases where this is an issue, and to make sure researchers become very familiar with the interview guide or script before conducting interviews, so that they can hone their delivery.

What kinds of questions should you ask in a qualitative interview?

The interview questions you ask need to be carefully considered both before and during the data collection process. As well as considering the topics you’ll cover, you will need to think carefully about the way you ask questions.

Open-ended interview questions – which cannot be answered with a ‘yes’ ‘no’ or ‘maybe’ – are recommended by many researchers as a way to pursue in depth information.

An example of an open-ended question is “What made you want to move to the East Coast?” This will prompt the participant to consider different factors and select at least one. Having thought about it carefully, they may give you more detailed information about their reasoning.

A closed-ended question , such as “Would you recommend your neighborhood to a friend?” can be answered without too much deliberation, and without giving much information about personal thoughts, opinions and feelings.

Follow-up questions can be used to delve deeper into the research topic and to get more detail from open-ended questions. Examples of follow-up questions include:

  • What makes you say that?
  • What do you mean by that?
  • Can you tell me more about X?
  • What did/does that mean to you?

As well as avoiding closed-ended questions, be wary of leading questions. As with other qualitative research techniques such as surveys or focus groups, these can introduce bias in your data. Leading questions presume a certain point of view shared by the interviewer and participant, and may even suggest a foregone conclusion.

An example of a leading question might be: “You moved to New York in 1990, didn’t you?” In answering the question, the participant is much more likely to agree than disagree. This may be down to acquiescence bias or a belief that the interviewer has checked the information and already knows the correct answer.

Other leading questions involve adjectival phrases or other wording that introduces negative or positive connotations about a particular topic. An example of this kind of leading question is: “Many employees dislike wearing masks to work. How do you feel about this?” It presumes a positive opinion and the participant may be swayed by it, or not want to contradict the interviewer.

Harvard University’s guidelines for qualitative interview research add that you shouldn’t be afraid to ask embarrassing questions – “if you don’t ask, they won’t tell.” Bear in mind though that too much probing around sensitive topics may cause the interview participant to withdraw. The Harvard guidelines recommend leaving sensitive questions til the later stages of the interview when a rapport has been established.

More tips for conducting qualitative interviews

Observing a participant’s body language can give you important data about their thoughts and feelings. It can also help you decide when to broach a topic, and whether to use a follow-up question or return to the subject later in the interview.

Be conscious that the participant may regard you as the expert, not themselves. In order to make sure they express their opinions openly, use active listening skills like verbal encouragement and paraphrasing and clarifying their meaning to show how much you value what they are saying.

Remember that part of the goal is to leave the interview participant feeling good about volunteering their time and their thought process to your research. Aim to make them feel empowered, respected and heard.

Unstructured interviews can demand a lot of a researcher, both cognitively and emotionally. Be sure to leave time in between in-depth interviews when scheduling your data collection to make sure you maintain the quality of your data, as well as your own well-being.

Recording and transcribing interviews

Historically, recording qualitative research interviews and then transcribing the conversation manually would have represented a significant part of the cost and time involved in research projects that collect qualitative data.

Fortunately, researchers now have access to digital recording tools, and even speech-to-text technology that can automatically transcribe interview data using AI and machine learning. This type of tool can also be used to capture qualitative data from other methods, such as focus groups, making this kind of social research or market research much less time-consuming.
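As a rough illustration of what automated transcription can look like in practice, here is a minimal sketch assuming the open-source Whisper model is installed and the recording is saved as interview_01.mp3 (a hypothetical file name); it is not tied to any particular vendor’s product.

```python
# Minimal transcription sketch, assuming the openai-whisper package is installed
# (pip install openai-whisper) and "interview_01.mp3" is a hypothetical recording.
import whisper

model = whisper.load_model("base")              # small, general-purpose model
result = model.transcribe("interview_01.mp3")   # returns a dict with the full text

# Save the raw transcript so it can be checked against the audio and corrected.
with open("interview_01_transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```

Automatically generated transcripts still need to be checked against the recording before coding, since names, accents and overlapping speech are common sources of error.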


Data analysis

Qualitative interview data is unstructured, rich in content and difficult to analyze without the appropriate tools. Fortunately, machine learning and AI can once again make things faster and easier when you use qualitative methods like the research interview.

Text analysis tools and natural language processing software can ‘read’ your transcripts and voice data and identify patterns and trends across large volumes of text or speech. They can also perform sentiment analysis (https://www.qualtrics.com/experience-management/research/sentiment-analysis/), which assesses overall trends in opinion and provides a summary of how participants are feeling.


Another feature of text analysis tools is their ability to categorize information by topic, sorting it into groupings that help you organize your data according to the topic discussed.
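As an illustration only (not a description of any particular vendor’s software), the sketch below scores the sentiment of interview excerpts and sorts them into researcher-defined topic groups. It assumes the NLTK library is installed; the excerpts and topic keywords are hypothetical.

```python
# Minimal sketch: sentiment scoring plus simple keyword-based topic grouping.
# Assumes nltk is installed (pip install nltk); excerpts and topics are hypothetical.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)       # lexicon used by the analyzer
analyzer = SentimentIntensityAnalyzer()

excerpts = [
    "I love how quiet the neighborhood is, it feels really safe.",
    "The commute is exhausting and the rent keeps going up.",
]

topics = {"housing": ["rent", "apartment"], "transport": ["commute", "bus"]}

for text in excerpts:
    scores = analyzer.polarity_scores(text)      # neg / neu / pos / compound
    matched = [name for name, words in topics.items()
               if any(word in text.lower() for word in words)]
    print(f"{scores['compound']:+.2f}  {matched or ['uncategorized']}  {text}")
```

Keyword matching and lexicon-based sentiment are crude compared with a careful qualitative reading, so outputs like these are best treated as a first pass that points you to passages worth coding by hand.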

All in all, interviews are a valuable technique for qualitative research in business, yielding rich and detailed unstructured data. Historically, they have only been limited by the human capacity to interpret and communicate results and conclusions, which demands considerable time and skill.

When you combine this data with AI tools that can interpret it quickly and automatically, it becomes easier to analyze and structure, dovetailing with your other business data. An additional benefit of natural language analysis tools is that they apply the same approach consistently across as much data as you choose, reducing some sources of subjective bias. By combining human research skills with machine analysis, qualitative research methods such as interviews are more valuable than ever to your business.


Qualitative Research 101: Interviewing

5 Common Mistakes To Avoid When Undertaking Interviews

By: David Phair (PhD) and Kerryn Warren (PhD) | March 2022

Undertaking interviews is potentially the most important step in the qualitative research process. If you don’t collect useful, useable data in your interviews, you’ll struggle through the rest of your dissertation or thesis.  Having helped numerous students with their research over the years, we’ve noticed some common interviewing mistakes that first-time researchers make. In this post, we’ll discuss five costly interview-related mistakes and outline useful strategies to avoid making these.

Overview: 5 Interviewing Mistakes

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind

1. Not having a clear interview strategy

The first common mistake that we’ll look at is that of starting the interviewing process without having first come up with a clear interview strategy or plan of action. While it’s natural to be keen to get started engaging with your interviewees, a lack of planning can result in a mess of data and inconsistency between interviews.

There are several design choices to decide on and plan for before you start interviewing anyone. Some of the most important questions you need to ask yourself before conducting interviews include:

  • What are the guiding research aims and research questions of my study?
  • Will I use a structured, semi-structured or unstructured interview approach?
  • How will I record the interviews (audio or video)?
  • Who will be interviewed and by whom ?
  • What ethics and data law considerations do I need to adhere to?
  • How will I analyze my data? 

Let’s take a quick look at some of these.

The core objective of the interviewing process is to generate useful data that will help you address your overall research aims. Therefore, your interviews need to be conducted in a way that directly links to your research aims, objectives and research questions (i.e. your “golden thread”). This means that you need to carefully consider the questions you’ll ask to ensure that they align with and feed into your golden thread. If any question doesn’t align with this, you may want to consider scrapping it.

Another important design choice is whether you’ll use an unstructured, semi-structured or structured interview approach . For semi-structured interviews, you will have a list of questions that you plan to ask and these questions will be open-ended in nature. You’ll also allow the discussion to digress from the core question set if something interesting comes up. This means that the type of information generated might differ a fair amount between interviews.

Contrasted to this, a structured approach to interviews is more rigid: a specific set of closed questions is developed and asked of each interviewee in exactly the same order. Closed questions have a limited set of possible answers, often single words. Therefore, you need to think about what you’re trying to achieve with your research project (i.e. your research aims) and decide which approach would be best suited to your case.

It is also important to plan ahead with regards to who will be interviewed and how. You need to think about how you will approach the possible interviewees to get their cooperation, who will conduct the interviews, when to conduct the interviews and how to record the interviews. For each of these decisions, it’s also essential to make sure that all ethical considerations and data protection laws are taken into account.

Finally, you should think through how you plan to analyze the data (i.e., your qualitative analysis method) generated by the interviews. Different types of analysis rely on different types of data, so you need to ensure you’re asking the right types of questions and correctly guiding your respondents.

Simply put, you need to have a plan of action regarding the specifics of your interview approach before you start collecting data. If not, you’ll end up drifting in your approach from interview to interview, which will result in inconsistent, unusable data.

Your interview questions need to directly link to your research aims, objectives and research questions – your “golden thread”.

2. Not having good interview technique

While you’re generally not expected to be an expert interviewer for a dissertation or thesis, it is important to practice good interview technique and develop basic interviewing skills.

Let’s go through some basics that will help the process along.

Firstly, before the interview , make sure you know your interview questions well and have a clear idea of what you want from the interview. Naturally, the specificity of your questions will depend on whether you’re taking a structured, semi-structured or unstructured approach, but you still need a consistent starting point . Ideally, you should develop an interview guide beforehand (more on this later) that details your core question and links these to the research aims, objectives and research questions.

Before you undertake any interviews, it’s a good idea to do a few mock interviews with friends or family members. This will help you get comfortable with the interviewer role, prepare for potentially unexpected answers and give you a good idea of how long the interview will take to conduct. In the interviewing process, you’re likely to encounter two kinds of challenging interviewees: the two-word respondent and the respondent who meanders and babbles. Therefore, you should prepare yourself for both and come up with a plan to respond to each in a way that will allow the interview to continue productively.

To begin the formal interview , provide the person you are interviewing with an overview of your research. This will help to calm their nerves (and yours) and contextualize the interaction. Ultimately, you want the interviewee to feel comfortable and be willing to be open and honest with you, so it’s useful to start in a more casual, relaxed fashion and allow them to ask any questions they may have. From there, you can ease them into the rest of the questions.

As the interview progresses , avoid asking leading questions (i.e., questions that assume something about the interviewee or their response). Make sure that you speak clearly and slowly , using plain language and being ready to paraphrase questions if the person you are interviewing misunderstands. Be particularly careful with interviewing English second language speakers to ensure that you’re both on the same page.

Engage with the interviewee by listening to them carefully and acknowledging that you are listening to them by smiling or nodding. Show them that you’re interested in what they’re saying and thank them for their openness as appropriate. This will also encourage your interviewee to respond openly.


3. Not securing a suitable location and quality equipment

Where you conduct your interviews and the equipment you use to record them both play an important role in how the process unfolds. Therefore, you need to think carefully about each of these variables before you start interviewing.

Poor location: A bad location can result in the quality of your interviews being compromised, interrupted, or cancelled. If you are conducting physical interviews, you’ll need a location that is quiet, safe, and welcoming . It’s very important that your location of choice is not prone to interruptions (the workplace office is generally problematic, for example) and has suitable facilities (such as water, a bathroom, and snacks).

If you are conducting online interviews , you need to consider a few other factors. Importantly, you need to make sure that both you and your respondent have access to a good, stable internet connection and electricity. Always check before the time that both of you know how to use the relevant software and it’s accessible (sometimes meeting platforms are blocked by workplace policies or firewalls). It’s also good to have alternatives in place (such as WhatsApp, Zoom, or Teams) to cater for these types of issues.

Poor equipment: Using poor-quality recording equipment or using equipment incorrectly means that you will have trouble transcribing, coding, and analyzing your interviews. This can be a major issue , as some of your interview data may go completely to waste if not recorded well. So, make sure that you use good-quality recording equipment and that you know how to use it correctly.

To avoid issues, you should always conduct test recordings before every interview to ensure that you can use the relevant equipment properly. It’s also a good idea to spot check each recording afterwards, just to make sure it was recorded as planned. If your equipment uses batteries, be sure to always carry a spare set.


4. Not having a basic risk management plan

Many possible issues can arise during the interview process. Not planning for these issues can mean that you are left with compromised data that might not be useful to you. Therefore, it’s important to map out some sort of risk management plan ahead of time, considering the potential risks, how you’ll minimize their probability and how you’ll manage them if they materialize.

Common potential issues related to the actual interview include cancellations (people pulling out), delays (such as getting stuck in traffic), language and accent differences (made harder to manage by poor internet connections), and problems with internet connections and power supply. Other issues can also occur in the interview itself. For example, the interviewee could drift off-topic, or you might encounter an interviewee who does not say much at all.

You can prepare for these potential issues by considering possible worst-case scenarios and preparing a response for each scenario. For instance, it is important to plan a backup date just in case your interviewee cannot make it to the first meeting you scheduled with them. It’s also a good idea to factor in a 30-minute gap between your interviews for the instances where someone might be late, or an interview runs overtime for other reasons. Make sure that you also plan backup questions that could be used to bring a respondent back on topic if they start rambling, or questions to encourage those who are saying too little.

In general, it’s best practice to plan to conduct more interviews than you think you need (this is called oversampling). Doing so will allow you some room for error if there are interviews that don’t go as planned, or if some interviewees withdraw. If you need 10 interviews, it is a good idea to plan for 15, as a few will likely cancel, delay, or not produce useful data.


5. Not keeping your golden thread front of mind

We touched on this a little earlier, but it is a key point that should be central to your entire research process. You don’t want to end up with pages and pages of data after conducting your interviews and realize that it is not useful to your research aims. Your research aims, objectives and research questions – i.e., your golden thread – should influence every design decision and should guide the interview process at all times.

A useful way to avoid this mistake is by developing an interview guide before you begin interviewing your respondents. An interview guide is a document that contains all of your questions with notes on how each of the interview questions is linked to the research question(s) of your study. You can also include your research aims and objectives here for a more comprehensive linkage. 

You can easily create an interview guide by drawing up a table with one column containing your core interview questions. Then add another column with your research questions, another with expectations you may have in light of the relevant literature, and another with backup or follow-up questions. As mentioned, you can also bring in your research aims and objectives to help connect them all together.
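As a rough illustration of the table structure just described, the sketch below writes an interview guide skeleton to a CSV file that can be opened in any spreadsheet. The column names, file name and example row are placeholders, not a prescribed template.

```python
# Minimal sketch of an interview guide skeleton saved as a CSV file.
# Column names, file name and the example row are illustrative only.
import csv

columns = [
    "Core interview question",
    "Research question it addresses",
    "Expectation from the literature",
    "Backup / follow-up questions",
]

example_rows = [
    [
        "Tell me about your experience of starting this role.",
        "RQ1: How do new staff experience onboarding?",
        "Workload concerns are likely to surface early.",
        "Can you give me a recent example?",
    ],
]

with open("interview_guide.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(example_rows)
```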

Recap: Qualitative Interview Mistakes

In this post, we’ve discussed 5 common costly mistakes that are easy to make in the process of planning and conducting qualitative interviews.

To recap, these include:

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind


The Interview Method In Psychology

By: Saul McLeod, PhD (Editor-in-Chief for Simply Psychology) and Olivia Guy-Evans, MSc (Associate Editor for Simply Psychology)


Interviews involve a conversation with a purpose, but have some distinct features compared to ordinary conversation, such as being scheduled in advance, having an asymmetry in outcome goals between interviewer and interviewee, and often following a question-answer format.

Interviews are different from questionnaires as they involve social interaction. Unlike questionnaire methods, researchers need training in interviewing (which costs money).


How Do Interviews Work?

Researchers can ask different types of questions, generating different types of data . For example, closed questions provide people with a fixed set of responses, whereas open questions allow people to express what they think in their own words.

The researcher will often record interviews, and the data will be written up as a transcript (a written account of interview questions and answers) which can be analyzed later.

It should be noted that interviews may not be the best method for researching sensitive topics (e.g., truancy in schools, discrimination, etc.) as people may feel more comfortable completing a questionnaire in private.

There are different types of interviews, with a key distinction being the extent of structure. Semi-structured is most common in psychology research. Unstructured interviews have a free-flowing style, while structured interviews involve preset questions asked in a particular order.

Structured Interview

A structured interview is a quantitative research method in which the interviewer asks a set of prepared closed-ended questions in the form of an interview schedule, which he/she reads out exactly as worded.

Interview schedules have a standardized format, meaning the same questions are asked of each interviewee in the same order (see Figure 1).

Figure 1. An example of an interview schedule

The interviewer will not deviate from the interview schedule (except to clarify the meaning of the question) or probe beyond the answers received.  Replies are recorded on a questionnaire, and the order and wording of questions, and sometimes the range of alternative answers, is preset by the researcher.

A structured interview is also known as a formal interview (like a job interview).

Strengths

  • Structured interviews are easy to replicate, as a fixed set of closed questions is used, and these are easy to quantify – this means it is easy to test for reliability.
  • Structured interviews are fairly quick to conduct which means that many interviews can take place within a short amount of time. This means a large sample can be obtained, resulting in the findings being representative and having the ability to be generalized to a large population.

Limitations

  • Structured interviews are not flexible. This means new questions cannot be asked impromptu (i.e., during the interview), as an interview schedule must be followed.
  • The answers from structured interviews lack detail as only closed questions are asked, which generates quantitative data . This means a researcher won’t know why a person behaves a certain way.

Unstructured Interview

Unstructured interviews do not use any set questions; instead, the interviewer asks open-ended questions based on a specific research topic and tries to let the interview flow like a natural conversation. The interviewer modifies his or her questions to suit the respondent’s specific experiences.

Unstructured interviews are sometimes referred to as ‘discovery interviews’ and are more like a ‘guided conversation’ than a strictly structured interview. They are sometimes called informal interviews.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values. Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective points of view.

Interviewer Self-Disclosure

Interviewer self-disclosure involves the interviewer revealing personal information or opinions during the research interview. This may increase rapport but risks changing dynamics away from a focus on facilitating the interviewee’s account.

In unstructured interviews, the informal conversational style may deliberately include elements of interviewer self-disclosure, mirroring ordinary conversation dynamics.

Interviewer self-disclosure risks changing the dynamics away from facilitation of interviewee accounts. It should not be ruled out entirely but requires skillful handling informed by reflection.

  • An informal interviewing style with some interviewer self-disclosure may increase rapport and participant openness. However, it also increases the chance of the participant converging opinions with the interviewer.
  • Complete interviewer neutrality is unlikely. However, excessive informality and self-disclosure risk the interview becoming more of an ordinary conversation and producing consensus accounts.
  • Overly personal disclosures could also be seen as irrelevant and intrusive by participants. They may invite increased intimacy on uncomfortable topics.
  • The safest approach seems to be to avoid interviewer self-disclosures in most cases. Where an informal style is used, disclosures require careful judgment and substantial interviewing experience.
  • If asked for personal opinions during an interview, the interviewer could highlight the defined roles and defer that discussion until after the interview.

Strengths and limitations of unstructured interviews:

  • Unstructured interviews are more flexible, as questions can be adapted and changed depending on the respondents’ answers. The interview can deviate from the interview schedule.
  • Unstructured interviews generate qualitative data through the use of open questions. This allows the respondent to talk in some depth, choosing their own words, and helps the researcher develop a real sense of a person’s understanding of a situation.
  • They also have increased validity because they give the interviewer the opportunity to probe for a deeper understanding, ask for clarification and allow the interviewee to steer the direction of the interview. Interviewers also have the chance to clarify any questions for participants during the interview.
  • It can be time-consuming to conduct an unstructured interview and analyze the qualitative data (using methods such as thematic analysis).
  • Employing and training interviewers is expensive, and not as cheap as collecting data via questionnaires. For example, certain skills may be needed by the interviewer, including the ability to establish rapport and knowing when to probe.
  • Interviews inevitably co-construct data through researchers’ agenda-setting and question-framing. Techniques like open questions provide only limited remedies.

Focus Group Interview

A focus group interview is a qualitative approach in which a group of respondents is interviewed together, used to gain an in-depth understanding of social issues.

This type of interview is often referred to as a focus group because the job of the interviewer (or moderator) is to bring the group to focus on the issue at hand. Initially, the goal was to reach a consensus among the group, but with the development of techniques for analyzing group qualitative data, there is less emphasis on consensus building.

The method aims to obtain data from a purposely selected group of individuals rather than from a statistically representative sample of a broader population.

The role of the interview moderator is to make sure the group interacts with each other and does not drift off-topic. Ideally, the moderator will be similar to the participants in terms of appearance, have adequate knowledge of the topic being discussed, and exercise mild unobtrusive control over dominant talkers and shy participants.

A researcher must be highly skilled to conduct a focus group interview. For example, the moderator may need certain skills, including the ability to establish rapport and know when to probe.

  • Group interviews generate qualitative narrative data through the use of open questions. This allows the respondents to talk in some depth, choosing their own words. This helps the researcher develop a real sense of a person’s understanding of a situation. Qualitative data also includes observational data, such as body language and facial expressions.
  • Group responses are helpful when you want to elicit perspectives on a collective experience, encourage diversity of thought, reduce researcher bias, and gather a wider range of contextualized views.
  • They also have increased validity because some participants may feel more comfortable being with others as they are used to talking in groups in real life (i.e., it’s more natural).
  • When participants have common experiences, focus groups allow them to build on each other’s comments to provide richer contextual data representing a wider range of views than individual interviews.
  • Focus groups are a type of group interview method used in market research and consumer psychology that is cost-effective for gathering the views of consumers.
  • The researcher must ensure that they keep all the interviewees’ details confidential and respect their privacy. This is difficult when using a group interview. For example, the researcher cannot guarantee that the other people in the group will keep information private.
  • Group interviews are less reliable, as they use open questions and may deviate from the interview schedule, making them difficult to repeat.
  • It is important to note that there are some potential pitfalls of focus groups, such as conformity, social desirability, and oppositional behavior, that can reduce the usefulness of the data collected. For example, group interviews may sometimes lack validity, as participants may lie to impress the other group members, or may conform to peer pressure and give false answers.

To avoid these pitfalls, the interviewer needs to have a good understanding of how people function in groups as well as how to lead the group in a productive discussion.

Semi-Structured Interview

Semi-structured interviews lie between structured and unstructured interviews. The interviewer prepares the same set of questions to be answered by all interviewees. Additional questions might be asked during the interview to clarify or expand on certain issues.

In semi-structured interviews, the interviewer has more freedom to digress and probe beyond the answers. The interview guide contains a list of questions and topics that need to be covered during the conversation, usually in a particular order.

Semi-structured interviews are most useful to address the ‘what’, ‘how’, and ‘why’ research questions. Both qualitative and quantitative analyses can be performed on data collected during semi-structured interviews.

  • Semi-structured interviews allow respondents to answer more on their own terms in an informal setting, yet provide uniform information, making them ideal for qualitative analysis.
  • The flexible nature of semi-structured interviews allows ideas to be introduced and explored during the interview based on the respondents’ answers.
  • Semi-structured interviews can provide reliable and comparable qualitative data. They allow the interviewer to probe answers, asking the interviewee to clarify or expand on the responses provided.
  • The data generated remain fundamentally shaped by the interview context itself. Analysis rarely acknowledges this endemic co-construction.
  • They are more time-consuming (to conduct, transcribe, and analyze) than structured interviews.
  • The quality of findings is more dependent on the individual skills of the interviewer than in structured interviews. Skill is required to probe effectively while avoiding biasing responses.

The Interviewer Effect

Face-to-face interviews raise methodological problems. These stem from the fact that interviewers are themselves role players, and their perceived status may influence the replies of the respondents.

Because an interview is a social interaction, the interviewer’s appearance or behavior may influence the respondent’s answers. This is a problem as it can bias the results of the study and make them invalid.

For example, the gender, ethnicity, body language, age, and social status of the interviewer can all create an interviewer effect. If there is a perceived status disparity between the interviewer and the interviewee, the results of interviews have to be interpreted with care. This is pertinent for sensitive topics such as health.

For example, if a researcher was investigating sexism amongst males, would a female interviewer be preferable to a male? It is possible that if a female interviewer was used, male participants might lie (i.e., pretend they are not sexist) to impress the interviewer, thus creating an interviewer effect.

Flooding interviews with researcher’s agenda

The interactional nature of interviews means the researcher fundamentally shapes the discourse, rather than just neutrally collecting it. This shapes what is talked about and how participants can respond.
  • The interviewer’s assumptions, interests, and categories don’t just shape the specific interview questions asked. They also shape the framing, task instructions, recruitment, and ongoing responses/prompts.
  • This flooding of the interview interaction with the researcher’s agenda makes it very difficult to separate out what comes from the participant vs. what is aligned with the interviewer’s concerns.
  • So the participant’s talk ends up being fundamentally shaped by the interviewer rather than being a more natural reflection of the participant’s own orientations or practices.
  • This effect is hard to avoid because interviews inherently involve the researcher setting an agenda. But it does mean the talk extracted may say more about the interview process than the reality it is supposed to reflect.

Interview Design

First, you must choose whether to use a structured or unstructured interview.

Characteristics of Interviewers

Next, you must consider who will be the interviewer, and this will depend on what type of person is being interviewed. There are several variables to consider:

  • Gender and age : This can greatly affect respondents’ answers, particularly on personal issues.
  • Personal characteristics : Some people are easier to get on with than others. Also, the interviewer’s accent and appearance (e.g., clothing) can affect the rapport between the interviewer and interviewee.
  • Language : The interviewer’s language should be appropriate to the vocabulary of the group of people being studied. For example, the researcher must adapt the wording of questions to match the respondents’ social background, age, educational level, social class, ethnicity, etc.
  • Ethnicity : People may have difficulty interviewing people from different ethnic groups.
  • Interviewer expertise should match research sensitivity – inexperienced students should avoid interviewing highly vulnerable groups.

Interview Location

The location of a research interview can influence the way in which the interviewer and interviewee relate and may exaggerate a power dynamic in one direction or another. It is usual to offer interviewees a choice of location as part of facilitating their comfort and encouraging participation.

However, the safety of the interviewer is an overriding consideration and, as mentioned, a minimal requirement should be that a responsible person knows where the interviewer has gone and when they are due back.

Remote Interviews

The COVID-19 pandemic necessitated remote interviewing for research continuity. However, online interview platforms provide increased flexibility even under normal conditions.

They enable access to participant groups across geographical distances without travel costs or arrangements. Online interviews can be efficiently scheduled to align with researcher and interviewee availability.

There are practical considerations in setting up remote interviews. Interviewees require access to the internet and an online platform, such as Zoom, Microsoft Teams or Skype, through which to connect.

Certain modifications help build initial rapport in the remote format. Allowing time at the start of the interview for casual conversation while testing audio/video quality helps participants settle in. Minor delays can disrupt turn-taking flow, so alerting participants to speak slightly slower than usual minimizes accidental interruptions.

Keeping remote interviews under an hour avoids the fatigue of staring at a screen. Seeking ethical clearance in advance for verbal consent at the start of the interview saves participant time. Adapting to the remote context shows care for interviewees and aids rich discussion.

However, it remains important to critically reflect on how removing in-person dynamics may shape the co-created data. Perhaps some nuances of trust and disclosure differ over video.

Vulnerable Groups

The interviewer must ensure that they take special care when interviewing vulnerable groups, such as children. For example, children have a limited attention span, so lengthy interviews should be avoided.

Developing an Interview Schedule

An interview schedule is a list of pre-planned, structured questions that have been prepared, to serve as a guide for interviewers, researchers and investigators in collecting information or data about a specific topic or issue.
  • List the key themes or topics that must be covered to address your research questions. This will form the basic content.
  • Organize the content logically, such as chronologically following the interviewee’s experiences. Place more sensitive topics later in the interview.
  • Develop the list of content into actual questions and prompts. Carefully word each question – keep them open-ended, non-leading, and focused on examples.
  • Add prompts to remind you to cover areas of interest.
  • Pilot test the interview schedule to check it generates useful data and revise as needed.
  • Be prepared to refine the schedule throughout data collection as you learn which questions work better.
  • Practice skills like asking follow-up questions to get depth and detail. Stay flexible to depart from the schedule when needed.
  • Keep questions brief and clear. Avoid multi-part questions that risk confusing interviewees.
  • Listen actively during interviews to determine which pre-planned questions can be skipped based on information the participant has already provided.

The key is balancing preparation with the flexibility to adapt questions based on each interview interaction. With practice, you’ll gain skills to conduct productive interviews that obtain rich qualitative data.

The Power of Silence

Strategic use of silence is a key technique to generate interviewee-led data, but it requires judgment about appropriate timing and duration to maintain mutual understanding.
  • Unlike ordinary conversation, the interviewer aims to facilitate the interviewee’s contribution without interrupting. This often means resisting the urge to speak at the end of the interviewee’s turn construction units (TCUs).
  • Leaving a silence after a TCU encourages the interviewee to provide more material without being led by the interviewer. However, this simple technique requires confidence, as silence can feel socially awkward.
  • Allowing longer silences (e.g. 2–4 seconds) later in interviews can work well, but early on even short silences may disrupt rapport if they cause misalignment between speakers.
  • Silence also allows interviewees time to think before answering. Rushing to re-ask or amend questions can limit responses.
  • Blunt backchannels like “mm hm” also avoid interrupting flow. Interruptions, especially to finish an interviewee’s turn, are problematic as they make the ownership of perspectives unclear.
  • If interviewers incorrectly complete turns, an upside is it can produce extended interviewee narratives correcting the record. However, silence would have been better to let interviewees shape their own accounts.

Recording & Transcription

Design Choices

Design choices around recording and engaging closely with transcripts influence analytic insights, as well as practical feasibility. Weighing up relevant tradeoffs is key.
  • Audio recording is standard, but video better captures contextual details, which is useful for some topics/analysis approaches. Participants may find video invasive for sensitive research.
  • Digital formats enable the sharing of anonymized clips. Additional microphones reduce audio issues.
  • Doing all transcription is time-consuming. Outsourcing can save researcher effort but needs confidentiality assurances. Always carefully check outsourced transcripts.
  • Online platform auto-captioning can facilitate rapid analysis, but accuracy limitations mean full transcripts remain ideal. Software can clean up the caption file formatting (a minimal clean-up sketch follows this list).
  • Verbatim transcripts best capture nuanced meaning, but the level of detail needed depends on the analysis approach. Referring back to recordings is still advisable during analysis.
  • Transcripts versus recordings highlight different interaction elements. Transcripts make overt disagreements clearer through the wording itself. Recordings better convey tone and affiliation.
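As a minimal sketch of that clean-up step, the snippet below strips the header, timestamps and cue numbers out of a WebVTT caption file (an export format used by many meeting platforms) and leaves plain transcript text. The file names are hypothetical and only the Python standard library is used.

```python
# Minimal sketch: convert an auto-generated WebVTT caption file into plain text.
# "interview_01.vtt" and the output file name are hypothetical.
import re

kept_lines = []
with open("interview_01.vtt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        # Skip the WEBVTT header, blank lines, numeric cue identifiers and timestamps.
        if not line or line.startswith("WEBVTT") or line.isdigit() or "-->" in line:
            continue
        kept_lines.append(re.sub(r"<[^>]+>", "", line))  # drop inline styling tags

with open("interview_01_captions.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(kept_lines))
```

The result is a rough transcript that still needs checking against the recording; captions frequently misattribute speakers and mangle names.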

Transcribing Interviews & Focus Groups

Here are the steps for transcribing interviews:
  • Play back audio/video files to develop an overall understanding of the interview
  • Format the transcription document:
  • Add line numbers
  • Separate interviewer questions and interviewee responses
  • Use formatting like bold, italics, etc. to highlight key passages
  • Provide sentence-level clarity in the interviewee’s responses while preserving their authentic voice and word choices
  • Break longer passages into smaller paragraphs to help with coding
  • If translating the interview to another language, use qualified translators and back-translate where possible
  • Select a notation system to indicate pauses, emphasis, laughter, interruptions, etc., and adapt it as needed for your data
  • Insert screenshots, photos, or documents discussed in the interview at the relevant point in the transcript
  • Read through multiple times, revising formatting and notations
  • Double-check the accuracy of transcription against audio/videos
  • De-identify the transcript by removing identifying participant details (a minimal sketch of this step follows below)

The goal is to produce a formatted written record of the verbal interview exchange that captures the meaning and highlights important passages ready for the coding process. Careful transcription is the vital first step in analysis.
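As a minimal sketch of the line-numbering and de-identification steps listed above, the snippet below numbers each transcript line and masks a researcher-maintained list of identifying terms. File names and the mask list are hypothetical, and automated replacement is only a starting point; every transcript still needs a manual check.

```python
# Minimal sketch: add line numbers and mask identifying details in a transcript.
# File names and the mask list are hypothetical; always re-check the output by hand.
names_to_mask = {"Sarah": "[Participant]", "Melbourne": "[City]"}

with open("interview_01_transcript.txt", encoding="utf-8") as f:
    lines = f.read().splitlines()

cleaned = []
for number, line in enumerate(lines, start=1):
    for term, placeholder in names_to_mask.items():
        line = line.replace(term, placeholder)
    cleaned.append(f"{number:03d}  {line}")       # zero-padded line numbers

with open("interview_01_deidentified.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(cleaned))
```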

Coding Transcripts

The goal of transcription and coding is to systematically transform interview responses into a set of codes and themes that capture key concepts, experiences and beliefs expressed by participants. Taking care with transcription and coding procedures enhances the validity of qualitative analysis .
  • Read through the transcript multiple times to become immersed in the details
  • Identify manifest/obvious codes and latent/underlying meaning codes
  • Highlight insightful participant quotes that capture key concepts (in vivo codes)
  • Create a codebook to organize and define codes with examples (a minimal sketch of one possible structure follows this list)
  • Use an iterative cycle of inductive (data-driven) coding and deductive (theory-driven) coding
  • Refine codebook with clear definitions and examples as you code more transcripts
  • Collaborate with other coders to establish the reliability of codes
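As a minimal sketch of what a codebook entry might contain (the code name, definition, example quote and subcodes below are illustrative placeholders, not a prescribed scheme):

```python
# Minimal sketch of a codebook entry structure for qualitative coding.
# The code name, definition, quote and subcodes are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Code:
    name: str
    definition: str
    example_quote: str                     # an "in vivo" illustration of the code
    subcodes: list[str] = field(default_factory=list)

codebook = [
    Code(
        name="access_barriers",
        definition="Any obstacle the participant describes to reaching care.",
        example_quote="I just couldn't get an appointment for weeks.",
        subcodes=["cost", "distance", "waiting_times"],
    ),
]

for code in codebook:
    print(f"{code.name}: {code.definition} (subcodes: {', '.join(code.subcodes)})")
```

Keeping the codebook in a structured, shareable form like this (or simply as a spreadsheet) makes it easier for multiple coders to apply the same definitions and to check reliability.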

Ethical Issues

Informed Consent

The participant information sheet must give potential interviewees a good idea of what is involved if taking part in the research.

This will include the general topics covered in the interview, where the interview might take place, how long it is expected to last, how it will be recorded, the ways in which participants’ anonymity will be managed, and incentives offered.

It might be considered good practice for true informed consent in interview research to involve two distinguishable stages:

  • Consent to undertake and record the interview and
  • Consent to use the material in research after the interview has been conducted and the content known, or even after the interviewee has seen a copy of the transcript and has had a chance to remove sections, if desired.

Power and Vulnerability

  • Early feminist views that sensitivity could equalize power differences are likely naive. The interviewer and interviewee inhabit different knowledge spheres and social categories, indicating structural disparities.
  • Power fluctuates within interviews. Researchers rely on participation, yet interviewees control openness and can undermine data collection. Assumptions should be avoided.
  • Interviews on sensitive topics may feel like quasi-counseling. Interviewers must refrain from dual roles, instead supplying support service details to all participants.
  • Interviewees recruited for trauma experiences may reveal more than anticipated. While generating analytic insights, this risks leaving them feeling exposed.
  • Ultimately, power balances resist reconciliation. But reflexively analyzing operations of power serves to qualify rather than nullify situated qualitative accounts.

Some groups, like those with mental health issues, extreme views, or criminal backgrounds, risk being discredited – treated skeptically by researchers.

This creates tensions with qualitative approaches, often having an empathetic ethos seeking to center subjective perspectives. Analysis should balance openness to offered accounts with critically examining stakes and motivations behind them.




Chapter 13: Interviews

Danielle Berkovic

Learning outcomes

Upon completion of this chapter, you should be able to:

  • Understand when to use interviews in qualitative research.
  • Develop interview questions for an interview guide.
  • Understand how to conduct an interview.

What are interviews?

Interviewing is the most commonly used data collection technique in qualitative research. 1 The purpose of an interview is to explore the experiences, understandings, opinions and motivations of research participants. 2 Interviews are conducted one-on-one between the researcher and the participant. Interviews are most appropriate when seeking to understand a participant’s subjective view of an experience and are also considered suitable for the exploration of sensitive topics.

What are the different types of interviews?

There are four main types of interviews:

  • Key stakeholder: A key stakeholder interview aims to explore one issue in detail with a person of interest or importance concerning the research topic. 3 Key stakeholder interviews seek the views of experts on some cultural, political or health aspects of the community, beyond their personal beliefs or actions. An example of a key stakeholder is the Chief Health Officer of Victoria (Australia’s second-most populous state) who oversaw the world’s longest lockdowns in response to the COVID-19 pandemic.
  • Dyad: A dyad interview aims to explore one issue in some detail with a dyad (two people). This form of interviewing is used when one participant of the dyad may need some support or is not wholly able to articulate themselves (e.g. people with cognitive impairment, or children). Independence is acknowledged and the interview is analysed as a unit. 4
  • Narrative: A narrative interview helps individuals tell their stories, and prioritises their own perspectives and experiences using the language that they prefer. 5 This type of interview has been widely used in social research but is gaining prominence in health research to better understand person-centred care, for example, negotiating exercise and food abstinence whilst living with Type 2 diabetes. 6,7
  • Life history: A life history interview allows the researcher to explore a person’s individual and subjective experiences within a ‘history of the time’ framework. 8 Life history interviews challenge the researcher to understand how people’s current attitudes, behaviours and choices are influenced by previous experiences or trauma. Life history interviews have been conducted with Holocaust survivors 9 and youth who have been forcibly recruited to war. 10

Table 13.4 provides a summary of four studies, each adopting one of these types of interviews.

Interviewing techniques

There are two main interview techniques:

  • Semi-structured: Semi-structured interviewing aims to explore a few issues in moderate detail, to expand the researcher’s knowledge at some level. 11 Semi-structured interviews give the researcher the advantage of remaining reasonably objective while enabling participants to share their perspectives and opinions. The researcher should create an interview guide with targeted open questions to direct the interview. As examples, semi-structured interviews have been used to extend knowledge of why women might gain excess weight during pregnancy, 12 and to update guidelines for statin uptake. 13
  • In-depth: In-depth interviewing aims to explore a person’s subjective experiences and feelings about a particular topic. 14 In-depth interviews are often used to explore emotive (e.g. end-of-life care) 15 and complex (e.g. adolescent pregnancy) topics. 16 The researcher should create an interview guide with selected open questions to ask of the participant, but the participant should guide the direction of the interview more than in a semi-structured setting. In-depth interviews value participants’ lived experiences and are frequently used in phenomenology studies (as described in Chapter 6) .

When to use the different types of interviews

The type of interview a researcher uses should be determined by the study design, the research aims and objectives, and participant demographics. For example, if conducting a descriptive study, semi-structured interviews may be the best method of data collection. As explained in Chapter 5 , descriptive studies seek to describe phenomena, rather than to explain or interpret the data. A semi-structured interview, which seeks to expand upon some level of existing knowledge, will likely best facilitate this.

Similarly, if conducting a phenomenological study, in-depth interviews may be the best method of data collection. As described in Chapter 6 , the key concept of phenomenology is the individual. The emphasis is on the lived experience of that individual and the person’s sense-making of those experiences. Therefore, an in-depth interview is likely best placed to elicit that rich data.

While some interview types are better suited to certain study designs, there are no restrictions on the type of interview that may be used. For example, semi-structured interviews provide an excellent accompaniment to trial participation (see Chapter 11 about mixed methods), and key stakeholder interviews, as part of an action research study, can be used to define priorities, barriers and enablers to implementation.

How do I write my interview questions?

An interview aims to explore the experiences, understandings, opinions and motivations of research participants. The general rule is that the interviewee should speak for 80 per cent of the interview, and the interviewer should only be asking questions and clarifying responses, for about 20 per cent of the interview. This percentage may differ depending on the interview type; for example, a semi-structured interview involves the researcher asking more questions than in an in-depth interview. Still, to facilitate free-flowing responses, it is important to use open-ended language to encourage participants to be expansive in their responses. Examples of open-ended terms include questions that start with ‘who’, ‘how’ and ‘where’.

The researcher should avoid closed-ended questions that can be answered with yes or no, and limit conversation. For example, asking a participant ‘Did you have this experience?’ can elicit a simple ‘yes’, whereas asking them to ‘Describe your experience’, will likely encourage a narrative response. Table 13.1 provides examples of terminology to include and avoid in developing interview questions.

Table 13.1. Interview question formats to use and avoid

Use:
  • Tell me about…
  • What happened when…
  • Why is this important?
  • How did you feel when…
  • How do you…
  • What are the…
  • What does…

Avoid:
  • Do you think that…
  • Will you do this…
  • Did you believe that…
  • Were there issues from your perspective…

How long should my interview be?

There is no rule about how long an interview should take. Different types of interviews will likely run for different periods of time, but this also depends on the research question/s and the type of participant. For example, given that a semi-structured interview is seeking to expand on some previous knowledge, the interview may need no longer than 30 minutes, or up to one hour. An in-depth interview seeks to explore a topic in a greater level of detail and therefore, at a minimum, would be expected to last an hour. A dyad interview may be as short as 15 minutes (e.g. if the dyad is a person with dementia and a family member or caregiver) or longer, depending on the pairing.

Designing your interview guide

To figure out what questions to ask in an interview guide, the researcher may consult the literature, speak to experts (including people with lived experience) about the research and draw on their current knowledge. The topics and questions should be mapped to the research question/s, and the interview guide should be developed well in advance of commencing data collection. This enables time and opportunity to pilot-test the interview guide. The pilot interview provides an opportunity to explore the language and clarity of questions, the order and flow of the guide and to determine whether the instructions are clear to participants both before and after the interview. It can be beneficial to pilot-test the interview guide with someone who is not familiar with the research topic, to make sure that the language used is easily understood (and will be by participants, too).

The study design should be used to determine the number of questions asked, and the intended duration of the interview should guide the extent of the interview guide. The participant type may also determine the extent of the interview guide; for example, clinicians tend to be time-poor and therefore shorter, focused interviews are optimal. An interview guide is also likely to be shorter for a descriptive study than a phenomenological or ethnographic study, given the level of detail required. Chapter 5 outlined a descriptive study in which participants who had undergone percutaneous coronary intervention were interviewed. The interview guide consisted of four main questions and subsequent probing questions, linked to the research questions (see Table 13.2). 17

Table 13.2. Interview guide for a descriptive study

Research question: How does the patient feel, physically and psychologically, after their procedure?
  • From your perspective, what would be considered a successful outcome of the procedure? (Probes: Did the procedure meet your expectations? How do you define whether the procedure was successful?)
  • How did you feel after the procedure?
  • How did you feel one week after the procedure, and how does that compare with how you feel now?

Research question: How does the patient function after their procedure?
  • After your procedure, tell me about your ability to do your daily activities. (Prompt for activities including gardening, housework, personal care, and work-related and family-related tasks.)
  • Did you attend cardiac rehabilitation? Can you tell us about your experience of cardiac rehabilitation? (Probe: What effect has medication had on your recovery?)

Research question: What are the long-term effects of the procedure?
  • What, if any, lifestyle changes have you made since your procedure?

Table 13.3 is an example of a larger and more detailed interview guide, designed for the qualitative component of a mixed-methods study aiming to examine the work and financial effects of living with arthritis as a younger person. The questions are mapped to the World Health Organization’s International Classification of Functioning, Disability, and Health, which measures health and disability at individual and population levels. 18

Table 13.3. Detailed interview guide

Research question: How do young people experience their arthritis diagnosis?
  • Tell me about your experience of being diagnosed with arthritis.
  • How did being diagnosed with arthritis make you feel?
  • Tell me about your experience of arthritis flare-ups – what do they feel like?
  • What impacts arthritis flare-ups or feeling like your arthritis is worse? What circumstances lead to these feelings?
  • Based on your experience, what do you think causes symptoms of arthritis to become worse?
  • Probing questions: When were you diagnosed with arthritis? What type of arthritis were you diagnosed with? Does anyone else in your family have arthritis? What relation are they to you?

Research question: What are the work impacts of arthritis on younger people?
  • What is your field of work, and how long have you been in this role? How frequently do you work (full-time/part-time/casual)?
  • How has arthritis affected your work-related demands or career? How so?
  • Has arthritis led you to reconsider your career? How so?
  • Has arthritis affected your usual working hours each week? How so?
  • How have changes to work or career because of your arthritis impacted other areas of life (i.e. mental health or family role)?

Research question: What are the financial impacts of living with arthritis as a younger person?
  • Has your arthritis led to any financial concerns? Financial concerns pertaining to:
  • Direct costs: rheumatologist, prescribed and non-prescribed medications (as well as supplements), allied health costs (rheumatology, physiotherapy, chiropractic, osteopathy, myotherapy), Pilates, and gym/personal trainer fees, complementary therapies.
  • Indirect costs: workplace absenteeism, productivity, loss of wages, informal care, cost of different types of insurance: health insurance (joint replacements)

It is important to create an interview guide, for the following reasons:

  • It ensures the researcher is familiar with their research questions.
  • Using an interview guide enables the incorporation of feedback from the piloting process.
  • It is difficult to predict how participants will respond to interview questions. They may answer in the way that was anticipated, or they may provide unanticipated insights that warrant follow-up. An interview guide (a physical or digital copy) enables the researcher to note these answers and follow up with appropriate inquiry (a minimal sketch of one way to keep a digital guide follows this list).
  • Participants are likely to provide heterogeneous answers to certain questions. The interview guide enables the researcher to note similarities and differences across interviews, which may be important in data analysis.
  • Even experienced qualitative researchers get nervous before an interview! The interview guide provides a safety net if the researcher forgets their questions or loses track of the next question.
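Where the guide is kept as a digital copy, it can also be stored in a simple structured form so that each open question stays linked to its research question and probes, and can be printed as a checklist to take into the interview. The short Python sketch below is purely illustrative and is not part of the method described in this chapter: the field names are assumptions, and the questions are abridged from Table 13.2.

    # Illustrative sketch only: one possible way to keep a digital interview guide,
    # with each open question linked to its research question and probing questions.
    # Content abridged from Table 13.2; the field names are assumptions, not a standard.
    interview_guide = [
        {
            "research_question": "How does the patient feel after their procedure?",
            "open_question": "From your perspective, what would be considered a successful outcome of the procedure?",
            "probes": [
                "Did the procedure meet your expectations?",
                "How do you define whether the procedure was successful?",
            ],
        },
        {
            "research_question": "How does the patient function after their procedure?",
            "open_question": "After your procedure, tell me about your ability to do your daily activities.",
            "probes": [
                "Prompt for gardening, housework, personal care, and work- and family-related tasks.",
            ],
        },
    ]

    # Print the guide as a simple checklist to take into the interview.
    for item in interview_guide:
        print(f"Research question: {item['research_question']}")
        print(f"  Ask: {item['open_question']}")
        for probe in item["probes"]:
            print(f"    Probe: {probe}")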

Setting up the interview

In the past, most interviews were conducted in person or by telephone. Emerging technologies promote easier access to research participation (e.g. by people living in rural or remote communities, or for people with mobility limitations). Even in metropolitan settings, many interviews are now conducted electronically (e.g. using videoconferencing platforms). Regardless of your interview setting, it is essential that the interview environment is comfortable for the participant. This process can begin as soon as potential participants express interest in your research. Following are some tips from the literature and our own experiences of leading interviews:

  • Answer questions and set clear expectations. Participating in research is not an everyday task. People do not necessarily know what to expect during a research interview, and this can be daunting. Give people as much information as possible, answer their questions about the research and set clear expectations about what the interview will entail and how long it is expected to last. Let them know that the interview will be recorded for transcription and analysis purposes. Consider sending the interview questions a few days before the interview. This gives people time and space to reflect on their experiences, consider their responses to questions and provide informed consent for their participation.
  • Consider your setting. If conducting the interview in person, consider the location and room in which the interview will be held. For example, if in a participant’s home, be mindful of their private space. Ask if you should remove your shoes before entering their home. If they offer refreshments (which in our experience many participants do), accept them with gratitude if possible. These considerations apply beyond the participant’s home; if using a room in an office setting, consider privacy and confidentiality, accessibility and the potential for disruption. Consider the temperature as well as the furniture in the room, who may be able to overhear conversations and who may walk past. Similarly, if interviewing by phone or online, take time to assess the space, and if in a house or office that is not quiet or private, use headphones as needed.
  • Build rapport. The research topic may be important to participants from a professional perspective, or they may have deep emotional connections to the topic of interest. Regardless of the nature of the interview, it is important to remember that participants are being asked to open up to an interviewer who is likely to be a stranger. Spend some time with participants before the interview, to make sure that they are comfortable. Engage in some general conversation, and ask if they have any questions before you start. Remember that it is not a normal part of someone’s day to participate in research. Make it an enjoyable and/or meaningful experience for them, and it will enhance the data that you collect.
  • Let participants guide you. Oftentimes, the ways in which researchers and participants describe the same phenomena are different. In the interview, reflect the participant’s language. Make sure they feel heard and that they are willing and comfortable to speak openly about their experiences. For example, our research involves talking to older adults about their experience of falls. We noticed early in this research that participants did not use the word ‘fall’ but would rather use terms such as ‘trip’, ‘went over’ and ‘stumbled’. As interviewers we adopted the participant’s language into our questions.
  • Listen consistently and express interest. An interview is more complex than a simple question-and-answer format. The best interview data comes from participants feeling comfortable and confident to share their stories. By the time you are completing the 20th interview, it can be difficult to maintain the same level of concentration as with the first interview. Try to stay engaged: nod along with your participants, maintain eye contact, murmur in agreement and sympathise where warranted.
  • The interviewer is both the data collector and the data collection instrument. The data received is only as good as the questions asked. In qualitative research, the researcher influences how participants answer questions. It is important to remain reflexive and aware of how your language, body language and attitude might influence the interview. Being rested and prepared will enhance the quality of the questions asked and hence the data collected.
  • Avoid excessive use of ‘why’. It can be challenging for participants to recall why they felt a certain way or acted in a particular manner. Try to avoid asking ‘why’ questions too often, and instead adopt some of the open language described earlier in the chapter.

After your interview

When you have completed your interview, thank the participant and let them know they can contact you if they have any questions or follow-up information they would like to provide. If the interview has covered sensitive topics or the participant has become distressed during the interview, make sure that appropriate referrals and follow-up are provided (see Section 6).

Download the recording from your device and make sure it is saved in a secure location that can only be accessed by people on the approved research team (see Chapters 35 and 36).

It is important to know what to do immediately after each interview is completed. Interviews should be transcribed – that is, reproduced verbatim for data analysis. Transcribing data is an important step in the process of analysis, but it is very time-consuming; transcribing a 60-minute interview can take up to 8 hours. Data analysis is discussed in Section 4.
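Transcription can be done manually or given a first pass with speech-to-text software, which the researcher then checks word by word against the recording. As an illustration only (this book does not prescribe a particular tool), the sketch below uses the open-source Whisper model to produce a draft transcript; the file names are hypothetical, and automated output must still be checked and corrected before analysis.

    # Illustrative first-pass transcription with the open-source Whisper model
    # (pip install openai-whisper; also requires ffmpeg). The audio file name is hypothetical.
    # The automated output is a draft only and must be verified against the recording.
    import whisper

    model = whisper.load_model("base")               # a small model; larger models are more accurate
    result = model.transcribe("interview_01.mp3")    # returns a dict that includes the full text

    # Save the draft transcript for manual correction and later coding.
    with open("interview_01_draft.txt", "w", encoding="utf-8") as f:
        f.write(result["text"])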

Table 13.4. Examples of the four types of interviews

Cuthbertson, 2019 – key stakeholder interviews

  • Interview guide: Appendix A
  • Study design: Convergent mixed-methods study
  • Number of participants: 30. Key stakeholders were emergency management or disaster healthcare practitioners, academics specialising in disaster management in the Oceania region, and policy managers.
  • Aim: ‘To investigate threats to the health and well-being of societies associated with disaster impact in Oceania.’ [abstract]
  • Country: Australia, Fiji, Indonesia, Aotearoa New Zealand, Timor Leste and Tonga
  • Length of interview: 45–60 minutes
  • Sample of interview questions from interview guide [Appendix A]:

    1. What do you believe are the top five disaster risks or threats in the Oceania region today?
    2. What disaster risks do you believe are emerging in the Oceania region over the next decade?
    3. Why do you think these are risks?
    4. What are the drivers of these risks?
    5. Do you have any suggestions on how we can improve disaster risk assessment?
    6. Are the current disaster risk plans and practices suited to the future disaster risks? If not, why? If not, what do you think needs to be done to improve them?
    7. What are the key areas of disaster practice that can enhance future community resilience to disaster risk?
    8. What are the barriers or inhibitors to facilitating this practice?
    9. What are the solutions or facilitators to enhancing community resilience?

  • Analysis: Thematic analysis guided by the Hazard and Peril Glossary for describing and categorising disasters applied by the Centre for Research on the Epidemiology of Disasters Emergency Events Database
  • Main themes [Results, Box 1]:

    1. Climate change is observed as a contemporary and emerging disaster risk.
    2. Risk is contextual to the different countries, communities and individuals in Oceania.
    3. Human development trajectories and their impact, along with perceptions of a changing world, are viewed as drivers of current and emerging risks.
    4. Current disaster risk plans and practices are not suited to future disaster risks.
    5. Increased education about risk and risk assessment at a local level to empower community risk ownership.

Bannon, 2021 – dyad interviews

  • Interview guide: eAppendix Supplement
  • Study design: Qualitative dyadic study
  • Number of participants: 23 dyads
  • Aim: ‘To explore the lived experiences of couples managing young-onset dementia using an integrated dyadic coping model.’ [abstract]
  • Country: United States
  • Length of interview: 60 minutes
  • Sample of interview questions from interview guide [eAppendix Supplement]:

    1. We like to start by learning more about what you each first noticed that prompted the evaluations you went through to get to the diagnosis. Can you each tell me about the earliest symptoms you noticed?
    2. What are the most noticeable or troubling symptoms that you have experienced since the time of diagnosis? How have your changes in functioning impacted you? Emotionally, how do you feel about your symptoms and the changes in functioning you are experiencing?
    3. Are you open with your friends and family about the diagnosis? Have you experienced any stigma related to your diagnosis?
    4. What is your understanding of the diagnosis? What is your understanding about how this condition will affect you both in the future? How are you getting information about this diagnosis?

  • Analysis: Thematic analysis guided by the Dyadic Coping Theoretical Framework
  • Main themes [abstract]:

    1. Stress communication
    2. Positive individual dyadic coping
    3. Positive conjoint dyadic coping
    4. Negative individual dyadic coping
    5. Negative conjoint dyadic coping

McGranahan, 2020 – narrative interviews

  • Interview guide: Not provided, but the text states that ‘qualitative semi-structured narrative interviews’ were conducted. [methods]
  • Study design: Narrative interview study
  • Number of participants: 28
  • Aim: ‘To explore the experiences and views of people with psychotic experiences who have not received any treatment or other support from mental health services for the past 5 years.’ [abstract]
  • Country: England
  • Length of interview: 40–120 minutes
  • Sample of interview questions from interview guide: Not provided.
  • Analysis: Inductive thematic analysis outlined by Braun and Clarke
  • Main themes [abstract]:

    1. Perceiving psychosis as positive
    2. Making sense of psychotic experiences
    3. Finding sources of strength
    4. Negative past experiences of mental health services
    5. Positive past experiences with individual clinicians

Gutierrez-Garcia, 2021 – life history interviews

  • Interview guide: Not provided, but the text states that ‘an open and semi-structured question guide was designed for use.’ [methods]
  • Study design: Life history and lifeline techniques
  • Number of participants: 7
  • Aim: ‘To analyse the use of life histories and lifelines in the study of female genital mutilation in the context of cross-cultural research in participants with different languages.’ [abstract]
  • Country: Spain
  • Length of interview: 3 sessions. Session 1: life history interview. Session 2: lifeline activity, in which participants used drawings to complement or enhance their interview. Session 3: the researchers and participants worked together to finalise the lifeline. The life history interviews ran for 40–60 minutes; the timing for sessions 2 and 3 is not provided.
  • Sample of interview questions from interview guide: Not provided.
  • Analysis: Phenomenological method proposed by Giorgi (sense of the whole): (1) reading the entire description to obtain a general sense of the discourse; (2) the researcher goes back to the beginning and reads the text again, with the aim of distinguishing the meaning units by separating the perspective of the phenomenon of interest; (3) the researcher expresses the contents of the units of meaning more clearly by creating categories; (4) the researcher synthesises the units and categories of meaning into a consistent statement that takes into account the participant’s experience and language.
  • Main themes:

    1. Important moments and their relationship with female genital mutilation
    2. The ritual knife: how sharp or blunt it is at different stages, where and how women are subsequently held as a result
    3. Changing relationships with family: how being subject to female genital mutilation changed relationships with mothers
    4. Female genital mutilation increases the risk of future childbirth complications, which change relationships with family and healthcare systems
    5. Managing experiences with early exposure to physical and sexual violence across the lifespan.

Interviews are the most common data collection technique in qualitative research. There are four main types of interviews; the one you choose will depend on your research question, aims and objectives. It is important to formulate open-ended interview questions that are understandable and easy for participants to answer. Key considerations in setting up the interview will enhance the quality of the data obtained and the experience of the interview for the participant and the researcher.

  1. Gill P, Stewart K, Treasure E, Chadwick B. Methods of data collection in qualitative research: interviews and focus groups. Br Dent J. 2008;204(6):291-295. doi:10.1038/bdj.2008.192
  2. DeJonckheere M, Vaughn LM. Semistructured interviewing in primary care research: a balance of relationship and rigour. Fam Med Community Health. 2019;7(2):e000057. doi:10.1136/fmch-2018-000057
  3. Nyanchoka L, Tudur-Smith C, Porcher R, Hren D. Key stakeholders’ perspectives and experiences with defining, identifying and displaying gaps in health research: a qualitative study. BMJ Open. 2020;10(11):e039932. doi:10.1136/bmjopen-2020-039932
  4. Morgan DL, Ataie J, Carder P, Hoffman K. Introducing dyadic interviews as a method for collecting qualitative data. Qual Health Res. 2013;23(9):1276-1284. doi:10.1177/1049732313501889
  5. Picchi S, Bonapitacola C, Borghi E, et al. The narrative interview in therapeutic education. The diabetic patients’ point of view. Acta Biomed. 2018;89(6-S):43-50. doi:10.23750/abm.v89i6-S.7488
  6. Stuij M, Elling A, Abma T. Negotiating exercise as medicine: Narratives from people with type 2 diabetes. Health (London). 2021;25(1):86-102. doi:10.1177/1363459319851545
  7. Buchmann M, Wermeling M, Lucius-Hoene G, Himmel W. Experiences of food abstinence in patients with type 2 diabetes: a qualitative study. BMJ Open. 2016;6(1):e008907. doi:10.1136/bmjopen-2015-008907
  8. Jessee E. The Life History Interview. In: Handbook of Research Methods in Health Social Sciences. 2018:1-17 (Chapter 80-1).
  9. Sheftel A, Zembrzycki S. Only Human: A Reflection on the Ethical and Methodological Challenges of Working with “Difficult” Stories. The Oral History Review. 2019;37(2):191-214. doi:10.1093/ohr/ohq050
  10. Harnisch H, Montgomery E. “What kept me going”: A qualitative study of avoidant responses to war-related adversity and perpetration of violence by former forcibly recruited children and youth in the Acholi region of northern Uganda. Soc Sci Med. 2017;188:100-108. doi:10.1016/j.socscimed.2017.07.007
  11. Ruslin, Mashuri S, Rasak MSA, Alhabsyi F, Syam H. Semi-structured Interview: A Methodological Reflection on the Development of a Qualitative Research Instrument in Educational Studies. IOSR-JRME. 2022;12(1):22-29. doi:10.9790/7388-1201052229
  12. Chang T, Llanes M, Gold KJ, Fetters MD. Perspectives about and approaches to weight gain in pregnancy: a qualitative study of physicians and nurse midwives. BMC Pregnancy & Childbirth. 2013;13(47). doi:10.1186/1471-2393-13-47
  13. DeJonckheere M, Robinson CH, Evans L, et al. Designing for Clinical Change: Creating an Intervention to Implement New Statin Guidelines in a Primary Care Clinic. JMIR Hum Factors. 2018;5(2):e19. doi:10.2196/humanfactors.9030
  14. Knott E, Rao AH, Summers K, Teeger C. Interviews in the social sciences. Nature Reviews Methods Primers. 2022;2(1). doi:10.1038/s43586-022-00150-6
  15. Bergenholtz H, Missel M, Timm H. Talking about death and dying in a hospital setting – a qualitative study of the wishes for end-of-life conversations from the perspective of patients and spouses. BMC Palliat Care. 2020;19(1):168. doi:10.1186/s12904-020-00675-1
  16. Olorunsaiye CZ, Degge HM, Ubanyi TO, Achema TA, Yaya S. “It’s like being involved in a car crash”: teen pregnancy narratives of adolescents and young adults in Jos, Nigeria. Int Health. 2022;14(6):562-571. doi:10.1093/inthealth/ihab069
  17. Ayton DR, Barker AL, Peeters G, et al. Exploring patient-reported outcomes following percutaneous coronary intervention: A qualitative study. Health Expect. 2018;21(2):457-465. doi:10.1111/hex.12636
  18. World Health Organization. International Classification of Functioning, Disability and Health (ICF). WHO. https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health
  19. Cuthbertson J, Rodriguez-Llanes JM, Robertson A, Archer F. Current and Emerging Disaster Risks Perceptions in Oceania: Key Stakeholders Recommendations for Disaster Management and Resilience Building. Int J Environ Res Public Health. 2019;16(3). doi:10.3390/ijerph16030460
  20. Bannon SM, Grunberg VA, Reichman M, et al. Thematic Analysis of Dyadic Coping in Couples With Young-Onset Dementia. JAMA Netw Open. 2021;4(4):e216111. doi:10.1001/jamanetworkopen.2021.6111
  21. McGranahan R, Jakaite Z, Edwards A, Rennick-Egglestone S, Slade M, Priebe S. Living with Psychosis without Mental Health Services: A Narrative Interview Study. BMJ Open. 2021;11(7):e045661. doi:10.1136/bmjopen-2020-045661
  22. Gutiérrez-García AI, Solano-Ruíz C, Siles-González J, Perpiñá-Galvañ J. Life Histories and Lifelines: A Methodological Symbiosis for the Study of Female Genital Mutilation. Int J Qual Methods. 2021;20. doi:10.1177/16094069211040969

Qualitative Research – a practical guide for health and social care researchers and practitioners Copyright © 2023 by Danielle Berkovic is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.

Library Support for Qualitative Research

General Handbooks and Overviews

  • Interviews as a Method for Qualitative Research (video) This short video summarizes why interviews can serve as useful data in qualitative research.  
  • InterViews by Steinar Kvale  Interviewing is an essential tool in qualitative research and this introduction to interviewing outlines both the theoretical underpinnings and the practical aspects of the process. After examining the role of the interview in the research process, Steinar Kvale considers some of the key philosophical issues relating to interviewing: the interview as conversation, hermeneutics, phenomenology, concerns about ethics as well as validity, and postmodernism. Having established this framework, the author then analyzes the seven stages of the interview process - from designing a study to writing it up.  
  • Practical Evaluation by Michael Quinn Patton  Surveys different interviewing strategies, from (a) informal/conversational, to (b) the interview guide approach, to (c) standardized and open-ended, to (d) closed/quantitative. Also discusses strategies for wording questions that are open-ended, clear, sensitive, and neutral, while supporting the speaker. Provides suggestions for probing and maintaining control of the interview process, as well as suggestions for recording and transcription.
  • The SAGE Handbook of Interview Research by Amir B. Marvasti (Editor); James A. Holstein (Editor); Jaber F. Gubrium (Editor); Karyn D. McKinney (Editor)  The new edition of this landmark volume emphasizes the dynamic, interactional, and reflexive dimensions of the research interview. Contributors highlight the myriad dimensions of complexity that are emerging as researchers increasingly frame the interview as a communicative opportunity as much as a data-gathering format. The book begins with the history and conceptual transformations of the interview, which is followed by chapters that discuss the main components of interview practice. Taken together, the contributions to The SAGE Handbook of Interview Research: The Complexity of the Craft encourage readers simultaneously to learn the frameworks and technologies of interviewing and to reflect on the epistemological foundations of the interview craft.
Qualitative research communities

  • International Congress of Qualitative Inquiry They host an annual conference at the University of Illinois at Urbana-Champaign, which aims to facilitate the development of qualitative research methods across a wide variety of academic disciplines, among other initiatives.
  • METHODSPACE An online home of the research methods community, where practicing researchers share how to make research easier.
  • Social Research Association, UK The SRA is the membership organisation for social researchers in the UK and beyond. It supports researchers via training, guidance, publications, research ethics, events, branches, and careers.
  • Social Science Research Council The SSRC administers fellowships and research grants that support the innovation and evaluation of new policy solutions. They convene researchers and stakeholders to share evidence-based policy solutions and incubate new research agendas, produce online knowledge platforms and technical reports that catalog research-based policy solutions, and support mentoring programs that broaden problem-solving research opportunities.

Qualitative Interviewing


Sally Nathan, Christy Newman and Kari Lancaster


Qualitative interviewing is a foundational method in qualitative research and is widely used in health research and the social sciences. Both qualitative semi-structured and in-depth unstructured interviews use verbal communication, mostly in face-to-face interactions, to collect data about the attitudes, beliefs, and experiences of participants. Interviews are an accessible, often affordable, and effective method to understand the socially situated world of research participants. The approach is typically informed by an interpretive framework where the data collected is not viewed as evidence of the truth or reality of a situation or experience but rather a context-bound subjective insight from the participants. The researcher needs to be open to new insights and to privilege the participant’s experience in data collection. The data from qualitative interviews is not generalizable, but its exploratory nature permits the collection of rich data which can answer questions about which little is already known. This chapter introduces the reader to qualitative interviewing, the range of traditions within which interviewing is utilized as a method, and highlights the advantages and some of the challenges and misconceptions in its application. The chapter also provides practical guidance on planning and conducting interview studies. Three case examples are presented to highlight the benefits and risks in the use of interviewing with different participants, providing situated insights as well as advice about how to go about learning to interview if you are a novice.

Source: Nathan S, Newman C, Lancaster K. Qualitative Interviewing. In: Liamputtong P, ed. Handbook of Research Methods in Health Social Sciences. Singapore: Springer; 2019. doi:10.1007/978-981-10-5251-4_77




Structured Interview | Definition, Guide & Examples

Published on January 27, 2022 by Tegan George and Julia Merkus. Revised on June 22, 2023.

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is one of four types of interviews .

In research, structured interviews are often quantitative in nature. They can also be used in qualitative research if the questions are open-ended, but this is less common.

While structured interviews are often associated with job interviews, they are also common in marketing, social science, survey methodology, and other research fields.

The other three types of interviews are:

  • Semi-structured interviews : A few questions are predetermined, whereas the other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

What is a structured interview?

Structured interviews are the most systematized type of interview. In contrast to semi-structured or unstructured interviews, the interviewer uses predetermined questions in a set order.

Structured interviews are often closed-ended. They can be dichotomous, which means asking participants to answer “yes” or “no” to each question, or multiple-choice. While open-ended structured interviews do exist, they are less common.

Asking set questions in a set order allows you to easily compare responses between participants in a uniform context. This can help you see patterns and highlight areas for further research, and it can be a useful explanatory or exploratory research tool.

Structured interviews are best used when:

  • You already have a very clear understanding of your topic, so you possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

A structured interview is straightforward to conduct and analyze. Asking the same set of questions mitigates potential biases and leads to fewer ambiguities in analysis. It is an undertaking you can likely handle as an individual, provided you remain organized.

Differences between different types of interviews

Make sure to choose the type of interview that suits your research best. The four types differ in whether the questions themselves are fixed, whether the order and number of questions are fixed, and whether there is an option to ask additional questions.

Advantages of structured interviews include reduced bias; increased credibility, reliability and validity; and a simple, cost-effective and efficient process. Disadvantages include their formal nature, limited flexibility and limited scope.

It can be difficult to write structured interview questions that approximate exactly what you are seeking to measure. Here are a few tips for writing questions that contribute to high internal validity :

  • Define exactly what you want to discover prior to drafting your questions. This will help you write questions that really zero in on participant responses.
  • Avoid jargon, compound sentences, and complicated constructions.
  • Be as clear and concise as possible, so that participants can answer your question immediately.
For example, dichotomous (yes/no) questions might include:

  • Do you think that employers should provide free gym memberships?
  • Did any of your previous employers provide free memberships?
  • Does your current employer provide a free membership?
  • Do you enjoy going to the gym?

A multiple-choice question might instead offer fixed response options such as: a) 1 time; b) 2 times; c) 3 times; d) 4 or more times.

Structured interviews are among the most straightforward research methods to conduct and analyze. Once you’ve determined that they’re the right fit for your research topic , you can proceed with the following steps.

Step 1: Set your goals and objectives

Start with brainstorming some guiding questions to help you conceptualize your research question, such as:

  • What are you trying to learn or achieve from a structured interview?
  • Why are you choosing a structured interview as opposed to a different type of interview, or another research method?

If you have satisfying reasoning for proceeding with a structured interview, you can move on to designing your questions.

Step 2: Design your questions

Pay special attention to the order and wording of your structured interview questions. Remember that in a structured interview, both must remain the same for every participant. Stick to closed-ended or very simple open-ended questions.

Step 3: Assemble your participants

Depending on your topic, there are a few sampling methods you can use, such as:

  • Voluntary response sampling : For example, posting a flyer on campus and finding participants based on responses
  • Convenience sampling of those who are most readily accessible to you, such as fellow students at your university
  • Stratified sampling of a particular age, race, ethnicity, gender identity, or other characteristic of interest to you
  • Judgment sampling of a specific set of participants that you already know you want to include

Step 4: Decide on your medium

Determine whether you will be conducting your interviews in person or whether your interview will take pen-and-paper format. If conducted live, you need to decide if you prefer to talk with participants in person, over the phone, or via video conferencing.

Step 5: Conduct your interviews

As you conduct your interviews, be very careful that all conditions remain as constant as possible.

  • Ask your questions in the same order, and try to moderate your tone of voice and any responses to participants as much as you can.
  • Pay special attention to your body language (e.g., nodding, raising eyebrows), as this can bias responses.

After you’re finished conducting your interviews, it’s time to analyze your results.

  • Assign each of your participants a number or pseudonym for organizational purposes.
  • Transcribe the recordings manually or with the help of transcription software.
  • Conduct a content or thematic analysis to look for categories or patterns of responses. In most cases, it’s also possible to conduct a statistical analysis to test your hypotheses .

Transcribing interviews

If you have audio-recorded your interviews, you will likely have to transcribe them prior to conducting your analysis. In some cases, your supervisor might ask you to add the transcriptions in the appendix of your paper.

First, you will have to decide whether to conduct verbatim transcription or intelligent verbatim transcription. Do pauses, laughter, or filler words like “umm” or “like” affect your analysis and research conclusions?

  • If so, conduct verbatim transcription and include them.
  • If not, conduct intelligent verbatim transcription, which excludes fillers and fixes any grammar issues, and is often easier to analyze.

The transcription process is a great opportunity for you to cleanse your data as well, spotting and resolving any inconsistencies or errors that come up as you listen.

Coding and analyzing structured interviews

After transcribing, it’s time to conduct your thematic or content analysis . This often involves “coding” words, patterns, or themes, separating them into categories for more robust analysis.

Due to the closed-ended nature of many structured interviews, you will most likely be conducting content analysis, rather than thematic analysis.

  • You quantify the categories you chose in the coding stage by counting the occurrence of the words, phrases, subjects or concepts you selected.
  • After coding, you can organize and summarize the data using descriptive statistics .
  • Next, inferential statistics allows you to come to conclusions about your hypotheses and make predictions for future research. 

When conducting content analysis, you can take an inductive or a deductive approach. With an inductive approach, you allow the data to determine your themes. A deductive approach is the opposite, and involves investigating whether your data confirm preconceived themes or ideas.

Content analysis has a systematic procedure that can easily be replicated , yielding high reliability to your results. However, keep in mind that while this approach reduces bias, it doesn’t eliminate it. Be vigilant about remaining objective here, even if your analysis does not confirm your hypotheses .
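As a small, hypothetical illustration of the counting step described above, the snippet below tallies how often each code occurs across a set of already-coded interview excerpts. The codes and data are invented, and real projects typically rely on dedicated qualitative analysis software; this only shows the basic frequency count that underpins the descriptive summary.

    # Hypothetical example of quantifying codes after content analysis.
    # Each excerpt has already been assigned one or more codes by the researcher.
    from collections import Counter

    coded_excerpts = [
        {"participant": "P01", "codes": ["cost_concern", "wait_times"]},
        {"participant": "P02", "codes": ["wait_times"]},
        {"participant": "P03", "codes": ["cost_concern", "insurance_confusion"]},
        {"participant": "P04", "codes": ["cost_concern"]},
    ]

    # Count how often each code occurs across all excerpts (descriptive statistics).
    code_counts = Counter(code for excerpt in coded_excerpts for code in excerpt["codes"])

    for code, count in code_counts.most_common():
        print(f"{code}: {count}")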

After your data analysis, the next step is to combine your findings into a research paper .

  • Your methodology section describes how you collected the data (in this case, describing your structured interview process) and explains how you justify or conceptualize your analysis.
  • Your discussion and results sections usually address each of your coded categories, describing each in turn, as well as how often they occurred.

If you conducted inferential statistics in addition to descriptive statistics, you would generally report the test statistic , p -value , and effect size in your results section. These values explain whether your results justify rejecting your null hypothesis and whether the result is practically significant .

You can then conclude with the main takeaways and avenues for further research.

Example of interview methodology for a research paper

Let’s say you are interested in healthcare on your campus. You attend a large public institution with a lot of international students, and you think there may be a difference in perceptions based on country of origin.

Specifically, you hypothesize that students coming from countries with single-payer or socialized healthcare will find US options less satisfying.

There is a large body of research available on this topic, so you decide to conduct structured interviews of your peers to see if there’s a difference between international students and local students.

You are a member of a large campus club that brings together international students and local students, and you send a message to the club to ask for volunteers.

Here are some questions you could ask:

  • Do you find healthcare options on campus to be: excellent; good; fair; average; poor?
  • Does your home country have socialized healthcare? Yes/No
  • Are you on the campus healthcare plan? Yes/No
  • Have you ever worried about your health insurance? Yes/No
  • Have you ever had a serious health condition that insurance did not cover? Yes/No
  • Have you ever been surprised or shocked by a medical bill? Yes/No

After conducting your interviews and transcribing your data, you can then conduct content analysis, coding responses into different categories. Since you began your research with the theory that international students may find US healthcare lacking, you would use the deductive approach to see if your hypotheses seem to hold true.
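Continuing this hypothetical campus healthcare example, the coded closed-ended responses could be cross-tabulated and tested with the inferential statistics mentioned above (a test statistic, p value and effect size). The sketch below assumes SciPy is available and uses invented counts purely for illustration; it is not a report of real data.

    # Hypothetical analysis for the campus healthcare example: do international and
    # local students differ in whether they have ever worried about health insurance?
    # All counts are invented for illustration only.
    import math
    from scipy.stats import chi2_contingency

    #          worried  not worried
    table = [[34, 16],   # international students
             [21, 29]]   # local students

    chi2, p, dof, expected = chi2_contingency(table)

    # Cramer's V as a simple effect size for a 2x2 contingency table.
    n = sum(sum(row) for row in table)
    cramers_v = math.sqrt(chi2 / (n * (min(len(table), len(table[0])) - 1)))

    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}, Cramer's V = {cramers_v:.2f}")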


A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when: 

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, but you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order.
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.

Cite this Scribbr article


George, T. & Merkus, J. (2023, June 22). Structured Interview | Definition, Guide & Examples. Scribbr. Retrieved August 21, 2024, from https://www.scribbr.com/methodology/structured-interview/


  • What is a semi-structured interview?

Last updated 5 February 2023. Reviewed by Cathy Heath.

When designed correctly, user interviews go much deeper than surface-level survey responses. They can provide new information about how people interact with your products and services, and shed light on the underlying reasons behind these habits.

Semi-structured user interviews are widely considered one of the most effective tools for doing this kind of qualitative research , depending on your specific goals. As the name suggests, the semi-structured format allows for a more natural, conversational flow, while still being organized enough to collect plenty of actionable data .


A semi-structured interview is a qualitative research method used to gain an in-depth understanding of the respondent's feelings and beliefs on specific topics. Although the interviewer prepares the questions ahead of time, they can adjust the order, skip any that are redundant, or create new ones during the interview. Additionally, the interviewer should be prepared to ask follow-up questions and probe for more detail.

Semi-structured interviews typically last between 30 and 60 minutes and are usually conducted either in person or via a video call. Ideally, the interviewer can observe the participant's verbal and non-verbal cues in real-time, allowing them to adjust their approach accordingly. The interviewer aims for a conversational flow that helps the participant talk openly while still focusing on the primary topics being researched.

Once the interview is over, the researcher analyzes the data in detail to draw meaningful results. This involves sorting the data into categories and looking for patterns and trends. This semi-structured interview approach provides an ideal framework for obtaining open-ended data and insights.

  • When to use a semi-structured interview?

Semi-structured interviews are considered the "best of both worlds" as they tap into the strengths of structured and unstructured methods. Researchers can gather reliable data while also getting unexpected insights from in-depth user feedback.

Semi-structured interviews can be useful during any stage of the UX product-development process, including exploratory research to better understand a new market or service. Further down the line, this approach is ideal for refining existing designs and discovering areas for improvement. Semi-structured interviews can even be the first step when planning future research projects using another method of data collection.

  • Advantages of semi-structured interviews

Flexibility

This style of interview is meant to be adapted according to the answers and reactions of the respondent, which gives a lot of flexibility. Semi-structured interviews encourage two-way communication, allowing themes and ideas to emerge organically.

Respondent comfort

The semi-structured format feels more natural and casual for participants than a formal interview. This can help to build rapport and more meaningful dialogue.

Semi-structured interviews are excellent for user experience research because they provide rich, qualitative data about how people really experience your products and services.

Open-ended questions allow the respondent to provide nuanced answers, with the potential for more valuable insights than other forms of data collection, like structured interviews , surveys , or questionnaires.

  • Disadvantages of semi-structured interviews

Can be unpredictable

Less structure brings less control, especially if the respondent goes off on a tangent or doesn't provide useful information. If the conversation derails, it can take a lot of effort to bring the focus back to the relevant topics.

Lack of standardization

Every semi-structured interview is unique, including potentially different questions, so the responses collected are very subjective. This can make it difficult to draw meaningful conclusions from the data unless your team invests the time in a comprehensive analysis.

Compared to other research methods, semi-structured interviews are not as consistent or "ready to use."

  • Best practices when preparing for a semi-structured interview

While semi-structured interviews provide a lot of flexibility, they still require thoughtful planning. Maximizing the potential of this research method will depend on having clear goals that help you narrow the focus of the interviews and keep each session on track.

After taking the time to specify these parameters, create an interview guide to serve as a framework for each conversation. This involves crafting a range of questions that can explore the necessary themes and steer the conversation in the right direction. Everything in your interview guide is optional (that's the beauty of being "semi" structured), but it's still an essential tool to help the conversation flow and collect useful data.

Best practices to consider while designing your interview questions include:

Prioritize open-ended questions

Promote a more interactive, meaningful dialogue by avoiding questions that can be answered with a simple yes or no, otherwise known as close-ended questions.

Stick with "what," "when," "who," "where," "why," and "how" questions, which allow the participant to go beyond the superficial to express their ideas and opinions. This approach also helps avoid jargon and needless complexity in your questions.

Open-ended questions help the interviewer uncover richer, qualitative details, which they can build on to get even more valuable insights.

Plan some follow-up questions

When preparing questions for the interview guide, consider the responses you're likely to get and pair them up with some effective, relevant follow-up questions. Factual questions should be followed by ones that ask an opinion.

Planning potential follow-up questions will help you to get the most out of a semi-structured interview. They allow you to delve deeper into the participant's responses or hone in on the most important themes of your research focus.

Follow-up questions are also invaluable when the interviewer feels stuck and needs a meaningful prompt to continue the conversation.

Avoid leading questions

Leading questions are framed toward a predetermined answer. This makes them likely to result in data that is biased, inaccurate, or otherwise unreliable.

For example, asking "Why do you think our services are a good solution?" or "How satisfied have you been with our services?" will leave the interviewee feeling pressured to agree with some baseline assumptions.

Interviewers must take the time to evaluate their questions and make a conscious effort to remove any potential bias that could get in the way of authentic feedback.

Asking neutral questions is key to encouraging honest responses in a semi-structured interview. For example, "What do you consider to be the advantages of using our services?" or simply "What has been your experience with using our services?"

Neutral questions are effective in capturing a broader range of opinions than closed questions, which is ultimately one of the biggest benefits of using semi-structured interviews for research.

Use the critical incident method

The critical incident method is an approach to interviewing that focuses on the past behavior of respondents, as opposed to hypothetical scenarios. One of the challenges of all interview research methods is that people are not great at accurately recalling past experiences, or answering future-facing, abstract questions.

The critical incident method helps avoid these limitations by asking participants to recall extreme situations or 'critical incidents' which stand out in their memory as either particularly positive or negative. Extreme situations are more vivid so they can be recalled more accurately, potentially providing more meaningful insights into the interviewee’s experience with your products or services.

  • Best practices while conducting semi-structured interviews

Encouraging interaction is the key to collecting more specific data than is typically possible during a formal interview. Facilitating an effective semi-structured interview is a balancing act between asking prepared questions and creating the space for organic conversation. Here are some guidelines for striking the right tone.

Beginning the interview

Make participants feel comfortable by introducing yourself and your role at the organization and displaying appropriate body language.

Outline the purpose of the interview to give them an idea of what to expect. For example, explain that you want to learn more about how people use your product or service.

It's also important to thank them for their time in advance and emphasize there are no right or wrong answers.

Practice active listening

Build trust and rapport throughout the interview with active listening techniques, focusing on being present and demonstrating that you're paying attention by responding thoughtfully. Engage with the participant by making eye contact, nodding, and giving verbal cues like "Okay, I see," "I understand," and "M-hm."

Avoid the temptation to rush to fill any silences while they're in the middle of responding, even if it feels awkward. Give them time to finish their train of thought before offering feedback or another prompt. Embracing these silences is part of active listening: pauses often precede the most meaningful, candid responses.

Practicing these techniques will ensure the respondent feels heard and respected, which is critical for gathering high-quality information.

Ask clarifying questions in real time

In a semi-structured interview, the researcher should always be on the lookout for opportunities to probe into the participant's thoughts and opinions.

Along with preparing follow-up questions, get in the habit of asking clarifying questions whenever possible. Clarifying questions are especially important for user interviews because people often provide vague responses when discussing how they interact with products and services.

Being asked to go deeper will encourage them to give more detail and show them you’re taking their opinions seriously and are genuinely interested in understanding their experiences.

Some clarifying questions that can be asked in real-time include:

"That's interesting. Could you give me some examples of X?"

"What do you mean when you say "X"?"

"Why is that?"

"It sounds like you're saying [rephrase their response], is that correct?"

Minimize note-taking

In a wide-ranging conversation, it's easy to miss out on potentially valuable insights by not staying focused on the user. This is why semi-structured interviews are generally recorded (audio or video), and it's common to have a second researcher present to take notes.

The person conducting the interview should avoid taking notes because it's a distraction from:

Keeping track of the conversation

Engaging with the user

Asking thought-provoking questions

Watching you take notes can also have the unintended effect of making the participant feel pressured to give shallower, shorter responses—the opposite of what you want.

Concluding the interview

Semi-structured interviews don't come with a set number of questions, so it can be tricky to bring them to an end. Give the participant a sense of closure by asking whether they have anything to add before wrapping up, or if they want to ask you any questions, and then give sincere thanks for providing honest feedback.

Don't stop abruptly once all the relevant topics have been discussed or you're nearing the end of the time that was set aside. Make them feel appreciated!

  • Analyzing the data from semi-structured interviews

In some ways, the real work of semi-structured interviews begins after all the conversations are over, and it's time to analyze the data you've collected. This process will focus on sorting and coding each interview to identify patterns, often using a mix of qualitative and quantitative methods.

Some of the strategies for making sense of semi-structured interviews include the following (a simplified coding example follows the list):

Thematic analysis: focuses on the content of the interviews and identifies common themes

Discourse analysis: looks at how language is used to express meaning around themes such as politics, culture, and power

Qualitative data mapping: a visual way to map out the relationships between different elements of the data

Narrative analysis: uses stories and language to unlock perspectives on an issue

Grounded theory: builds theory from the data itself and can be applied when no existing theory explains a new phenomenon
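To make that first coding pass concrete, here is a minimal sketch in Python. Everything in it is an assumption made for illustration: the codebook, keywords, and transcript snippets are invented, and real thematic analysis rests on researcher judgment rather than keyword matching. The sketch simply tags transcript sentences with codes and tallies how often each code appears, the kind of bookkeeping that precedes interpreting themes.

```python
# Illustrative first pass at coding interview transcripts.
# The codebook, keywords, and transcripts are hypothetical; keyword matching
# is a crude stand-in for the researcher's own reading and judgment.
from collections import Counter

codebook = {
    "pricing": ["price", "cost", "expensive", "cheap"],
    "usability": ["easy", "confusing", "intuitive", "difficult"],
    "support": ["help", "support", "response", "waiting"],
}

transcripts = {
    "participant_01": "The app was easy to set up, but support took days to respond.",
    "participant_02": "It felt expensive for what it does, and the menus were confusing.",
}

code_counts = Counter()
coded_segments = []  # (participant, code, sentence) triples kept for rereading

for participant, text in transcripts.items():
    for sentence in text.split("."):
        sentence = sentence.strip().lower()
        if not sentence:
            continue
        for code, keywords in codebook.items():
            if any(word in sentence for word in keywords):
                code_counts[code] += 1
                coded_segments.append((participant, code, sentence))

for code, count in code_counts.most_common():
    print(f"{code}: {count} coded segment(s)")
```

The tallies only point you toward excerpts worth rereading; the themes themselves come from interpreting those coded excerpts, not from the counts.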



Benn J, Arnold G, D’Lima D, et al. Evaluation of a continuous monitoring and feedback initiative to improve quality of anaesthetic care: a mixed-methods quasi-experimental study. Southampton (UK): NIHR Journals Library; 2015 Jul. (Health Services and Delivery Research, No. 3.32.)


Appendix 1. Qualitative interview schedule: first time point

Evaluative stakeholder interview schedule

(A) Preamble to interviews

Provide standard information form to read. Prompt for clarity/questions. Obtain consent. Provide brief verbal overview of the project and aims. Start recording.

  • Please could you introduce yourself and state your role/specialty for the recording?

(B) General views upon feedback

  • Prompt for perspectives of anaesthetists, patients, other HC [health-care] professionals, managers.
  • Do you think anaesthetists generally get adequate feedback upon these aspects of quality of care?

(C) Evaluation of the current initiative

  • What are your general thoughts about this initiative and the feedback reports that you receive? [Introduce feedback report template]
  • Have we missed anything important?


For each measure (PONV, pain, temperature), please rate on a scale of 1–5:
1. Importance to overall quality of anaesthetic care (validity)
2. Confidence in the accuracy of the measure (reliability)
3. Degree to which you can influence this measure (controllability)
  • (Review the matrix above and check that our interpretation is correct)
  • Were the results as you expected?
  • What are the benefits? What do the reports tell you that you did not know before?
  • Are there any examples of changes you have made to your practice?
  • Does the data help you identify and address underlying reasons for variations?
  • Do you think it’s possible for you to influence the data through changing your practice?
  • If the data suggested there was an opportunity for improvement, would you change your practice?
  • How do you think anaesthetists should use the data?
  • What do you think might prevent anaesthetists from making improvements based upon the data?
  • How do you think the department should use the data?
  • Frequency, length, graphical/text content, technical complexity.
  • How do you feel about being compared with your colleagues? Prompt for any case-mix issues. Is competition important?

(D) Future development

  • Would you be interested in a longer report with a more detailed breakdown/analysis of your data?
  • What further support could be provided for anaesthetists to use this data to improve care?
  • Can you see a role for initiatives of this type in revalidation?

(E) Broader context

  • Any concerns around use of the data?
  • Is there anything about the organisation or context in which you work that might make a system like this one more or less successful?
  • Prompt for comfort with disclosing and discussing personal performance data with peers. Are such discussions constructive/punitive?
  • What other factors influence whether you are comfortable for your data to be collected and used in this way?
  • What support from the broader organisation/department/specialty would you need to use this data effectively for continuous improvement?

Indicator ratings template for use during interview:

For each measure (PONV, pain, temperature), please rate from 1 (low) to 5 (high):
1. Importance to overall quality of anaesthetic care
2. Confidence in the accuracy of the measure
3. Degree to which you can influence this measure

Included under terms of UK Non-commercial Government License .



Turn the television on, and you are very likely to find a celebrity or some other famous personality conversing with a TV news anchor or program show host. Open the newspaper and there’s a chance that you’ll read about the thoughts of a prominent politician about certain issues, written down by a journalist.

As you walk to your office and pass by the Human Resource department, you see a queue of well-dressed young men and women waiting for their turn to go into the room and talk with the HR manager, who is currently trying to fill a vacant position.

These scenarios all involve conversations and exchanges of ideas, accomplished in the form of an interview.

Interview Schedule: Definition, Types, Templates and Tips


In this article, we explore 1) what an interview is, 2) the pros and cons of having an interview schedule, 3) the different types of interview schedules, 4) interview schedule templates, and 5) tips and tricks.

INTERVIEW: AN OVERVIEW

Quite possibly, the simplest definition of an “interview” is a “conversation where questions are asked and the corresponding answers are given.” The setting and execution of the interview range from casual to semi-formal to formal, and it involves two parties: the interviewer and the interviewee. The interviewer’s objective is to collect data and information by asking questions and probing the answers given by the interviewee.

An interview may be conducted one-on-one, with one interviewer and one interviewee, or in groups. For example, one interviewer may discuss with multiple interviewees, or more than one interviewer may converse with a single interviewee. Alternatively, it could be a group arrangement, with a panel of interviewers facing a panel of interviewees.

When are interviews conducted? Generally, interviews are used for the following:

  • Hiring or Recruitment. A job interview involves a hiring manager or recruiter talking to or discussing with an applicant or candidate in order to assess the latter’s suitability and fit for an open position.
  • Research. One way to gather data for research (e.g. marketing, economic, and scientific research) is through what is known as “research interview”, where respondents are sought for answers. In scientific research, for example, questions are formulated for the purpose of testing a hypothesis or assumption.
  • Information dissemination. News is the prime example, where a person is asked questions for television, radio, newspaper, or similar media.

How can you tell if an interview is going smoothly? Well, common sense would dictate that an interview is going well if there is a continuous exchange of ideas and information, and this can be attributed to several factors.

  • An objective or purpose, revolving around a specific topic or subject. Is it a job interview? Is the interview meant to find out what the interviewee thinks about a certain issue?
  • The ability of the interviewer to ask questions and encourage the interviewee to connect with him and open up to him. This also refers to his ability to probe deeper and follow up on the questions to gain more information.
  • The responsiveness of the interviewee, and his ability to express himself fully in his answers. He should be able to quickly grasp the question and understand what is being asked, so he can provide the answer that the interviewer is looking for.
  • The interview setting. This includes the venue or location, the language used, and other external factors that set the overall tone of the interview.

If any of the above are absent (or present but lacking in any way), then you can expect the interview to go downhill from the beginning. However, another huge reason why most interviews fail or do not achieve the desired results is lack of preparation, particularly on the part of the interviewer. An indication of preparedness is an “interview schedule”.

THE INTERVIEW SCHEDULE

As fun as spontaneous or on-the-spot interviews may seem, they will still bomb if no preparation was put into them. Those “ambush” interviews you see on television? They are not as random or “on-the-spot” as they are presented to be. The questions have already been prepared beforehand, and they are often contained in an interview schedule.

An interview schedule is basically a list of structured questions that have been prepared to serve as a guide for interviewers, researchers, and investigators in collecting information or data about a specific topic or issue. It may even be described as the “interviewer’s script.” The schedule is used by the interviewer, who will fill in the questions with the answers received during the actual interview.

Advantages of an Interview Schedule

  • An interview schedule facilitates the conduct of an interview. Since the questions have already been prepared beforehand, it is easier to carry out and complete the interview.
  • It increases the likelihood of collecting accurate information or data. The questions, which were already prepared beforehand, are expected to be well-thought out and have focus, so they target the “heart of the matter”, thereby ensuring that the answers obtained are correct or accurate. According to Lindlof & Taylor, interview schedules can increase the reliability and credibility of data gathered.
  • It allows interviewers and researchers to get more information, since they can ask follow-up queries or clarifications to the questions they have prepared. Thus, the information gathered is more relevant and useful.
  • The rate and amount of responses are higher. Often, interviews are time-bound. Interviewers are given only a limited amount of time to ask all their questions and get the answers. If the interviewer comes prepared, he can use that time properly. Otherwise, he will waste a lot of time thinking about what question to ask next, and before he knows it, time is up and he has barely gotten anything substantial from the interviewee.
  • It offers flexibility and high customization, and may be used when interviewing different types of people. The interviewer can prepare it with the respondents in mind. For example, an interviewer may have prepared a job interview schedule for the recruitment of a construction worker or laborer. When he is tasked to interview candidates for a senior management position, he may also use the same schedule, but with several adjustments.

Disadvantages of an Interview Schedule

  • It can be time-consuming. Preparation of the interview schedule can take quite a chunk of the time of an interviewer, especially if it is for an extensive or in-depth interview. Significant amounts of research must be performed in order to be able to craft good questions.
  • There is a high risk that the interview and its results may suffer from the bias of the interviewer, as he is the one who chooses the questions to be asked during the interview.
  • Variability may be high when the interview schedule is used by multiple interviewers. This may result in unreliable information being gathered during the interviews.

TYPES OF INTERVIEW SCHEDULES

There are two major types of interview schedules or guides that are widely used by interviewers.

In-depth interview schedule

This is used for open-ended interviews, which are aimed at obtaining in-depth information, often on serious topics or sensitive issues. The questions are open-ended, with prompts provided for the interviewer to ask for clarification or further information if necessary.

The interviewee is given more room or leeway to talk about all the topics that will crop up during the interview, so he is free to use his own words and let the ideas flow out of him easily. The key characteristics of this interview schedule are listed below.

  • The schedule should indicate that the interviewee has been made aware of the purpose of the interview and how long it will take.
  • The questions must be crafted to provide answers relevant to the topic or issue. For example, if it is a job interview, the questions should address whether the applicant being interviewed possesses the qualifications and credentials that make him suitable for the open position. If the interview is for purposes of research or investigation, the questions should answer the main problem or topic of the research or investigation.
  • All questions should be relevant, or have an impact on the purpose or objective of the interview. Remove any irrelevant questions, or those with answers that won’t be of any use to you.
  • It takes a one-step-at-a-time approach, with each question meant to tackle only one issue instead of addressing several issues at once. Bundling issues tends to confuse not only the interviewee but also the interviewer, and can result in the latter losing control of the direction of the interview.
  • Instead of using questions answerable with a yes or no, the questions are open-ended, which can be used as a starting or reference point for more questions. This way, the interviewer can go deeper in getting the information he needs.
  • The questions are neutral, avoiding leading questions that have the potential to dictate the answer to the interviewee.

Structured interview schedule

This type of interview schedule is often compared with the format used in survey forms or questionnaires because of their similarities. The difference lies in the usage; obviously, the interview schedule is used by the interviewer during a face-to-face interaction, while the questionnaire is simply filled out by the respondent.

This interview schedule contains the questions that will be asked, and it is also where the interviewer will record the answers to those questions. Essentially, preparing an interview schedule for a structured interview is the same as preparing a questionnaire; the difference is that the schedule is used solely by the interviewer, and the respondent or interviewee never lays eyes on its contents.

For more flexibility, however, some interviewers combine the features of these two types when they prepare their interview schedule. It would really be up to the interviewer, and what he deems to be most effective in achieving his objectives.

INTERVIEW SCHEDULE TEMPLATES

There is no single standard template for an interview schedule. Generally, the format will depend on the type and purpose of the interview being conducted, as well as the target respondents or interviewees. However, the interview schedule should have three major parts: the opening, the body, and the closing.

Some researchers call the opening stage the “warm-up”, where the objective is to create an atmosphere that will accommodate the open and free flow of ideas between the interviewer and interviewee, whether it is one-on-one or in a group.

At the start of the interview, the interviewer should welcome the interviewee and make an effort to put him at ease. If the respondent is relaxed, the interview is likely to go smoothly. The interviewer will then proceed to inform the interviewee of the following:

  • Objectives of the interview. The interviewee deserves to know why the interview is taking place, and why he is involved. In case of a job interview, the applicant being interviewed already knows why he is in the same room with the HR personnel, but it should still be spelled out to him.
  • The topics or points that will be discussed in the course of the interview. This is to further make the interviewee comfortable, since you are giving him something like an ‘advance warning’ of what will be asked later on in the conversation.
  • The estimated length or duration of the interview. The interviewee does not want to be kept guessing throughout the course of the interview when it will end, or if he will have to spend all morning talking to the interviewer.

The interviewee would like to feel that he will benefit in some way from this interview, so it would definitely help if you give him motivation to answer the questions properly and accurately. If you don’t, he may not be inclined to answer the questions, much less give good answers.

This part of the interview schedule may be formatted in such a way that fits the interviewer’s personality, and even that of the interviewee.

Next comes the body, or “core discussion”. This contains the meat of the interview schedule: the topics and the questions to be asked. Again, the content will depend on the topic and the type of interview. The main thing that you should never overlook is that the questions should fulfill the objective of the interview.

Instead of an interview outline, which includes only a list of topics and subtopics, a typical interview schedule also contains major questions, as well as follow-up questions designed to probe or clarify the answers to the previously asked major questions.

When preparing the body of the interview schedule, it is important to leave more than enough space where the interviewer may write down the responses or answers of the interviewee.

Finally, the closing wraps up the interview. It is included in the interview schedule to ensure that the interview does not end abruptly, which may come across as rude to the interviewee.

The closing will cover the main points, in summary, that were talked about during the interview, followed by a brief discussion on the next steps that will be taken after the interview.

You may check out this template for an example of an interview schedule to be used in talking with a university classmate. This other template of a simple interview guide also provides cues on what the interviewer should say during the interview, aside from the questions that he will ask.

In some cases, an interview schedule may be so simple as to contain only the salient points, such as the purpose of the interview, the date, time and location of the conduct of the interview, and the names and contact details of both the interviewer and the interviewee. Take a look at this job interview schedule as an example.

Fortunately, there is a wealth of interview schedule templates online that you can tweak and adapt to your needs.
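If it helps to picture the three-part structure described above, here is a minimal sketch of an interview schedule held as a small Python data structure. The field names, questions, and logistics are hypothetical rather than a standard template; the point is simply that a schedule pairs each main question with planned follow-ups and leaves room to record responses.

```python
# Hypothetical interview schedule: an opening, a body of main questions with
# planned follow-ups and space for notes, and a closing. Field names are
# illustrative assumptions, not a standard format.
interview_schedule = {
    "purpose": "Understand how new hires experienced their first month",
    "logistics": {"date": "2024-05-02", "time": "10:00", "location": "Room 3B"},
    "opening": [
        "Introduce yourself and explain the purpose of the interview",
        "State the estimated duration and ask for consent to record",
    ],
    "body": [
        {
            "question": "How did your first week compare with what you expected?",
            "follow_ups": ["What surprised you most?", "Can you give an example?"],
            "response": "",  # space for the interviewer's notes
        },
        {
            "question": "Who did you turn to when you were stuck?",
            "follow_ups": ["How easy was it to get help?"],
            "response": "",
        },
    ],
    "closing": [
        "Summarise the main points discussed",
        "Explain the next steps and thank the interviewee",
    ],
}
```

The same content could just as easily live in a word-processor table; writing it out this way only makes the opening, body, and closing explicit.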

TIPS IN PREPARING AND USING AN INTERVIEW SCHEDULE

The main concern in the preparation of an interview schedule is the questions. What should be asked, and how should they be asked? But that is not all. Even the order or sequence of the questions matters, which is why it should also be reflected in the interview schedule.

Remember the following tips when preparing the guide that you will use for the interview.

  • Do not start the interview with a question probing into any personal information of the interviewee (unless the purpose of the interview is to talk about his personal life). If it’s a job interview, it would be better to get him to talk about his skills, qualifications and work experiences, since that is his comfort zone. If it is a research interview, you can start things off by asking the interviewee about his expertise in the field that you are interviewing him about. Do not ask him personal questions about his family or similar topics.
  • Start with the “lighter” questions, or those that will not immediately put the interviewee or respondent on his guard. The interviewee should be able to answer these easily; then you can move on gradually to the more sensitive or difficult topics. If you start with a controversial question, or something that will make the interviewee uncomfortable, that will set a grim tone for the rest of the interview.
  • The general rule is for you to group the questions in a logical manner. You can start with general questions, and work your way toward the specific questions later on. Of course, you may have to be flexible at times, especially when a general question must be followed by a specific question in order to clarify something.
  • For variety and a more natural flow, if you are using both open-ended and closed questions, it is a good idea to mix them up, instead of asking all the closed questions first and then the open-ended ones in the latter half of the interview. Another suggestion is to adopt the funnel or inverted funnel sequence. The funnel sequence has you starting with open-ended questions and gradually, but naturally, easing into the closed-ended ones. The inverted funnel sequence orders the questions in reverse.
  • Keep the respondents or interviewees in mind when preparing the questions. You should know about their background, at least, so you can prepare questions that will resonate with them. If you are going to interview candidates for a supervisory engineering position, you can frame the questions so that the candidates will be able to prove whether they are qualified for the job or not. If you are interviewing a person of interest regarding a recent incident, you should at least find out why he is considered to be a “person of interest”, so you can come up with the proper and relevant questions.
  • The wording of the questions must be clear. Avoid using complicated and highly technical terms, unless you are completely sure that the interviewee is familiar with them. Try using simple language and layman’s terms to avoid confusion. Stay away from colloquial terms and jargon, especially when there are better – easier to understand – alternatives that you can use. Sentence structure is also important. Questions structured into long and run-on sentences may confuse you both, and the interviewee may miss the main point that you are asking about. As earlier mentioned, as much as possible, each question should address a single issue. Avoid placing too many questions in a single sentence, to be read in one breath.
  • Provide adequate space where you can record or write the answers or responses to each question. There is an option to use a recorder during the interview, in case there are some points that you fail to record on the interview schedule. If you are going to use one, you have to inform the interviewee about it at the start of the interview, and get his consent to record the interview.
  • As interviewer, you have to familiarize yourself with the interview schedule. You want the interview to flow naturally, and you definitely don’t want to sound stilted when asking the questions or, worse, as if you rehearsed it. Well, you probably have, but you don’t want to make that apparent to the interviewee. You have to exude confidence; after all, you are the one asking the questions. Once you have prepared the interview schedule, you have to know it inside out.


Defining Qualitative Research: What Is It and How to Use It


Understanding Qualitative Research is essential for anyone looking to delve into human behavior and social phenomena. Imagine standing in a bustling marketplace, observing interactions, and absorbing the nuances of conversation. This immersive experience reflects the essence of qualitative research, where the focus is on exploring rich, detailed insights rather than mere numerical data.

Qualitative research serves as a valuable tool for understanding complex issues through participant perspectives. By employing various methods like interviews and focus groups, researchers gain in-depth insights into people's thoughts and motivations. This section will explore the importance of qualitative research, its methodologies, and how it can be effectively applied to uncover hidden patterns in human behavior for informed decision-making.

Defining Qualitative Research: What Is It?

Qualitative research is fundamentally about understanding the experiences and perspectives of individuals in depth. It focuses on subjective narratives and often employs methods like interviews, focus groups, and observations. This approach allows researchers to capture the richness of human experiences, facilitating a deeper understanding of the context behind people's thoughts and actions.

Central to this method is the belief that reality is socially constructed, and meanings vary across different cultures and situations. Understanding qualitative research is crucial for anyone looking to explore complex issues where the nuances of human behavior matter. By prioritizing participants' viewpoints, qualitative research reveals insights that quantitative methods might overlook, providing valuable context for decision-making processes. This makes it an essential tool in fields ranging from market research to social sciences, allowing businesses to craft strategies rooted in genuine consumer understanding.

The Core Principles of Qualitative Research

Understanding qualitative research relies on a few core principles that guide its approach and application. First, qualitative research prioritizes depth over breadth, focusing on understanding human experiences and perceptions in context. This method seeks to uncover the meanings participants attach to their experiences, moving beyond mere data points to explore emotions, motivations, and social interactions.

Second, it emphasizes flexibility, allowing researchers to adapt their methods based on ongoing findings and participant feedback. This iterative process enhances the richness of the data collected. Third, qualitative research values the subjective nature of data, recognizing that reality is constructed through individual perspectives. Researchers must remain aware of their biases and strive to present participants' voices authentically. By adhering to these principles, one can gain a more nuanced understanding of qualitative research and its invaluable insights.

Distinguishing Qualitative from Quantitative Research

Qualitative and quantitative research serve distinct purposes and offer different types of insights. Understanding qualitative research involves recognizing its focus on exploring complex human experiences and social phenomena. It employs interviews, focus groups, or open-ended surveys to collect rich, descriptive data. This approach allows researchers to uncover motivations, feelings, and personal narratives that numbers alone cannot reveal.

In contrast, quantitative research emphasizes numerical data and statistical analysis. It uses structured tools like surveys and experiments to gather measurable information. This method seeks to quantify variables and often tests hypotheses, providing a broader overview that can generalize findings to larger populations. Distinguishing between these methodologies is vital for researchers, as blending them can enhance the depth and breadth of analysis, ensuring a more comprehensive understanding of the subject matter.

How to Use Qualitative Research: Understanding Qualitative Research in Practice

Understanding qualitative research in practice begins with recognizing its unique characteristics and applications in various fields. It emphasizes the importance of context, exploring human experiences and social phenomena through methods such as interviews, focus groups, and observations. This approach allows researchers to delve deeply into participants' perspectives, providing rich, nuanced insights that quantitative research often misses.

To effectively use qualitative research, consider these key steps:

  • Define Objectives: Clearly outline what you aim to discover or understand. This helps guide your research design.
  • Select Methodology: Choose appropriate methods for data collection, such as interviews or participant observations, aligned with your objectives.
  • Collect and Analyze Data: Gather qualitative data thoughtfully, then analyze it for patterns and themes that emerge from the narrative.
  • Interpret Findings: Draw meaningful conclusions that can influence practices or inform decisions based on the insights gained.

By following these steps, you enhance your ability to translate qualitative insights into actionable knowledge.

Methods and Techniques

Understanding qualitative research involves a variety of methods and techniques designed to gather rich, meaningful data. One commonly used method is in-depth interviews, where researchers engage directly with participants to explore their thoughts and feelings. This personal approach allows for deeper insights into complex topics that quantitative data may overlook. Additionally, focus groups offer a collaborative environment where diverse perspectives can emerge, revealing shared experiences and differing viewpoints.

Another valuable technique is participant observation, where researchers immerse themselves in the environment they study. This provides context to behaviors and interactions, enhancing the understanding of social dynamics. Finally, content analysis allows researchers to systematically analyze communication materials, such as text or media, to identify underlying themes and patterns. By combining these methods, a comprehensive understanding of the subject matter can be achieved, paving the way for more informed decisions and strategies in research initiatives.

Analyzing Qualitative Data

Analyzing qualitative data involves interpreting non-numerical information to extract meaningful insights. This process often reveals themes, patterns, and relationships that might not be immediately apparent. To master this, it is crucial to adopt systematic approaches. Here are some key strategies to enhance your analytical skills:

Coding: Start by organizing the data through a coding system. Assign labels to segments of your data that represent meaningful concepts. This simplifies the analysis by identifying patterns early on.

Thematic Analysis: After coding, group similar codes into themes. This helps to see broader patterns that emerge from the data, shedding light on underlying trends (a toy example of this step follows the list).

Triangulation: Use multiple sources or methods to validate findings. This reinforces the credibility of your results and helps to mitigate bias.

Interpretation: Finally, analyze the implications of your findings. What do they mean in the context of your research? Connecting insights to the core questions of your study strengthens your overall narrative.
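As a toy illustration of the coding and triangulation steps above (the codes, themes, and sources are invented for the example), the sketch below groups individual codes under broader themes and then checks whether each theme is supported by more than one data source.

```python
# Hypothetical codes already assigned to excerpts from two data sources.
coded_excerpts = [
    {"source": "interviews", "code": "long_wait_times"},
    {"source": "interviews", "code": "unclear_pricing"},
    {"source": "focus_group", "code": "long_wait_times"},
    {"source": "focus_group", "code": "friendly_staff"},
]

# Grouping codes under broader themes (the thematic analysis step).
themes = {
    "service_speed": {"long_wait_times"},
    "cost_transparency": {"unclear_pricing"},
    "staff_experience": {"friendly_staff"},
}

# A crude stand-in for triangulation: which themes appear in more than one source?
for theme, codes in themes.items():
    sources = {e["source"] for e in coded_excerpts if e["code"] in codes}
    status = "corroborated" if len(sources) > 1 else "single source"
    print(f"{theme}: seen in {sorted(sources)} ({status})")
```

A theme seen in only one source is not necessarily weak, but flagging it prompts the researcher to look for corroborating or contradicting evidence elsewhere.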

Understanding qualitative research relies heavily on these analytical methods, providing a solid foundation for making informed decisions based on qualitative data. By applying these techniques diligently, researchers can uncover rich insights that drive meaningful conclusions.

Conclusion: Understanding Qualitative Research’s Importance and Application

Qualitative research plays a crucial role in understanding human experiences, emotions, and motivations. By focusing on narrative-driven data, researchers can gain insights that quantitative methods may overlook. This rich data allows for a deeper exploration of complex issues, enabling organizations to derive meaningful conclusions and informed decisions. Understanding qualitative research empowers teams to craft tailored solutions that resonate with their target audience.

Additionally, its application extends across various fields, from market research to user experience design. With the proper tools and methodologies, qualitative research can streamline processes, automating data analysis and improving report generation. Ultimately, prioritizing qualitative methods fosters a more holistic understanding of subjects, driving innovation and enhancing outcomes.

Turn interviews into actionable insights

On this Page

Steps for Identifying Qualitative Research Type of Data Collected

You may also like, qualitative study method: best practices for implementation.

Insight7

Advanced Qualitative Types of Research Design Strategies

Writing a sample of methodology in qualitative research.

Unlock Insights from Interviews 10x faster

what is interview schedule in qualitative research

  • Request demo
  • Get started for free
  • Open access
  • Published: 20 August 2024

“Because people don’t know what it is, they don’t really know it exists” : a qualitative study of postgraduate medical educators’ perceptions of dyscalculia

  • Laura Josephine Cheetham

BMC Medical Education, volume 24, article number 896 (2024)


Dyscalculia is defined as a specific learning difference or neurodiversity. Despite a move within postgraduate medical education (PGME) towards promoting inclusivity and addressing differential attainment, dyscalculia remains an unexplored area.

Using an interpretivist, constructivist, qualitative methodology, this scoping study explores PGME educators’ attitudes, understanding and perceived challenges of supporting doctors in training (DiT) with dyscalculia. Through purposive sampling, semi-structured interviews and reflexive thematic analysis, the stories of ten Wales-based PGME educators were explored.

Multiple themes emerged relating to lack of educator knowledge, experience and identification of learners with dyscalculia. Participants’ roles as educators and clinicians were inextricably linked, with PGME seen as deeply embedded in social interactions. Overall, a positive attitude towards doctors with dyscalculia underpinned the strongly DiT-centred approach to supporting learning, tempered by uncertainty over potential patient safety-related risks. Perceiving themselves as learners, educators saw the educator-learner relationship as a major learning route given the lack of dyscalculia training available, with experience leading to confidence.

Conclusions

Overall, educators perceived a need for greater dyscalculia awareness, understanding and knowledge, pre-emptive training and evidence-based, feasible guidance introduction. Although methodological limitations are inherent, this study constructs novel, detailed understanding from educators relating to dyscalculia in PGME, providing a basis for future research.


Dyscalculia is categorised as a specific learning difference or part of neurodiversity in the UK and a learning disability in North America. Learners with dyscalculia are said to have significant difficulties in numerical processing [ 1 ]. It is increasingly acknowledged that these relate to arithmetic, statistics, ordinality, number and code memorisation and recall, with other individual variance [ 2 , 3 ]. Here, I chose to use “specific learning difference” (SpLD) to acknowledge that some feel SpLDs relate to a difference in learning needs but may not always result in learners identifying as disabled [ 4 , 5 ]. Most contemporary definitions state that these challenges are out of keeping with learner age, intelligence level and educational background [ 1 ], and evolve over time but persist into adulthood.

Dyscalculia is a comparatively recently recognised SpLD with a relatively low ‘diagnosed’ population prevalence, with estimates ranging between 3% and 7% [ 2 ]. Awareness of dyscalculia is lower than that of more frequently ‘diagnosed’ SpLDs such as dyslexia, dyspraxia and Attention Deficit and Hyperactivity Disorder (ADHD) [ 3 ], and there is a paucity of research-based evidence, especially relating to adult learners [ 2 ]. Of the two studies exploring dyscalculia in Higher Education Institutions (HEI) from the perspective of learners, both Drew [ 3 ] and Lynn [ 6 , 7 ] outlined poor understanding within adult learning environments and a lack of recognition of dyscalculia and of HEI learning support provision. Additionally, learner challenges were different to those described in dyslexia and dyspraxia studies, with understanding and perception of time, distance, finances, non-integer numbers, and memorisation and recall of numerical codes and values being frequent issues. Potential complexity arose through the possible coexistence of dyslexia or mathematical anxiety, the varying effectiveness of learner-developed coping strategies, and coping mechanisms becoming ineffective during undergraduate or postgraduate education [ 3 ]. Drew’s [ 3 ] three healthcare learner participants had also experienced potential fitness to practice concerns either from themselves or educators.

Context for medical education

The number of DiT in postgraduate medical education (PGME) with dyscalculia remains unknown. Similarly, awareness levels of PGME educators, or what their experiences might be, of facilitating the learning of DiT with dyscalculia is unexplored. Indeed, there has been no published research to date relating to dyscalculia in PGME or undergraduate medical education.

This paucity of knowledge is set in the context of a presumed increasing proportion of UK PGME DiT learners with a disability resulting from increasing numbers of medical students in the UK reporting a disability [ 8 , 9 ] and in other countries such as Australia [ 10 ]. Data collection via the statutory education bodies, and the medical regulator, the General Medical Council (GMC), is challenging given the voluntary nature of SpLD declaration and persisting concerns regarding discrimination and stigma [ 11 ]. My Freedom of Information request to the GMC in February 2022 revealed that 1.25% of registered doctors have declared a ‘learning disability’ (including SpLDs) such as dyslexia.

The impact of dyscalculia on DiT and their educators is unknown. The GMC defines differential attainment as the gap in assessment outcomes between learners grouped by protected characteristic [ 12 ]. It recently commissioned research into recommending education providers create more inclusive learning environments for disabled learners [ 13 ]. Other recent research indicates that differential attainment may persist from school-based examinations through to medical school exit ranking scores and onto PGME examinations [ 14 ].

Currently, there is no publicly available information addressing the support of PGME DiT with dyscalculia within the UK, and no known prospective screening in place. Support, including reasonable adjustments, for PGME DiT with additional learning needs is accessed through, and coordinated by, education bodies’ Professional Support Units (PSU), including Health Education and Improvement Wales’ (HEIW) PSU in Wales. More widely, HEIW, the education body in Wales, is responsible for delivery and quality management of PGME in accordance with UK-level standards set by the GMC and medical speciality Royal Colleges and Faculties. Reasonable adjustments are changes, additions, or the removal of learning environment elements to provide learners with additional support and remediate disadvantage [ 15 ]. They are frequently purported to enable learners with SpLDs to learn and perform to their potential, although evidence for this is variable [ 16 , 17 ], with a marked lack of research relating to adult learners with dyscalculia.

Despite recent shifts from more teacher-centred to more student-centred learning approaches, with a range of andragogical learning theories emphasising the learner being at the centre of learning [ 18 ], the educationalist remains a key element of many learning theories and of PGME. Many PGME educators are practising doctors and, alongside this, must maintain a contemporaneous understanding of learning theory, training delivery, teaching, supervision and wider educational policies. However, how they approach, or would plan to approach, supporting learning for DiT with dyscalculia is unknown. Therefore, exploring the attitudes and perspectives of PGME DiT or educators regarding dyscalculia, both unresearched previously, through this paradigm could be valuable [ 19 ].

Educational challenges, learning needs and local context

For educators, a pivotal part of facilitating learning is understanding the learning needs of learners, felt to be a cornerstone of adult pedagogy [ 19 , 20 ]. Davis et al. [ 20 ] define learning needs as “any gap between what is and what should be”. These can be established subjectively, objectively, or through a combination of the two. However, Grant [ 19 ] cautions against conducting limiting, formulaic learning need assessments.

Identifying attitudes and understanding

Furthermore, attitudes are said to frame educator approaches and thus the learning experiences learners will have [ 21 ]. Attitudes are defined as “a feeling or opinion about something or someone, or a way of behaving that is caused by this” [ 22 ]. Interpretivism offers a route to exploring such attitudes by outlining that there is no one universal truth or fact, but instead many equally valid realities constructed by different individuals, their meaning-making and their experiences.

Again, research is absent within medical education relating to educators’ attitudes and understanding of learners with dyscalculia and how these might influence their approach. Current research indicates that the attitudes of HEI educators are often formed through their past (or absent) experiences, a lack of knowledge of legal obligations and, for healthcare educators, the patient-centred role of clinical learners [ 23 ]. These appeared to help form their approach to facilitating teaching [ 23 , 24 , 25 , 26 , 27 , 28 , 29 ]. Therefore, understanding PGME educationalist attitudes towards DiT with dyscalculia would be important in helping understand how learning is facilitated.

Thus, there exists a clear lack of published knowledge and understanding regarding dyscalculia set in a context of increasing awareness of the importance of inclusivity and addressing differential attainment within medical education. The importance of educators in facilitating learning of such PGME DiT suggests that exploring their perspectives and understanding could provide valuable insights into this understudied area. Such knowledge could provide benefit to learners and those designing and delivering programmes of learning for DiT and programmes of support for educators. This includes potentially exploring the attitudes and understanding of educators who have no direct experience of dyscalculia, given that this could be the context in which a DiT with dyscalculia finds themselves in a postgraduate learning environment. Assumptions, or perceptions generated without experience or knowledge of dyscalculia, are equally important to understand in a learning context when the awareness level and prevalence of dyscalculia within DiT is unknown. This allows understanding of how learning for DiT with dyscalculia may be facilitated in a knowledge and understanding-poor context, and furthermore, what educator needs exist and what further research is needed.

Consequently, the research question and aims below were constructed.

Research question:

What are postgraduate medical educators’ attitudes towards, understanding of, and perceived challenges relating to dyscalculia within postgraduate medical training?

Research aims:

To explore the awareness and understanding of dyscalculia that postgraduate medical educators may or may not have.

To determine the attitudes that postgraduate educators have towards dyscalculia and DiT with dyscalculia and how these might be formed.

To establish the challenges that postgraduate educators perceive they encounter or might encounter when facilitating the learning of a DiT who has dyscalculia.

To provide the basis for future research studies exploring how to facilitate the learning of DiT with dyscalculia during postgraduate training.

This scoping study was designed using an interpretivist, constructivist qualitative methodology to understand the phenomenon in detail [ 30 ], as part of a Masters in Medical Education programme.

A literature review was undertaken to inform construction of the research question and aims. First, a focused literature search, conducted between October 2021 and May 2022, ascertained the level, and lack, of existing evidence on the study phenomenon; this was followed by four progressively broader searches to understand the wider context, which revealed that little or no literature existed.

The literature search was then performed by me using guidance [ 31 , 32 ] and twenty-seven research search engines. Additionally, a spectrum of journals was searched directly. Literature was also identified through snowballing.

Keyword search terms were developed and refined during the literature search, with limits on further broadening the search based on relevance to the areas of interest: postgraduate learners, educators and SpLDs. Different term combinations explored dyscalculia and postgraduate education, SpLDs and postgraduate healthcare learners, and postgraduate educators’ attitudes, knowledge or experiences of facilitating learning (appendix 1, supplementary material). Broadening of search terms allowed for exploration of analogous phenomena (other SpLDs) in other postgraduate healthcare and learning contexts, and for further research question development, returning 2,638 items. Papers were initially screened using their titles and the inclusion/exclusion criteria (below), generating 182 articles, papers and theses, whose abstracts and reference lists were reviewed. 174 papers and eight PhD theses were appraised using guidance [ 32 , 33 , 34 ].

Inclusion criteria were:

Primary research or review.

International or UK-based research reported in English.

Postgraduate higher education (university-level, post Bachelor or equivalent degree) setting.

Relating to postgraduate or higher educationalists’ views from any discipline and knowledge of SpLDs.

Exclusion criteria were:

Literature published in non-English languages.

Opinion and commentary articles.

Undergraduate setting, unless mixed cohort/study with postgraduate learners.

Ultimately, 17 papers and one doctoral thesis were included. Whilst grey literature, this thesis [ 3 ] was included due to the dyscalculia-focused insights provided and limited adult-based dyscalculia research elsewhere. After literature appraisal, research aims and a research question were formed.

Semi-structured interviews were chosen to enable data collection and interpretation through a constructivist lens, via open enquiry rather than hypothesis testing [ 30 , 35 , 36 ]. Study participants were PGME educators, actively involved in DiT learning within any PGME programme within Wales whilst holding a Medical Trainer agreement with HEIW. Participants held a range of educationalist roles, from education supervisor to local speciality-specific Royal College tutor (local speciality training lead) to training programme director (responsible for delivery of speciality-specific training across a region).

Interview question and guide design (appendix 2, supplementary material) drew on the six qualitative and six quantitative research-based, validated published tools used to explore similar phenomena, particularly those of O’Hara [ 37 ], Ryder [ 38 ], L’Ecuyer [ 23 ] and Schabmann et al. [ 39 ]. Design also drew upon Cohen et al’s [ 40 ] recommendations of composing open, neutral questioning.

Interview format was piloted using a PGME educator from England (thus ineligible for study recruitment) with modifications resulting from participant feedback and through adopting reflexivity; as per Cohen et al. [ 41 ] and Malmqvist et al. [ 42 ]. Participant interviews took place between May and June 2022 and were recorded via the University-hosted Microsoft Teams platform, due to the pandemic-based situation and large geographical area involved, whilst maintaining interviewer-interviewee visibility during the dialogue [ 35 ]. Recruitment occurred via purposive sampling, through two HEIW gatekeepers, the national Directors of Postgraduate Secondary (hospital-based) and Primary (General Practice-based) Medical Training in Wales. An email-based invitation with project information was distributed to all postgraduate medical educators with a current HEIW Medical Trainer agreement, regularly engaging in the support of learners within PGME training, in Wales. In this case, the gatekeepers in HEIW were individuals who could grant permission and make contact with all potential eligible participants on behalf of myself, through their email databases, whilst adhering to UK data protection regulations [ 43 , 44 ].

Ethical considerations

Formal ethics approval was gained from the Cardiff University School of Medicine Research Ethics Committee. Health Research Authority ethics approval was considered but deemed unnecessary. Informed written and verbal participant consent was obtained prior to, and at the point of, interview respectively. Additionally, verbal consent for video recording was sought, offering audio recording or notetaking alternatives; however, participant discomfort was not reported. Mitigation options to avoid selection bias included selecting alternative volunteers if significant relationships between the researcher and participant had existed.

Invitations to participate were circulated to approximately 2,400 to 2,500 postgraduate secondary care trainers and 600 primary care trainers. Eighteen individuals indicated interest in participating; one cancelled and seven did not respond to follow-up within the two-month timeframe the MSc project schedule allowed. Two of the seven subsequently responded after this deadline, citing clinical demands and unexpected personal matters. Ten postgraduate educators were interviewed, and all allowed video recording of the interview. Interviews lasted between 40 and 60 min. Interviews were transcribed verbatim by me and checked twice for accuracy, with participants assigned pseudonyms. Data analysis was conducted using reflexive thematic analysis (RTA), undertaken by me, the author, as the single coder and Masters student, with transcripts analysed three times.

RTA followed the six-step approach of Braun et al. [ 45 ], Braun and Clarke [ 46 ] and Braun and Clarke [ 47 ], with a primarily inductive approach [ 47 , 48 ] through an iterative process. Both latent and semantic coding approaches were used, guided by meaning interpretation [ 49 ].
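
To make these analytic steps more concrete for readers less familiar with reflexive thematic analysis, the short Python sketch below illustrates one way coded transcript extracts could be collated under candidate themes (roughly phases 2-3 of the six-phase approach) before themes are reviewed and refined. It is a hypothetical illustration only: the code labels, theme names and data structures are assumptions for demonstration, not this study's codebook or tooling.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical illustration of collating coded extracts into candidate themes
# during reflexive thematic analysis. Codes and theme assignments below are
# invented for demonstration and are not the study's codebook.

@dataclass
class CodedExtract:
    participant: str  # pseudonym, e.g. "P7"
    extract: str      # verbatim transcript segment
    code: str         # analyst-assigned code label
    coding: str       # "semantic" (surface meaning) or "latent" (underlying meaning)

# Phase 2: systematic coding of familiarised transcripts (toy sample only).
extracts = [
    CodedExtract("P7", "had I not had an awareness of it, I probably wouldn't have recognised it",
                 "experience enables recognition", "semantic"),
    CodedExtract("P10", "I would just sort of apply the bits and pieces I know around dyslexia",
                 "generalising from other SpLDs", "latent"),
    CodedExtract("P4", "patient safety has to come first in everything that we do",
                 "patient safety prioritised", "semantic"),
]

# Phase 3: collate codes into candidate themes (a judgement made by the analyst).
candidate_themes = {
    "experience enables recognition": "Experience shapes knowledge and attitudes",
    "generalising from other SpLDs": "Experience shapes knowledge and attitudes",
    "patient safety prioritised": "Clinical context may limit support",
}

def group_by_theme(items: list[CodedExtract]) -> dict[str, list[CodedExtract]]:
    """Group coded extracts under their candidate theme, ready for review (phases 4-5)."""
    grouped: dict[str, list[CodedExtract]] = defaultdict(list)
    for item in items:
        grouped[candidate_themes.get(item.code, "Uncategorised")].append(item)
    return dict(grouped)

if __name__ == "__main__":
    for theme, theme_extracts in group_by_theme(extracts).items():
        print(theme)
        for e in theme_extracts:
            print(f'  [{e.participant}, {e.coding}] {e.code}: "{e.extract}"')
```

In practice this collation is iterative rather than mechanical: extracts are re-read, codes renamed and themes redrawn as interpretation deepens, which is why the transcripts in this study were analysed three times.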

RTA allowed exploration through an interpretivist lens. Debate persists regarding how sample size sufficiency and ‘data saturation’ should be determined in RTA, which places greater emphasis on the analyst’s individual meaning-making; mechanisms for determining thematic saturation are therefore held to be inconsistent and unreliable [ 50 ]. Consequently, sample size was based on the maximum number of participants recruited within the set project time limits.

Reflexivity

I strove to adopt reflexivity throughout, using a research diary and personal reflections and referring to Finlay [ 51 ], who stated that such subjectivity can evolve into an opportunity. My interest in the studied phenomenon resulted partially from my experiences as a DiT with SpLDs and from being a DiT representative. Acknowledging this was important because my perspective, as an intrinsic part of this research, could have affected data gathering, interpretation and, ultimately, study findings by introducing insider status.

Additionally, holding an influential role within the research, with potential for ‘interviewer bias’ [ 52 ], I adopted Cohen et al.’s [ 53 ] recommendations, committing to conscious neutrality during interviews and use of an interview prompt list, whilst striving to maintain a reflexive approach. Alongside this, the impact on credibility of this study being part of a Masters project, which limited its scale and timeframe, was considered and mitigated by exploring these constraints within the discussion and by framing this research as a scoping study.

Results

Educators with limited to no direct experience of learners with dyscalculia knew little to nothing about dyscalculia (Fig. 1).

Figure 1. Summary of themes and subthemes generated.

Of the participants who did know something of dyscalculia, these educators cited close second-hand experiences with family members or past learners with dyscalculia, which helped shape their understanding. Those with no direct experience drew on empathy and generalisation, extrapolating from their greater knowledge of, and confidence in, dyslexia or other SpLDs, or even from analysis of the term ‘dyscalculia’ itself, to form definitions and perceptions.

“Absolutely nothing… I saw it, [dyscalculia in the study invitation] didn’t know what it was and Googled it so very, very little really. I suppose in my simplistic surgical sieve head, I would just sort of apply the bits and pieces I know around dyslexia.” P10.

All suggested dyscalculia represented a specific set of challenges and associated learning needs relating to numbers, numeracy or quantity, with overall intelligence preserved. Educators saw each learner as an individual and therefore felt dyscalculia would present as a spectrum, with varying challenges and needs. Dyscalculia was seen as persisting lifelong, with challenges and needs evolving with age and experience. Commonly suggested challenges related to calculations, statistics, critical appraisal, awareness of time, organisation and recall of number-based information (such as job lists and blood results), spatial dimension quantification, prescribing, fast-paced tasks and emergencies, exams, and learning-based fatigue or high cognitive load. Wellbeing issues relating to dyscalculia were also frequently perceived, potentially manifesting as reduced self-confidence and increased anxiety. All educators saw provision of pastoral support, enabling effective learning, as a key aspect of their role.

Past educator experiences of dyscalculia were linked to perceived confidence in their ability to support future DiT with dyscalculia. Educators felt their limited knowledge, with DiT with dyscalculia themselves being the primary source of information, reflected low levels of awareness, knowledge and identification within PGME, education systems and wider society. Some felt the proportion of PGME DiT with dyscalculia would be lower than in the general population, following challenging assessments during secondary school and undergraduate studies, but might be changing given widening participation initiatives within medicine. Others saw a potential hidden iceberg of later-career-stage doctors with unidentified dyscalculia who had completed training when speciality assessments relied less on numeracy.

“[It] was only because of my own experiences and my [relative] that I was able to kind of wheedle around and, you know, make them recognise that there was an issue and that, you know. But I - I think had I not had an awareness of it, I probably wouldn’t have recognised it, I think.” P7.

Educators frequently used empathy when attempting to understand dyscalculia. Educators had mixed feelings about ‘labelling’ DiT as having dyscalculia, although all felt identification of additional learning needs was key. Some felt labels were necessary to enable and better support DiT with dyscalculia in the absence of effective, feasible, inclusive education approaches; others noted the potential for stigma or generalisations.

None of the participants had received dyscalculia training. Some felt widespread societal normalisation of mathematics challenges adversely affected whether, and at what educational stage, dyscalculia was identified and needs recognised. Many felt that, in the absence of other knowledge sources, assumptions about dyscalculia might be made by generalising from better-known SpLDs, including dyslexia and dyspraxia, but that these extrapolations could be inaccurate and unhelpful.

“And I think there’s a lot of ‘oh you’re just bad with numbers’ or ‘ohh, you just can’t do’, you know people are just, I, I suspect there’s a lot of people who have just been told they’re not very good at maths, aren’t there? And it’s just, you know they can’t, can’t do it, which you know is not really very fair, is it?” P7.

Many felt PGME might represent a critical juncture for DiT with dyscalculia, where effective coping mechanisms developed in the past become ineffective. A variety of such coping mechanisms were suggested or hypothesised, often outlined as depending on the dyscalculia-based experience level of the educator, including checking work with others, calculator use and avoidance of numeracy-dense work or specialities.

Mechanisms were generally viewed positively except where perceived to reduce the likelihood of a DiT recognising dyscalculia themselves and seeking support.

Most felt positively towards learners with dyscalculia and facilitating their learning, especially educators with greater experience of dyscalculia. Many balanced this positivity with potential concerns regarding patient safety. Concerns focused especially on heavily numeracy-based tasks, fast-paced situations, or independent working in surgical or emergency prescribing situations. Overall, concerns were heightened by the clinical, patient-based context of PGME learning. Two participants felt that not all DiT with dyscalculia should be supported to continue training in particular specialities where numeracy skills were seen as critical, such as ophthalmology.

“I am, and it just seemed really unfair that this one small thing could potentially have such a big impact and could potentially prevent [them] progressing and succeeding in the way that I think you know, [they, they] had the potential to.” P6.

Educators outlined a dependence on the bidirectionality of learner-educator relationships to best facilitate DiT learning per se, and it was felt all DiT had a responsibility to be honest with educators. Some cited potential barriers to this collaboration, including past negative learner experiences, felt stigma, limited educator time and frequent DiT rotations.

“It’s a wonderful opportunity for learning which I really enjoy, because I think that this is a two-way process. You know, I think the DiT gives you things that you reflect on and you should be giving the DiT things that they reflect on” P5.

Most felt they would take a one-to-one learning approach for DiT with dyscalculia. Group-based, fast-paced, numeracy-rich or higher-risk clinical activity-based teaching was felt to be more challenging to adapt.

For some, patient safety uncertainties collided with the duality of being both clinician and educator, with perceived difficulty in quantifying the clinical risks associated with learning, and with educators’ clinical workload limiting available time and resources. Thus, many felt their educator roles always needed to be tempered by their duties as a doctor, prioritising patient safety and quality of care above all else.

“So, it’s not so much the learning, uh, issue that worries me. I think even if someone had dyscalculia the, uh, concepts of medicine could be understood and the basic outline of what we’re doing, but actually you’ve got to be quite precise in the vocational aspect of, of, of the training, and if you get it wrong, it’s a potential major clinical risk and obviously patient safety has to come first in everything that, that we do.” P4.

Educators wished strongly for pre-emptive support in facilitating the learning of DiT with dyscalculia, feeling great responsibility both for DiT learning and for upholding clinical standards and safety. Many felt they would approach HEIW’s PSU for reactive support, including seeking learner ‘diagnosis’, although some predicted that this support, and the PSU’s knowledge, might be limited. However, two participants outlined positive experiences after seeking PSU support.

Most educator participants supported the use of reasonable adjustments provided patient safety and quality of care remained prioritised and preserved. Other conditions for supporting reasonable adjustments were that they enabled without giving undue advantage and that the associated educator workload was not overly burdensome. Those with experience of dyscalculia more confidently volunteered reasonable adjustment suggestions, ranging from calculation-table or app access to additional time for numeracy-rich activities. Some perceived a challenging divide between clinical educators and SpLD education experts, who could make potentially unfeasible reasonable adjustment recommendations, with participants suggesting that greater involvement of clinical educators in developing support processes was important.

“If I’m honest, I don’t think we do it very well…They’re [reasonable adjustments offered] very simplistic, … you know, they’re very much based on a sort of global ability rather than realising that processing and other things might be impacted… We’re, we’re probably behind the curve and not really doing what could be done” P8.

Further example quotes for each theme and subtheme can be found within appendix 3, supplementary material.

Discussion

Experience shapes educator knowledge, understanding and attitudes

This study reveals novel findings regarding dyscalculia in PGME within a vacuum of prior research. Notably, participants’ views towards PGME learners with dyscalculia, including DiT potential to learn, practise and develop effective coping strategies, were substantially more positive and empathetic than in the closest comparable healthcare studies of other SpLDs [ 23 , 24 , 27 , 29 , 54 ]. Furthermore, the potential impact of societal normalisation of numeracy challenges on awareness of, and attitudes towards, dyscalculia explored by some participants has only previously been noted by Drew [ 3 ].

Educators’ expressions of a sense of personal or healthcare-wide lack of awareness and understanding of dyscalculia align with the current UK position [ 2 ]. But they also built on this, outlining how generalisation from other SpLDs or disabilities was frequently used to bridge the dyscalculia knowledge gap, with some not recognising this as potentially problematic. This suggests a need for enhanced awareness and understanding within the healthcare education community of the potential fallibility of using generalisation to support learners with poorly understood additional needs.

Moreover, no other studies have revealed that healthcare educators with personal experience of a learner relative with a SpLD displayed universally positive attitudes towards DiT with the same SpLD. Whilst this could reflect inter-study methodological differences, inter-professional differences or the increasing emphasis on compassionate clinical practice [ 55 ], it also suggests influence of educator experience in attitude formation.

In addition to their attitudes, the impact of prior experience of learners with dyscalculia on educators’ knowledge, understanding and confidence was often acknowledged as important by participants. This was seen to an extent in the closest comparable SpLD studies [ 24 , 54 ] and further shows the diverse influence of past educationalist experiences, particularly the establishment of deep, longitudinal relationships with relatives, aligning with social constructivism [ 56 ].

Unlike HEI lecturers in dyslexia studies [ 24 , 54 ], who frequently questioned the needs of learners, educators saw DiT with dyscalculia as intelligent and high-functioning, with credible additional learning needs. Needs were also seen as variable, unlike elsewhere. Additionally, the level of detail constructed regarding educators’ perceptions of the needs, strengths and challenges of each DiT with dyscalculia, evolving over time and experience, is not seen in non-dyscalculia SpLD studies and is only alluded to for dyscalculia [ 3 ]. These differences, which may be partially explained by varying methodologies or cultural norms regarding how different SpLDs are regarded, are important to understand better.

Furthermore, the preferred educator approach of individualising learning for DiT with dyscalculia is not seen elsewhere in the literature, although this aligns with supporting learning within their zone of proximal development (ZPD). Rather, Ryder and Norwich found HEI educators actually expressed negative attitudes towards individualising learning [ 24 ]. Methodological and SpLD-specific factors may contribute to these differences, with this study’s findings aligning more closely with Swanwick’s proposal that PGME often emulates apprenticeship-type learning [ 57 ]. It would be valuable to establish the efficacy of individualised PGME-based approaches to facilitating learning with dyscalculia from DiT and educator perspectives.

Greater educator support and training regarding dyscalculia is needed

Educators’ perceived need for wider awareness of dyscalculia, alongside greater pre-emptive training and guidance tailored towards dyscalculia within PGME learning environments, has also been described for other SpLDs [ 23 , 58 , 59 ]. Greater research is needed to develop such awareness and evidence-based training, with similar needs identified more widely in HEI for dyscalculia [ 3 ] and for other SpLDs [ 23 , 24 , 27 ]. Echoing some participants, Swanwick and Morris [ 60 ] discuss the increasing expectation that clinical educationalists deliver professional-level education, and Sandhu [ 61 ] explores the expressed need for greater faculty development while rectifying the deficit in the evidence base available to PGME educators.

The crucial importance of the bidirectionality of the educator-learner relationship, with educators perceiving themselves as learners too, is only subtly alluded to elsewhere [ 3 ]. Given the bidirectional learning relationship was reportedly undermined by frequent DiT placement rotations, fast-paced clinical environments and shift-based training patterns, further exploration of the appropriateness of current UK PGME training design for DiT with dyscalculia could be important.

Coping strategies are important to better understand

As with this study, Drew’s research suggested that coping strategies for learners with dyscalculia were potentially important, effective and helpful, but could have limitations [ 3 ]. However, this study provides the first examples of coping strategies, potential or already used, among DiT with dyscalculia. Given the breadth of participant concerns, future research to develop better understanding of both positive and negative dyscalculia-based coping mechanisms is crucial.

Identification is key but not fully enabling

Educators perceived early identification of dyscalculia to be key, showing commonality with dyscalculia, dyslexia and dyspraxia-based studies [ 3 , 25 , 28 ]. That identification was not seen as an absolute solution reinforces the need for further research exploring other disabling factors. However, the witnessed or potential negatives of being ‘labelled’ following dyscalculia ‘diagnosis/identification’, outlined by some participants, have been found only minimally elsewhere within learner-based dyslexia and dyscalculia HEI studies [ 3 , 25 , 28 ]. Negative consequences to labelling included the attitudes learners encountered within the clinical community, suggesting a need to understand cultural norm-related impacts. In contrast, the far greater positives to identification, and the necessity of labelling perceived by educators, were also seen in other SpLD studies [ 3 , 25 , 28 ], enabling self-understanding and access to support. Certainly, the need for improved dyscalculia identification approaches and training is highlighted by the lack of educator confidence in identifying dyscalculia where they had no relative-based experience.

Within the UK, voluntary dyslexia ‘screening’ processes are now offered to some medical students and DiT and similar opportunities could be offered for dyscalculia in the future. Moreover, accumulating evidence indicates an ever-greater importance of establishing equity of learning opportunity and that identification has a positive performance effect for DiT with dyslexia [ 16 , 62 , 63 ].

The PGME clinical context may limit support

Whilst educators clearly adopted a strongly student-centred approach to supporting learning with dyscalculia, addressing the influence of the duality of clinical educator roles on this approach is important. Educator supportive intent was twinned with tension between facilitating effective DiT learning and guaranteeing patient safety within diverse, predominantly clinical PGME learning environments, sharing commonality with L’Ecuyer’s nursing study [ 23 ]. Swanwick and Morris [ 60 ] note this influence on delivering training, with Sandhu [ 61 ] exploring general concerns regarding risk and clinical learning.

Even more pronounced patient safety concerns were expressed in other nursing SpLD studies [ 23 , 29 , 54 , 64 ], and further concerns about post-qualification independent working emerged [ 23 , 65 , 66 ], which limited educators’ willingness to support learning. Together, these tensions appear to set learning facilitation for those with dyscalculia within healthcare apart from non-healthcare settings. Healthcare-specific education research and training is therefore needed to address this, especially given that, thus far, analogous concerns regarding dyslexia and clinical risk remain unproven.

The influence of educator-reported increasing clinical workload and resource limitations on approaches to supporting DiT with dyscalculia was similarly seen within nursing studies [ 23 , 29 ]. Whilst the impact of clinical demands on UK-based educators is broadly known [ 67 ], greater recognition of the potentially disproportionate negative impact on DiT with dyscalculia is needed from those overseeing training delivery.

Uncertainty regarding reasonable adjustments needs addressing

Additionally, whilst educators were generally supportive of RAs for DiT with dyscalculia, with most intending these to be enabling, some attached substantial caveats to RA introduction. Concerns regarding RA implementation for DiT with dyscalculia were similar to those in nursing and wider HEI SpLD studies [ 24 , 66 ], but less common or absolute, mostly relating to feasibility, fairness and adverse impact on educators. These are important to explore if inclusivity in PGME is to be further embraced. Furthermore, and similarly to HEI findings [ 24 ], participant concerns about externally mandated RAs derived from distant SpLD experts suggest that harnessing coproduction, with greater involvement of clinical educators in RA design, could be important for future endorsement. Additionally, whilst the range of potential RA suggestions for dyscalculia made in this study is novel, it is important that the experiences of DiT with dyscalculia themselves are captured and used to ensure adjustments are truly enabling.

Therefore, whilst this study reveals important and novel findings relating to educators, PGME and dyscalculia, establishing the experiences of DiT with dyscalculia within PGME is the most crucial avenue for future research, so that both DiT and educators can be better understood and enabled to fulfil their roles effectively and inclusively.

Limitations

As a small, qualitative scoping study undertaken in Wales, its findings cannot and should not be generalised. As this is seemingly the first study in this area, transferability should also be considered carefully. Due to purposive sampling, those volunteering may have been more interested in this topic; therefore, findings may not reflect the range of knowledge, attitudes and experiences of all PGME educators.

Furthermore, use of interviews for data collection and the resultant lack of anonymity may have altered participant contributions. Moreover, despite adopting reflexivity, as a relatively inexperienced, sole researcher I will have engaged in interviews and analysed data with intrinsic unconscious biases, introducing variability and affecting the credibility of findings. Despite the methodological limitations of this small scoping study, my intention was to construct detailed understanding, providing a basis for future research.

Conclusions

This study reveals, seemingly for the first time, the attitudes, understanding and perceptions of PGME educators relating to DiT with dyscalculia. It highlights that lack of awareness and understanding of dyscalculia exists within the PGME educator community, especially in the absence of relatives with dyscalculia, and that widely accessible, evidence-based approaches to identification, support, teaching approaches and RA provision are needed and wanted by PGME educators.

The rich stories of participants illuminate the emphasis educators place on experiential learning in informing their perceptions and training approaches, especially in the absence of prospective dyscalculia training or an evidence base to draw upon. Given this, and given the impact of limited or absent dyscalculia experience and the substitution of generalisation to fill knowledge gaps found in this study, there is a real need for greater PGME-focused research to pre-emptively inform and support all educators.

Furthermore, greater acknowledgement and understanding are needed of the seminal influence that clinical context has on educators, their attitudes towards supporting DiT with dyscalculia and the highly prized bidirectional learning relationships revealed in this study. This highlights the need for further research to better understand the impact that specific nuances of PGME might have on educators’ support of DiT with dyscalculia and to further characterise unmet needs. Future research must also begin to address the educator uncertainties revealed in this study, including concerns relating to patient safety and care, differential approaches for dyscalculia and perceived unfairness to other learners, to move PGME forward in an effective, inclusive and enabling way.

Notable in this study is the lack of the learner voice, and future research needs to better understand the perceptions and experiences of PGME among DiT with dyscalculia across a wide range of aspects. These could include those suggested by participants: DiT learning and assessment experiences, coping strategies, reasonable adjustments and the impact of cultural norms. Furthermore, clarifying the wider awareness and knowledge levels of PGME educators regarding dyscalculia via more quantitative approaches could add breadth to the understanding of this poorly understood phenomenon, alongside the depth provided by this study.

Data availability

No datasets were generated or analysed during the current study.

Abbreviations

ADHD: Attention Deficit and Hyperactivity Disorder

DiT: Doctors in Training

GMC: General Medical Council

HEI: Higher Education Institution

HEIW: Health Education and Improvement Wales

PGME: Postgraduate Medical Education

PSU: Professional Support Unit

RA: Reasonable Adjustment

RTA: Reflexive Thematic Analysis

SpLD: Specific Learning Difference

UK: United Kingdom

ZPD: Zone of Proximal Development

References

Laurillard D, Butterworth B. Review 4: The role of science and technology in improving outcomes for the case of dyscalculia. In: Current Understanding, Support Systems, and Technology-led Interventions for Specific Learning Difficulties: evidence reviews commissioned for work by the Council for Science and Technology. Council for Science and Technology, Government Office for Science; 2020. https://assets.publishing.service.gov.uk/media/5f849afa8fa8f504594d4b84/specific-learning-difficulties-spld-cst-report.pdf . Accessed 24th November 2023.

Parliamentary Office for Science and Technology (POST). Postnote: Dyslexia and dyscalculia. London: Parliamentary Office for Science and Technology. 2014. https://www.parliament.uk/globalassets/documents/post/postpn226.pdf . Accessed 9th October 2023.

Drew S. Dyscalculia in higher education. PhD Thesis, Loughborough University, UK; 2016.

Walker E, Shaw S. Specific learning difficulties in healthcare education: the meaning in the nomenclature. Nurse Educ Pract. 2018;32:97–8.

Shaw S. The impacts of dyslexia and dyspraxia on medical education. PhD Thesis, University of Brighton and the University of Sussex; 2021. p. 16.

Lewis K, Lynn D. Against the odds: insights from a statistician with dyscalculia. Educ Sci. 2017;8:63. https://doi.org/10.3390/educsci8020063 .

Lewis K, Lynn D. An insider’s view of a mathematics learning disability: compensating to gain access to fractions. Investig Math Learn. 2018;10(3):159–72. https://doi.org/10.1080/19477503.2018.1444927 .

Shrewsbury D. State of play: supporting students with specific learning difficulties. Med Teach. 2011;33(3):254–5.

Murphy M, Dowell J, Smith D. Factors associated with declaration of disability in medical students and junior doctors, and the association of declared disability with academic performance: observational study using data from the UK Medical Education Database, 2002–2018 (UKMED54). BMJ Open. 2022;12:e059179. https://doi.org/10.1136/bmjopen-2021-059179 .

Mogensen L, Hu W. ‘A doctor who really knows...’: a survey of community perspectives on medical students and practitioners with disability. BMC Med Educ. 2019;19:288. doi: 10.1186/s12909-019-1715-7

British Medical Association. Disability in the Medical Profession: Survey Findings 2020. 2021. https://www.bma.org.uk/media/2923/bma-disability-in-the-medical-profession.pdf . Accessed 9th October 2023.

General Medical Council. What is differential attainment? 2021. https://www.gmc-uk.org/education/standards-guidance-and-curricula/projects/differential-attainment/what-is-differential-attainment . Accessed 9th October 2023.

General Medical Council. Welcomed and valued: Supporting disabled learners in medical education and training. 2019. https://www.gmc-uk.org/-/media/documents/welcomed-and-valued-2021-english_pdf-86053468.pdf . Accessed 9th October 2023.

Ellis R, Cleland J, Scrimgeour D, Lee A, Brennan P. The impact of disability on performance in a high-stakes postgraduate surgical examination: a retrospective cohort study. J Royal Soc Med. 2022;115(2):58–68.

Equality Act. 2010. c. 15. [Internet.] 2010. https://www.legislation.gov.uk/ukpga/2010/15 . Accessed 9th October 2023.

Asghar Z, et al. Performance of candidates disclosing dyslexia with other candidates in a UK medical licensing examination: cross-sectional study. Postgrad Med J. 2018;94(1110):198–203.

Botan V, Williams N, Law G, Siriwardena A. How is performance at selection to general practice related to performance at the endpoint of GP training? Report to Health Education England. 2022. https://eprints.lincoln.ac.uk/id/eprint/48920/ . Accessed 9th October 2023.

Taylor D, Hamdy H. Adult learning theories: implications for learning and teaching in medical education: AMEE Guide 83. Med Teach. 2013. https://doi.org/10.3109/0142159X.2013.828153 .

Grant J. Learning needs assessment: assessing the need. BMJ. 2002;324:156–9. https://doi.org/10.1136/bmj.324.7330.156 .

Davis N, Davis D, Bloch R. Continuing medical education: AMEE Education Guide 35. Med Teach. 2008;30(7):652–66.

Pit-Ten Cate I, Glock S. Teachers’ implicit attitudes toward students from different Social groups: a Meta-analysis. Front Psychol. 2019. https://doi.org/10.3389/fpsyg.2019.02832 .

Cambridge Dictionary. Meaning of attitude in English. [Internet.] 2022. https://dictionary.cambridge.org/dictionary/english/attitude . Accessed 9th October 2023.

L’Ecuyer K. Perceptions of nurse preceptors of students and new graduates with learning difficulties and their willingness to precept them in clinical practice (part 2). Nurse Educ Pract. 2019;34:210–7. https://doi.org/10.1016/j.nepr.2018.12.004 .

Ryder D, Norwich B. UK higher education lecturers’ perspectives of dyslexia, dyslexic students and related disability provision. J Res Spec Educ Needs. 2019;19:161–72.

Newlands F, Shrewsbury D, Robson J. Foundation doctors and dyslexia: a qualitative study of their experiences and coping strategies. Postgrad Med J. 2015;91(1073):121–6. https://doi.org/10.1136/postgradmedj-2014-132573 .

Shaw S, Anderson J. The experiences of medical students with dyslexia: an interpretive phenomenological study. Dyslexia. 2018;24(3):220–33.

L’Ecuyer K. Clinical education of nursing students with learning difficulties: an integrative review (part 1). Nurse Educ Pract. 2019;234:173–84. https://doi.org/10.1016/j.nepr.2018.11.015 .

Walker E, Shaw S, Reed M, Anderson J. The experiences of foundation doctors with dyspraxia: a phenomenological study. Adv Health Sci Educ Theory Pract. 2021;26(3):959–74.

Evans W. If they can’t tell the difference between duphalac and digoxin you’ve got patient safety issues. Nurse lecturers’ constructions of students’ dyslexic identities in nurse education. Nurse Educ Today. 2014;34(6):41–6. https://doi.org/10.1016/j.nedt.2013.11.004 .

Illing J, Carter M. Chapter 27: philosophical research perspectives and planning your research. In: Swanwick T, Forrest K, O’Brien B, editors. Understanding medical education: evidence, theory and practice. 3rd ed. Oxford: Wiley; 2019. pp. 393–6.

Atkinson K, Koenka A, Sanchez C, Moshontz H, Cooper H. Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate. Res Synth Methods. 2015;6(1):87–95.

Cohen L, Manion L, Morrison K. Research methods in education. 8th ed. London: Routledge; 2017. pp. 171–86.

Critical Skills Appraisal Programme (CASP): Qualitative checklist. In: Critical Appraisal Checklist. Critical appraisal skills programme. [Internet]. 2018. https://casp-uk.net/casp-tools-checklists/ . Accessed 9th October 2023.

O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51.

Cohen L, Manion L, Morrison K. Research methods in education. 8th ed. London: Routledge; 2017. pp. 334–5.

DeJonckheere M, Vaughn L. Semistructured interviewing in primary care research: a balance of relationship and rigour. Fam Med Com Health. 2019. https://doi.org/10.1136/fmch-2018-000057 .

O’Hara C. To Teach or Not to Teach? A study of Dyslexia in Teacher Education. Cardiff Metropolitan University, UK; 2013. p. 240.

Ryder D. Dyslexia assessment practice within the UK higher education sector: Assessor, lecturer and student perspectives. University of Exeter; 2016.

Schabmann A, Eichert H-C, Schmidt B, Hennes A-K, Ramacher-Faasen N. Knowledge, awareness of problems, and support: university instructors’ perspectives on dyslexia in higher education. Eur J Spec Needs Educ. 2020;35(2):273–82.

Cohen L, Manion L, Morrison K. Research methods in education. 8th ed. London: Routledge; 2017. pp. 507–24.

Cohen L, Manion L, Morrison K. Research methods in education. 8th ed. London: Routledge; 2017. p. 523.

Malmqvist J, Hellberg K, Möllås G, Rose R, Shevlin M. Conducting the pilot study: a neglected part of the research process? Methodological findings supporting the importance of piloting in qualitative Research studies. Int J Qual Methods. 2019;18. https://doi.org/10.1177/1609406919878341 .

Miller T, Bell L. Consenting to what? Issues of access, gate-keeping and ‘informed’ consent. In: Mauthner M, Birch M, Jessop J, Miller T, editors. Ethics in qualitative research. London: Sage; 2002. pp. 53–5.

Cohen L, Manion L, Morrison K. Research methods in education. 8th ed. London: Routledge; 2017. p. 523.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

Braun V, Clarke V. Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern-based qualitative analytic approaches. Couns Psychother Res. 2020;21(2):37–47.

Braun V, Clarke V. Thematic analysis. In: Cooper H, Camic P, Long D, Panter A, Rindskopf D, Sher K, editors. APA handbook of research methods in psychology, vol. 2: Research designs: quantitative, qualitative, neuropsychological, and biological. Washington, DC: American Psychological Association; 2012. pp. 57–71.

Braun V, Clarke V. Reflecting on reflexive thematic analysis. Qual Res Sport Exerc Health. 2019;11(4):589–97.

Byrne DA. Worked example of Braun and Clarke’s approach to reflexive thematic analysis. Qual Quant. 2021;56:1391–412.

Braun V, Clarke V. One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qual Res Psychol. 2021;18(3):328–52.

Finlay L. Outing the researcher: the provenance, process, and practice of reflexivity. Qual Health Res. 2002;12(4):531–45. https://doi.org/10.1177/104973202129120052 .

Beer O. ‘There’s a certain slant of light’: the experience of discovery in qualitative interviewing. OTJR. 1997;17(2):127.

Cohen L, Manion L, Morrison K. Research methods in education. 8th ed. London: Routledge; 2017. p. 112.

Cameron H, Nunkoosing K. Lecturer perspectives on dyslexia and dyslexic students within one faculty at one university in England. Teach High Educ. 2012;17(3):341–52.

West M, Coia D. Caring for doctors, caring for patients. London:General Medical Council. 2019. https://www.gmc-uk.org/-/media/documents/caring-for-doctors-caring-for-patients_pdf-80706341.pdf . Accessed 8th October 2023.

Kaufman DM. Teaching and learning in Medical Education: how theory can inform practice. In: Swanwick T, Forrest K, O’Brien BC, editors. Understanding Medical Education evidence theory and practice. New Jersey: Wiley Blackwell; 2019. pp. 58–9.

Swanwick T. Postgraduate medical education: the same, but different. Postgrad Med J. 2015;91:179–81.

Farmer M, Riddick B, Sterling C. Dyslexia and inclusion: assessment and support in higher education. London: Whurr; 2002. pp. 175–81.

Mortimore T. Dyslexia in higher education: creating a fully inclusive institution. J Res Spec Educ Needs. 2013;13:38–47. https://doi.org/10.1111/j.1471-3802.2012.01231.x .

Morris C. Chapter 12: Work-based learning. In: Swanwick T, Forrest K, O’Brien B, editors. Understanding medical education: Evidence, theory and practice. 3rd ed. Oxford: Wiley; 2019. p.168.

Sandhu D. Postgraduate medical education – challenges and innovative solutions. Med Teach. 2018;40(6):607–9.

Ricketts C, Brice J, Coombes L. Are multiple choice tests fair to medical students with specific learning disabilities? Adv Health Sci Educ Theory Pract. 2010;15:265–75. https://doi.org/10.1007/s10459-009-9197-8 .

Asghar Z, Williams N, Denney M, Siriwardena A. Performance in candidates declaring versus those not declaring dyslexia in a licensing clinical examination. Med Educ. 2019;53(12):1243–52.

Riddell S, Weedon E. What counts as a reasonable adjustment? Dyslexic students and the concept of fair assessment. Int Stud Sociol Educ. 2006;16(1):57–73. https://doi.org/10.1080/19620210600804301 .

Riddick R, English E. Meeting the standards? Dyslexic students and the selection process for initial teacher training. Eur J Teach Educ. 2006;29(2):203–22. https://doi.org/10.1080/02619760600617383 .

Morris D, Turnbull P. Clinical experiences of students with dyslexia. J Adv Nurs. 2006;54(2):238–47. https://doi.org/10.1111/j.1365-2648.2006.03806.x .

General Medical Council. National Training Survey 2024 results. [Internet]. 2024. p. 4–5, 24–25, 28–32. https://www.gmc-uk.org/-/media/documents/national-training-survey-summary-report-2024_pdf-107834344.pdf . Accessed 26th July 2024.

Acknowledgements

LJC would like to thank her academic supervisor Ms Helen Pugsley, Centre for Medical Education at Cardiff University, for her guidance and encouragement during LJC’s Masters project. LJC would also like to thank all the interview participants who took an active part in shaping this project. LJC is extremely grateful for their time, honesty and for providing such vivid and illuminating windows into their roles as educators. LJC would also like to thank Dr Colette McNulty, Dr Helen Baker and wider staff members at HEIW for their support in circulating her study invitation to trainers across Wales.

LJC did not receive any funding for, or as part of, the research project described in this paper.

Author information

Authors and affiliations.

Aneurin Bevan University Health Board, Newport, UK

Laura Josephine Cheetham

Contributions

LJC designed and undertook the entirety of the research project described in this paper. She also wrote this paper in its entirety.

Corresponding author

Correspondence to Laura Josephine Cheetham.

Ethics declarations

Ethics approval and consent to participate.

This study received ethical approval from Cardiff University’s Medical Ethics Committee. After discussion, it was felt that NHS Research Ethics Committee approval was not needed. Written and verbal informed consent to participate was obtained, with prospective participants provided with information regarding the study and their rights at least three weeks before interviews took place.

Consent for publication

Research participants gave written and verbal consent for the contents of their interviews to be analysed and reported as part of this study.

Competing interests

The authors declare no competing interests.

Author’s information

LJC is currently a final-year GP registrar working in Wales with keen interests in differential attainment, inclusivity within education and civil learning environments. This paper arises from a project she designed and undertook as part of her Masters in Medical Education at Cardiff University.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article.

Cheetham, L.J. “Because people don’t know what it is, they don’t really know it exists”: a qualitative study of postgraduate medical educators’ perceptions of dyscalculia. BMC Med Educ 24, 896 (2024). https://doi.org/10.1186/s12909-024-05912-2

Received: 27 November 2023

Accepted: 14 August 2024

Published: 20 August 2024

DOI: https://doi.org/10.1186/s12909-024-05912-2

Keywords

  • Dyscalculia
  • Postgraduate
  • Neurodiversity

  • Open access
  • Published: 19 August 2024

Updating a conceptual model of effective symptom management in palliative care to include patient and carer perspective: a qualitative study

  • Emma J. Chapman 1 ,
  • Carole A. Paley 1 ,
  • Simon Pini 2 &
  • Lucy E. Ziegler 1  

BMC Palliative Care volume 23, Article number: 208 (2024)

A conceptual model of effective symptom management was previously developed from interviews with multidisciplinary healthcare professionals (HCP) working in English hospices. Here we aimed to answer the question: does an HCP data-derived model represent the experience of patients and carers of people with advanced cancer?

Semi-structured interviews were undertaken with six patients with advanced cancer and six carers to gain an in-depth understanding of their experience of symptom management. Analysis was based on the framework method: transcription, familiarisation, coding, applying the analytical framework (conceptual model), charting, interpretation. Deductive framework analysis was used to align data with themes in the existing model, and an inductive approach was used to identify new themes.

The experience of patients and carers aligned with the key steps of engagement, decision making, partnership and delivery in the HCP-based model. The data aligned with 18 of 23 themes. These were: Role definition and boundaries; Multidisciplinary team decision making; Availability of services/staff; Clinician-patient relationship/rapport; Patient preferences; Patient characteristics; Quality of life versus treatment need; Staff time/burden; Psychological support (informal); Appropriate understanding, expectations, acceptance and goals (patients); Appropriate understanding, expectations, acceptance and goals (HCPs); Appropriate understanding, expectations, acceptance and goals (family, friends, carers); Professional, service and referral factors; Continuity of care; Multidisciplinary team working; Palliative care philosophy and culture; Physical environment and facilities; and Referral process and delays. Four additional patient and carer-derived themes were identified: Carer burden, Communication, Medicines management and COVID-19. Constructs that did not align were Experience (of staff), Training (of staff), Guidelines and evidence, Psychological support (for staff) and Formal psychological support (for patients).

Conclusions

A healthcare professional-based conceptual model of effective symptom management aligned well with the experience of patients with advanced cancer and their carers. Additional domains were identified. We make four recommendations for change arising from this research: routine appraisal and acknowledgement of carer burden, medicine management tasks and previous experience in healthcare roles; improved access to communication skills training for staff; and review of patient communication needs. Further research should explore the symptom management experience of those living alone and how these people can be better supported.

A conceptual model of effective symptom management was previously developed from qualitative data derived from interviews with healthcare professionals working in English hospices, to elicit their views about the barriers and facilitators of effective symptom management [ 1 ]. The model delineated the successful symptom management experience into four steps: engagement, decision-making, partnership and delivery. Constructs contributing to these were identified (Table 1).

Our original model was based solely on healthcare professional (HCP) input. However, the perception of professionals may differ from that of patients and carers. A recent patient and professional survey of needs assessments in an oncology inpatient unit showed discrepancies in the perception of unmet needs between staff and patients [ 2 ]. For this reason, we were concerned that what was deemed important by HCPs working in palliative care may not mirror the concerns and experience of patients and carers.

Here we aimed to answer the question: does an HCP data-derived model represent the experience of patients and carers of people with advanced cancer? If necessary, the original conceptual model of effective symptom management would be updated.

Qualitative, semi-structured interviews were chosen to gain an in-depth understanding of the experience from the perspective of a range of patients and carers. All methods were carried out in accordance with the principles of the Declaration of Helsinki. Ethical approval was granted by a UK research ethics committee (North of Scotland [ 2 ] Research Ethics Committee (20/NS/0086)). Verbal, recorded informed consent was given using a verbal consent script (Supplementary information 1). Our original intention had been to conduct interviews face to face, facilitated by a set of laminated prompt cards based upon those used in the HCP interviews. However, adaptation to telephone interviews in patients’ homes was necessary due to COVID-19 restrictions, and it became apparent that the card exercise did not work well remotely. We continued interviews based on the interview schedule but without the use of prompt cards. EC is a female, non-clinical senior research fellow in palliative care. She has experience of qualitative interviews and led the development of the original HCP-based model of effective symptom management [ 1 ]. Audio recordings were transcribed verbatim by a senior academic secretary.

Recruitment

Participants who met the inclusion criteria were identified by a research nurse at the participating hospice. Eligible patients were those who met all 5 criteria:

Diagnosed with advanced disease (i.e., cancer that is considered to be incurable).

Had been referred to the participating hospice.

Were 18 years of age or over.

Were able to speak and understand English.

Were able to give informed consent.

Eligible carers were people who met all 4 criteria:

Were the informal carer of an eligible patient (who may or may not also be participating in the study).

Patients or carers were excluded if they:

Exhibited cognitive dysfunction which would impede their being able to give informed consent and take part in the study.

Were deemed by hospice staff to be too ill or distressed.

Access to the inpatient unit was not possible at this time due to COVID-19 restrictions. The research nurse introduced the study, provided a participant information sheet and completed a consent-to-contact form. The first contact with the researcher was made by telephone to confirm (or not) interest in participation and to answer questions. An interview time, not less than 48 h after provision of the participant information sheet, was scheduled. The researcher and the participant information sheet explained the overall aim of the RESOLVE research programme to improve health status and symptom experience for people living with advanced cancer (Supplementary information 2). The verbal consent statements made it clear that this was a conversation for research purposes only and would not have any impact on the care the patient received (Supplementary information 3). Participants gave permission for the researcher to contact the clinical team at the hospice if there was a serious concern for welfare that required urgent attention. Verbal informed consent was collected and audio recorded at the start of the interview, with participants answering yes or no to each of the statements in the verbal consent script (Supplementary information 3). Participants were told that we had already interviewed HCPs about what helped or hindered effective symptom management and that we now wanted to understand their perspective too.

Data Collection

Interview topic guides (Supplementary information 4 and 5) were used. Interviews were conducted by EC over the telephone and audio recorded onto an encrypted Dictaphone. Files were downloaded onto a secure University of Leeds drive and then deleted from the Dictaphone. No video was recorded. The researcher made brief field notes directly after the interview on impression, emotion and participant backgrounds that were disclosed.

An Excel spreadsheet was used to facilitate data management. We explored the constructs of patient and carer experience as defined by our existing model. Deductive framework analysis was used to align data with themes in the existing conceptual model, and an inductive approach was used to identify new themes not included in the original model. Two researchers (EC and CP) independently conducted framework analysis on all transcripts. Data were then compared and discussed until a consensus data set was developed. The study is reported in accordance with Standards for Reporting Qualitative Research (SRQR) recommendations [ 11 ].
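
As a rough illustration of the charting step of the framework method described above, the sketch below builds a participant-by-theme matrix from coded summaries and sets aside codes that fall outside the existing framework as candidate new themes. It is illustrative only: the study charted data in an Excel spreadsheet rather than code, and the theme assignments and summaries shown are assumptions for demonstration rather than the study's actual chart.

```python
# Illustrative sketch of framework "charting": summarising each participant's coded
# data against the themes of an existing analytical framework, while setting aside
# codes that do not fit as candidate new themes. Hypothetical example only; the
# study itself charted data in an Excel spreadsheet.

framework_themes = ["Continuity of care", "Patient preferences", "Referral process and delays"]

# (participant, theme, summarised excerpt) tuples produced during coding.
coded_summaries = [
    ("Kathleen (carer)", "Continuity of care", "nurse telephones roughly fortnightly to check for problems"),
    ("Andrew (patient)", "Patient preferences", "wants in-depth discussion of medication decisions"),
    ("Sandra (carer)", "Referral process and delays", "concerns about pain and weight loss not acted on promptly"),
    ("Patricia (carer)", "Carer burden", "feels unacknowledged as a carer by hospital staff"),
]

def build_chart(themes, summaries):
    """Return (chart, candidates): chart[participant][theme] -> summaries for the
    existing framework (deductive step); candidates holds codes falling outside the
    framework, to be reviewed as possible new themes (inductive step)."""
    chart, candidates = {}, []
    for participant, theme, summary in summaries:
        if theme in themes:
            chart.setdefault(participant, {t: [] for t in themes})[theme].append(summary)
        else:
            candidates.append((participant, theme, summary))
    return chart, candidates

if __name__ == "__main__":
    chart, candidates = build_chart(framework_themes, coded_summaries)
    for participant, row in chart.items():
        print(participant)
        for theme in framework_themes:
            print(f"  {theme}: {'; '.join(row[theme]) or '-'}")
    print("Candidate new themes:", sorted({theme for _, theme, _ in candidates}))
```

Comparing two such independently produced charts, as the two researchers did here, is what allows a consensus data set to be agreed.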

Twelve participants were interviewed in their own homes by telephone. In five interviews a family member or friend was also present, and they were interviewed as a dyad. One interview was with a carer of a patient (patient not interviewed) and one interview was with a patient alone. Interviews lasted between 21 and 45 min. Basic self-declared demographic information was collected (Table 2 ).

One person was approached by a research nurse and provided with a participant information sheet. However, when they spoke with the researcher on the telephone it was clear that they had not read it, and they declined the offer to have the information read aloud to them. Informed consent could therefore not be given and an interview was not carried out. Upon reflection, this person was keen to chat informally to the researcher but was perhaps seeking social interaction rather than research participation. All other participants completed the interview as planned.

Participant background was relevant, as one carer and one patient had experience of working in healthcare and this may have shaped their experience and understanding. Analysis was based on the framework method: transcription, familiarisation, coding, applying the analytical framework (conceptual model), charting, interpretation.

Data aligned with 18 of 23 constructs in the professional-based model (Table 3). Pseudonyms are used to protect confidentiality.

Four constructs that had featured in the healthcare professional-based model did not feature in the patient and carer-derived data. These were, perhaps not unexpectedly, related to characteristics of staff: Experience (of staff), Training (of staff), Psychological support (for staff) and the provision of formal psychological support (for patients). One construct, ‘Guidelines and evidence’, was not explicitly mentioned by patients and carers. However, a carer did comment that at the time of referral to the hospice the patient had been on two different doses of co-codamol simultaneously, ‘You were on co-codamol, the 500/8 plus co-codamol 500/30’ (Patricia, carer), which suggested to the researchers that the patient had been taking the medication in a way contrary to guidelines. Medications were then optimised by hospice staff. Four additional patient and carer-derived themes were identified: Carer burden, Communication, Medicines management and Impact of COVID-19 (Fig. 1).

Figure 1. The conceptual model of effective symptom management in palliative care, updated to reflect the patient and carer perspective; specifically, the need for support with communication and medicines management, plus consideration of carer burden, were included.

Carer burden

Our HCP-based conceptual model identified a role for the carer in shaping the symptom management experience in either a positive or negative way [ 1 ]. The patient and carer-derived data presented here provide additional insight into their role and the activities required of them. Carer burden is a multifaceted experience; however, our interview schedule specifically asked about symptom management experience.

The carer was sometimes responsible for raising concerns and initiating the referral for specialist palliative care support: ‘it was at some stage earlier in this year when I was a little anxious about your health and contacted the chemo wing at (hospital) and one of the nurses there thought it would be helpful to me and Patient to put us in touch with (the hospice)’ (Kathleen, carer).

Carers were enmeshed into the disease and symptom experience of the patient, referring to ‘we’ when talking about the patient’s cancer treatment, pain and referral to hospice.

Olivia (carer): Immune therapy we’d had a reaction to and we’d resolved the reaction but it concluded in stopping any treatment and we then went to a situation where we were not able to manage the pain from the cancer successfully and it was recommended by our oncologist that (the hospice) may have some expertise that we could….
Olivia (carer): Tap into…as I say that was a difficult decision for us to agree for Anthony to go into (the hospice).

However, on occasion the insight from the carer was not acted upon leading to a delay in support for distressing symptoms ‘ I kept saying to people, he’s losing weight, he’s in pain and they just kept saying well he shouldn’t be in this amount of pain ‘cos of what his bloods are like. And I kept saying well what you’re saying he should be like, I can tell you he’s not like and we’re not ones to you know erm (he) isn’t one to be bothering the doctor.’ (Sandra, carer).

Once the patient was receiving palliative care, the carer took responsibility for obtaining and retaining knowledge, either because the patient could not, due to memory problems from medication or their condition, or because they were not willing to do this for themselves.

Martin (patient): ‘she knows better than me ‘cos I’m always, I’m not very good at remembering stuff’
Martin (patient): ‘I’m not interested no I understand you do have a very important role and she’s taken the lead on it now, that’s definitely the case’

And with another couple

Terry (patient): Sorry I’ve got my wife at the side of me ‘cos she knows better than me ‘cos I’m always, I’m not very good at remembering stuff.
Stacey (carer): I’m usually present yeah, I’m usually around. I tend to be the one that asks more questions.

However, in our interviews, discordance between patient and carer opinion was occasionally seen, with the carer rating the symptoms as more troublesome than the patient recalled.

Interviewer: So was it (the pain) stopping you doing any activities that you had been able to do?
Martin (patient): Oh I see, not particularly no
Mary (carer): I would probably disagree with that sorry. I would say that Martin’s management of the pain and our management of the pain and everything was kind of a constant thing, that’s all we, you know it felt like we were talking about it all the time, his pain’.

Despite an integral role in facilitating effective symptom management carers could feel unacknowledged, specifically by hospital staff. ‘ at the same time they’re telling me I’m not a carer and yet you know Wendy would be in a very sorry state if I wasn’t on the ball all the time’ (Patricia, carer). Specialist palliative care staff were better at providing acknowledgement and consideration of individual capabilities.

Patricia (carer): ‘So they understand that I’m not sort of hale and hearty and I’ve got my limitations….and it’s just lovely them knowing and actually accepting that I am caring for patient, we are doing the best that we can and that they are there for us.’ This simple step of acknowledgement was appreciated and was a factor in allowing the carer to continue to support the patient.
Olivia (carer): ‘You know I do feel that it’s about me as well, it’s not just about Anthony which, it is really all about Anthony but you know it’s important that I continue with my wellbeing in order that I can support and look after him’ .

Communication

The impact of communication on effective symptom management occurred at different levels. As would be expected, communication needed to be tailored to the background, previous experience and outlook of the individual. In particular, we noted that a patient who had a healthcare background themselves welcomed more in-depth discussion and input into decision making.

Andrew (patient): I’ve dealt with people with cancers and terminal illnesses. Yeah, I know about syringe drives and everything…The important thing is to be able to discuss it and with my knowledge of medication as well, I mean I can discuss it in depth.’ .

Interestingly, this person also equated being admitted to the hospice with the use of a syringe driver and end of life, illustrating that regardless of the patient’s professional background, a thorough explanation without any assumptions on understanding would still be necessary. Andrew (patient):  ‘I mean I could go into (the hospice) at any time knowing this but with my work record and everything else, I know what it all entails I mean I’d probably go in and they’d probably want to put me on a syringe drive with Oramorph and Midazolam and Betamethasone and everything else and I know that is the beginning of the end once you start on the syringe driver and everything because it just puts you to sleep and just makes you comfortable and you don’t really have no quality of life’ .

Patients and carers valued being able to get in contact with someone when difficulties arose. Kathleen (carer): ‘Ease of communication is important to us so it’s easy to get in touch with somebody’ .

For some people, at the earlier stages after referral to the palliative care team, the only support they required was telephone contact.

Kathleen (carer): ‘What we have at the moment is a phone number to call and another lady, a nurse who actually rings us probably about once a fortnight yeah to check if we have any anxieties, problems.’ .

Palliative care professionals had a key role in mediating communication between patients and carers and other services. Kathleen (carer):  ‘she said yes, do you think Harry would mind us contacting the GP you know and I said I’m sure he would, if I think it’s a good idea he’d go along with it so that’s what we did, she did, she contacted our GP which meant that we got a telephone appointment and something happened very quickly’ .

This extended to explaining the purpose and results of tests such as X-rays.

Stacey (carer): Yeah he went when he was admitted he went for an Xray and that was the hospice, it was (clinical nurse specialist) that had organised that. We didn’t really know what was happening in the hospital but we came home again and he didn’t really know why he’d had the Xray or anything.
So when he spoke to the nurse at (the hospice), she sort of went through it all with him and talked him through it and that was really informative and helpful

There was a feeling that communication was better in specialist palliative care compared to the general National Health Service (NHS).

Olivia (carer): ‘There is an awful lot to be learned from the NHS about liaising and communications they could learn an awful lot from the way that the palliative care is operating and running’.

The carer also became an advocate for the patient’s needs, relaying information about symptoms and concerns to the healthcare professionals that the patient may not have relayed themselves. Andrew (patient): ‘I mean she (partner) tells (hospice nurse) things that I don’t ‘cos I mean I sometimes bottle quite a few things up and don’t say nothing but (partner) notices these things and then she will tell (hospice nurse) about them’.

This was also seen during a research interview, where the patient was willing for the carer to ‘tell the story’ on their behalf.

Mary (carer): Sorry I’m doing all the talking.
Martin (patient): Well no you need to because I’m useless.

We identified that patients had unmet needs in communicating about their condition and symptom experience with family and friends other than their regular carer: ‘Yeah, erm, again it’s, people are very reticent to use the word cancer. So they balk at saying the word’ (Wendy, patient).

Wendy (patient): I don’t know where she’s (my sister) at in terms of knowing about my symptoms and about the treatment I’m having, well no I do tell her actually, it’s not that I don’t but she has very bad arthritis…so I don’t push that too much because I’m thinking she’s actually in as much pain as I might be.’

This lack of communication could come from a position of wishing to protect the feelings of family members:

Wendy (patient): ‘Oh it’s been very difficult with family. You don’t know how much you want to tell them and you don’t know how far down the line you are anyway. I think over the years, I’ve been protecting my family’.

Sometimes there were other important conversations that had not been held with family members.

Martin (patient): ‘I suppose my point in bringing up was because they’re particularly good kids and they are particularly, although I wouldn’t like them to hear me say it but they are, very good’ .

The work of medicines management

Medicines management was a time-consuming and complex task, even for carers who had a background working in healthcare.

Sandra (carer): ‘I’m having to ring back my fourth phone call today to see is it a week off or have they forgotten to give him it. The communication isn’t great and I kind of think you know I’m kind of used to the NHS I’m, I know to ring and that sort of thing but I do think, I think if someone isn’t, got a health background or that sort of background there’s a lot of left to guesswork’ .

Commonly, the responsibility of managing the medicines could be delegated to the carer due to the side effects of the medication on the patient’s memory. It was felt that the patient would not have been able to manage by themselves. Mary (carer): ‘ a lot of the medication has made him not so aware, maybe a little bit muddled at times and his memory’s not as good as it was….you know he does forget quite easily so I wouldn’t, I have to say I wouldn’t trust him with his medication at all.’.

Carers took responsibility for ensuring medications were taken on time. As previously reported, this carer viewed this as a joint endeavour with the patient.

Patricia (carer): I wake (patient) at 9 o’clock and make sure that she has her Lansoprazole and that she has her 12 hourly Longtech tablet. I generally am doing everything and as I say, we put the injection in at lunchtime every day and at night I remind her, not that she doesn’t, she doesn’t really need reminding but at 9 o’clock, I say have you had your tablets?’ .

The carer (who did not have a healthcare background) had developed an understanding of complex concepts such as the different modes of metabolism of medication for pain.

Patricia (carer): ‘So she’s now on a different set of pain relief which, the morphine was better but not better for her. So the pain killing stuff that she’s on is processed through the liver rather than through the kidneys and the kidney function has stabilised.’ .

Impact of COVID-19

Interviewees were asked whether COVID-19 had impacted their experience. For this selected group of patients and carers, the impact appeared to be minimal.

Patricia (carer): ‘Can I just add that Covid seems to have, people have been complaining that this has stopped and that’s stopped whereas with Wendy her appointments, they’ve always wanted face to face and we’ve done phone appointments when it’s been appropriate and the care has been absolutely marvelous’.

Availability of hospice staff sometimes filled the gap left by other services.

Kathleen (carer): ‘Because of lockdown and the virus and everything obviously all that (GP support) changed and you did start to feel a bit isolated and alone ‘cos you don’t always want to have to get in the car and drive to (hospital) for something if it’s not absolutely necessary and so therefore having someone else to talk to who knew more about things because obviously we’re learning as we go along Harry and I, it was very helpful’.

Problems were attributed to the general NHS system rather than being COVID-19 specific.

Sandra (carer): ‘I think as far as forthcoming information, I don’t think Covid has any bearing on that to be honest. You know, it just, I think it’s just an age-old problem in the NHS is communication.’ .

The close alignment of this patient and carer data with our HCP-based conceptual model provides additional reinforcement of the importance of multidisciplinary working and continuity of care in shaping symptom management experience. Indeed, the ability to see a preferred member of general practice staff was recently reported as a factor associated with satisfaction with end of life care in England [ 3 ].

Palliative care takes a holistic view of the patient and carer, the concerns of both being intertwined and interdependent. The observation that carers and patients viewed themselves as a single unit and talked about ‘we’ when describing the experience of symptoms and service referral aligns with the dimension of the carer ‘living in the patient’s world’ and living in ‘symbiosis’ recently described by Borelli et al. [ 4 ] and in earlier qualitative work with advanced cancer patients [ 5 ]. Carer opinion can be a close but not always perfect proxy for the patient voice; even in this small sample we observed some discordance between patient and carer perception of symptom burden. However, carers were vitally important for communicating with healthcare providers, relaying concerns, managing medication and generally advocating for the patient when they were unable or unwilling to do so. In the UK in 2022, the number of people living alone was 8.3 million, and since 2020 the number of people over 65 years old living alone has also increased [ 6 ]. Household composition is not a general indicator of wider social support networks, but these data do suggest that there could be a considerable number of people with palliative care needs without live-in carer support. This raises the questions of whether the experience of those living without a supportive carer can be equitable and how services might better facilitate this.

Home-based palliative care is thought to reduce symptom burden for patients with cancer [ 7 ]. To enable this, it is vital that carers are adequately supported. Carer burden is a multifaceted experience; however, our interview schedule asked specifically about symptom management experience. In agreement with the term ‘role strain’ in the review by Choi and Seo [ 8 ], we saw carers’ involvement in symptom management and in mediating communication between the patient and healthcare providers. Additional aspects reported by Choi and Seo include physical symptoms of the carer, psychological distress, impaired social relationships, spiritual distress, financial crisis, disruption of daily life and uncertainty [ 8 ], and not all of these will have been probed by our interview topic guide.

Although in our original study HCPs talked about medicines from their perspective, the role of the carer was not discussed. Medicines management was an important way that carers facilitated effective symptom management, but it was a complex task. One carer commented: ‘I have to say that would be a nightmare if I wasn’t a nurse by background’. Our data on the difficulties of medicines management are not novel and closely mirror the report of Pollock et al. [ 9 ]. Our findings support their conclusion that managing medicines at home during end-of-life care could be improved by reducing the work of medicines management and improving co-ordination and communication in health care, and we echo their calls for further research in the area.

We identified that patients and carers viewed mediating communication as an important role for healthcare professionals. This could mean enabling communication between patients and carers and other healthcare professionals, for example arranging follow-up care or explaining information received. There was also a need for better communication between patients and their family members. As previously reviewed and synthesised, the importance of effective communication in palliative care has long been recognised [ 10 ]. In our study, an opportunity for HCPs to facilitate better communication about symptom experience between patients and their wider family was identified. Our previous survey of English hospices found that healthcare professionals, particularly nurses and allied health professionals, felt that they needed more training in basic and advanced communication skills [ 11 ]. With relevant experience and appropriate training, staff may be well placed to support patients in developing an approach to these potentially difficult conversations. Participants were offered a choice of joint or individual interviews, but most chose to be interviewed as a dyad. It is possible that being interviewed as a pair may have altered the information disclosed. Although the aim was to discuss factors that impacted upon effective symptom management, discussions at times deviated to a more general appraisal of a participant’s experiences, and not all data collected may be relevant to the research question.

When the data that led to the development of the HCP-based model of effective symptom management were collected (May to November 2019), a global pandemic was unforeseen. At the time of the patient and carer interviews described here (October to December 2020), COVID-19 restrictions were in place in the UK. The patients and carers we interviewed were already receiving specialist palliative care support as outpatients. For these individuals, the COVID-19 pandemic appeared to have had minimal impact on their care. The availability and reassurance of telephone support from hospice staff seemed in part to ameliorate the reduced support available from other services such as GPs. This contrasts sharply with the negative impact of COVID-19 on the experience of patients and carers in the more immediate end of life phase [ 12 ], receiving oncology care [ 13 ] or with cancer more generally [ 14 ]. Selection bias is likely, as patients and carers with the capacity and willingness to participate in our research study probably reflect those whose illness was in a more stable phase and whose immediate needs were being met. Indeed, participants talked about difficulties before referral to specialist palliative care and with other services but were overwhelmingly positive about the support currently being provided by the hospice.

Limitations

Due to the constraints of conducting a research study during the COVID-19 lockdown, more purposive sampling was not possible; this led to a lack of diversity in our sample. All participants identified themselves as of white British or white Scottish ethnicity, which means issues related to diverse ethnicities were potentially not captured. All the patients who participated (and the non-participating patient whose carer was interviewed) lived with another person and had carer/family support. The experience of those managing their symptoms in isolation was therefore not captured. All participants were currently accessing support from a single hospice; the experience of those not yet receiving specialist support, or receiving support from a different organisation, may differ. The sample was diverse in age and included males and females, but all carers were female. Demographic information was not collected on socioeconomic background. COVID-19 restrictions necessitated the use of telephone interviews, which may have lost subtle communication cues such as body language or, conversely, may have facilitated candid description. The transcripts do suggest that participants felt comfortable telling their experiences, and they mostly spoke freely with limited prompting. One participant mentioned that he found it very difficult to leave the house, and therefore a telephone interview might have facilitated his inclusion. In some interviews more data were derived from the opinion of the carer than the patient, with the pair agreeing that the carer took responsibility for many tasks involved in managing the condition. We cannot be certain that carer interpretation accurately matches patient experience for all symptoms [ 15 ].

We set out to answer the question: does a model derived from healthcare professional data represent the experience of patients and carers of people with advanced cancer? Overall, the answer was yes, as our healthcare professional-based conceptual model of effective symptom management aligned well with the experience of patients with advanced cancer and their carers. Domains that did not align were those specifically related to professionals: experience (of staff), training (of staff), guidelines and evidence, psychological support (for staff) and the provision of formal psychological support (for patients), a resource patients and carers might be unaware of. Additional domains of carer burden, communication, medicines management and the impact of COVID-19 were identified. We make four recommendations arising from this research.

Routine appraisal and acknowledgement of carer burden, medicine management tasks and previous experience in healthcare roles.

Increased access to communication skills training for staff caring for palliative care patients and their families.

Review of patient communication needs with support provided where needed.

Further research into the symptom management experience of those living alone and exploration of how these people can be better supported.

Availability of data and materials

Original recordings generated and analysed during the current study are not publicly available due to protection of confidentiality. Anonymised transcripts with identifiable information removed may be available from the corresponding author on reasonable request.

Abbreviations

COVID-19: Coronavirus disease 2019

HCP: Healthcare professional

NHS: National Health Service

UK: United Kingdom

Chapman EJ, Pini S, Edwards Z, Elmokhallalati Y, Murtagh FEM, Bennett MI. Conceptualising effective symptom management in palliative care: a novel model derived from qualitative data. BMC Palliat Care. 2022;21(1):17.


Cosgrove D, Connolly M, Monnery D. Holistic care needs in an inpatient oncology unit: patients versus professionals. BMJ Support Palliat Care. 2023. Published Online First: https://doi.org/10.1136/spcare-2023-004617 .

ElMokhallalati Y, Chapman E, Relton SD, Bennett MI, Ziegler L. Characteristics of good home-based end-of-life care: analysis of 5-year data from a nationwide mortality follow-back survey in England. Br J Gen Pract. 2023;73(731):e443–50.

Borelli E, Bigi S, Potenza L, Gilioli F, Efficace F, Porro CA, et al. Caregiver’s quality of life in advanced cancer: validation of the construct in a real-life setting of early palliative care. Front Oncol. 2023;13:1213906.

McDonald J, Swami N, Pope A, Hales S, Nissim R, Rodin G, et al. Caregiver quality of life in advanced cancer: Qualitative results from a trial of early palliative care. Palliat Med. 2018;32(1):69–78.


Office for National Statistics (ONS). Statistical bulletin, Families and households in the UK: 2022. 2023 [updated 18th May 2023]. Available from: https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/families/bulletins/familiesandhouseholds/2022 .

Gomes B, Calanzani N, Curiale V, McCrone P, Higginson IJ. Effectiveness and cost-effectiveness of home palliative care services for adults with advanced illness and their caregivers. Cochrane Database Syst Rev. 2013;2013(6):CD007760. https://doi.org/10.1002/14651858.CD007760.pub2 .

Choi S, Seo J. Analysis of caregiver burden in palliative care: an integrated review. Nurs Forum. 2019;54(2):280–90.

Pollock K, Wilson E, Caswell G, Latif A, Caswell A, Avery A, et al. Family and health-care professionals managing medicines for patients with serious and terminal illness at home: a qualitative study. NIHR J Libr. 2021.

Atkin H, McDonald C, Murray CD. The communication experiences of patients with palliative care needs: a systematic review and meta-synthesis of qualitative findings. Palliat Support Care. 2015;13(2):369–83.

Paley CA, Keshwala V, Farfan Arango M, Hodgson E, Chapman EJ, Birtwistle J. Evaluating provision of psychological assessment and support in palliative care: A national survey of hospices in England. Progress in Palliative Care. 2024;32(1):11–21.


Bailey C, Guo P, MacArtney J, Finucane A, Meade R, Swan S, Wagstaff E. “Palliative care is so much more than that”: a qualitative study exploring experiences of hospice staff and bereaved carers during the COVID-19 pandemic. Front Public Health. 2023;11:1139313.

de Joode K, Dumoulin DW, Engelen V, Bloemendal HJ, Verheij M, van Laarhoven HWM, et al. Impact of the coronavirus disease 2019 pandemic on cancer treatment: the patients’ perspective. Eur J Cancer. 2020;136:132–9.

Moraliyage H, De Silva D, Ranasinghe W, Adikari A, Alahakoon D, Prasad R, et al. Cancer in lockdown: impact of the COVID-19 pandemic on patients with cancer. Oncologist. 2021;26(2):e342–4.


McPherson CJ, Addington-Hall JM. Judging the quality of care at the end of life: can proxies provide reliable information? Soc Sci Med. 2003;56(1):95–109.


Acknowledgements

We are grateful to the patients and carers who, in giving valuable time to share their experiences, made this research possible. We thank research nurses Kath Black and Angela Wray for their support with recruitment.

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this work was supported by Yorkshire Cancer Research programme grant L412, RESOLVE: “Improving health status and symptom experience for people living with advanced cancer”. The sponsor had no role in study design or the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication.

Author information

Authors and affiliations.

Academic Unit of Palliative Care, Worsley Building, University of Leeds, Clarendon Way, LS2 9NL, UK

Emma J. Chapman, Carole A. Paley & Lucy E. Ziegler

Division of Psychological and Social Medicine, Worsley Building, University of Leeds, Clarendon Way, LS2 9NL, UK


Contributions

Original idea, EC and SP; Data collection, EC; Data analysis, EC and CP; Data interpretation, All; Methodological oversight, SP and LZ; Writing the manuscript, All. All authors contributed to the development of the updated conceptual model and approved the final submission.

Corresponding author

Correspondence to Emma J. Chapman .

Ethics declarations

Ethics approval and consent to participate.

Ethical approval was granted by a UK research ethics committee (North of Scotland (2) Research Ethics Committee (20/NS/0086)). All participants gave informed consent for participation and for the use of their direct quotations in research publications.

Consent for publication

Participants gave consent for publication of their direct quotations.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1, Supplementary material 2, Supplementary material 3, Supplementary material 4, Supplementary material 5.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Chapman, E.J., Paley, C.A., Pini, S. et al. Updating a conceptual model of effective symptom management in palliative care to include patient and carer perspective: a qualitative study. BMC Palliat Care 23 , 208 (2024). https://doi.org/10.1186/s12904-024-01544-x


Received : 08 March 2024

Accepted : 08 August 2024

Published : 19 August 2024

DOI : https://doi.org/10.1186/s12904-024-01544-x


Keywords

  • Symptom management
  • Conceptual model
  • Communication skills
  • Medicines management




  • Open access
  • Published: 22 August 2024

To share or not to share, that is the question: a qualitative study of Chinese astronomers’ perceptions, practices, and hesitations about open data sharing

  • Jinya Liu   ORCID: orcid.org/0000-0002-9804-8752 1 ,
  • Kunhua Zhao 2 , 3 ,
  • Liping Gu 2 , 3 &
  • Huichuan Xia   ORCID: orcid.org/0000-0002-0838-7452 1  

Humanities and Social Sciences Communications volume  11 , Article number:  1063 ( 2024 ) Cite this article


  • Science, technology and society
  • Social policy

Many astronomers in Western countries may have taken open data sharing (ODS) for granted to enhance astronomical discoveries and productivity. However, how strong such an assumption holds among Chinese astronomers has not been investigated or deliberated extensively. This may hinder international ODS with Chinese astronomers and lead to a misunderstanding of Chinese astronomers’ perceptions and practices of ODS. To fill this gap, we conducted a qualitative study comprising 14 semi-structured interviews and 136 open-ended survey responses with Chinese astronomers to understand their choices and concerns regarding ODS. We found that many Chinese astronomers conducted ODS to promote research outputs and respected it as a tradition. Some Chinese astronomers have advocated for data rights protection and data infrastructure’s further improvement in usability and availability to guarantee their ODS practices. Still, some Chinese astronomers agonized about ODS regarding the validity of oral commitment with international research groups and the choices between international traditions and domestic customs in ODS. We discovered two dimensions in Chinese astronomers’ action strategies and choices of ODS and discussed their descriptions and consequences. We also proposed the implications of our research for enhancing international ODS in future work.


Introduction.

Open data sharing (ODS) emphasizes scientific data’s availability to the public beyond its usability and distribution within academic communities (UNESCO, 2021 ). ODS has become increasingly significant since the Big Data era has engendered a paradigm shift towards data-intensive science (Tolle et al., 2011 ), and ODS has promoted data-intensive science to incorporate all stakeholders, such as researchers, policymakers, and system designers to address data processing and utilization issues collectively (Kurata et al., 2017 ; Zuiderwijk et al., 2024 ). Meanwhile, ODS has improved scientific discovery and productivity since different governments and funding agencies have endorsed ODS and published policies to facilitate it (Lamprecht et al., 2020 ). For example, the UK Research and Innovation (UKRI) issued the “Concordat on open research data” in 2016 to ensure that research data gathered and generated by the UK research community must be openly available to the public (UK Research and Innovation, 2016 ). The Chinese government published a “Scientific Data Management Methods” policy in 2018, requiring government-funded research to share its data with the public (General Office of the State Council of China, 2018 ). Besides such government initiatives, the scientific community has also proposed guiding principles for ODS, such as the “FAIR principles” to facilitate data sharing in respect of Findability, Accessibility, Interoperability, and Reuse (Wilkinson et al., 2016 ).

Astronomy is data-intensive and has long been regarded as a prime model of ODS for other scientific fields. For example, the famous Large Synoptic Survey Telescope (LSST) project has committed to real-time ODS after its start-up in 2025 and has released early survey data since June 2021 (Guy et al., 2023 ). Scholars have conducted a few studies to dig out the good practices of ODS in astronomy and found that ODS has a long tradition in astronomy supported by its well-established knowledge infrastructure and data policies (Zuiderwijk and Spiers, 2019 ; Borgman et al., 2021 ). Still, scholars found that some astronomers were hesitant to conduct ODS due to the high reward expectations (e.g., acknowledgment, institutional yearly evaluation, extra citation) and extra efforts (e.g., additional data description) required in ODS practices (Zuiderwijk and Spiers, 2019 ; Kim and Zhang, 2015 ); some astronomers also raised barriers about the usability and availability of data infrastructure to support ODS practices (Pepe et al., 2014 ).

Despite the ODS tradition in astronomy, researchers’ motivations and barriers to ODS may differ based on their cultural contexts. Most empirical studies of ODS have been conducted in Western and developed countries (Genova, 2018 ). Whether these findings hold in non-Western cultures deserves further exploration. Chinese culture and customs differ from Western ones, which may impose distinctive influences on Chinese people’s perspectives and behaviors. For example, Confucianism often renders Chinese individual researchers stick to collectivism or the societal roles assigned to them (Jin and Peng, 2021 ), which is less common in Western culture or academia to our knowledge. Also, scientific research paradigms have originated from and situated in Western culture for a long time. They call for critical examinations and alternative perspectives at the individual and societal or cultural levels, and ODS has been regarded as an essential lens to deliberate it (Serwadda et al., 2018 ; Bezuidenhout and Chakauya, 2018 ; Zuiderwijk et al., 2024 ).

Besides our concerns about cultural and research paradigm differences, Chinese astronomers’ distinctive characteristics have also motivated us to conduct this study. First, based on our prior experience with some Chinese astronomers, we have observed that Chinese astronomers follow enclosed or independent data-sharing norms that are uncommon among researchers in other disciplines. Their research seems to be more international than domestic. Since a slogan from the Chinese government has influenced many research disciplines (including ours) in China, urging Chinese scholars to “Write your paper on the motherland” (Wang et al., 2024 ), we wondered how such propaganda would impact Chinese astronomers’ attitudes and behaviors. Second, a recent study has revealed that some Chinese astronomers struggled with ODS because they respected it as a tradition on the one hand and desired to gain career advantages (e.g., more data citations) on the other (Liu, 2021). This finding contrasts with another recent study’s conclusion that Chinese early career researchers (ECRs) (in non-astronomy disciplines) would only welcome ODS if the evaluation system rewarded them (Xu et al., 2020 ). Hence, we wanted to investigate Chinese astronomers’ motivations and barriers regarding ODS further.

Finally, though ODS has been well-acknowledged internationally, it has not been studied or implemented extensively in most research disciplines in China, with astronomy as a rare exception. Hence, we posited that research about ODS in astronomy might shed light on other research disciplines’ popularization of ODS in China. In addition, previous studies on ODS in China have primarily focused on the Chinese government’s open data policies, infrastructure conditions, and management practices (Zhang, et al., 2022 ; Huang et al., 2021 ). To the best of our knowledge, little attention has been paid to Chinese researchers’ perceptions and practices. Thus, we wanted to conduct an exploratory investigation with Chinese astronomers to fill this gap and foster international ODS and research collaboration in Chinese astronomy and other research disciplines more broadly.

With these motivations in mind, we proposed the following research questions.

How do Chinese astronomers perceive and practice open data sharing?

Why do some Chinese astronomers hesitate over the issue of open data sharing?

To address those research questions, we conducted a qualitative study comprising 14 semi-structured interviews and 136 open-ended survey responses with Chinese astronomers to understand their practices and concerns regarding ODS. We found that many Chinese astronomers conducted ODS to promote research outputs and respected it as a tradition. Some Chinese astronomers have advocated for data rights protection and data infrastructure’s further improvement in usability and availability to guarantee their ODS practices. Still, some Chinese astronomers agonized about ODS regarding the validity of oral commitment with international research groups and the choices between international traditions and domestic customs in ODS. We discovered two dimensions in Chinese astronomers’ action strategies and choices of ODS and discussed these findings and implications. This study makes the following contributions. First, it provides a non-Western viewpoint for global ODS in astronomy and recommendations for advancing global and Chinese ODS policies and practices. Second, it reveals Chinese astronomers’ concerns, motivations, and barriers to conducting ODS. This may inspire the domestic government, international research policymakers, and ODS platforms and practitioners to empathize with and support Chinese astronomers. Finally, this study may shed light on implementing ODS in other research disciplines in China, where it has not yet become widespread.

Literature review

The background of ODS in science

The open data movement in scientific communities was initiated at the beginning of the 21st century (e.g., Max Planck Society, 2003) (Tu and Shen, 2023 ). ODS, also known as open research data, advocates that the openness of scientific data to the public is imperative to science (UNESCO, 2021 ; Fox et al., 2021 ). Prior research has inquired about researchers’ intrinsic and extrinsic motivations for ODS. Intrinsic motivations include personal background and ethical perspectives. For example, a researcher’s personal background (research experience, gender, position, age, etc.) has been found to affect their ODS preferences, and significant differences have been observed in research experience (Zuiderwijk and Spiers, 2019 ; Digital Science et al., 2024 ). Also, a researcher’s ethical stance influences their ODS practices. Some researchers conduct ODS because they want to benefit the research community and promote reciprocity among data stakeholders, such as data producers, funders, and data users (Lee et al., 2014 ; Ju and Kim, 2019 ). Extrinsic motivations for ODS include incentive policies, data infrastructure, and external pressures from funders, journals, or community rules. Incentive policies, such as the promise of data citation and the rewarding credit from their institutions, effectively enhance ODS (Dorch et al., 2015 ; Popkin, 2019 ). Also, a well-established infrastructure could facilitate ODS by reducing its cost (Kim and Zhang, 2015 ). Moreover, regulations from researchers’ stakeholders (e.g., journals and funders) press their ODS practices as well. One example is developing data policies. Kim and Stanton proposed that journal regulative pressure has significantly positive relationships with ODS behaviors (Kim and Stanton, 2016 ).

Despite the motivations, researchers in ODS still have valid justifications for not conducting such practices (Zuiderwijk et al., 2024 ; Boeckhout et al., 2018 ). Sayogo and Pardo categorized those barriers into (1) technological barriers, (2) social, organizational, and economic barriers, and (3) legal and policy barriers (Sayogo and Pardo, 2013 ). More specifically, at the individual level, Houtkoop et al. found that ODS was uncommon in psychology due to psychologists’ insufficient training and extra workload (Houtkoop et al., 2018 ). Meanwhile, Banks et al. indicated that researchers in organizational research were afraid of exposing the quality of their data (Banks et al., 2022 ). In addition, researchers’ ethical concerns also influence their ODS practices, primarily privacy and fairness issues. Walsh et al. identified the privacy risks related to identity, attribute, and membership disclosure as the main ethical concerns about ODS (Walsh et al., 2018 ). Anane et al. worried that ODS could compromise fairness because some new or busy researchers might lose their data rights during the critical post‐first‐publication period (Anane-Sarpong et al., 2020 ). At the societal level, inadequate data policies have failed to guarantee researchers’ data rights, and property rights are unclear. Enwald et al. proposed that researchers in physics and technology, arts and humanities, social sciences, and health sciences were concerned about legal issues (e.g., confidentiality and intellectual property rights), misuse or misinterpretation of data, and loss of authorship (Enwald et al., 2022 ). Anane et al. found that data ownership was a crucial barrier affecting public health researchers’ willingness to share data openly (Anane-Sarpong et al., 2018 ).

The factors that influence astronomical ODS practices

Astronomy has been a prime example of ODS practices in scientific communities (Koribalski, 2019 ). For example, in gamma-ray astronomy, astronomers have explored how to render high-level data formats and software openly accessible and sharable for the astronomical community (Deil et al., 2017 ). In space-based astronomy, ODS has been an established norm in its research community for a long history (Harris and Baumann, 2015 ). In the interdisciplinary field of astrophysics, evidence has shown that papers with links to data, which also represent an approach of ODS, have a citation advantage over papers that did not link the data (Dorch et al., 2015 ). Additionally, many data archives in astronomy have been openly accessible to the public to increase their reusable value and potential for rediscovery (Rebull, 2022 ).

Prior studies have examined the socio-technical factors fostering ODS. Data policies support ODS implementations, and existing data infrastructure plays an essential role in ODS practices in astronomy (Pasquetto et al., 2016 ; Genova, 2018 ). For example, Reichman et al. attributed astronomy’s long tradition of ODS to its extensive and collaborative infrastructure (e.g., software and data centers) (Reichman et al., 2011 ). In practice, some famous astronomy organizations have built solid data infrastructures to support ODS, such as NASA Astrophysics Data System (ADS) and the International Virtual Observatory Alliance (IVOA) (Kurtz et al., 2004 ; Genova, 2018 ). Astronomy’s integrated knowledge infrastructure spanning decades and countries, encompassing observational data, catalogs, bibliographic records, archives, thesauri, and software, prompts global ODS among astronomers (Borgman et al., 2021 ). Many astronomers have a strong sense of duty to their research communities and the public. Thus, they would accept requests for data to assist colleagues and facilitate new scientific discoveries, which enhances ODS (Stahlman, 2022 ). Besides, astronomers perceived reciprocity influences their ODS practices. They aspire to improve their research outputs’ visibility and contribute to new, innovative, or high-quality research via ODS (Zuiderwijk and Spiers, 2019 ).

Still, some factors may hinder astronomers’ ODS practices. At the individual level, ODS may bring them extra learning load and academic reputation risks. For example, if astronomers perceive challenges in ODS or feel they need to acquire further knowledge, they may be less inclined to engage in such practices (Gray et al., 2011 ). Additionally, astronomers expressed concerns about the possibility of others discovering mistakes in the data (Zuiderwijk and Spiers, 2019 ). Pepe et al. also showed that the difficulty of sharing large data sets and the overreliance on non-robust, non-reproducible mechanisms for sharing data (e.g., via email) were the main hindrances to astronomers’ ODS practices (Pepe et al., 2014 ). At the societal level, an exponential increase in astronomical data volume has led to a continuous enrichment of utilization scenarios. ODS may involve data privacy or national security issues, especially when such data is integrated with other datasets. Thus, Harris and Baumann regarded the primary concern in global ODS as safeguarding national security and establishing appropriate licensing mechanisms (Harris and Baumann, 2015 ).

The development of ODS in China

The Chinese government has recognized ODS as a national strategy in both scientific and public service domains. They issued the “Scientific Data Management Methods” in 2018 and “Opinions on Building a More Perfect System and Mechanism for the Market-oriented Allocation of Factors” in 2022. These policies require that data from government-funded research projects must be shared with the public according to the principle of “openness as the norm and non-openness as the exception” (General Office of the State Council of China, 2018 ; General Office of the State Council of China, 2024 ). The Chinese government applied the “hierarchical management, safety, and control” concept as ODS arrangements to realize a dynamic ordered open research data at the social level (Li et al., 2022 ).

At the institutional level, the Chinese Academy of Sciences (CAS) has been actively promoting infrastructure construction and institutional repositories to support ODS. For example, since 2019, eleven of the twenty national-level data centers that are foundational for ODS in China have been affiliated with CAS. Meanwhile, many Chinese journals have published data policies requesting that researchers append their papers with open-access data. The National Natural Science Foundation of China (NSFC) has funded over 6000 data-intensive research programs, encouraging ODS among them in compliance with the NSFC’s mandate (Zhang et al., 2021 ). Regarding Chinese researchers’ attitudes and practices toward ODS, Zhang et al. have observed that Chinese data policies have shifted from focusing on data management to encompassing both data governance and ODS. This shift has shrunk the gap between Chinese researchers’ positive attitudes toward ODS and their less active ODS behaviors (Zhang et al., 2021 ). Journal policies have also encouraged Chinese researchers’ ODS behaviors. For example, Li et al. found that more than 90% of the published datasets in ScienceDB are also paper-related data and proposed that pressure from journals has been the main driving force for researchers to conduct ODS (Li et al., 2022 ). ScienceDB (Science Data Bank) is a general-purpose repository in China that publishes scientific research data from various disciplines (Science Data Bank, 2024 ).

Methodology

We conducted a qualitative study comprising 14 interviews and 136 open-ended survey responses with Chinese astronomers from 12 institutions. Our interview questions were semi-structured. Some were framed from the existing literature, and others were generated during the interviews based on the interviewees’ responses. Our open-ended questions were extended from a recent survey on data management services in Chinese astronomy (Liu, 2021 ). Table 1 depicts the formation of our interview questions, which served as the major source of our research data. We acknowledge that the interviewees’ responses could be influenced by the questions and context during the interview and tried to avoid such biases with the following strategies. First, although Chinese astronomers were hard to contact and recruit, we did our best to diversify our interview sample. Our interviewed Chinese astronomers included researchers and practitioners in observatories, scholars and Ph.D. students in astronomy at top universities in China, and researchers in astronomical research centers. Second, we conducted our interviews in different contexts, such as on campus, in observatories, at research centers, and over the phone. Thus, we tried to de-contextualize our interview questions to reduce potential biases. Finally, our qualitative data and analysis came not only from interviews but also from our previous survey. We used the interview and survey data to corroborate and complement each other.

Data collection and analysis

Our interviews were conducted in person or via WeChat video. They lasted 30–45 min and were recorded and fully transcribed. Our recruitment was challenging and time-consuming due to COVID-19 and the limited number of Chinese astronomers available for the interview. We have obtained their informed consent and have followed strict institutional rules to protect their privacy and data confidentiality. In addition, we conducted a survey using the online platform ‘Survey Star’ and obtained responses from 136 Chinese astronomers. For the scope of this paper, we focus on reporting qualitative data.

We kept our first round of data analysis, including note-taking and transcription, running in parallel with the interviews. We fully transcribed the interview recordings in Chinese and translated them verbatim into English. For the analysis, we employed the thematic analysis technique to extract and analyze themes from the interview transcripts (interviewees are numbered with the letter P) and open-ended survey responses (survey responses are numbered with the letter Q). Thematic analysis is well-suited for analyzing interview transcripts and open-ended survey responses (Braun and Clarke, 2006 ). We referenced Braun and Clarke’s recommended phases and stages of the analysis process (Braun and Clarke, 2006 ). First, we read through the transcriptions and highlighted meaning units. Simultaneously, we conducted coding and identified participants’ accounts, which were presented in the form of notes. Second, we categorized the codes and subsequently attributed them to themes that corresponded to ethical concerns. Third, we verified the themes by having them reviewed by two additional authors to ensure high accuracy in our analysis. Finally, we linked our themes with existing literature to provide a more comprehensive narrative of our findings. Table 2 lists the demographic information of the interviewees.
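To make the bookkeeping behind these phases concrete, here is a minimal sketch of how meaning units, codes, and themes could be organized programmatically. It is purely illustrative: the authors do not state that any coding software was used, and the participant labels, codes, and theme names below are hypothetical examples that merely follow the P/Q labelling convention described above.

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class MeaningUnit:
    source: str                 # participant label, e.g. "P5" (interview) or "Q22" (survey)
    text: str                   # highlighted excerpt from a transcript or survey response
    codes: list = field(default_factory=list)

# Phases 1-2: highlight meaning units and attach initial codes (hypothetical examples).
units = [
    MeaningUnit("P5", "I usually send the data to them.", ["private sharing on request"]),
    MeaningUnit("P11", "ODS is a natural practice.", ["ODS as tradition"]),
    MeaningUnit("Q22", "...to increase my paper citations.", ["citation benefit"]),
]

# Phase 3: group codes under candidate themes (theme names are illustrative).
theme_of_code = {
    "private sharing on request": "Behaviors at different ODS stages",
    "ODS as tradition": "ODS is a tradition and duty",
    "citation benefit": "ODS brings beneficial consequences",
}

themes = defaultdict(list)
for unit in units:
    for code in unit.codes:
        themes[theme_of_code.get(code, "Uncategorized")].append((unit.source, code))

# Later phases (review of themes by co-authors and linking to the literature)
# remain manual and are not represented here.
for theme, items in themes.items():
    print(theme)
    for source, code in items:
        print(f"  [{source}] {code}")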

We referenced Stamm et al.’s work to categorize the career stages of the Chinese astronomers we interviewed (Stamm et al., 2017 ). As shown in Table 2, most interviewees fall into the senior-career stage because they have rich research experience and resources in ODS.

Three types of Chinese astronomers’ behaviors at different ODS stages

We categorize the Chinese astronomers’ ODS behaviors into three types at different stages of ODS. First, Chinese astronomers mentioned that one type of ODS behavior is making the data publicly available on popular platforms (e.g., GitHub, NASA ADS, arXiv) or in data centers after the proprietary data period has expired. The proprietary data period, or exclusive data period, refers to the time between researchers first accessing the data and publishing their findings. In astronomy this period typically ranges from one to two years, which is intended to cover a normal and complete astronomical research cycle. P13 explained:

The data is not in our hands. After we use the telescope to complete the observations, the data will be stored in the telescope’s database. During the proprietary period (12 months), only you can view it. After the proprietary data period has passed, anyone can view it. (P13)

She meant that the raw data produced by astronomers were stored by the builders, who were also responsible for making those data visible to the public when the proprietary data period had expired. Zuiderwijk and Spiers’s survey has also revealed that astronomers seldom store raw data due to their inability to build a data center. Consequently, astronomers often do not influence data-sharing decisions directly but only propose data collection ideas (Zuiderwijk and Spiers, 2019 ).
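As a concrete illustration of the proprietary data period described above, the short sketch below checks whether an observation would already be publicly visible. It is a simplified, assumption-laden example: the 12-month default follows P13’s description, but real telescope archives apply their own policies (reportedly anywhere from about six months to two years), and the function and parameter names are hypothetical.

from datetime import date, timedelta

# Illustrative default: a 12-month proprietary (exclusive) period, as described by P13.
DEFAULT_PROPRIETARY_DAYS = 365

def is_publicly_available(first_access: date,
                          on_date: date,
                          proprietary_days: int = DEFAULT_PROPRIETARY_DAYS) -> bool:
    """Return True if the proprietary period has expired by on_date.

    first_access is the date the proposing team first obtained the data; during
    the proprietary window only they may view it, after which the archive makes
    it visible to everyone.
    """
    return on_date >= first_access + timedelta(days=proprietary_days)

# Data first accessed on 2023-03-01 with a 12-month period becomes public roughly a year later.
print(is_publicly_available(date(2023, 3, 1), date(2024, 2, 1)))   # False: still proprietary
print(is_publicly_available(date(2023, 3, 1), date(2024, 3, 15)))  # True: now public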

Secondly, Chinese astronomers also regarded sharing the data with research teams or individuals upon request during the proprietary data period as feasible. For example, P5 said:

I published one paper using research data whose proprietary period hasn’t expired. If someone emailed me to inquire whether they could obtain the data for “Figure 2” [here P5 referred to an exemplary figure in her previous publication]. I usually send the data to them. It is common [in astronomy] to communicate with the author via email to consult their willingness toward ODS. (P5)

P5 assumed that sharing data privately was allowed and common among astronomers when the proprietary data period had not yet expired. To some extent, P5 also transformed this private approach into a visible approach by making the processed data public and publishing it on open platforms.

P11 added the reason why astronomers used this private approach:

The data is not immediately made available. There is a proprietary data period of one or two years. Priority is given to the direct contributors to use the data and produce the first batch of scientific results. After the proprietary data period has expired, others were allowed to discover the value of the data jointly…Other astronomers may also be interested in the data during the proprietary data period. After all, during this period, others were unable to conduct observations and produce data. (P11)

P11 explained that during the period when he applied for observation, others could not produce the data by using the same telescope. However, they might still be interested in such data. Thus, he might share their research data privately with other astronomers if he deemed it necessary for the other astronomers’ research.

Finally, besides the open sharing of research data, two other astronomers also introduced the third type of ODS behavior, the open sharing of research software, tools, and codes. P12 explained:

When the project was completed, project funders required all the research data to be submitted to a certain location for public use. We also needed to submit the software, tools, and related codes developed by astronomers. (P12)

According to P12, ODS is not merely about data per se but also about its associated processing tools and accompanying materials.

Another astronomer, P10, mentioned that astronomers may also share their software openly to enhance their research influence. P10 said:

Astronomers may openly share their programs in theoretical research and data simulation, particularly simulation programs or source files. They create open-source materials related to their articles and then make their software or related models available online. They also require acknowledgment if someone uses them later. Nowadays, many astronomers use this method for ODS. (P10)

Individual factors concerning Chinese astronomers’ motivations for ODS

ODS is a tradition and duty

Twelve Chinese astronomers mentioned that ODS was a traditional norm in astronomy that they had been following since they entered this scientific field. P11 said:

We have known a traditional norm since we started working in this field. That is, every time you apply for telescope observations and obtain data, this data must be made public one year later. Even if you have not completed your research or published a paper by then, the data will still be made public. For us astronomers, ODS is a natural practice and meaningful endeavor. We believe that astronomy is a role model of ODS for other research fields to follow. (P11)

Four Chinese astronomers also described how the tradition of ODS influenced their own motivations for ODS. For example, P10 said:

In the past, I have obtained data of my interest from other astronomers by emailing them. Therefore, if someone approaches me for data, I would also be willing to provide it. (P10)

Another two astronomers elaborated that they acknowledged the ODS tradition because of its benefits to both astronomers and telescopes. P1 said:

According to the international convention, to promote the influence of the telescope and enrich its research outputs, the data is released to the public based on different proprietary data periods. Each data release includes not only raw data but also data products generated by technical personnel processing the raw data. (P1)
I do not process raw data; instead, I typically utilize data products generated by telescopes. These data products, which are openly available in the public domain, assist individuals like me who lack technical expertise in processing raw data to conduct scientific research. Thus, we must also acknowledge the telescope’s contribution when publishing our findings. This is the norm in astronomy. (P13)

P1’s and P13’s views were widely shared: telescopes offer astronomers different kinds of data, enhancing their potential research outputs. In return, when researchers utilize the data generated by telescopes, they also contribute to the telescopes’ influence and reputation.

It is worth noting that this tradition is also embedded in telescopes’ data policies and influences how Chinese telescopes set their proprietary data periods. For example, the data policies of the Chinese astronomy projects LAMOST and FAST specify proprietary data periods that follow international conventions. As P6 indicated, the international convention typically sets the proprietary data period at six months to one and a half years.

Six Chinese astronomers believed that ODS is an established tradition in astronomy that ought to be respected and enacted as a duty, without regard to external factors or consequences. For example, P8 mentioned:

Astronomy is a very pure discipline without economic benefit, and we have the tradition of ODS. Therefore, they state their data source or post a link to their data directly. My willingness to conduct ODS is also influenced by this atmosphere. Besides that, I regard ODS as a basic requirement because data should be tested [via ODS]. (P8)

Two other astronomers considered ODS to be part of the very nature of science in astronomy, which motivated them to pursue the goal of openness persistently. For example, P11 said:

Astronomy exemplifies a characteristic of being borderless, where there is a strong inclination towards open academic exchange and sharing of resources and tools. Additionally, astronomy is pure due to its non-profit nature. Thus, astronomers have always maintained simplicity, leading to a culture of openness. (P11)

ODS brings beneficial consequences

Four Chinese astronomers hoped to improve their research influence and citations through ODS, especially for the research to which they had devoted the most effort. For example, P10 said:

Astronomers not only release their data but also the software or code to process it. This is because if other astronomers use my software and code to process the data, they would also cite the papers with my shared software and code. This will increase the influence of my papers and software or code. (P10)

A similar perspective came from survey respondents Q19, Q22, Q34, and Q47, who also perceived that ODS could improve the research impact of their papers and data. For example, Q22 stated:

I have encountered situations where other researchers requested access to my data. One of the reasons I am willing to share data [with them] is to increase my paper citations. (Q22)

Additionally, some Chinese astronomers practiced ODS so that their research could be replicated and validated. For example, Q26 said:

The primary reason I endorse ODS is to replicate my data analysis by peers and enable independent verification of my research outputs. (Q26)

ODS engenders reciprocity and collaboration opportunities

Fourteen Chinese astronomers mentioned that ODS could increase their research outputs and open up possibilities to obtain other astronomers’ data, thereby promoting research outputs across the entire astronomy community. More importantly, ODS has created a new type of collaborative opportunity when data are plentiful but the resources or capacities to utilize them are limited. For example, P12 expressed that ODS had a positive impact on the research outputs of the scientific community:

An astronomer I respect once stated that initially, they wanted to conceal all research data, but this proved impossible due to the vast amount of data produced by the telescope. As a result, they released all the data from their large-scale projects. The outcome of this ODS behavior rendered explosive growth in research outputs. (P12)

Two other astronomers noted that ODS was essential for bringing more astronomers into collaborative efforts that increase research outputs in the scientific community. P6 said:

The data generated by telescopes used to observe transient events have not been subject to the proprietary data period. Once I observe such events, I will encourage other researchers to join in and rapidly identify these unexpected phenomena, facilitating subsequent observations using various telescopes to maximize scientific output as quickly as possible. (P6)

P6 elaborated that astronomers rely on collaborative efforts for special observations, such as discovering new stars, which maximizes the utilization of global telescope resources. This motivation strengthens collaborations among astronomers from different research teams. P14 added:

New events [e.g., new star discoveries] in astronomy often occur in transience. If I do not share information about these events, other astronomers will not know about them. With limited resources, I may be unable to observe them through other telescopes. However, sharing preliminary data about these events can maximize global resources. This allows for a collaborative effort to observe the event using resources from around the world. (P14)

P14 stated that ODS can attract more astronomers to contribute to research through subsequent and collective efforts built on the initial observation. P14’s opinion echoed Reichman et al.’s findings, which identified extensive and collaborative infrastructure as the primary driver behind the adoption of ODS (Reichman et al., 2011).

Prior research also indicated that limited resources and capacities increase collaboration among astronomers in astrophysics research (Zuiderwijk and Spiers, 2019). A similar opinion arose from survey respondents Q18, Q30, and Q52. For example, Q30 said:

I am good at processing data instead of writing papers. ODS can allow me to collaborate with someone who is good at writing papers to co-produce the research output. (Q30)

Societal factors concerning Chinese astronomers’ barriers to ODS

The limitations of verbal agreements in international collaboration

Although most Chinese astronomers endorsed ODS, three were concerned that other astronomers might violate their initial commitments about using the data for scientific purposes. For example, P7 commented:

I used to have experiences with foreign collaborators who violated their initial commitments, resulting in unpleasant consequences. Specifically, they promised in emails that they would process the data using a different approach from ours. However, they ended up using the same method and perspective as ours. There was not much to be said about it, as it was not illegal or against data policies’ regulations. It is a matter of trust and promises, and all I can do is not share data with them in the future. (P7)

P10 added that, in many cases, the commitments astronomers make in email correspondence rely on their self-discipline to be honored:

If the proprietary data period has not expired and you share the data with others, you have no control over what they do with it except to trust their promise in the email. This situation relies on the self-discipline of astronomers. (P10)

Three astronomers were also concerned about the validity of oral agreements about ODS. They referred to them as “gentlemen’s agreements.” For example, P14 explained:

In principle, data can be shared with others without a signed contract between us but based on the so-called gentleman’s agreement. Thus, some Chinese astronomers may not be willing to make their research data public because they must assume that everyone is a gentleman [to keep their promise], which may not always be the case as there are also scientists who are not accountable due to a highly competitive environment [in science]. (P14)

P14 regarded “gentlemen’s agreements” as effective only for those who act in good faith in fulfilling their commitments; such agreements impose no “ethical” constraints on collaborators. Hence, he noted that some astronomers were unwilling to share data openly within the proprietary data period because they did not trust other astronomers to honor their “gentlemen’s agreements.” In addition, P6 explained why some astronomers have broken their commitments. He said:

In astronomy, some data policies have not been effectively constrained because it is impossible to encompass all subsequent data usage and collaboration situations at first…Also, there are many astronomy alliances. If you are not part of our alliance, you are not bound to commitments, which may lead to disputable issues. (P6)

Data is too dear to share immediately

Ten Chinese astronomers considered that the data they obtained possessed unique scientific value that could contribute to their publication priority and productivity. Given that publication priority, authorship order, and publication quantity remain the most important and prevalent factors in evaluating a scholar in China, it is understandable that these astronomers expressed concerns about losing the ‘right of first publication’ if they openly share their processed data too soon. For example, P9 confessed:

I am unwilling to conduct ODS primarily because my research findings have not been published yet. I am concerned that ODS might lead to someone else publishing related findings before I do. (P9)

Similar concerns were also expressed in our survey responses Q42, Q46, and Q53. Q53 provided a more detailed explanation:

The individuals or organizations that produce data should have the right to use it first and only make it publicly available after a round of exploration and the publication of relevant research results. If the data is shared openly and completely from the outset, the number of people or organizations willing to invest time and money in obtaining data in the future will decrease since they can use data obtained by others instead of acquiring it by themselves. (Q53)

Another astronomer, P12, held a negative attitude toward ODS at the early stages of research because he was concerned that his group’s data processing capacity was slower than that of other research groups once the data was shared with them:

I put a lot of effort into processing data, and if my research findings have not been published but I release my data in three months [some international rules recommend astronomers to open their data as soon as possible], then someone with a more sophisticated data processing software may be able to write and analyze their research paper within a week because they already have the complete workflow prepared. This may upset the sharers who intended to publish a similar finding, as their work has been done so quickly [sooner than the sharer]. (P12)

A similar opinion could be seen in our survey response Q46:

The scientific community should ensure that those who have worked hard to produce the data also have the priority to publish their research findings before the data has been made publicly available. (Q46)

The disparities between the Chinese and foreign research infrastructures

Five Chinese astronomers expressed concerns about the disparities between Chinese and foreign research infrastructures. For example, P9 worried that adhering to international rules in astronomy might contradict domestic rules in China because of national security and data confidentiality considerations. He said:

International organizations hope our country will lead in ODS, which may sometimes harm our interests. This is especially the case for the data produced through Chinese telescopes, which are published in international academic journals upon the international journal publishers’ requests because this data may involve confidential engineering tasks in Chinese telescopes that are subject to national security purposes. (P9)

Another astronomer, P4, also mentioned that astronomical data may include equipment parameters that could trigger national security concerns. Hence, her data undergo desensitization before she conducts ODS:

Astronomical raw data are generated by the equipment directly and are categorized as first-level data [machine-generated data] in the data policies. More importantly, raw astronomical data should be processed before being opened to the public because the raw data may raise [national] security concerns and leak equipment parameters. (P4)

P4’s concerns about national security are also reflected in China’s national data policies. For example, the Chinese government mandates a “hierarchical management, safety, and control” policy to supervise ODS and to balance order and dynamism in data sharing (Li et al., 2022).

P8 added that Chinese astronomers are sometimes limited by national rules and by the usability and accessibility of domestic data infrastructure. P8 said:

In some Chinese astronomical projects, only certain frequency bands are internationally permitted, and the first to occupy them claims ownership. Moreover, our data storage and ODS are limited by technical difficulties. We don’t have ODS platforms like NASA ADS. Even if there are, these platforms are currently not as recognized internationally as those abroad. Therefore, when astronomers publish papers or data, they default to submitting them to international platforms. (P8)

Societal factors concerning Chinese astronomers’ hesitations for ODS

The pressure from domestic data policies

Five Chinese astronomers mentioned that ODS is subject to the requirements of domestic data policies, so they feel pressure to conduct ODS. For example, P6 indicated that many astronomy projects in China are government-funded and must prioritize data sharing and submission that conform to government regulations:

Chinese telescopes are primarily funded by the government, as researchers have not yet had the ability to build a telescope on their own. The entire Chinese population is considered one collective, while those non-Chinese are another. The Chinese government aims to promote ODS to data generated by projects funded by public funds. If researchers have not submitted research data to the government-delegated data center, it could potentially impact their subsequent research project approval. By contrast, some foreign telescopes are built by private institutions and may not have the option for ODS. (P6)

Another astronomer, P3, suggested that China’s mandatory data policies have expanded the scale of ODS, although complications remain:

Our data policies are mandatory, especially for projects funded by national grants. That is, if you don’t conduct ODS, your projects may not be accepted. The volume of ODS is rising consequently. However, issues related to ODS still need to be addressed; for example, Chinese astronomers’ initiative and willingness for ODS are weak, and [sometimes] their open data cannot be reused. There is a need to further investigate Chinese researchers’ [ODS] behaviors, particularly to find what stimulates them to conduct ODS proactively. (P3)

In addition, three Chinese astronomers shared that the traditional funding model in astronomy also motivated their ODS. P8 explained:

In China, astronomical data [from national telescopes] is mostly institutional and collective. One can apply to use a telescope at a particular institution to obtain astronomical data. The applications may receive different priorities, but the data is not privately owned. (P8)

P8 meant that Chinese astronomers rely on large telescope projects funded by the government. Consequently, ownership of the observed data belongs to the collective astronomical community in China rather than to individual astronomers or research teams.

The language prerequisite in astronomy

Three astronomers also raised the issue of a language prerequisite in scientific communication. For example, P12 explained:

[Modern] astronomy predominantly originated from developed nations. Consequently, our conferences, data, and textbooks are primarily in English. However, this can be a barrier for young Chinese astronomers who are not proficient in English. At least among the researchers around me, everyone contends that English is a necessary prerequisite for entering the field of astronomy. That is to say, the entry barrier for astronomy is very high. I termed it “aristocratic science” because it is difficult to conduct astronomical research without good equipment, proficient English, or substantial funding. (P12)

Another astronomer, P9, dismissed Chinese-language astronomical journals because they are not acknowledged in the international astronomy community:

I believe English is a strict prerequisite in astronomy. If your English is poor, you may be restricted from engaging in ODS communication. I support [the slogan] publishing in Chinese to enhance Chinese scholars’ international influence, but most astronomical research originates from the West and is primarily dominated by Western institutions. Besides that, domestic journals are not valuable enough for academic evaluation or promotion due to their low influence factor. (P9)

Finally, P13 added that if Chinese astronomers always use English in ODS, it might potentially clash with the academic discourse system in China:

Some people may wonder why, as Chinese researchers, we need to use English to communicate our work. From my personal perspective, of course, I fully support promoting our research discourse system using Chinese as the primary language. However, from a [scientific] communication standpoint, there are times when we need to collaborate with foreign astronomers or improve communication efficiency [in English]. (P13)

The awareness of a competitive environment

Four Chinese astronomers expressed concerns about ODS because of the highly competitive scientific community to which they belong. For example, P14 stated:

The field we are currently working in is highly competitive, so we need to consider protecting our team’s efforts. If we release the data, there is a possibility that other researchers using more advanced software tools could publish their findings before us. (P14)

Another astronomer, P12, remarked that this competitive atmosphere varies across research areas. He said:

Competition is inevitable but varies across research areas. I engaged in two research areas. One is characterized by intense competition, but the other is more friendly. The highly competitive research area has many researchers pursuing high-quality data and tackling cutting-edge topics. Sometimes, competing with those who publish first or faster becomes necessary. In addition, one kind of “Nei Juan” may exist, which is competing to see who can open data faster. Because the faster your proposal is promised, the sooner your observation project will be approved. (P12)

“Nei Juan” (involution) refers to fierce but often unfruitful competition to keep up with colleagues, peers, and generations (Li, 2021). P12 acknowledged that this competitive environment pushed him to publish first or faster, but he also regarded “Nei Juan” as not always bad for ODS. P9, in turn, considered that the “Nei Juan” issue may arise because Chinese astronomers want to catch up with the international pace of astronomical development:

Generally speaking, astronomy is relatively less “Nei Juan” compared to other disciplines. However, its rapid development has begun to become more intense. Particularly, Chinese astronomy is in a phase of catching up, characterized by a collaborative yet competitive atmosphere with the international community. Our national astronomical teams, as a collective, are exerting great efforts to excel in some major projects compared to their foreign counterparts, engaging in strenuous research endeavors. (P9)

However, another astronomer, P11, maintained that for ODS, sooner is not necessarily better. P11 argued:

Some data may have been obtained through instrument testing, and its quality is not particularly high, resulting in lower reliability. If it is made openly accessible immediately, users may not obtain accurate results. Besides, the raw data may contain variances or noises originating from different instruments, requiring standardized processing through software to transform it into [reliable] data products. Only then can scientific users and the public truly benefit from this data. (P11)

The interpretation of Chinese astronomers’ ODS motivations and behaviors

Chinese astronomers’ motivations and behaviors regarding ODS can be interpreted in three ways. First, for a few Chinese astronomers, adherence to ODS is a matter of tradition. They value the tradition of ODS in astronomy and contend that it should be respected and observed as an intrinsic duty (Heuritsch, 2023). They also acknowledge the value of astronomical ODS practices for scientific research and the whole scientific community, which leads them to devote themselves to such practices (e.g., P8, P12). Hence, for them, extrinsic principles (e.g., FAIR), policies (e.g., those of the Chinese government), and individual research outputs do not determine their ODS decisions and behaviors. As P11 said, he has learned and followed this tradition since entering the field of astronomy. This finding corroborates Stahlman’s prior research, which indicated that astronomers have a strong sense of duty to their research communities and the public (Stahlman, 2022). We find it notable that these Chinese astronomers adhere to ODS traditions while setting aside the government slogan “Write your paper on the motherland,” which is rare in other research disciplines (including ours) in China.

Second, many Chinese astronomers evaluate the consequences of ODS. One evaluation lens is self-interest. For example, several Chinese astronomers (e.g., P6, P12) pointed out that ODS can potentially increase individual research outputs and academic reputation, which motivates them to do it. It is noteworthy that some Chinese astronomers increase research outputs through ODS both in terms of their personal contributions and for the entire astronomy community; their evaluation priority, however, is their own data and paper citations rather than ODS practices as such. Another evaluation lens is reciprocity. Some Chinese astronomers (e.g., P1, P10) perceive that the data sharer and data user roles in ODS can be exchanged: an open data sharer can become a user, and vice versa, across different research projects and times. As P10 mentioned, many Chinese astronomers have received the benefits of ODS from other astronomers when they lacked data or resources. As a result, they aspire to contribute to the community by providing opportunities and resources for fellow astronomers who face challenges similar to those they once faced; they adopt ODS in a spirit of reciprocity, hoping to receive the same treatment in the future. Abele-Brehm et al.’s study revealed that researchers tended to conduct ODS in response to promised rewards (Abele-Brehm et al., 2019). Our findings complement theirs by differentiating self-interest-oriented from reciprocity-oriented rewards of ODS.

Third, some Chinese astronomers’ choice of ODS can be interpreted as contractual. Without ODS, they cannot receive government funding or get their research proposals accepted, which may impede their research progress and contributions. This finding corroborates Zuiderwijk and Spiers’ research, which highlighted resource constraints and individually expected benefits, such as extra citations or potential collaboration opportunities, as essential motivators for ODS in astronomy (Zuiderwijk and Spiers, 2019). Furthermore, modern astronomy in China developed later than, and still lags behind, its U.S. and European counterparts. The Chinese government sponsors most astronomical projects with public funding, hoping to advance Chinese astronomy through centralized power and resources. For example, in 2018, the Chinese government implemented a scientific data management policy mandating the sharing of research data generated with public funding (General Office of the State Council of China, 2018). Thus, Chinese astronomers under contract with government-funded telescopes must enact ODS.

The societal barriers to Chinese astronomers’ ODS practices

We identified several societal barriers to Chinese astronomers’ ODS practices. First, insufficient protection of data rights during ODS may dampen Chinese astronomers’ enthusiasm for, or trust in, conducting ODS. For example, P6 raised the concern that astronomical data policies are typically formulated by scientific alliances and only bind members within project teams; astronomers who do not belong to these alliances are not obliged to obey them. Moreover, P10 and P14 both complained that although they had contributed much data, time, and effort, some global ODS practices relied on verbal agreements, which often lacked enforcement and easily compromised their data rights in international projects. This insufficient protection of data rights may give rise to conflicts of interest among collaborating parties, discouraging subsequent data-sharing practices among Chinese astronomers.

Second, data infrastructure that is weak in usability and accessibility may deter some Chinese astronomers from choosing ODS. As P8 remarked, Chinese open research data infrastructures have not been well developed in terms of data usability and accessibility, which pushes domestic astronomers to publish data via foreign open research platforms. This concern partly reflects the underdevelopment of data infrastructure in China: most of China’s domestic research data repositories have yet to establish licensing, privacy, and copyright guidelines (Li et al., 2022).

Additionally, we found that a highly competitive environment could trigger “Nei Juan” in the form of competition for publication priority, which could also affect Chinese astronomers’ ODS attitudes and behaviors. Specifically, the increasing emphasis on academic performance has led many Chinese researchers into a “weird circle” of self-imposed pressure to publish papers continuously. This phenomenon is exacerbated by the tenure system in top Chinese universities, which has significantly shaped researchers’ academic work and day-to-day practices (Xu and Poole, 2023). Thus, within an intensely competitive scientific landscape and an evaluation system dominated by paper publications, Chinese astronomers may prioritize rapid publication over ODS: when scientific resources and academic promotions are scarce, data is invaluable to a researcher. As implied in P14’s quote, some Chinese astronomers may delay or opt out of ODS unless their data rights and research benefits can be ensured.

Two dimensions of the action strategies in Chinese astronomers’ choices for ODS

Apart from the individual and societal factors that motivate or deter Chinese astronomers’ ODS behaviors, we identified two dimensions of the action strategies that influence their choice of ODS. These two dimensions are presented and interpreted in Table 3.

First, some Chinese astronomers hesitated to conduct ODS because they had to choose between domestic customs and international traditions in astronomy, which might influence or even determine their ODS behaviors. For example, several Chinese astronomers (e.g., P11, P13) prioritized compliance with domestic policies over international ones in determining where and how to implement ODS (Zhang et al., 2023). Moreover, as explained by P4, almost all Chinese astronomers receive national funding, which influences their ODS behaviors because national funding agencies impose requirements on project commitments and applications. China’s “dual track” approach, which emphasizes data openness and national security simultaneously, requires researchers to obey the principle of “openness as the norm and non-openness as the exception” (Li et al., 2022). Meanwhile, open data governance and the open data movement have gradually shaped government policies as various national security and personal privacy issues emerge (Arzberger et al., 2004). Yet ODS policies built around national security and personal privacy concerns may not fit astronomy well, because astronomy rarely involves security and privacy issues (as highlighted by P9 and P12). As the discrepancy between the domestic and international policy environments widens, the need to choose between different norms may put pressure on Chinese astronomers’ ODS behaviors.

Second, we found some ethical problems related to ODS arising from the language prerequisite, or preference, in Chinese astronomy. As mentioned by P12, language has become an entry barrier in Chinese astronomy: astronomy is a sort of “aristocratic science” in the sense that English proficiency is a prerequisite for anyone or any institution that wants to participate seriously in astronomy research and practice. Consequently, there is no citizen science project in China comparable to Galaxy Zoo or Zooniverse in the U.S., and local or private colleges in China cannot afford to establish astronomy as a scientific discipline because many people in Chinese citizen science projects and below-the-top institutions are not proficient in English. Relatedly, as mentioned by P9, domestic astronomy journals in China are unanimously regarded as inferior and not valuable enough for academic evaluation or promotion. This phenomenon in Chinese astronomy is distinct from other research disciplines in China, where domestic journals are not “biased” against on the basis of publication language.

Third, domestic astronomy projects that obey international proprietary data period policies may exert extra pressure or restraint on Chinese astronomers conducting ODS. For example, the LAMOST and FAST projects in China follow international conventions in setting their proprietary data periods and publish their ODS policies in English. As a result, Chinese astronomers with poor English face logistical hindrances in harnessing these domestic astronomy projects to share their data, ideas, and publications in Chinese. If they want to implement international ODS via LAMOST or FAST, they must spend extra time, effort, or funding translating their data and ideas into English, which may affect the time and resources they can allocate to other research activities within the proprietary data period, including ODS itself. Hence, we surmise that this language obstacle could demotivate or discourage some Chinese astronomers from ODS.

Fourth, some Chinese astronomers may have to choose between personal development and scientific advancement when it comes to ODS. To begin with, this may be due to the adverse effects of the Chinese academic promotion system on some astronomers. In China, universities and research institutions typically use publication lists to evaluate academic performance and promotion (Cyranoski, 2018). As P14 mentioned, competition for research publication has been growing in some areas of astronomy (e.g., burst sources). Some Chinese astronomers may therefore withhold ODS to prioritize their data rights and timely publication. It may also be explained by a phenomenon prevalent in Chinese academia today called “Nei Juan.” Consequently, some Chinese scholars, including astronomers, are pushed to be competitive or “selfish” to increase their research publications, citation metrics, funding opportunities, and data rights. Prior work has found that researchers’ willingness to share data tends to be low when perceived competition is high (Acciai et al., 2023; Thursby et al., 2018) and that researchers’ intrinsic motivation gradually weakens when their organizations implement accountability measures (such as contract signing) and increasingly pursue performance-oriented academic research (Gu and Levin, 2021). These findings may also explain some Chinese astronomers’ hesitation about ODS.

Last but not least, astronomy is highly international, and ODS can encourage collaboration among astronomers from different countries. Nevertheless, as mentioned by P7, some collaborators may break their promises about data use, which reduces data sharers’ willingness to continue ODS. Through the joint observations of multiple telescopes, astronomers can collectively identify the underlying causes of astronomical phenomena and thereby advance science. However, under the influence of “Nei Juan” and the limitations of verbal commitments, some Chinese astronomers may find it challenging to choose between ODS and prioritizing their own academic interests.

Conclusion and implications for future research

Many astronomers in Western countries may take it for granted that ODS enhances astronomical discoveries and productivity. However, how strongly such an assumption holds among Chinese astronomers has not been investigated or deliberated extensively. This may hinder international ODS with Chinese astronomers and lead to misunderstandings of their perceptions and practices of ODS. Thus, in this paper, we reported findings from 14 semi-structured interviews and 136 open-ended survey responses with Chinese astronomers about their motivations and hesitations regarding ODS. Our study found that many Chinese astronomers regarded ODS as an established international duty to observe or a form of reciprocity to harness. However, some Chinese astronomers also agonize over ODS because of data rights concerns, preferences for usable and accessible data infrastructure, and “Nei Juan” or academic promotion pressures. Synthesizing these findings, we summarize them as Chinese astronomers’ concerns and choices between domestic customs and international traditions in ODS. Our research also has several limitations. First, more data are needed to test whether our findings about ODS generalize to Chinese scholars in other disciplines. Second, we have not conducted a comparative analysis of the perceptions, concerns, and behavioral differences of astronomers in other countries. In the future, we intend to address this gap by conducting a global study to provide a more comprehensive understanding of ODS in science.

Our research has several implications for future work. First, we advocate for empathy and compromise between domestic customs and international traditions in Chinese astronomy. Undoubtedly, developed and English-speaking countries have long dominated science and research paradigms. On the positive side, such dominance has established various traditions, such as ODS in astronomy, which are respected and observed by many scholars worldwide, including many astronomers in China. On the negative side, such long-standing scientific dominance may trigger a developing country’s domestic countermeasures or competing policies, which can agonize some domestic researchers and impede global ODS. For example, as we have shown, some Chinese astronomers regarded astronomy as an “aristocratic science” that screens out Chinese astronomers and citizen science participants who are not proficient in English. Future research can further investigate the power dynamics between international traditions and domestic customs in other cultures or research disciplines beyond ODS in astronomy.

Second, we suggest that the international astronomy community publish more inclusive ODS rules that consider the societal contexts of researchers from countries with different cultural and language backgrounds. Efforts should be made to minimize the reinforcement of any one country’s dominant position in scientific research through ODS and to develop more inclusive, sustainable, and equitable rules that encourage more countries to join. This may be achieved by providing ODS platforms in different languages, translation assistance for drafting collaboration agreements, and multiple options for international collaboration and communication among astronomers from different countries. In this regard, the CARE (Collective benefit, Authority to control, Responsibility, and Ethics) principles serve as a good example (Global Indigenous Data Alliance, 2019). We also propose that the Chinese government, academic institutions, and funding agencies adopt a more globally minded and open stance to stimulate ODS, not merely within their own borders but by endeavoring to become a global leader, or at least an essential stakeholder, in promoting knowledge sharing and scientific collaboration.

Third, our findings indicate that astronomers’ individual ethical perspectives play a significant role in guiding their ODS practices. To start, reciprocity effectively enhances ODS regardless of established or domestic research policies. Thus, we suggest that policymakers in China consider placing more emphasis on the reciprocity benefits of ODS and building collaborative efforts across the scientific community. As our qualitative data revealed, the collaboration benefits of ODS are highly motivating for Chinese astronomers. Still, we identified concerns among Chinese astronomers: for instance, they highlighted the limitations of verbal commitments for ODS within the proprietary data period, which can potentially engender “free riders” in research. Further, we noticed that some Chinese astronomers conduct ODS out of respect for this tradition and observe it as their duty without considering external factors such as individual interests or community benefits. We posit that this ethical perspective aligns with deontology. Therefore, we suggest that stakeholders of ODS, such as the scientific community, research institutions and organizations, and ODS platform developers, propose specific norms or mottos regarding the ODS tradition in astronomy to stimulate astronomers’ voluntary sense of duty to conduct it.

Finally, since we found that some astronomers conducted ODS primarily out of academic self-interest, efforts should be made to ensure that the rights of researchers in astronomy are protected and that they do not bear risks caused by others (e.g., data misuse or breach of verbal agreements). Future research can administer surveys or experiments to explore how strongly these individual factors affect astronomers’ ODS behaviors.

Data availability

The complete translated and transcribed data from our study are available at Peking University Open Research Data (https://doi.org/10.18170/DVN/JLJGPF).

References

Abele-Brehm AE, Gollwitzer M, Steinberg U, Schönbrodt FD (2019) Attitudes toward open science and public data sharing. Soc Psychol 50(4):252–260. https://doi.org/10.1027/1864-9335/a000384

Acciai C, Schneider JW, Nielsen MW (2023) Estimating social bias in data sharing behaviours: an open science experiment. Sci Data 10(1):233. https://doi.org/10.1038/s41597-023-02129-8

Anane-Sarpong E, Wangmo T, Tanner M (2020) Ethical principles for promoting health research data sharing with sub-Saharan Africa. Dev World Bioeth 20(2):86–95. https://doi.org/10.1111/dewb.12233

Anane‐Sarpong E, Wangmo T, Ward CL, Sankoh O, Tanner M, Elger BS (2018) “You cannot collect data using your own resources and put it on open access”: perspectives from Africa about public health data‐sharing. Dev World Bioeth 18(4):394–405. https://doi.org/10.1111/dewb.12159

Arzberger P, Schroeder P, Beaulieu A, Bowker G, Casey K, Laaksonen L, Moorman D, Uhlir P, Wouters P (2004) Promoting access to public research data for scientific, economic, and social development. Data Sci J 3:135–152. https://doi.org/10.2481/dsj.3.135

Banks GC, Field JG, Oswald FL, O'Boyle EH, Landis RS, Rupp DE, Rogelberg SG (2019) Answers to 18 Questions About Open Science Practices. J Bus Psychol 34:257–270. https://doi.org/10.1007/s10869-018-9547-8

Bezuidenhout L, Chakauya E (2018) Hidden concerns of sharing research data by low/middle-income country scientists. Glob Bioeth 29(1):39–54. https://doi.org/10.1080/11287462.2018.1441780

Boeckhout M, Zielhuis GA, Bredenoord AL (2018) The FAIR guiding principles for data stewardship: fair enough? Eur J Hum Genet 26(7):931–936. https://doi.org/10.1038/s41431-018-0160-0

Borgman CL, Wofford MF, Golshan MS, Darch PT (2021) Collaborative qualitative research at scale: Reflections on 20 years of acquiring global data and making data global. J Assoc Inf Sci Technol 72(6):667–682. https://doi.org/10.1002/asi.24439

Braun V, Clarke V (2006) Using thematic analysis in psychology. Qual Res Psychol 3(2):77–101. https://doi.org/10.1191/1478088706qp063oa

Cyranoski D (2018) China awaits controversial blacklist of ‘poor quality’ journals. Nature 562(7728):471–472. https://doi.org/10.1038/d41586-018-07025-5

Deil C, Boisson C, Kosack K, Perkins J, King J, Eger P, … & Lombardi S (2017, January) Open high-level data formats and software for gamma-ray astronomy. In AIP Conference Proceedings (Vol. 1792, No. 1). AIP Publishing

Digital Science, Hahnel M, Smith G, Schoenenberger H, Scaplehorn N, Day L (2024) The State of Open Data 2023 (Version 1). Digital Science. available at: https://doi.org/10.6084/m9.figshare.24428194.v1 . Accessed 10 March 2024

Dorch BF, Drachen TM, Ellegaard O (2015) The data sharing advantage in astrophysics. Proc Int Astron Union 11(A29A):172–175. https://doi.org/10.1017/S1743921316002696

Enwald H, Grigas V, Rudzioniene J, Kortelainen T (2022) Data sharing practices in open access mode: a study of the willingness to share data in different disciplines. Inform Res Int Electron J 27. https://doi.org/10.47989/irpaper932

Fox J, Pearce KE, Massanari AL, Riles JM, Szulc Ł, Ranjit YS, Gonzales LA (2021) Open science, closed doors? Countering marginalization through an agenda for ethical, inclusive research in communication. J Commun 71(5):764–784. https://doi.org/10.1093/joc/jqab029

General Office of the State Council of China (2018) “Scientific data management measures”, available at: http://www.gov.cn/home/2018-04/02/content_5279296.htm Accessed 11 June 2023

General Office of the State Council of China (2024) Opinions on building a more perfect system and mechanism for the market-oriented allocation of factors. available at: https://www.gov.cn/xinwen/2022-12/21/content_5732906.htm Accessed 15 March 2024

Genova F (2018) Data as a research infrastructure CDS, the Virtual Observatory, astronomy, and beyond. EPJ Web Conf 186:01001. https://doi.org/10.1051/epjconf/201818601001

Global Indigenous Data Alliance (2019) CARE Principles for Indigenous Data Governance. Available at: https://www.gida-global.org/care . Accessed 14 June 2024

Gray N, Mann RG, Morris D, Holliman M, Noddle K (2012) AstroDAbis: Annotations and cross-matches for remote catalogues. ASP Conf Ser 461:351–354. https://doi.org/10.48550/arXiv.1111.6116

Gu J, Levin JS (2021) Tournament in academia: a comparative analysis of faculty evaluation systems in research universities in China and the USA. High Educ 81:897–915. https://doi.org/10.1007/s10734-020-00585-4

Guy LP, Bechtol K, Bellm E, Blum B, Graham ML, Ivezić Ž, … & Strauss M (2023) Rubin Observatory Plans for an Early Science Program. Available at: https://rtn-011.lsst.io/RTN-011.pdf Accessed 15 March 2024

Harris R, Baumann I (2015) Open data policies and satellite Earth observation. Space Policy 32:44–53. https://doi.org/10.1016/j.spacepol.2015.01.001

Heuritsch J (2023) The evaluation gap in astronomy—explained through a rational choice framework. Publications 11(2):33. https://doi.org/10.3390/publications11020033

Houtkoop BL, Chambers C, Macleod M, Bishop DVM, Nichols TE, Wagenmakers E-J (2018) Data sharing in psychology: a survey on barriers and preconditions. Adv Methods Pract Psychol Sci 1(1):70–85. https://doi.org/10.1177/2515245917751886

Huang Y, Cox AM, Sbaffi L (2021) Research data management policy and practice in Chinese university libraries. J Assoc Inf Sci Technol 72:493–506. https://doi.org/10.1002/asi.24413

Jin WY, Peng M (2021) The effects of social perception on moral judgment. Front Psychol 11:557216. https://doi.org/10.3389/fpsyg.2020.557216

Ju B, Kim Y (2019) The formation of research ethics for data sharing by biological scientists: an empirical analysis. Aslib J Inf Manag 71(5):583–600. https://doi.org/10.1108/AJIM-12-2018-0296

Kim Y, Zhang P (2015) Understanding data sharing behaviors of STEM researchers: The roles of attitudes, norms, and data repositories. Libr Inf Sci Res 37(3):189–200. https://doi.org/10.1016/j.lisr.2015.04.006

Kim Y, Stanton JM (2016) Institutional and individual factors affecting scientists’ data-sharing behaviors: a multilevel analysis. J Assoc Inf Sci Technol 67(4):776–799. https://doi.org/10.1002/asi.23424

Koribalski BS (2019) Open astronomy and big data science. Proc Int Astron Union 15(S367):227–230. https://doi.org/10.1017/S1743921321000879

Kurata K, Matsubayashi M, Mine S (2017) Identifying the complex position of research data and data sharing among researchers in natural science. Sage Open 7(3):2158244017717301. https://doi.org/10.1177/21582440177173

Kurtz MJ, Eichhorn G, Accomazzi A, Grant C, Demleitner M, Murray SS (2004) Worldwide use and impact of the NASA Astrophysics Data System digital library. J Am Soc Inf Sci Technol 56(1):36–45. https://doi.org/10.1002/asi.20095

Lamprecht AL, Garcia L, Kuzak M, Martinez C, Arcila R, Martin Del Pico E, Dominguez Del Angel V, Van De Sandt S, Ison J, Martinez PA (2020) Towards FAIR principles for research software. Data Sci 3(1):37–59. https://doi.org/10.3233/DS-190026

Lee H, Reid E, Kim WG (2014) Understanding knowledge sharing in online travel communities: antecedents and the moderating effects of interaction modes. J Hospit Tour Res 38(2):222–242. https://doi.org/10.1177/1096348012451454

Lester DG, Martani A, Elger BS, Wangmo T (2021) Individual notions of fair data sharing from the perspectives of Swiss stakeholders. BMC Health Serv Res 21:1–12. https://doi.org/10.1186/s12913-021-06906-2

Li M (2021) “Nei Juan” in exam-oriented education in China. J Lit Art Stud 11(12):1028–1033. https://doi.org/10.17265/2159-5836/2021.12.015

Li C, Zhou Y, Zheng X, Zhang Z, Jiang L, Li Z, Wang P, Li J, Xu S, Wang Z (2022) Tracing the footsteps of open research data in China. Learn Publ 35:46–55. https://doi.org/10.1002/leap.1439

Liu J (2021) Data Ethics Behaviors and Norms of Researchers [Master, University of Chinese Academy of Sciences]. Available at: https://d.wanfangdata.com.cn/thesis/ChJUaGVzaXNOZXdTMjAyMzAxMTISCFkzODY0MDg4GghidjYyajZyNQ%3D%3D Accessed 11 March 2024

Pasquetto IV, Sands AE, Darch PT, Borgman CL (2016) Open Data in Scientific Settings: From Policy to Practice Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, California, USA. https://doi.org/10.1145/2858036.2858543

Pepe A, Goodman A, Muench A, Crosas M, Erdmann C (2014) How do astronomers share data? Reliability and persistence of datasets linked in AAS publications and a qualitative study of data practices among US astronomers. PLoS One 9(8):e104798. https://doi.org/10.1371/journal.pone.0104798

Popkin G (2019) Data sharing and how it can benefit your scientific career. Nature 569(7756):445–447. https://doi.org/10.1038/d41586-019-01506-x

Rebull LM (2022) Real astronomy data for anyone: explore NASA’s IRSA. Phys Teach 60(1):72–73. https://doi.org/10.1119/10.0009117

Reichman OJ, Jones MB, Schildhauer MP (2011) Challenges and opportunities of open data in ecology. Science 331(6018):703–705. https://doi.org/10.1126/science.1197962 . PMID: 21311007

Sayogo DS, Pardo TA (2013) Exploring the determinants of scientific data sharing: Understanding the motivation to publish research data. Gov Inf Q 30:S19–S31. https://doi.org/10.1016/j.giq.2012.06.011

Science Data Bank (2024) Subject distribution of datasets published in Science Data Bank. Available at: https://www.scidb.cn/en/list?searchList/ordernum=1 Accessed 24 March 2024

Serwadda D, Ndebele P, Grabowski MK, Bajunirwe F, Wanyenze RK (2018) Open data sharing and the Global South—Who benefits? Science 359(6376):642–643. https://doi.org/10.1126/science.aap8395

Stamm K, Lin L, Christidis P (2017) Career stages of health service psychologists. American Psychological Association Center for Workforce, Washington, DC. Available at: https://www.apa.org/workforce/publications/15-health-service-career/ Accessed 24 March 2024

Stahlman GR (2022) From nostalgia to knowledge: considering the personal dimensions of data lifecycles. J Assoc Inf Sci Technol 73(12):1692–1705. https://doi.org/10.1002/asi.24687

Tolle KM, Tansley DSW, Hey AJ (2011) The fourth paradigm: data-intensive scientific discovery. Proc IEEE 99(8):1334–1337. https://doi.org/10.1109/JPROC.2011.2155130

Thursby JG, Haeussler C, Thursby MC, Jiang L (2018) Prepublication disclosure of scientific results: Norms, competition, and commercial orientation. Sci Adv 4(5):eaar2133. https://doi.org/10.1126/sciadv.aar2133

Tu Z, Shen J (2023) Value of open research data: a systematic evaluation framework based on multi-stakeholder survey. Libr Inf Sci Res 45(4):101269. https://doi.org/10.1016/j.lisr.2023.101269

UNESCO (2021) UNESCO Recommendation on Open Science. UNESCO General Conference, France. https://doi.org/10.54677/MNMH8546

UK Research and Innovation (2016) Concordat on open research data. Available at: https://www.ukri.org/wp-content/uploads/2020/10/UKRI-020920-ConcordatonOpenResearchData.pdf . Accessed 23 March 2024

van Gend T, Zuiderwijk A (2023) Open research data: a case study into institutional and infrastructural arrangements to stimulate open research data sharing and reuse. J Librariansh Inf Sci 55(3):782–797. https://doi.org/10.1177/09610006221101200

Walsh CG, Xia W, Li M, Denny JC, Harris PA, Malin BA (2018) Enabling open-science initiatives in clinical psychology and psychiatry without sacrificing patients’ privacy: current practices and future challenges. Adv Methods Pract Psychol Sci 1(1):104–114. https://doi.org/10.1177/2515245917749652

Wang S, Kinoshita S, Yokoyama HM (2024) Write your paper on the motherland? Account Res 1–3. https://doi.org/10.1080/08989621.2024.2347398

Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, Blomberg N, Boiten J-W, da Silva Santos LB, Bourne PE (2016) The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 3(1):1–9. https://doi.org/10.1038/sdata.2016.18

Xu J, Chen D, He C, Zeng Y, Nicholas D, Wang Z (2020) How are the new wave of Chinese researchers shaping up in scholarly communication terms? Malays J Libr Inf Sci 25(3):49–70. https://doi.org/10.22452/mjlis.vol25no3.4

Xu W, Poole A (2023) ‘Academics without publications are just like imperial concubines without sons’: the ‘new times’ of Chinese higher education. J Educ Policy 1–18. https://doi.org/10.1080/02680939.2023.2288339

Zhang X, Reindl S, Tian H, Gou M, Song R, Zhao T, Jandrić P (2022) Open science in China: openness, economy, freedom & innovation. Educ Philos Theory 55(4):432–445. https://doi.org/10.1080/00131857.2022.2122440

Zhang L, Downs RR, Li J, Wen L, Li C (2021) A review of open research data policies and practices in China. Data Sci J. https://doi.org/10.5334/dsj-2021-003

Zuiderwijk A, Spiers H (2019) Sharing and re-using open data: a case study of motivations in astrophysics. Int J Inf Manag 49:228–241. https://doi.org/10.1016/j.ijinfomgt.2019.05.024

Zuiderwijk A, Türk BO, Brazier F (2024) Identifying the most important facilitators of open research data sharing and reuse in Epidemiology: a mixed-methods study. PloS One 19(2):e0297969. https://doi.org/10.1371/journal.pone.0297969

Acknowledgements

The authors acknowledge the support of the Beijing Municipal Social Science Foundation under Grant [No. 22ZXC008].

Author information

Authors and affiliations

Department of Information Management, Peking University, Beijing, China

Jinya Liu & Huichuan Xia

National Science Library, Chinese Academy of Sciences, Beijing, China

Kunhua Zhao & Liping Gu

Department of Information Resource Management, School of Economics and Management, University of Chinese Academy of Sciences, Beijing, China

Contributions

JL: conceptualization, methodology, data collection, formal analysis, original draft, writing, and editing. KZ: review, data collection, and editing. LG: data collection and editing. HX: conceptualization, methodology, formal analysis, writing, editing, and paper finalization.

Corresponding author

Correspondence to Huichuan Xia .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This study was reviewed and approved by the Institutional Review Board of the Institute of Psychology, Chinese Academy of Sciences. All methods were carried out following the relevant guidelines and regulations. The ethical approval number of this study is H23162.

Informed consent

Informed consent is a critical part of ensuring that participants are fully aware of the nature of the research and their involvement in it. Our informed consent form therefore provided participants with adequate information about the purpose of the research, the methods of participant involvement, the intended use of the results, their rights as participants, and any potential risks. Before we began our interviews, we clearly explained the content of the informed consent form to our participants, gave them ample time to read it, and thoroughly addressed any questions they had about it. All participants carefully read and agreed to the informed consent form.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Data set 1109, Data set 1352, Data set 37510

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Liu, J., Zhao, K., Gu, L. et al. To share or not to share, that is the question: a qualitative study of Chinese astronomers’ perceptions, practices, and hesitations about open data sharing. Humanit Soc Sci Commun 11, 1063 (2024). https://doi.org/10.1057/s41599-024-03570-9

Received: 16 November 2023

Accepted: 09 August 2024

Published: 22 August 2024

DOI: https://doi.org/10.1057/s41599-024-03570-9


  12. Chapter 13: Interviews

    What are interviews? An interviewing method is the most commonly used data collection technique in qualitative research. 1 The purpose of an interview is to explore the experiences, understandings, opinions and motivations of research participants. 2 Interviews are conducted one-on-one with the researcher and the participant. Interviews are most appropriate when seeking to understand a ...

  13. PDF Strategies for Qualitative Interviews

    Gentle: lets people finish; gives them time to think; tolerates pauses. 5. Sensitive: listens attentively to what is said and how it is said; is empathetic in dealing with the interviewee. 6. Open: responds to what is important to interviewee and is flexible. 7. Steering: knows what he/she wants to find out. 8.

  14. Interview Research

    Interviews as a Method for Qualitative Research (video) This short video summarizes why interviews can serve as useful data in qualitative research. InterViews by Steinar Kvale Interviewing is an essential tool in qualitative research and this introduction to interviewing outlines both the theoretical underpinnings and the practical aspects of ...

  15. Qualitative Interviewing

    Qualitative interviewing is a foundational method in qualitative research and is widely used in health research and the social sciences. Both qualitative semi-structured and in-depth unstructured interviews use verbal communication, mostly in face-to-face interactions, to collect data about the attitudes, beliefs, and experiences of participants.

  16. Qualitative research method-interviewing and observation

    Interviewing. This is the most common format of data collection in qualitative research. According to Oakley, qualitative interview is a type of framework in which the practices and standards be not only recorded, but also achieved, challenged and as well as reinforced.[] As no research interview lacks structure[] most of the qualitative research interviews are either semi-structured, lightly ...

  17. (PDF) Interviewing in qualitative research

    The main purpose of the case study in this research is to test and get feedback on the established assessment model. Case study data collection used the one-on-one interview method by Ryan et al ...

  18. Structured Interview

    Revised on June 22, 2023. A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is one of four types of interviews. In research, structured interviews are often quantitative in nature. They can also be used in qualitative research if the questions are open-ended, but ...

  19. Semi-Structured Interview: Explanation, Examples, & How-To

    A semi-structured interview is a qualitative research method used to gain an in-depth understanding of the respondent's feelings and beliefs on specific topics. As the interviewer prepares the questions ahead of time, they can adjust the order, skip any that are redundant, or create new ones. Additionally, the interviewer should be prepared to ...

  20. Qualitative interview schedule: first time point

    Appendix 1 Qualitative interview schedule: first time point. Evaluative stakeholder interview schedule (A) Preamble to interviews. Provide standard information form to read. Prompt for clarity/questions. ... This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be ...

  21. Interview Schedule: Definition, Types, Templates and Tips

    However, the interview schedule must have three major parts: 1. Opening. Some researchers call this stage the "warm-up", where the objective is to create an atmosphere that will accommodate the open and free flow of ideas between the interviewer and interviewee, whether it is one-on-one or in a group.

  22. Defining Qualitative Research: What Is It and How to Use

    Marketing Research Analyze in-depth interviews, focus groups, and other qualitative research. Financial Services Analyze financial interviews and drive smarter investment decisions; ... Qualitative research is fundamentally about understanding the experiences and perspectives of individuals in depth. It focuses on subjective narratives and ...

  23. PDF Qualitative interview schedule

    The following questions will be asked during the interview conducted with you. The questionnaire consists of four sections and all questions will ba asked during the interview. All interviews will be recorded with a voice recorder to ensure that the correct version of your interview is transcribed. Section A - Distance Education. 1.

  24. "Because people don't know what it is, they don't really know it exists

    Interview question and guide design (appendix 2, supplementary material) drew on the six qualitative and six quantitative research-based, validated published tools used to explore similar phenomena, particularly those of O'Hara , Ryder , L'Ecuyer and Schabmann et al. .

  25. Updating a conceptual model of effective symptom management in

    We continued interviews based on the interview schedule but without the use of prompt cards. EC is a female, non-clinical senior research fellow in palliative care. She has experience of qualitative interviews and led the development of the original HCP-based model of effective symptom management . Audio recordings were transcribed verbatim by ...

  26. To share or not to share, that is the question: a qualitative study of

    To address those research questions, we conducted a qualitative study comprising 14 semi-structured interviews and 136 open-ended survey responses with Chinese astronomers to understand their ...