
Content Analysis | Guide, Methods & Examples

Published on July 18, 2019 by Amy Luo. Revised on June 22, 2023.

Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual:

  • Books, newspapers and magazines
  • Speeches and interviews
  • Web content and social media posts
  • Photographs and films

Content analysis can be both quantitative (focused on counting and measuring) and qualitative (focused on interpreting and understanding). In both types, you categorize or “code” words, themes, and concepts within the texts and then analyze the results.

Table of contents

  • What is content analysis used for?
  • Advantages of content analysis
  • Disadvantages of content analysis
  • How to conduct content analysis
  • Other interesting articles

Researchers use content analysis to find out about the purposes, messages, and effects of communication content. They can also make inferences about the producers and audience of the texts they analyze.

Content analysis can be used to quantify the occurrence of certain words, phrases, subjects or concepts in a set of historical or contemporary texts.

Quantitative content analysis example

To research the importance of employment issues in political campaigns, you could analyze campaign speeches for the frequency of terms such as unemployment, jobs, and work, and use statistical analysis to find differences over time or between candidates.
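As a rough sketch of this kind of frequency count, the snippet below tallies a list of employment terms in two speech fragments. All names, texts, and the term list are invented for illustration; a real study would load full transcripts and feed the counts into statistical software.

```python
import re
from collections import Counter

# Hypothetical target terms for the employment theme.
TERMS = {"unemployment", "jobs", "job", "work", "workers"}

def term_frequencies(text, terms=TERMS):
    """Tally occurrences of the target employment terms in one speech."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in terms)

# Invented fragments standing in for real campaign transcripts.
speeches = {
    "candidate_a": "Jobs are my priority. Unemployment must fall.",
    "candidate_b": "We will protect work and honor our workers.",
}
for name, text in speeches.items():
    print(name, dict(term_frequencies(text)))
```

The resulting per-speech counts could then be compared across years or candidates, for example with a chi-square test.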

In addition, content analysis can be used to make qualitative inferences by analyzing the meaning and semantic relationship of words and concepts.

Qualitative content analysis example

To gain a more qualitative understanding of employment issues in political campaigns, you could locate the word unemployment in speeches, identify what other words or phrases appear next to it (such as economy, inequality, or laziness), and analyze the meanings of these relationships to better understand the intentions and targets of different campaigns.
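One simple way to operationalize this is collocate counting: collecting the words that appear within a small window around the keyword. The sketch below assumes plain-text input and a five-word window; both the window size and the example fragment are arbitrary choices for illustration.

```python
import re
from collections import Counter

def collocates(text, keyword="unemployment", window=5):
    """Count words appearing within `window` tokens of the keyword."""
    tokens = re.findall(r"[a-z']+", text.lower())
    near = Counter()
    for i, tok in enumerate(tokens):
        if tok == keyword:
            lo, hi = max(0, i - window), i + window + 1
            near.update(t for t in tokens[lo:hi] if t != keyword)
    return near

# An invented fragment standing in for a real campaign speech.
speech = ("Unemployment reflects a failing economy. "
          "Rising unemployment breeds inequality, not laziness.")
print(collocates(speech).most_common(3))
```

The most frequent collocates then become the starting point for interpreting how each campaign frames the issue.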

Because content analysis can be applied to a broad range of texts, it is used in a variety of fields, including marketing, media studies, anthropology, cognitive science, psychology, and many social science disciplines. It has various possible goals:

  • Finding correlations and patterns in how concepts are communicated
  • Understanding the intentions of an individual, group or institution
  • Identifying propaganda and bias in communication
  • Revealing differences in communication in different contexts
  • Analyzing the consequences of communication content, such as the flow of information or audience responses


Advantages of content analysis

  • Unobtrusive data collection

You can analyze communication and social interaction without the direct involvement of participants, so your presence as a researcher doesn’t influence the results.

  • Transparent and replicable

When done well, content analysis follows a systematic procedure that can easily be replicated by other researchers, yielding results with high reliability.

  • Highly flexible

You can conduct content analysis at any time, in any location, and at low cost – all you need is access to the appropriate sources.

Disadvantages of content analysis

  • Reductive

Focusing on words or phrases in isolation can sometimes be overly reductive, disregarding context, nuance, and ambiguous meanings.

  • Subjective

Content analysis almost always involves some level of subjective interpretation, which can affect the reliability and validity of the results and conclusions, leading to various types of research bias and cognitive bias.

  • Time intensive

Manually coding large volumes of text is extremely time-consuming, and it can be difficult to automate effectively.

If you want to use content analysis in your research, you need to start with a clear, direct research question.

Example research question for content analysis

Is there a difference in how the US media represents younger politicians compared to older ones in terms of trustworthiness?

Next, you follow these five steps.

1. Select the content you will analyze

Based on your research question, choose the texts that you will analyze. You need to decide:

  • The medium (e.g. newspapers, speeches or websites) and genre (e.g. opinion pieces, political campaign speeches, or marketing copy)
  • The inclusion and exclusion criteria (e.g. newspaper articles that mention a particular event, speeches by a certain politician, or websites selling a specific type of product)
  • The parameters in terms of date range, location, etc.

If only a small number of texts meet your criteria, you might analyze all of them. If there is a large volume of texts, you can select a sample.

2. Define the units and categories of analysis

Next, you need to determine the level at which you will analyze your chosen texts. This means defining:

  • The unit(s) of meaning that will be coded. For example, are you going to record the frequency of individual words and phrases, the characteristics of people who produced or appear in the texts, the presence and positioning of images, or the treatment of themes and concepts?
  • The set of categories that you will use for coding. Categories can be objective characteristics (e.g. aged 30–40, lawyer, parent) or more conceptual (e.g. trustworthy, corrupt, conservative, family oriented).

Your units of analysis are the politicians who appear in each article and the words and phrases that are used to describe them. Based on your research question, you categorize each politician by age and code how trustworthiness is represented. To get more detailed data, you also code for other categories, such as the political party and marital status of each politician mentioned.

3. Develop a set of rules for coding

Coding involves organizing the units of meaning into the previously defined categories. Especially with more conceptual categories, it’s important to clearly define the rules for what will and won’t be included to ensure that all texts are coded consistently.

Coding rules are especially important if multiple researchers are involved, but even if you’re coding all of the text by yourself, recording the rules makes your method more transparent and reliable.

In considering the category “younger politician,” you decide which titles will be coded with this category (senator, governor, councilor, mayor). With “trustworthy,” you decide which specific words or phrases related to trustworthiness (e.g. honest and reliable) will be coded in this category.
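Recording these rules as an explicit codebook makes them easy to share with co-coders and to apply consistently. A minimal sketch follows; the term lists are invented, and a real codebook would be far richer and would handle multi-word phrases and context.

```python
# Hypothetical codebook: each category maps to the terms that trigger it.
CODEBOOK = {
    "younger_politician": {"senator", "governor", "councilor", "mayor"},
    "trustworthy": {"honest", "reliable", "trustworthy", "dependable"},
}

def apply_codes(tokens, codebook=CODEBOOK):
    """Return the set of categories triggered by a tokenized text segment."""
    return {code for code, terms in codebook.items()
            if any(t in terms for t in tokens)}

print(apply_codes(["the", "honest", "young", "mayor"]))
```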

4. Code the text according to the rules

You go through each text and record all relevant data in the appropriate categories. This can be done manually or aided with computer programs, such as QSR NVivo, ATLAS.ti, and Diction, which can help speed up the process of counting and categorizing words and phrases.

Following your coding rules, you examine each newspaper article in your sample. You record the characteristics of each politician mentioned, along with all words and phrases related to trustworthiness that are used to describe them.

5. Analyze the results and draw conclusions

Once coding is complete, the collected data is examined to find patterns and draw conclusions in response to your research question. You might use statistical analysis to find correlations or trends, discuss your interpretations of what the results mean, and make inferences about the creators, context and audience of the texts.

Let’s say the results reveal that words and phrases related to trustworthiness appeared in the same sentence as an older politician more frequently than they did in the same sentence as a younger politician. From these results, you conclude that national newspapers present older politicians as more trustworthy than younger politicians, and infer that this might have an effect on readers’ perceptions of younger people in politics.


If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Cite this Scribbr article


Luo, A. (2023, June 22). Content Analysis | Guide, Methods & Examples. Scribbr. Retrieved September 9, 2024, from https://www.scribbr.com/methodology/content-analysis/



Chapter 17. Content Analysis

Introduction

Content analysis is a term that is used to mean both a method of data collection and a method of data analysis. Archival and historical works can be the source of content analysis, but so too can the contemporary media coverage of a story, blogs, comment posts, films, cartoons, advertisements, brand packaging, and photographs posted on Instagram or Facebook. Really, almost anything can be the “content” to be analyzed. This is a qualitative research method because the focus is on the meanings and interpretations of that content rather than strictly numerical counts or variables-based causal modeling. [1] Qualitative content analysis (sometimes referred to as QCA) is particularly useful when attempting to define and understand prevalent stories or communication about a topic of interest—in other words, when we are less interested in what particular people (our defined sample) are doing or believing and more interested in what general narratives exist about a particular topic or issue. This chapter will explore different approaches to content analysis and provide helpful tips on how to collect data, how to turn that data into codes for analysis, and how to go about presenting what is found through analysis. It is also a nice segue between our data collection methods (e.g., interviewing, observation) chapters and chapters 18 and 19, whose focus is on coding, the primary means of data analysis for most qualitative data. In many ways, the methods of content analysis are quite similar to the method of coding.


Although the body of material (“content”) to be collected and analyzed can be nearly anything, most qualitative content analysis is applied to forms of human communication (e.g., media posts, news stories, campaign speeches, advertising jingles). The point of the analysis is to understand this communication, to systematically and rigorously explore its meanings, assumptions, themes, and patterns. Historical and archival sources may be the subject of content analysis, but there are other ways to analyze (“code”) this data when not overly concerned with the communicative aspect (see chapters 18 and 19). This is why we tend to consider content analysis its own method of data collection as well as a method of data analysis. Still, many of the techniques you learn in this chapter will be helpful to any “coding” scheme you develop for other kinds of qualitative data. Just remember that content analysis is a particular form with distinct aims and goals and traditions.

An Overview of the Content Analysis Process

The First Step: Selecting Content

Figure 17.1 displays possible content for content analysis. The first step in content analysis is making smart decisions about what content you want to analyze and clearly connecting this content to your research question or general focus of research. Why are you interested in the messages conveyed in this particular content? What will the identification of patterns here help you understand? Content analysis can be fun to do, but in order to make it research, you need to fit it into a research plan.

News stories Blogs Comment posts Lyrics
Letters to editor Films Cartoons Advertisements
Brand packaging Logos Instagram photos Tweets
Photographs Graffiti Street signs Personalized license plates
Avatars (names, shapes, presentations) Nicknames Band posters Building names

Figure 17.1. A Non-exhaustive List of "Content" for Content Analysis

To take one example, let us imagine you are interested in gender presentations in society and how presentations of gender have changed over time. There are various forms of content out there that might help you document changes. You could, for example, begin by creating a list of magazines that are coded as being for “women” (e.g., Women’s Daily Journal) and magazines that are coded as being for “men” (e.g., Men’s Health). You could then select a date range that is relevant to your research question (e.g., 1950s–1970s) and collect magazines from that era. You might create a “sample” by deciding to look at three issues for each year in the date range, along with a systematic plan for what to look at in those issues (e.g., advertisements? cartoons? titles of articles? whole articles?). You are not just going to look at some magazines willy-nilly; that would not be systematic enough to allow anyone to replicate or check your findings later on. Once you have a clear plan of what content is of interest to you and what you will be looking at, you can begin, creating a record of everything you include as your content. This might mean a list of each advertisement or each story title you look at, along with its publication date. You may decide to include multiple kinds of content in your research plan. For each kind of content, you want a clear plan for collecting, sampling, and documenting.
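A sampling plan like this becomes replicable when the sample is drawn programmatically with a fixed random seed. The sketch below draws three issues per year for each magazine; the magazine names come from the example above, while the monthly publication schedule and the seed value are assumptions for illustration.

```python
import random

MAGAZINES = ["Women's Daily Journal", "Men's Health"]
YEARS = range(1950, 1980)      # the 1950s-1970s date range
ISSUES_PER_YEAR = 12           # assuming monthly publication

def draw_sample(seed=42, per_year=3):
    """Draw a reproducible sample of (magazine, year, issue) triples."""
    rng = random.Random(seed)  # fixed seed: anyone can recreate the sample
    sample = []
    for mag in MAGAZINES:
        for year in YEARS:
            issues = rng.sample(range(1, ISSUES_PER_YEAR + 1), per_year)
            sample.extend((mag, year, issue) for issue in sorted(issues))
    return sample

print(len(draw_sample()))  # 2 magazines x 30 years x 3 issues = 180
```

Publishing the seed alongside the plan lets other researchers regenerate exactly the same issue list and check the findings.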

The Second Step: Collecting and Storing

Once you have a plan, you are ready to collect your data. This may entail downloading from the internet, creating a Word document or PDF of each article or picture, and storing these in a folder designated by the source and date (e.g., “Men’s Health advertisements, 1950s”). Sølvberg (2021), for example, collected posted job advertisements for three kinds of elite jobs (economic, cultural, professional) in Sweden. But collecting might also mean going out and taking photographs yourself, as in the case of graffiti, street signs, or even what people are wearing. Chaise LaDousa, an anthropologist and linguist, took photos of “house signs,” which are signs, often creative and sometimes offensive, hung by college students living in communal off-campus houses. These signs were a focal point of college culture, sending messages about the values of the students living in them. Some of the names will give you an idea: “Boot ’n Rally,” “The Plantation,” “Crib of the Rib.” The students might find these signs funny and benign, but LaDousa (2011) argued convincingly that they also reproduced racial and gender inequalities. The data here already existed—they were big signs on houses—but the researcher had to collect the data by taking photographs.

In some cases, your content will be in physical form but not amenable to photographing, as in the case of films or unwieldy physical artifacts you find in the archives (e.g., undigitized meeting minutes or scrapbooks). In this case, you need to create some kind of detailed log (fieldnotes even) of the content that you can reference. In the case of films, this might mean watching the film and writing down details for key scenes that become your data. [2] For scrapbooks, it might mean taking notes on what you are seeing, quoting key passages, describing colors or presentation style. As you might imagine, this can take a lot of time. Be sure you budget this time into your research plan.

Researcher Note

A note on data scraping : Data scraping, sometimes known as screen scraping or frame grabbing, is a way of extracting data generated by another program, as when a scraping tool grabs information from a website. This may help you collect data that is on the internet, but you need to be ethical in how to employ the scraper. A student once helped me scrape thousands of stories from the Time magazine archives at once (although it took several hours for the scraping process to complete). These stories were freely available, so the scraping process simply sped up the laborious process of copying each article of interest and saving it to my research folder. Scraping tools can sometimes be used to circumvent paywalls. Be careful here!

The Third Step: Analysis

There is often an assumption among novice researchers that once you have collected your data, you are ready to write about what you have found. Actually, you haven’t yet found anything, and if you try to write up your results, you will probably be staring sadly at a blank page. Between the collection and the writing comes the difficult task of systematically and repeatedly reviewing the data in search of patterns and themes that will help you interpret the data, particularly its communicative aspect (e.g., What is it that is being communicated here, with these “house signs” or in the pages of Men’s Health ?).

The first time you go through the data, keep an open mind on what you are seeing (or hearing), and take notes about your observations that link up to your research question. In the beginning, it can be difficult to know what is relevant and what is extraneous. Sometimes, your research question changes based on what emerges from the data. Use the first round of review to consider this possibility, but then commit yourself to following a particular focus or path. If you are looking at how gender gets made or re-created, don’t follow the white rabbit down a hole about environmental injustice unless you decide that this really should be the focus of your study or that issues of environmental injustice are linked to gender presentation. In the second round of review, be very clear about emerging themes and patterns. Create codes (more on these in chapters 18 and 19) that will help you simplify what you are noticing. For example, “men as outdoorsy” might be a common trope you see in advertisements. Whenever you see this, mark the passage or picture. In your third (or fourth or fifth) round of review, begin to link up the tropes you’ve identified, looking for particular patterns and assumptions. You’ve drilled down to the details, and now you are building back up to figure out what they all mean. Start thinking about theory—either theories you have read about and are using as a frame of your study (e.g., gender as performance theory) or theories you are building yourself, as in the Grounded Theory tradition. Once you have a good idea of what is being communicated and how, go back to the data at least one more time to look for disconfirming evidence. Maybe you thought “men as outdoorsy” was of importance, but when you look hard, you note that women are presented as outdoorsy just as often. You just hadn’t paid attention. 
It is very important, as any kind of researcher but particularly as a qualitative researcher, to test yourself and your emerging interpretations in this way.

The Fourth and Final Step: The Write-Up

Only after you have fully completed analysis, with its many rounds of review and analysis, will you be able to write about what you found. The interpretation exists not in the data but in your analysis of the data. Before writing your results, you will want to very clearly describe how you chose the data here and all the possible limitations of this data (e.g., historical-trace problem or power problem; see chapter 16). Acknowledge any limitations of your sample. Describe the audience for the content, and discuss the implications of this. Once you have done all of this, you can put forth your interpretation of the communication of the content, linking to theory where doing so would help your readers understand your findings and what they mean more generally for our understanding of how the social world works. [3]

Analyzing Content: Helpful Hints and Pointers

Although every data set is unique and each researcher will have a different and unique research question to address with that data set, there are some common practices and conventions. When reviewing your data, what do you look at exactly? How will you know if you have seen a pattern? How do you note or mark your data?

Let’s start with the last question first. If your data is stored digitally, there are various ways you can highlight or mark up passages. You can, of course, do this with literal highlighters, pens, and pencils if you have print copies. But there are also qualitative software programs to help you store the data, retrieve the data, and mark the data. This can simplify the process, although it cannot do the work of analysis for you.

Qualitative software can be very expensive, so the first thing to do is to find out if your institution (or program) has a universal license its students can use. If they do not, most programs have special student licenses that are less expensive. The two most used programs at this moment are probably ATLAS.ti and NVivo. Both can cost more than $500 [4] but provide everything you could possibly need for storing data, content analysis, and coding. They also have a lot of customer support, and you can find many official and unofficial tutorials on how to use the programs’ features on the web. Dedoose, created by academic researchers at UCLA, is a decent program that lacks many of the bells and whistles of the two big programs. Instead of paying all at once, you pay monthly, as you use the program. The monthly fee is relatively affordable (less than $15), so this might be a good option for a small project. HyperRESEARCH is another basic program created by academic researchers, and it is free for small projects (those that have limited cases and material to import). You can pay a monthly fee if your project expands past the free limits. I have personally used all four of these programs, and they each have their pluses and minuses.

Regardless of which program you choose, you should know that none of them will actually do the hard work of analysis for you. They are incredibly useful for helping you store and organize your data, and they provide abundant tools for marking, comparing, and coding your data so you can make sense of it. But making sense of it will always be your job alone.

So let’s say you have some software, and you have uploaded all of your content into the program: video clips, photographs, transcripts of news stories, articles from magazines, even digital copies of college scrapbooks. Now what do you do? What are you looking for? How do you see a pattern? The answers to these questions will depend partially on the particular research question you have, or at least the motivation behind your research. Let’s go back to the idea of looking at gender presentations in magazines from the 1950s to the 1970s. Here are some things you can look at and code in the content: (1) actions and behaviors, (2) events or conditions, (3) activities, (4) strategies and tactics, (5) states or general conditions, (6) meanings or symbols, (7) relationships/interactions, (8) consequences, and (9) settings. Table 17.1 lists these with examples from our gender presentation study.

Table 17.1. Examples of What to Note During Content Analysis

What can be noted/coded Example from Gender Presentation Study
Actions and behaviors
Events or conditions
Activities
Strategies and tactics
States/conditions
Meanings/symbols
Relationships/interactions
Consequences
Settings

One thing to note about the examples in table 17.1: sometimes we note (mark, record, code) a single example, while other times, as in “settings,” we are recording a recurrent pattern. To help you spot patterns, it is useful to mark every setting, including a notation on gender. Using software can help you do this efficiently. You can then call up “setting by gender” and note this emerging pattern. There’s an element of counting here, which we normally think of as quantitative data analysis, but we are using the count to identify a pattern that will be used to help us interpret the communication. Content analyses often include counting as part of the interpretive (qualitative) process.
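The “setting by gender” tally described above amounts to a small cross-tabulation, which is easy to produce once each coded observation is stored as a record. A sketch with invented coded records:

```python
from collections import Counter

# Invented coded records: one per advertisement in the magazine sample.
records = [
    {"gender": "men", "setting": "outdoors"},
    {"gender": "men", "setting": "outdoors"},
    {"gender": "women", "setting": "kitchen"},
    {"gender": "women", "setting": "outdoors"},
]

# Count each (setting, gender) pair to surface a possible pattern.
crosstab = Counter((r["setting"], r["gender"]) for r in records)
for (setting, gender), n in sorted(crosstab.items()):
    print(f"{setting:10s} {gender:6s} {n}")
```

The counts themselves are not the finding; they point you toward passages worth rereading and interpreting.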

In your own study, you may not need or want to look at all of the elements listed in table 17.1. Even in our imagined example, some are more useful than others. For example, “strategies and tactics” is a bit of a stretch here. In studies that are looking specifically at, say, policy implementation or social movements, this category will prove much more salient.

Another way to think about “what to look at” is to consider aspects of your content in terms of units of analysis. You can drill down to the specific words used (e.g., the adjectives commonly used to describe “men” and “women” in your magazine sample) or move up to the more abstract level of concepts used (e.g., the idea that men are more rational than women). Counting for the purpose of identifying patterns is particularly useful here. How many times is the idea of women’s irrationality communicated? How is it communicated (in comic strips, fictional stories, editorials, etc.)? Does the incidence of the concept change over time? Perhaps the “irrational woman” was everywhere in the 1950s, but by the 1970s, it is no longer showing up in stories and comics. By tracing its usage and prevalence over time, you might come up with a theory or story about gender presentation during the period. Table 17.2 provides more examples of using different units of analysis for this work along with suggestions for effective use.

Table 17.2. Examples of Unit of Analysis in Content Analysis

Unit of Analysis How Used...
Words
Themes
Characters
Paragraphs
Items
Concepts
Semantics
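Tracing a concept’s incidence over time, as described above, is again a matter of counting coded items by period. A sketch with invented (year, concept) records, grouped by decade:

```python
from collections import Counter

# Invented coded items: (year, concept) pairs from the magazine sample.
coded_items = [
    (1955, "irrational_woman"), (1956, "irrational_woman"),
    (1958, "irrational_woman"), (1964, "irrational_woman"),
    (1972, "rational_woman"),
]

def incidence_by_decade(items, concept):
    """Count how often a concept was coded in each decade."""
    decades = Counter((year // 10) * 10 for year, c in items if c == concept)
    return dict(sorted(decades.items()))

print(incidence_by_decade(coded_items, "irrational_woman"))
```

A falling count across decades would be the numerical trace of the interpretive story about changing gender presentation.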

Every qualitative content analysis is unique in its particular focus and particular data used, so there is no single correct way to approach analysis. You should have a better idea, however, of what kinds of things to look for and how to go about looking for them. The next two chapters will take you further into the coding process, the primary analytical tool for qualitative research in general.

Further Readings

Cidell, Julie. 2010. “Content Clouds as Exploratory Qualitative Data Analysis.” Area 42(4):514–523. A demonstration of using visual “content clouds” as a form of exploratory qualitative data analysis using transcripts of public meetings and content of newspaper articles.

Hsieh, Hsiu-Fang, and Sarah E. Shannon. 2005. “Three Approaches to Qualitative Content Analysis.” Qualitative Health Research 15(9):1277–1288. Distinguishes three distinct approaches to QCA: conventional, directed, and summative. Uses hypothetical examples from end-of-life care research.

Jackson, Romeo, Alex C. Lange, and Antonio Duran. 2021. “A Whitened Rainbow: The In/Visibility of Race and Racism in LGBTQ Higher Education Scholarship.” Journal Committed to Social Change on Race and Ethnicity (JCSCORE) 7(2):174–206.* Using a “critical summative content analysis” approach, examines research published on LGBTQ people between 2009 and 2019.

Krippendorff, Klaus. 2018. Content Analysis: An Introduction to Its Methodology . 4th ed. Thousand Oaks, CA: SAGE. A very comprehensive textbook on both quantitative and qualitative forms of content analysis.

Mayring, Philipp. 2022. Qualitative Content Analysis: A Step-by-Step Guide . Thousand Oaks, CA: SAGE. Formulates an eight-step approach to QCA.

Messinger, Adam M. 2012. “Teaching Content Analysis through ‘Harry Potter.’” Teaching Sociology 40(4):360–367. This is a fun example of a relatively brief foray into content analysis using the music found in Harry Potter films.

Neuendorf, Kimberly A. 2002. The Content Analysis Guidebook . Thousand Oaks, CA: SAGE. Although a helpful guide to content analysis in general, be warned that this textbook definitely favors quantitative over qualitative approaches to content analysis.

Schreier, Margrit. 2012. Qualitative Content Analysis in Practice . Thousand Oaks, CA: SAGE. Arguably the most accessible guidebook for QCA, written by a professor based in Germany.

Weber, Matthew A., Shannon Caplan, Paul Ringold, and Karen Blocksom. 2017. “Rivers and Streams in the Media: A Content Analysis of Ecosystem Services.” Ecology and Society 22(3).* Examines the content of a blog hosted by National Geographic and articles published in The New York Times and the Wall Street Journal for stories on rivers and streams (e.g., water quality, flooding).

  • There are ways of handling content analysis quantitatively, however. Some practitioners therefore specify qualitative content analysis (QCA). In this chapter, all content analysis is QCA unless otherwise noted. ↵
  • Note that some qualitative software allows you to upload whole films or film clips for coding. You will still have to get access to the film, of course. ↵
  • See chapter 20 for more on the final presentation of research. ↵
  • Actually, ATLAS.ti is an annual license, while NVivo is a perpetual license, but both are going to cost you at least $500 to use. Student rates may be lower. And don’t forget to ask your institution or program if they already have a software license you can use. ↵

A method of both data collection and data analysis in which a given content (textual, visual, graphic) is examined systematically and rigorously to identify meanings, themes, patterns, and assumptions. Qualitative content analysis (QCA) is concerned with gathering and interpreting an existing body of material.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Content Analysis

Content analysis is a research tool used to determine the presence of certain words, themes, or concepts within some given qualitative data (i.e. text). Using content analysis, researchers can quantify and analyze the presence, meanings, and relationships of such words, themes, or concepts. As an example, researchers can evaluate language used within a news article to search for bias or partiality. Researchers can then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time surrounding the text.

Description

Sources of data could be interviews, open-ended questions, field research notes, conversations, or literally any occurrence of communicative language (such as books, essays, discussions, newspaper headlines, speeches, media, or historical documents). A single study may analyze various forms of text in its analysis. To analyze the text using content analysis, the text must be coded, or broken down, into manageable units for analysis (i.e. “codes”). Once the text is coded, the codes can then be grouped into broader code categories to summarize the data even further.

Three different definitions of content analysis are provided below.

Definition 1: “Any technique for making inferences by systematically and objectively identifying special characteristics of messages.” (from Holsti, 1968)

Definition 2: “An interpretive and naturalistic approach. It is both observational and narrative in nature and relies less on the experimental elements normally associated with scientific research (reliability, validity, and generalizability).” (from Ethnography, Observational Research, and Narrative Inquiry, 1994-2012)

Definition 3: “A research technique for the objective, systematic and quantitative description of the manifest content of communication.” (from Berelson, 1952)

Uses of Content Analysis

Identify the intentions, focus or communication trends of an individual, group or institution

Describe attitudinal and behavioral responses to communications

Determine the psychological or emotional state of persons or groups

Reveal international differences in communication content

Reveal patterns in communication content

Pre-test and improve an intervention or survey prior to launch

Analyze focus group interviews and open-ended questions to complement quantitative data

Types of Content Analysis

There are two general types of content analysis: conceptual analysis and relational analysis. Conceptual analysis determines the existence and frequency of concepts in a text. Relational analysis develops the conceptual analysis further by examining the relationships among concepts in a text. Each type of analysis may lead to different results, conclusions, interpretations and meanings.

Conceptual Analysis

Typically, people think of conceptual analysis when they think of content analysis. In conceptual analysis, a concept is chosen for examination and the analysis involves quantifying and counting its presence. The main goal is to examine the occurrence of selected terms in the data. Terms may be explicit or implicit. Explicit terms are easy to identify. Coding implicit terms is more complicated: you need to decide how much implication to allow, and the judgments involved are subjective (an issue for reliability and validity). Therefore, coding of implicit terms involves using a dictionary, contextual translation rules, or both.

To begin a conceptual content analysis, first identify the research question and choose a sample or samples for analysis. Next, the text must be coded into manageable content categories. This is basically a process of selective reduction. By reducing the text to categories, the researcher can focus on and code for specific words or patterns that inform the research question.

General steps for conducting a conceptual content analysis:

1. Decide the level of analysis: word, word sense, phrase, sentence, themes

2. Decide how many concepts to code for: develop a pre-defined or interactive set of categories or concepts. Decide either: A. to allow flexibility to add categories through the coding process, or B. to stick with the pre-defined set of categories.

Option A allows for the introduction and analysis of new and important material that could have significant implications to one’s research question.

Option B allows the researcher to stay focused and examine the data for specific concepts.

3. Decide whether to code for existence or frequency of a concept. The decision changes the coding process.

When coding for the existence of a concept, the researcher would count a concept only once if it appeared at least once in the data and no matter how many times it appeared.

When coding for the frequency of a concept, the researcher would count the number of times a concept appears in a text.
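The two coding decisions can be sketched in Python as follows (the documents here are invented for illustration):

```python
import re

# Invented corpus of three short documents.
docs = [
    "Unemployment rose sharply; unemployment dominated the debate.",
    "The candidates discussed healthcare and education.",
    "Unemployment was mentioned once in passing.",
]

def tokens(doc):
    """Lowercase word tokens for one document."""
    return re.findall(r"[a-z]+", doc.lower())

concept = "unemployment"

# Existence coding: a document counts once if the concept appears at all.
existence = sum(1 for d in docs if concept in tokens(d))

# Frequency coding: every occurrence across all documents counts.
frequency = sum(tokens(d).count(concept) for d in docs)

print(existence, frequency)
```

Note how the same corpus yields different numbers under the two schemes, which is why the decision must be made before coding begins.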

4. Decide on how you will distinguish among concepts:

Should terms be coded exactly as they appear, or coded as the same when they appear in different forms? For example, “dangerous” vs. “dangerousness”. The point here is to create coding rules so that these word segments are transparently categorized in a logical fashion. The rules could make all of these word segments fall into the same category, or they could be formulated so that the researcher can distinguish these word segments into separate codes.

What level of implication is to be allowed? Words that imply the concept or words that explicitly state the concept? For example, “dangerous” vs. “the person is scary” vs. “that person could cause harm to me”. These word segments may not merit separate categories, due to the implicit meaning of “dangerous”.
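One simple way to implement such coding rules is a translation table that maps surface forms to a shared code; the rules below are invented for illustration:

```python
# Invented translation rules: each code lists the surface forms it covers.
translation_rules = {
    "DANGER": ["dangerous", "dangerousness", "danger"],
    "FEAR":   ["scary", "scared", "fear"],
}

# Invert the rules into a lookup table from word to code.
word_to_code = {w: code for code, words in translation_rules.items()
                for w in words}

def code_word(word):
    """Return the code a word falls under, or None if no rule applies."""
    return word_to_code.get(word.lower())

print(code_word("dangerousness"))  # same code as "dangerous"
print(code_word("Dangerous"))
print(code_word("harm"))           # no rule covers this word
```

Writing the rules down explicitly like this is exactly what makes the coding transparent and repeatable.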

5. Develop rules for coding your texts. After the decisions in steps 1-4 are complete, a researcher can begin developing rules for translating text into codes. This will keep the coding process organized and consistent, and the researcher can code for exactly what they want to code. Validity of the coding process is ensured when the researcher is consistent and coherent in their codes, meaning that they follow their translation rules. In content analysis, abiding by the translation rules is equivalent to validity.

6. Decide what to do with irrelevant information: should this be ignored (e.g. common English words like “the” and “and”), or used to reexamine the coding scheme in the case that it would add to the outcome of coding?

7. Code the text: This can be done by hand or by using software. By using software, researchers can input categories and have coding done automatically, quickly and efficiently, by the software program. When coding is done by hand, a researcher can recognize errors far more easily (e.g. typos, misspelling). If using computer coding, text could be cleaned of errors to include all available data. This decision of hand vs. computer coding is most relevant for implicit information where category preparation is essential for accurate coding.

8. Analyze your results: Draw conclusions and generalizations where possible. Determine what to do with irrelevant, unwanted, or unused text: reexamine, ignore, or reassess the coding scheme. Interpret results carefully as conceptual content analysis can only quantify the information. Typically, general trends and patterns can be identified.

Relational Analysis

Relational analysis begins like conceptual analysis, where a concept is chosen for examination. However, the analysis involves exploring the relationships between concepts. Individual concepts are viewed as having no inherent meaning and rather the meaning is a product of the relationships among concepts.

To begin a relational content analysis, first identify a research question and choose a sample or samples for analysis. The research question must be focused so the concept types are not open to interpretation and can be summarized. Next, select the text for analysis carefully, balancing two concerns: having enough information for a thorough analysis, so that results are not limited, against having so much information that the coding process becomes too arduous to supply meaningful and worthwhile results.

There are three subcategories of relational analysis to choose from prior to going on to the general steps.

Affect extraction: an emotional evaluation of concepts explicit in a text. A challenge to this method is that emotions can vary across time, populations, and space. However, it could be effective at capturing the emotional and psychological state of the speaker or writer of the text.

Proximity analysis: an evaluation of the co-occurrence of explicit concepts in the text. Text is defined as a string of words called a “window” that is scanned for the co-occurrence of concepts. The result is the creation of a “concept matrix”, or a group of interrelated co-occurring concepts that would suggest an overall meaning.

Cognitive mapping: a visualization technique for either affect extraction or proximity analysis. Cognitive mapping attempts to create a model of the overall meaning of the text such as a graphic map that represents the relationships between concepts.
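A minimal sketch of proximity analysis, assuming a fixed window of five words and an invented text, might look like this:

```python
import re
from itertools import combinations
from collections import Counter

# Invented text and concept list for illustration.
text = ("The economy slowed and unemployment rose. "
        "Voters blamed the economy as unemployment spread.")
concepts = {"economy", "unemployment", "voters"}
window = 5  # number of consecutive words scanned at a time

words = re.findall(r"[a-z]+", text.lower())
co_occurrence = Counter()

# Slide the window across the text; record each pair of concepts it contains.
for i in range(len(words) - window + 1):
    present = sorted(concepts & set(words[i:i + window]))
    for pair in combinations(present, 2):
        co_occurrence[pair] += 1

print(co_occurrence)
```

The resulting counts form a small concept matrix: pairs that co-occur in many windows (here, “economy” and “unemployment”) are taken to be more strongly related.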

General steps for conducting a relational content analysis:

1. Determine the type of analysis: Once the sample has been selected, the researcher needs to determine what types of relationships to examine and the level of analysis: word, word sense, phrase, sentence, or themes.

2. Reduce the text to categories and code for words or patterns. A researcher can code for the existence of meanings or words.

3. Explore the relationships between concepts: once the words are coded, the text can be analyzed for the following:

Strength of relationship: degree to which two or more concepts are related.

Sign of relationship: are concepts positively or negatively related to each other?

Direction of relationship: the types of relationship that categories exhibit. For example, “X implies Y” or “X occurs before Y” or “if X then Y” or if X is the primary motivator of Y.

4. Code the relationships: a difference between conceptual and relational analysis is that the statements or relationships between concepts are coded.

5. Perform statistical analyses: explore differences or look for relationships among the variables identified during coding.

6. Map out the representations: such as decision mapping and mental models.

Reliability and Validity

Reliability: Because of the human element in coding, errors can never be eliminated, only minimized. Generally, 80% agreement is considered an acceptable level of reliability. Three criteria comprise the reliability of a content analysis:

Stability: the tendency for coders to consistently re-code the same data in the same way over a period of time.

Reproducibility: the tendency for a group of coders to classify category membership in the same way.

Accuracy: extent to which the classification of text corresponds to a standard or norm statistically.
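As a rough illustration of the 80% threshold mentioned above, simple percent agreement between two coders can be computed like this (the codings are invented):

```python
# Two coders' classifications of the same ten text segments (invented data).
coder_a = ["POS", "NEG", "POS", "NEU", "NEG", "POS", "POS", "NEU", "NEG", "POS"]
coder_b = ["POS", "NEG", "NEU", "NEU", "NEG", "POS", "NEG", "NEU", "NEG", "POS"]

# Percent agreement: the share of segments coded identically.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

print(f"{agreement:.0%}")
```

Note that percent agreement does not correct for agreement occurring by chance; chance-corrected measures such as Cohen's kappa are often preferred in practice.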

Validity : Three criteria comprise the validity of a content analysis:

Closeness of categories: this can be achieved by utilizing multiple classifiers to arrive at an agreed upon definition of each specific category. Using multiple classifiers, a concept category that may be an explicit variable can be broadened to include synonyms or implicit variables.

Conclusions: What level of implication is allowable? Do conclusions correctly follow the data? Are results explainable by other phenomena? This becomes especially problematic when using computer software for analysis and distinguishing between synonyms. For example, the word “mine” variously denotes a possessive pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. Software can obtain an accurate count of that word’s occurrence and frequency, but it cannot produce an accurate accounting of the meaning inherent in each particular usage. This problem could throw off one’s results and make any conclusion invalid.

Generalizability of the results to a theory: dependent on the clear definitions of concept categories, how they are determined and how reliable they are at measuring the idea one is seeking to measure. Generalizability parallels reliability as much of it depends on the three criteria for reliability.

Advantages of Content Analysis

Directly examines communication using text

Allows for both qualitative and quantitative analysis

Provides valuable historical and cultural insights over time

Allows a closeness to data

Coded form of the text can be statistically analyzed

Unobtrusive means of analyzing interactions

Provides insight into complex models of human thought and language use

When done well, is considered a relatively “exact” research method

Content analysis is a readily understood and inexpensive research method

A more powerful tool when combined with other research methods such as interviews, observation, and use of archival records. It is very useful for analyzing historical material, especially for documenting trends over time.

Disadvantages of Content Analysis

Can be extremely time consuming

Is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation

Is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study

Is inherently reductive, particularly when dealing with complex texts

Tends too often to simply consist of word counts

Often disregards the context that produced the text, as well as the state of things after the text is produced

Can be difficult to automate or computerize

Textbooks & Chapters  

Berelson, Bernard. Content Analysis in Communication Research. New York: Free Press, 1952.

Busha, Charles H. and Stephen P. Harter. Research Methods in Librarianship: Techniques and Interpretation. New York: Academic Press, 1980.

de Sola Pool, Ithiel. Trends in Content Analysis. Urbana: University of Illinois Press, 1959.

Krippendorff, Klaus. Content Analysis: An Introduction to its Methodology. Beverly Hills: Sage Publications, 1980.

Fielding, NG & Lee, RM. Using Computers in Qualitative Research. SAGE Publications, 1991. (Refer to Chapter by Seidel, J. ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’.)

Methodological Articles  

Hsieh HF & Shannon SE. (2005). Three Approaches to Qualitative Content Analysis. Qualitative Health Research. 15(9): 1277-1288.

Elo S, Kaarianinen M, Kanste O, Polkki R, Utriainen K, & Kyngas H. (2014). Qualitative Content Analysis: A focus on trustworthiness. Sage Open. 4:1-10.

Application Articles  

Abroms LC, Padmanabhan N, Thaweethai L, & Phillips T. (2011). iPhone Apps for Smoking Cessation: A content analysis. American Journal of Preventive Medicine. 40(3):279-285.

Ullstrom S. Sachs MA, Hansson J, Ovretveit J, & Brommels M. (2014). Suffering in Silence: a qualitative study of second victims of adverse events. British Medical Journal, Quality & Safety Issue. 23:325-331.

Owen P. (2012). Portrayals of Schizophrenia by Entertainment Media: A Content Analysis of Contemporary Movies. Psychiatric Services. 63:655-659.

Choosing whether to conduct a content analysis by hand or by using computer software can be difficult. Refer to ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’ listed above in “Textbooks and Chapters” for a discussion of the issue.

QSR NVivo:  http://www.qsrinternational.com/products.aspx

Atlas.ti:  http://www.atlasti.com/webinars.html

R- RQDA package:  http://rqda.r-forge.r-project.org/

Rolly Constable, Marla Cowell, Sarita Zornek Crawford, David Golden, Jake Hartvigsen, Kathryn Morgan, Anne Mudgett, Kris Parrish, Laura Thomas, Erika Yolanda Thompson, Rosie Turner, and Mike Palmquist. (1994-2012). Ethnography, Observational Research, and Narrative Inquiry. Writing@CSU. Colorado State University. Available at: https://writing.colostate.edu/guides/guide.cfm?guideid=63 .

As an introduction to Content Analysis by Michael Palmquist, this is the main resource on Content Analysis on the Web. It is comprehensive, yet succinct. It includes examples and an annotated bibliography. The information contained in the narrative above draws heavily from and summarizes Michael Palmquist’s excellent resource on Content Analysis but was streamlined for the purpose of doctoral students and junior researchers in epidemiology.

At Columbia University Mailman School of Public Health, more detailed training is available through the Department of Sociomedical Sciences- P8785 Qualitative Research Methods.


What Is Qualitative Content Analysis?

QCA explained simply (with examples)

By: Jenna Crosley (PhD). Reviewed by: Dr Eunice Rautenbach (DTech) | February 2021

If you’re in the process of preparing for your dissertation, thesis or research project, you’ve probably encountered the term “ qualitative content analysis ” – it’s quite a mouthful. If you’ve landed on this post, you’re probably a bit confused about it. Well, the good news is that you’ve come to the right place…

Overview: Qualitative Content Analysis

  • What (exactly) is qualitative content analysis
  • The two main types of content analysis
  • When to use content analysis
  • How to conduct content analysis (the process)
  • The advantages and disadvantages of content analysis

1. What is content analysis?

Content analysis is a  qualitative analysis method  that focuses on recorded human artefacts such as manuscripts, voice recordings and journals. Content analysis investigates these written, spoken and visual artefacts without explicitly extracting data from participants – this is called  unobtrusive  research.

In other words, with content analysis, you don’t necessarily need to interact with participants (although you can if necessary); you can simply analyse the data that they have already produced. With this type of analysis, you can analyse data such as text messages, books, Facebook posts, videos, and audio (just to mention a few).

The basics – explicit and implicit content

When working with content analysis, explicit and implicit content will play a role. Explicit data is transparent and easy to identify, while implicit data is that which requires some form of interpretation and is often of a subjective nature. Sounds a bit fluffy? Here’s an example:

Joe: Hi there, what can I help you with? 

Lauren: I recently adopted a puppy and I’m worried that I’m not feeding him the right food. Could you please advise me on what I should be feeding? 

Joe: Sure, just follow me and I’ll show you. Do you have any other pets?

Lauren: Only one, and it tweets a lot!

In this exchange, the explicit data indicates that Joe is helping Lauren to find the right puppy food, and that Joe asks Lauren whether she has any pets aside from her puppy. This data is explicit because it requires no interpretation.

On the other hand, implicit data , in this case, includes the fact that the speakers are in a pet store. This information is not clearly stated but can be inferred from the conversation, where Joe is helping Lauren to choose pet food. An additional piece of implicit data is that Lauren likely has some type of bird as a pet. This can be inferred from the way that Lauren states that her pet “tweets”.

As you can see, explicit and implicit data both play a role in human interaction  and are an important part of your analysis. However, it’s important to differentiate between these two types of data when you’re undertaking content analysis. Interpreting implicit data can be rather subjective as conclusions are based on the researcher’s interpretation. This can introduce an element of bias , which risks skewing your results.

Explicit and implicit data both play an important role in your content analysis, but it’s important to differentiate between them.

2. The two types of content analysis

Now that you understand the difference between implicit and explicit data, let’s move on to the two general types of content analysis : conceptual and relational content analysis. Importantly, while conceptual and relational content analysis both follow similar steps initially, the aims and outcomes of each are different.

Conceptual analysis focuses on the number of times a concept occurs in a set of data and is generally focused on explicit data. For example, if you were to have the following conversation:

Marie: She told me that she has three cats.

Jean: What are her cats’ names?

Marie: I think the first one is Bella, the second one is Mia, and… I can’t remember the third cat’s name.

In this data, you can see that the word “cat” (in its various forms) has been used three times. Through conceptual content analysis, you can deduce that cats are the central topic of the conversation. You can also perform a frequency analysis , where you assess the term’s frequency in the data. For example, in the exchange above, the word “cat” makes up 9% of the data. In other words, conceptual analysis brings a little bit of quantitative analysis into your qualitative analysis.
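This frequency calculation can be reproduced in a few lines of Python (treating any word beginning with “cat” as a mention):

```python
import re

# The cat conversation from the example above.
dialogue = ("She told me that she has three cats. "
            "What are her cats' names? "
            "I think the first one is Bella, the second one is Mia, "
            "and I can't remember the third cat's name.")

# Tokenize into lowercase words, keeping apostrophes (cats', cat's).
words = re.findall(r"[a-z']+", dialogue.lower())

# Count mentions of the concept and its share of all words.
cat_mentions = sum(1 for w in words if w.startswith("cat"))
share = cat_mentions / len(words)

print(cat_mentions, f"{share:.0%}")
```

Three mentions out of 33 words works out to roughly 9%, matching the figure above.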

As you can see, the above data is without interpretation and focuses on explicit data . Relational content analysis, on the other hand, takes a more holistic view by focusing more on implicit data in terms of context, surrounding words and relationships.

There are three types of relational analysis:

  • Affect extraction
  • Proximity analysis
  • Cognitive mapping

Affect extraction is when you assess concepts according to emotional attributes. These emotions are typically mapped on scales, such as a Likert scale or a rating scale ranging from 1 to 5, where 1 is “very sad” and 5 is “very happy”.

If participants are talking about their achievements, they are likely to be given a score of 4 or 5, depending on how good they feel about it. If a participant is describing a traumatic event, they are likely to have a much lower score, either 1 or 2.
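A rough sketch of affect extraction, assuming an invented lexicon that maps words onto the 1-to-5 scale described above:

```python
# Invented affect lexicon: words mapped onto the 1 ("very sad") to
# 5 ("very happy") scale described above.
affect_lexicon = {
    "proud": 5, "happy": 5, "achieved": 4,
    "worried": 2, "traumatic": 1, "loss": 1,
}

def affect_score(utterance):
    """Average the scale values of any lexicon words present (None if none)."""
    hits = [v for w, v in affect_lexicon.items() if w in utterance.lower()]
    return sum(hits) / len(hits) if hits else None

print(affect_score("I was so proud of what I achieved"))  # high score
print(affect_score("It was a traumatic loss"))            # low score
```

A real lexicon would be far larger and validated; the point here is just the mechanic of mapping concepts onto an emotional scale.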

Proximity analysis identifies explicit terms (such as those found in a conceptual analysis) and the patterns in terms of how they co-occur in a text. In other words, proximity analysis investigates the relationship between terms and aims to group these to extract themes and develop meaning.

Proximity analysis is typically utilised when you’re looking for hard facts rather than emotional, cultural, or contextual factors. For example, if you were to analyse a political speech, you may want to focus only on what has been said, rather than implications or hidden meanings. To do this, you would make use of explicit data, discounting any underlying meanings and implications of the speech.

Lastly, there’s cognitive mapping, which can be used in addition to, or along with, proximity analysis. Cognitive mapping involves taking different texts and comparing them in a visual format – i.e. a cognitive map. Typically, you’d use cognitive mapping in studies that assess changes in terms, definitions, and meanings over time. It can also serve as a way to visualise affect extraction or proximity analysis and is often presented in a form such as a graphic map.

[Figure: example of a cognitive map]

To recap on the essentials, content analysis is a qualitative analysis method that focuses on recorded human artefacts . It involves both conceptual analysis (which is more numbers-based) and relational analysis (which focuses on the relationships between concepts and how they’re connected).


3. When should you use content analysis?

Content analysis is a useful tool that provides insight into trends of communication . For example, you could use a discussion forum as the basis of your analysis and look at the types of things the members talk about as well as how they use language to express themselves. Content analysis is flexible in that it can be applied to the individual, group, and institutional level.

Content analysis is typically used in studies where the aim is to better understand factors such as behaviours, attitudes, values, emotions, and opinions . For example, you could use content analysis to investigate an issue in society, such as miscommunication between cultures. In this example, you could compare patterns of communication in participants from different cultures, which will allow you to create strategies for avoiding misunderstandings in intercultural interactions.

Another example could include conducting content analysis on a publication such as a book. Here you could gather data on the themes, topics, language use and opinions reflected in the text to draw conclusions regarding the political (such as conservative or liberal) leanings of the publication.

Content analysis is typically used in projects where the research aims involve getting a better understanding of factors such as behaviours, attitudes, values, emotions, and opinions.

4. How to conduct a qualitative content analysis

Conceptual and relational content analysis differ in terms of their exact process ; however, there are some similarities. Let’s have a look at these first – i.e., the generic process:

  • Recap on your research questions
  • Undertake bracketing to identify biases
  • Operationalise your variables and develop a coding scheme
  • Code the data and undertake your analysis

Step 1 – Recap on your research questions

It’s always useful to begin a project with research questions , or at least with an idea of what you are looking for. In fact, if you’ve spent time reading this blog, you’ll know that it’s useful to recap on your research questions, aims and objectives when undertaking pretty much any research activity. In the context of content analysis, it’s difficult to know what needs to be coded and what doesn’t, without a clear view of the research questions.

For example, if you were to code a conversation focused on basic issues of social justice, you may be met with a wide range of topics that may be irrelevant to your research. However, if you approach this data set with the specific intent of investigating opinions on gender issues, you will be able to focus on this topic alone, which would allow you to code only what you need to investigate.

With content analysis, it’s difficult to know what needs to be coded  without a clear view of the research questions.

Step 2 – Reflect on your personal perspectives and biases

It’s vital that you reflect on your own pre-conception of the topic at hand and identify the biases that you might drag into your content analysis – this is called “ bracketing “. By identifying this upfront, you’ll be more aware of them and less likely to have them subconsciously influence your analysis.

For example, if you were to investigate how a community converses about unequal access to healthcare, it is important to assess your views to ensure that you don’t project these onto your understanding of the opinions put forth by the community. If you have access to medical aid, for instance, you should not allow this to interfere with your examination of unequal access.

You must reflect on the preconceptions and biases that you might drag into your content analysis - this is called "bracketing".

Step 3 – Operationalise your variables and develop a coding scheme

Next, you need to operationalise your variables . But what does that mean? Simply put, it means that you have to define each variable or construct . Give every item a clear definition – what does it mean (include) and what does it not mean (exclude). For example, if you were to investigate children’s views on healthy foods, you would first need to define what age group/range you’re looking at, and then also define what you mean by “healthy foods”.

In combination with the above, it is important to create a coding scheme , which will consist of information about your variables (how you defined each variable), as well as a process for analysing the data. For this, you would refer back to how you operationalised/defined your variables so that you know how to code your data.

For example, when coding, when should you code a food as “healthy”? What makes a food choice healthy? Is it the absence of sugar or saturated fat? Is it the presence of fibre and protein? It’s very important to have clearly defined variables to achieve consistent coding – without this, your analysis will get very muddy, very quickly.

When operationalising your variables, you must give every item a clear definition. In other words, what does it mean (include) and what does it not mean (exclude).
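As a sketch, such a coding scheme can be written down as explicit, testable rules (the nutrient thresholds and fields here are invented for illustration):

```python
# A toy coding scheme for "healthy food", operationalised as explicit rules.
# The thresholds below are invented for illustration, not dietary advice.
def code_food(food):
    """Code a food item as HEALTHY or UNHEALTHY per the scheme's definition."""
    is_healthy = food["sugar_g"] < 10 and food["fibre_g"] >= 3
    return "HEALTHY" if is_healthy else "UNHEALTHY"

apple = {"name": "apple", "sugar_g": 9, "fibre_g": 4}
candy = {"name": "candy bar", "sugar_g": 24, "fibre_g": 1}

print(code_food(apple), code_food(candy))
```

Because the rules are written down, any coder (or program) applying them to the same item will reach the same code, which is exactly the consistency the step above calls for.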

Step 4 – Code and analyse the data

The next step is to code the data. At this stage, there are some differences between conceptual and relational analysis.

As described earlier in this post, conceptual analysis looks at the existence and frequency of concepts, whereas a relational analysis looks at the relationships between concepts. For both types of analyses, it is important to pre-select a concept that you wish to assess in your data. Using the example of studying children’s views on healthy food, you could pre-select the concept of “healthy food” and assess the number of times the concept pops up in your data.

Here is where conceptual and relational analysis start to differ.

At this stage of conceptual analysis , it is necessary to decide on the level of analysis you’ll perform on your data, and whether this will exist on the word, phrase, sentence, or thematic level. For example, will you code the phrase “healthy food” on its own? Will you code each term relating to healthy food (e.g., broccoli, peaches, bananas, etc.) with the code “healthy food” or will these be coded individually? It is very important to establish this from the get-go to avoid inconsistencies that could result in you having to code your data all over again.

On the other hand, relational analysis looks at the type of analysis. So, will you use affect extraction? Proximity analysis? Cognitive mapping? A mix? It’s vital to determine the type of analysis before you begin to code your data so that you can maintain the reliability and validity of your research .


How to conduct conceptual analysis

First, let’s have a look at the process for conceptual analysis.

Once you’ve decided on your level of analysis, you need to establish how you will code your concepts, and how many of these you want to code. Here you can choose whether you want to code in a deductive or inductive manner. Just to recap, deductive coding is when you begin the coding process with a set of pre-determined codes, whereas inductive coding entails the codes emerging as you progress with the coding process. Here it is also important to decide what should be included and excluded from your analysis, and also what levels of implication you wish to include in your codes.

For example, if you have the concept of “tall”, can you include “up in the clouds”, derived from the sentence, “the giraffe’s head is up in the clouds” in the code, or should it be a separate code? In addition to this, you need to know what levels of words may be included in your codes or not. For example, if you say, “the panda is cute” and “look at the panda’s cuteness”, can “cute” and “cuteness” be included under the same code?

Once you’ve considered the above, it’s time to code the text . We’ve already published a detailed post about coding , so we won’t go into that process here. Once you’re done coding, you can move on to analysing your results. This is where you will aim to find generalisations in your data, and thus draw your conclusions .

How to conduct relational analysis

Now let’s return to relational analysis.

As mentioned, you want to look at the relationships between concepts . To do this, you’ll need to create categories by reducing your data (in other words, grouping similar concepts together) and then also code for words and/or patterns. These are both done with the aim of discovering whether these words exist, and if they do, what they mean.

Your next step is to assess your data and to code the relationships between your terms and meanings, so that you can move on to your final step, which is to sum up and analyse the data.

To recap, it’s important to start your analysis process by reviewing your research questions and identifying your biases . From there, you need to operationalise your variables, code your data and then analyse it.


5. What are the pros & cons of content analysis?

One of the main advantages of content analysis is that it allows you to use a mix of quantitative and qualitative research methods, which results in a more scientifically rigorous analysis.

For example, with conceptual analysis, you can count the number of times that a term or a code appears in a dataset, which can be assessed from a quantitative standpoint. In addition to this, you can then use a qualitative approach to investigate the underlying meanings of these and relationships between them.

Content analysis is also unobtrusive and therefore poses fewer ethical issues than some other analysis methods. As the content you’ll analyse oftentimes already exists, you’ll analyse what has been produced previously, and so you won’t have to collect data directly from participants. When coded correctly, data is analysed in a very systematic and transparent manner, which means that issues of replicability (how possible it is to recreate research under the same conditions) are reduced greatly.

On the downside, qualitative research (in general, not just content analysis) is often critiqued for being too subjective and for not being scientifically rigorous enough. This is where reliability (how replicable a study is by other researchers) and validity (how suitable the research design is for the topic being investigated) come into play – if you take both into account, you’ll be well on your way to sound research results.


Recap: Qualitative content analysis

In this post, we’ve covered a lot of ground: what content analysis is, the two main types (conceptual and relational), how to conduct each, and the method’s pros and cons.



Content Analysis | A Step-by-Step Guide with Examples

Published on 5 May 2022 by Amy Luo . Revised on 5 December 2022.

Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual:

  • Books, newspapers, and magazines
  • Speeches and interviews
  • Web content and social media posts
  • Photographs and films

Content analysis can be both quantitative (focused on counting and measuring) and qualitative (focused on interpreting and understanding). In both types, you categorise or ‘code’ words, themes, and concepts within the texts and then analyse the results.

Table of contents

  • What is content analysis used for?
  • Advantages of content analysis
  • Disadvantages of content analysis
  • How to conduct content analysis

Researchers use content analysis to find out about the purposes, messages, and effects of communication content. They can also make inferences about the producers and audience of the texts they analyse.

Content analysis can be used to quantify the occurrence of certain words, phrases, subjects, or concepts in a set of historical or contemporary texts.

In addition, content analysis can be used to make qualitative inferences by analysing the meaning and semantic relationship of words and concepts.

Because content analysis can be applied to a broad range of texts, it is used in a variety of fields, including marketing, media studies, anthropology, cognitive science, psychology, and many social science disciplines. It has various possible goals:

  • Finding correlations and patterns in how concepts are communicated
  • Understanding the intentions of an individual, group, or institution
  • Identifying propaganda and bias in communication
  • Revealing differences in communication in different contexts
  • Analysing the consequences of communication content, such as the flow of information or audience responses


Advantages of content analysis

  • Unobtrusive data collection

You can analyse communication and social interaction without the direct involvement of participants, so your presence as a researcher doesn’t influence the results.

  • Transparent and replicable

When done well, content analysis follows a systematic procedure that can easily be replicated by other researchers, yielding results with high reliability .

  • Highly flexible

You can conduct content analysis at any time, in any location, and at low cost. All you need is access to the appropriate sources.

Disadvantages of content analysis

  • Reductive

Focusing on words or phrases in isolation can sometimes be overly reductive, disregarding context, nuance, and ambiguous meanings.

  • Subjective

Content analysis almost always involves some level of subjective interpretation, which can affect the reliability and validity of the results and conclusions.

  • Time intensive

Manually coding large volumes of text is extremely time-consuming, and it can be difficult to automate effectively.

If you want to use content analysis in your research, you need to start with a clear, direct research question .

Next, you follow these five steps.

Step 1: Select the content you will analyse

Based on your research question, choose the texts that you will analyse. You need to decide:

  • The medium (e.g., newspapers, speeches, or websites) and genre (e.g., opinion pieces, political campaign speeches, or marketing copy)
  • The criteria for inclusion (e.g., newspaper articles that mention a particular event, speeches by a certain politician, or websites selling a specific type of product)
  • The parameters in terms of date range, location, etc.

If there are only a small number of texts that meet your criteria, you might analyse all of them. If there is a large volume of texts, you can select a sample .

Step 2: Define the units and categories of analysis

Next, you need to determine the level at which you will analyse your chosen texts. This means defining:

  • The unit(s) of meaning that will be coded. For example, are you going to record the frequency of individual words and phrases, the characteristics of people who produced or appear in the texts, the presence and positioning of images, or the treatment of themes and concepts?
  • The set of categories that you will use for coding. Categories can be objective characteristics (e.g., aged 30–40, lawyer, parent) or more conceptual (e.g., trustworthy, corrupt, conservative, family-oriented).

Step 3: Develop a set of rules for coding

Coding involves organising the units of meaning into the previously defined categories. Especially with more conceptual categories, it’s important to clearly define the rules for what will and won’t be included to ensure that all texts are coded consistently.

Coding rules are especially important if multiple researchers are involved, but even if you’re coding all of the text by yourself, recording the rules makes your method more transparent and reliable.
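To make Steps 2 and 3 concrete, here is a minimal sketch in Python; the categories, keywords, and `code_text` helper are hypothetical examples of a coding scheme, not a standard one:

```python
# A sketch of a coding scheme: units of meaning (keywords) are mapped
# to conceptual categories. Categories and keywords are illustrative.
coding_scheme = {
    "employment": {"jobs", "unemployment", "work", "wages"},
    "family-oriented": {"family", "children", "parents"},
}

def code_text(text):
    """Return the set of categories whose keywords appear in the text.

    Writing the rule down as code makes it explicit what will and
    won't be counted, so all texts are coded consistently.
    """
    tokens = set(text.lower().split())
    return {cat for cat, kws in coding_scheme.items() if tokens & kws}

print(code_text("New jobs for working parents"))
```

Recording rules this explicitly is exactly what makes the method transparent: another researcher can apply the same scheme and should reach the same codes.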

Step 4: Code the text according to the rules

You go through each text and record all relevant data in the appropriate categories. This can be done manually or aided with computer programs, such as QSR NVivo , Atlas.ti , and Diction , which can help speed up the process of counting and categorising words and phrases.

Step 5: Analyse the results and draw conclusions

Once coding is complete, the collected data is examined to find patterns and draw conclusions in response to your research question. You might use statistical analysis to find correlations or trends, discuss your interpretations of what the results mean, and make inferences about the creators, context, and audience of the texts.

Cite this Scribbr article


Luo, A. (2022, December 05). Content Analysis | A Step-by-Step Guide with Examples. Scribbr. Retrieved 9 September 2024, from https://www.scribbr.co.uk/research-methods/content-analysis-explained/


Content Analysis – Methods, Types and Examples


Definition:

Content analysis is a research method used to analyze and interpret the characteristics of various forms of communication, such as text, images, or audio. It involves systematically analyzing the content of these materials, identifying patterns, themes, and other relevant features, and drawing inferences or conclusions based on the findings.

Content analysis can be used to study a wide range of topics, including media coverage of social issues, political speeches, advertising messages, and online discussions, among others. It is often used in qualitative research and can be combined with other methods to provide a more comprehensive understanding of a particular phenomenon.

Types of Content Analysis

There are generally two types of content analysis:

Quantitative Content Analysis

This type of content analysis involves the systematic and objective counting and categorization of the content of a particular form of communication, such as text or video. The data obtained is then subjected to statistical analysis to identify patterns, trends, and relationships between different variables. Quantitative content analysis is often used to study media content, advertising, and political speeches.

Qualitative Content Analysis

This type of content analysis is concerned with the interpretation and understanding of the meaning and context of the content. It involves the systematic analysis of the content to identify themes, patterns, and other relevant features, and to interpret the underlying meanings and implications of these features. Qualitative content analysis is often used to study interviews, focus groups, and other forms of qualitative data, where the researcher is interested in understanding the subjective experiences and perceptions of the participants.

Methods of Content Analysis

There are several methods of content analysis, including:

Conceptual Analysis

This method involves analyzing the meanings of key concepts used in the content being analyzed. The researcher identifies key concepts and analyzes how they are used, defining them and categorizing them into broader themes.

Content Analysis by Frequency

This method involves counting and categorizing the frequency of specific words, phrases, or themes that appear in the content being analyzed. The researcher identifies relevant keywords or phrases and systematically counts their frequency.

Comparative Analysis

This method involves comparing the content of two or more sources to identify similarities, differences, and patterns. The researcher selects relevant sources, identifies key themes or concepts, and compares how they are represented in each source.
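The frequency and comparative methods can be sketched together: count coded keywords in each source, then compare the resulting tables. The sources and keywords below are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical excerpts from two sources to compare.
sources = {
    "candidate_a": "jobs jobs economy reform jobs",
    "candidate_b": "economy economy reform jobs",
}

# Frequency method: count coded keywords per source...
keywords = ("jobs", "economy", "reform")
table = {
    name: {kw: Counter(text.split())[kw] for kw in keywords}
    for name, text in sources.items()
}

# ...then comparative analysis: look for differences between sources.
for name, counts in table.items():
    print(name, counts)
```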

Discourse Analysis

This method involves analyzing the structure and language of the content being analyzed to identify how the content constructs and represents social reality. The researcher analyzes the language used and the underlying assumptions, beliefs, and values reflected in the content.

Narrative Analysis

This method involves analyzing the content as a narrative, identifying the plot, characters, and themes, and analyzing how they relate to the broader social context. The researcher identifies the underlying messages conveyed by the narrative and their implications for the broader social context.

Content Analysis Conducting Guide

Here is a basic guide to conducting a content analysis:

  • Define your research question or objective: Before starting your content analysis, you need to define your research question or objective clearly. This will help you to identify the content you need to analyze and the type of analysis you need to conduct.
  • Select your sample: Select a representative sample of the content you want to analyze. This may involve selecting a random sample, a purposive sample, or a convenience sample, depending on the research question and the availability of the content.
  • Develop a coding scheme: Develop a coding scheme or a set of categories to use for coding the content. The coding scheme should be based on your research question or objective and should be reliable, valid, and comprehensive.
  • Train coders: Train coders to use the coding scheme and ensure that they have a clear understanding of the coding categories and procedures. You may also need to establish inter-coder reliability to ensure that different coders are coding the content consistently.
  • Code the content: Code the content using the coding scheme. This may involve manually coding the content, using software, or a combination of both.
  • Analyze the data: Once the content is coded, analyze the data using appropriate statistical or qualitative methods, depending on the research question and the type of data.
  • Interpret the results: Interpret the results of the analysis in the context of your research question or objective. Draw conclusions based on the findings and relate them to the broader literature on the topic.
  • Report your findings: Report your findings in a clear and concise manner, including the research question, methodology, results, and conclusions. Provide details about the coding scheme, inter-coder reliability, and any limitations of the study.
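Inter-coder reliability (mentioned in the training step above) is often quantified with percent agreement or Cohen’s kappa, which corrects for agreement expected by chance. A minimal sketch, using hypothetical labels from two coders:

```python
from collections import Counter

# Hypothetical category labels assigned by two coders to the same ten units.
coder_a = ["pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "pos", "pos"]
coder_b = ["pos", "neg", "neg", "pos", "neg", "pos", "neg", "pos", "pos", "pos"]

# Percent agreement: the simplest inter-coder reliability check.
n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Cohen's kappa: corrects observed agreement for chance agreement,
# estimated from each coder's marginal label distribution.
pa, pb = Counter(coder_a), Counter(coder_b)
expected = sum(pa[c] * pb[c] for c in pa) / (n * n)
kappa = (observed - expected) / (1 - expected)

print(round(observed, 2), round(kappa, 2))
```

Values of kappa near 1 indicate strong agreement; low values suggest the coding scheme needs clearer rules or more coder training before the full dataset is coded.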

Applications of Content Analysis

Content analysis has numerous applications across different fields, including:

  • Media Research: Content analysis is commonly used in media research to examine the representation of different groups, such as race, gender, and sexual orientation, in media content. It can also be used to study media framing, media bias, and media effects.
  • Political Communication : Content analysis can be used to study political communication, including political speeches, debates, and news coverage of political events. It can also be used to study political advertising and the impact of political communication on public opinion and voting behavior.
  • Marketing Research: Content analysis can be used to study advertising messages, consumer reviews, and social media posts related to products or services. It can provide insights into consumer preferences, attitudes, and behaviors.
  • Health Communication: Content analysis can be used to study health communication, including the representation of health issues in the media, the effectiveness of health campaigns, and the impact of health messages on behavior.
  • Education Research : Content analysis can be used to study educational materials, including textbooks, curricula, and instructional materials. It can provide insights into the representation of different topics, perspectives, and values.
  • Social Science Research: Content analysis can be used in a wide range of social science research, including studies of social media, online communities, and other forms of digital communication. It can also be used to study interviews, focus groups, and other qualitative data sources.

Examples of Content Analysis

Here are some examples of content analysis:

  • Media Representation of Race and Gender: A content analysis could be conducted to examine the representation of different races and genders in popular media, such as movies, TV shows, and news coverage.
  • Political Campaign Ads : A content analysis could be conducted to study political campaign ads and the themes and messages used by candidates.
  • Social Media Posts: A content analysis could be conducted to study social media posts related to a particular topic, such as the COVID-19 pandemic, to examine the attitudes and beliefs of social media users.
  • Instructional Materials: A content analysis could be conducted to study the representation of different topics and perspectives in educational materials, such as textbooks and curricula.
  • Product Reviews: A content analysis could be conducted to study product reviews on e-commerce websites, such as Amazon, to identify common themes and issues mentioned by consumers.
  • News Coverage of Health Issues: A content analysis could be conducted to study news coverage of health issues, such as vaccine hesitancy, to identify common themes and perspectives.
  • Online Communities: A content analysis could be conducted to study online communities, such as discussion forums or social media groups, to understand the language, attitudes, and beliefs of the community members.

Purpose of Content Analysis

The purpose of content analysis is to systematically analyze and interpret the content of various forms of communication, such as written, oral, or visual, to identify patterns, themes, and meanings. Content analysis is used to study communication in a wide range of fields, including media studies, political science, psychology, education, sociology, and marketing research. The primary goals of content analysis include:

  • Describing and summarizing communication: Content analysis can be used to describe and summarize the content of communication, such as the themes, topics, and messages conveyed in media content, political speeches, or social media posts.
  • Identifying patterns and trends: Content analysis can be used to identify patterns and trends in communication, such as changes over time, differences between groups, or common themes or motifs.
  • Exploring meanings and interpretations: Content analysis can be used to explore the meanings and interpretations of communication, such as the underlying values, beliefs, and assumptions that shape the content.
  • Testing hypotheses and theories : Content analysis can be used to test hypotheses and theories about communication, such as the effects of media on attitudes and behaviors or the framing of political issues in the media.

When to use Content Analysis

Content analysis is a useful method when you want to analyze and interpret the content of various forms of communication, such as written, oral, or visual. Here are some specific situations where content analysis might be appropriate:

  • When you want to study media content: Content analysis is commonly used in media studies to analyze the content of TV shows, movies, news coverage, and other forms of media.
  • When you want to study political communication : Content analysis can be used to study political speeches, debates, news coverage, and advertising.
  • When you want to study consumer attitudes and behaviors: Content analysis can be used to analyze product reviews, social media posts, and other forms of consumer feedback.
  • When you want to study educational materials : Content analysis can be used to analyze textbooks, instructional materials, and curricula.
  • When you want to study online communities: Content analysis can be used to analyze discussion forums, social media groups, and other forms of online communication.
  • When you want to test hypotheses and theories : Content analysis can be used to test hypotheses and theories about communication, such as the framing of political issues in the media or the effects of media on attitudes and behaviors.

Characteristics of Content Analysis

Content analysis has several key characteristics that make it a useful research method. These include:

  • Objectivity : Content analysis aims to be an objective method of research, meaning that the researcher does not introduce their own biases or interpretations into the analysis. This is achieved by using standardized and systematic coding procedures.
  • Systematic: Content analysis involves the use of a systematic approach to analyze and interpret the content of communication. This involves defining the research question, selecting the sample of content to analyze, developing a coding scheme, and analyzing the data.
  • Quantitative : Content analysis often involves counting and measuring the occurrence of specific themes or topics in the content, making it a quantitative research method. This allows for statistical analysis and generalization of findings.
  • Contextual : Content analysis considers the context in which the communication takes place, such as the time period, the audience, and the purpose of the communication.
  • Iterative : Content analysis is an iterative process, meaning that the researcher may refine the coding scheme and analysis as they analyze the data, to ensure that the findings are valid and reliable.
  • Reliability and validity : Content analysis aims to be a reliable and valid method of research, meaning that the findings are consistent and accurate. This is achieved through inter-coder reliability tests and other measures to ensure the quality of the data and analysis.

Advantages of Content Analysis

There are several advantages to using content analysis as a research method, including:

  • Objective and systematic : Content analysis aims to be an objective and systematic method of research, which reduces the likelihood of bias and subjectivity in the analysis.
  • Large sample size: Content analysis allows for the analysis of a large sample of data, which increases the statistical power of the analysis and the generalizability of the findings.
  • Non-intrusive: Content analysis does not require the researcher to interact with the participants or disrupt their natural behavior, making it a non-intrusive research method.
  • Accessible data: Content analysis can be used to analyze a wide range of data types, including written, oral, and visual communication, making it accessible to researchers across different fields.
  • Versatile : Content analysis can be used to study communication in a wide range of contexts and fields, including media studies, political science, psychology, education, sociology, and marketing research.
  • Cost-effective: Content analysis is a cost-effective research method, as it does not require expensive equipment or participant incentives.

Limitations of Content Analysis

While content analysis has many advantages, there are also some limitations to consider, including:

  • Limited contextual information: Content analysis is focused on the content of communication, which means that contextual information may be limited. This can make it difficult to fully understand the meaning behind the communication.
  • Limited ability to capture nonverbal communication : Content analysis is limited to analyzing the content of communication that can be captured in written or recorded form. It may miss out on nonverbal communication, such as body language or tone of voice.
  • Subjectivity in coding: While content analysis aims to be objective, there may be subjectivity in the coding process. Different coders may interpret the content differently, which can lead to inconsistent results.
  • Limited ability to establish causality: Content analysis is a correlational research method, meaning that it cannot establish causality between variables. It can only identify associations between variables.
  • Limited generalizability: Content analysis is limited to the data that is analyzed, which means that the findings may not be generalizable to other contexts or populations.
  • Time-consuming: Content analysis can be a time-consuming research method, especially when analyzing a large sample of data. This can be a disadvantage for researchers who need to complete their research in a short amount of time.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


  • Acute Medicine
  • Cardiovascular Medicine
  • Clinical Genetics
  • Clinical Pharmacology and Therapeutics
  • Dermatology
  • Endocrinology and Diabetes
  • Gastroenterology
  • Genito-urinary Medicine
  • Geriatric Medicine
  • Infectious Diseases
  • Medical Toxicology
  • Medical Oncology
  • Pain Medicine
  • Palliative Medicine
  • Rehabilitation Medicine
  • Respiratory Medicine and Pulmonology
  • Rheumatology
  • Sleep Medicine
  • Sports and Exercise Medicine
  • Clinical Neuroscience
  • Community Medical Services
  • Critical Care
  • Emergency Medicine
  • Forensic Medicine
  • Haematology
  • History of Medicine
  • Browse content in Medical Dentistry
  • Oral and Maxillofacial Surgery
  • Paediatric Dentistry
  • Restorative Dentistry and Orthodontics
  • Surgical Dentistry
  • Browse content in Medical Skills
  • Clinical Skills
  • Communication Skills
  • Nursing Skills
  • Surgical Skills
  • Medical Ethics
  • Medical Statistics and Methodology
  • Browse content in Neurology
  • Clinical Neurophysiology
  • Neuropathology
  • Nursing Studies
  • Browse content in Obstetrics and Gynaecology
  • Gynaecology
  • Occupational Medicine
  • Ophthalmology
  • Otolaryngology (ENT)
  • Browse content in Paediatrics
  • Neonatology
  • Browse content in Pathology
  • Chemical Pathology
  • Clinical Cytogenetics and Molecular Genetics
  • Histopathology
  • Medical Microbiology and Virology
  • Patient Education and Information
  • Browse content in Pharmacology
  • Psychopharmacology
  • Browse content in Popular Health
  • Caring for Others
  • Complementary and Alternative Medicine
  • Self-help and Personal Development
  • Browse content in Preclinical Medicine
  • Cell Biology
  • Molecular Biology and Genetics
  • Reproduction, Growth and Development
  • Primary Care
  • Professional Development in Medicine
  • Browse content in Psychiatry
  • Addiction Medicine
  • Child and Adolescent Psychiatry
  • Forensic Psychiatry
  • Learning Disabilities
  • Old Age Psychiatry
  • Psychotherapy
  • Browse content in Public Health and Epidemiology
  • Epidemiology
  • Public Health
  • Browse content in Radiology
  • Clinical Radiology
  • Interventional Radiology
  • Nuclear Medicine
  • Radiation Oncology
  • Reproductive Medicine
  • Browse content in Surgery
  • Cardiothoracic Surgery
  • Gastro-intestinal and Colorectal Surgery
  • General Surgery
  • Neurosurgery
  • Paediatric Surgery
  • Peri-operative Care
  • Plastic and Reconstructive Surgery
  • Surgical Oncology
  • Transplant Surgery
  • Trauma and Orthopaedic Surgery
  • Vascular Surgery
  • Browse content in Science and Mathematics
  • Browse content in Biological Sciences
  • Aquatic Biology
  • Biochemistry
  • Bioinformatics and Computational Biology
  • Developmental Biology
  • Ecology and Conservation
  • Evolutionary Biology
  • Genetics and Genomics
  • Microbiology
  • Molecular and Cell Biology
  • Natural History
  • Plant Sciences and Forestry
  • Research Methods in Life Sciences
  • Structural Biology
  • Systems Biology
  • Zoology and Animal Sciences
  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Security
  • Computer Games
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Psychology
  • Cognitive Neuroscience
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Strategy
  • Business Ethics
  • Business History
  • Business and Government
  • Business and Technology
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Social Issues in Business and Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic Systems
  • Economic History
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Management of Land and Natural Resources (Social Science)
  • Natural Disasters (Environment)
  • Pollution and Threats to the Environment (Social Science)
  • Social Impact of Environmental Issues (Social Science)
  • Sustainability
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • Ethnic Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Theory
  • Politics and Law
  • Politics of Development
  • Public Administration
  • Public Policy
  • Qualitative Political Methodology
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Disability Studies
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

Content Analysis

4 Qualitative Content Analysis

  • Published: November 2015
This chapter examines qualitative content analysis, a recent methodological innovation. Qualitative content analysis is defined and distinguished here from basic and interpretive approaches to content analysis. It is also distinguished from other qualitative research methods, although some of its features and techniques overlap with them. Key differences in the predominant use of newly collected data and in the use of non-quantitative analysis techniques are detailed. Differences in epistemology and the role of researcher self-awareness and reflexivity are also discussed. Methods of graphic data presentation are illustrated. Three short exemplar studies using qualitative content analysis are described and examined. Qualitative content analysis is explored in detail in terms of its characteristic components: (1) the research purposes of content analysis, (2) target audiences, (3) epistemological issues, (4) ethical issues, (5) research designs, (6) sampling issues and methods, (7) collecting data, (8) coding and categorization methods, (9) data analysis methods, and (10) the role of researcher reflection.



Three approaches to qualitative content analysis

Affiliation.

  • 1 Fooyin University, Kaohsiung Hsien, Taiwan.
  • PMID: 16204405
  • DOI: 10.1177/1049732305276687

Content analysis is a widely used qualitative research technique. Rather than being a single method, current applications of content analysis show three distinct approaches: conventional, directed, or summative. All three approaches are used to interpret meaning from the content of text data and, hence, adhere to the naturalistic paradigm. The major differences among the approaches are coding schemes, origins of codes, and threats to trustworthiness. In conventional content analysis, coding categories are derived directly from the text data. With a directed approach, analysis starts with a theory or relevant research findings as guidance for initial codes. A summative content analysis involves counting and comparisons, usually of keywords or content, followed by the interpretation of the underlying context. The authors delineate analytic procedures specific to each approach and techniques addressing trustworthiness with hypothetical examples drawn from the area of end-of-life care.
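The counting step of the summative approach can be sketched in a few lines. This sketch is not from the article: the function name `summative_counts` and the `window` parameter are illustrative, and a real summative analysis would follow these counts with interpretation of the underlying context.

```python
import re
from collections import Counter

def summative_counts(texts, keywords, window=3):
    """Count keyword occurrences and the words appearing near them.

    Hypothetical helper for illustration; `window` is the number of
    tokens inspected on each side of a keyword hit.
    """
    keyword_freq = Counter()
    context_words = {k: Counter() for k in keywords}
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        for i, tok in enumerate(tokens):
            if tok in keywords:
                keyword_freq[tok] += 1
                lo, hi = max(0, i - window), i + window + 1
                # Words surrounding the keyword, excluding the keyword itself
                context_words[tok].update(tokens[lo:i] + tokens[i + 1:hi])
    return keyword_freq, context_words
```

Comparing the resulting frequencies across sources, then reading the surrounding passages, mirrors the sequence of counting and comparisons followed by interpretation that the abstract describes.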


v.23(1); 2018 Feb

Directed qualitative content analysis: the description and elaboration of its underpinning methods and data analysis process

Qualitative content analysis consists of conventional, directed and summative approaches for data analysis. These approaches are used to provide descriptive knowledge and understanding of the phenomenon under study. However, the method underpinning directed qualitative content analysis is insufficiently delineated in the international literature. This paper aims to describe and integrate the process of data analysis in directed qualitative content analysis. Various international databases were used to retrieve articles related to directed qualitative content analysis. A review of the literature led to the integration and elaboration of a stepwise method of data analysis for directed qualitative content analysis. The proposed 16-step method of data analysis in this paper is a detailed description of the analytical steps to be taken in directed qualitative content analysis, addressing the current gap in the international literature regarding the practical process of qualitative data analysis. An example of "the resuscitation team members' motivation for cardiopulmonary resuscitation", based on Victor Vroom's expectancy theory, is also presented. The directed qualitative content analysis method proposed in this paper is a reliable, transparent, and comprehensive method for qualitative researchers. It can increase the rigour of qualitative data analysis, make the comparison of the findings of different studies possible, and yield practical results.

Introduction

Qualitative content analysis (QCA) is a research approach for the description and interpretation of textual data using a systematic process of coding. The final product of data analysis is the identification of categories, themes and patterns (Elo and Kyngäs, 2008; Hsieh and Shannon, 2005; Zhang and Wildemuth, 2009). Researchers in the field of healthcare commonly use QCA for data analysis (Berelson, 1952). QCA was first described and used in the first half of the 20th century (Schreier, 2014). The focus of QCA is the development of knowledge and understanding of the study phenomenon. QCA, as the application of language and contextual clues for making meanings in the communication process, requires a close review of the content gleaned from conducting interviews or observations (Downe-Wamboldt, 1992; Hsieh and Shannon, 2005).

QCA is classified into conventional (inductive), directed (deductive) and summative methods (Hsieh and Shannon, 2005; Mayring, 2000, 2014). Inductive QCA, as the most popular approach to data analysis, helps with the development of theories, schematic models or conceptual frameworks (Elo and Kyngäs, 2008; Graneheim and Lundman, 2004; Vaismoradi et al., 2013, 2016), which should be refined, tested or further developed using directed QCA (Elo and Kyngäs, 2008). Directed QCA is a common method of data analysis in healthcare research (Elo and Kyngäs, 2008), but insufficient knowledge is available about how this method is applied (Elo and Kyngäs, 2008; Hsieh and Shannon, 2005). This may hamper the use of directed QCA by novice qualitative researchers and account for its low application compared with the inductive method (Elo and Kyngäs, 2008; Mayring, 2000). Therefore, this paper aims to describe and integrate the methods applied in directed QCA.

International databases such as PubMed (including Medline), Scopus, Web of Science and ScienceDirect were searched to retrieve papers related to QCA and directed QCA. Keywords such as 'directed content analysis', 'deductive content analysis' and 'qualitative content analysis' yielded 13,738 potentially eligible papers. Applying inclusion criteria such as 'focused on directed qualitative content analysis' and 'published in peer-reviewed journals', and removing duplicates, reduced this to 30 papers. However, only two of these papers dealt with the description of directed QCA in terms of the methodological process. Ancestry and manual searches within these 30 papers revealed the pioneers of the description of this method in the international literature. A further search for papers published by the method's pioneers led to four more papers and one monograph dealing with directed QCA (Figure 1).

Figure 1. The search strategy for the identification of papers.

Finally, the authors of this paper integrated and elaborated a comprehensive and stepwise method of directed QCA based on the commonalities of methods discussed in the included papers. Also, the experiences of the current authors in the field of qualitative research were incorporated into the suggested stepwise method of data analysis for directed QCA ( Table 1 ).

Table 1. The suggested steps for directed content analysis.

Preparation phase
 1. Acquiring the necessary general skills
 2. Selecting the appropriate sampling strategy (inferred by the authors of the present paper)
 3. Deciding on the analysis of manifest and/or latent content
 4. Developing an interview guide (inferred by the authors of the present paper)
 5. Conducting and transcribing interviews
 6. Specifying the unit of analysis
 7. Being immersed in data
Organisation phase
 8. Developing a formative categorisation matrix (inferred by the authors of the present paper)
 9. Theoretically defining the main categories and subcategories
 10. Determining coding rules for main categories
 11. Pre-testing the categorisation matrix (inferred by the authors of the present paper)
 12. Choosing and specifying the anchor samples for each main category
 13. Performing the main data analysis
 14. Inductive abstraction of main categories from preliminary codes
 15. Establishment of links between generic categories and main categories (suggested by the authors of the present paper)
Reporting phase
 16. Reporting all steps of directed content analysis and findings

While the included papers about directed QCA were among the most cited in the international literature, none of them provided sufficient detail about how to conduct the data analysis process. This might hamper the use of this method by novice qualitative researchers and hinder its application by nurse researchers compared with inductive QCA. As can be seen in Figure 1, the search yielded five papers that explain the directed QCA method. A description of these papers follows, along with their strengths and weaknesses. The authors drew on the strengths in their suggested method, as shown in Table 1.

The methods suggested for directed QCA in the international literature

The method suggested by Hsieh and Shannon (2005)

Hsieh and Shannon (2005) developed two strategies for conducting directed QCA. The first strategy consists of reading the textual data and highlighting those parts of the text that, on first impression, appear to be related to the predetermined codes dictated by a theory or prior research findings. Next, the highlighted text is coded using the predetermined codes.

As for the second strategy, the only difference lies in starting the coding process without first highlighting the text. In both strategies, the qualitative researcher should return to the text and perform reanalysis after the initial coding process (Hsieh and Shannon, 2005). The current authors believe that this second strategy provides an opportunity to recognise missed text related to the predetermined codes as well as newly emerging codes; it also enhances the trustworthiness of the findings.
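As an illustration only, the two strategies can be sketched in a few lines of Python: a keyword pass highlights sentences that appear related to predetermined codes, and unmatched text is set aside for reanalysis. The code labels, keywords and transcript below are hypothetical, not drawn from the original studies.

```python
# Sketch of Hsieh and Shannon's first strategy: highlight text segments
# that match predetermined codes, then code only the highlighted parts.
# The predetermined codes and transcript below are hypothetical examples.

predetermined_codes = {
    "expectancy": ["expect", "chance", "probability"],
    "valence": ["value", "worth", "matters"],
}

def highlight(transcript: str, codes: dict) -> dict:
    """Return, per code, the sentences whose wording matches the code's keywords."""
    highlighted = {label: [] for label in codes}
    for sentence in transcript.split("."):
        sentence = sentence.strip()
        for label, keywords in codes.items():
            if any(kw in sentence.lower() for kw in keywords):
                highlighted[label].append(sentence)
    return highlighted

transcript = "I expect the resuscitation to fail. Saving a life is what matters most."
result = highlight(transcript, predetermined_codes)
# Sentences matching no code would be set aside and reanalysed later,
# possibly becoming a new category or a subcategory of an existing code.
print(result["expectancy"])  # ['I expect the resuscitation to fail']
print(result["valence"])     # ['Saving a life is what matters most']
```

The second strategy would simply skip the highlighting pass and code the whole text directly; in either case the transcript is revisited after initial coding.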

As an important part of the method suggested by Hsieh and Shannon (2005) , the term ‘code’ was used for the different levels of abstraction, but a more precise definition of this term seems to be crucial. For instance, they stated that ‘data that cannot be coded are identified and analyzed later to determine if they represent a new category or a subcategory of an existing code’ (2005: 1282).

It seems that the first ‘code’ in the above sentence indicates the lowest level of abstraction that could be achieved instantly from raw data. However, the ‘code’ at the end of the sentence refers to a higher level of abstraction, because it denotes a category or subcategory.

Furthermore, the interchangeable and inconsistent use of the words ‘predetermined code’ and ‘category’ could be confusing to novice qualitative researchers. Moreover, Hsieh and Shannon (2005) did not specify exactly which parts of the text, whether highlighted, coded or the whole text, should be considered during the reanalysis of the text after initial coding process. Such a lack of specification runs the risk of missing the content during the initial coding process, especially if the second review of the text is restricted to highlighted sections. One final important omission in this method is the lack of an explicit description of the process through which new codes emerge during the reanalysis of the text. Such a clarification is crucial, because the detection of subtle links between newly emerging codes and the predetermined ones is not straightforward.

The method suggested by Elo and Kyngäs (2008)

Elo and Kyngäs (2008) suggested ‘structured’ and ‘unconstrained’ methods or paths for directed QCA. Accordingly, after determining the ‘categorisation matrix’ as the framework for data collection and analysis during the study process, the whole content would be reviewed and coded. The use of the unconstrained matrix allows the development of some categories inductively by using the steps of ‘grouping’, ‘categorisation’ and ‘abstraction’. The use of a structured method requires a structured matrix upon which data are strictly coded. Hypotheses suggested by previous studies often are tested using this method ( Elo and Kyngäs, 2008 ).
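As a rough illustration of the two paths, the sketch below codes hypothetical units against a categorisation matrix: the structured path keeps only units that fit the matrix, while the unconstrained path lets non-fitting units seed new, inductively created categories. All category names and text fragments here are invented for illustration.

```python
# Toy illustration of Elo and Kyngäs's structured vs unconstrained paths.
# A categorisation matrix lists the predetermined categories; coded units
# either fit one of them or not. All names and data are hypothetical.

matrix = ["dependence", "worries", "sadness", "guilt"]

# Each coded unit pairs a text fragment with the category a coder assigned.
coded_units = [
    ("I cannot manage alone anymore", "dependence"),
    ("I keep thinking something will go wrong", "worries"),
    ("I feel hopeful about the treatment", "hope"),  # does not fit the matrix
]

# Structured path: keep only units whose category fits the matrix.
structured = [u for u in coded_units if u[1] in matrix]

# Unconstrained path: units outside the matrix seed new categories
# (the 'grouping', 'categorisation' and 'abstraction' steps) instead of
# being discarded.
unconstrained_matrix = matrix + sorted(
    {cat for _, cat in coded_units if cat not in matrix}
)

print(len(structured))          # 2
print(unconstrained_matrix[-1]) # hope
```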

The current authors believe that the label of ‘data gathering by the content’ (p. 110) in the unconstrained matrix path can be misleading. It refers to the data coding step rather than data collection. Also, in the description of the structured path there is an obvious discrepancy with regard to the selection of the portions of the content that fit or do not fit the matrix: ‘… if the matrix is structured, only aspects that fit the matrix of analysis are chosen from the data …’; ‘… when using a structured matrix of analysis, it is possible to choose either only the aspects from the data that fit the categorization frame or, alternatively, to choose those that do not’ ( Elo and Kyngäs, 2008 : 111–112).

Figure 1 in Elo and Kyngäs's paper ( 2008 : 110) clearly distinguished between the structured and unconstrained paths. On the other hand, the first sentence in the above quotation clearly explained the use of the structured matrix, but it was not clear whether the second sentence referred to the use of the structured or unconstrained matrix.

The method suggested by Zhang and Wildemuth (2009)

Considering the method suggested by Hsieh and Shannon (2005), Zhang and Wildemuth (2009) suggested an eight-step method as follows: (1) preparation of data; (2) definition of the unit of analysis; (3) development of categories and the coding scheme; (4) testing the coding scheme on a text sample; (5) coding the whole text; (6) assessment of the coding's consistency; (7) drawing conclusions from the coded data; and (8) reporting the methods and findings (Zhang and Wildemuth, 2009). Only in the third step of this method, the description of the process of category development, did Zhang and Wildemuth (2009) briefly distinguish between the inductive and deductive content analysis methods. On first impression, the only difference between the two approaches seems to be the origin from which categories are developed. In addition, the process of connecting the preliminary codes extracted from raw data with the predetermined categories is not described, and it is not clear whether this linking should be established from categories to preliminary codes, or vice versa.

The method suggested by Mayring ( 2000 , 2014 )

Mayring (2000, 2014) suggested a seven-step method for directed QCA that distinctively differentiated between inductive and deductive methods as follows: (1) determination of the research question and theoretical background; (2) definition of the category system (main categories and subcategories) based on previous theory and research; (3) establishment of a coding guideline comprising definitions, anchor examples and coding rules; (4) reading the whole text, determining preliminary codes, and adding anchor examples and coding rules; (5) revision of the categories and coding guideline after working through 10–50% of the data; (6) reworking the data if needed, or listing the final categories; and (7) analysis and interpretation based on category frequencies and contingencies.

Mayring suggested that coding rules should be defined to assign parts of the text distinctly to a particular category. Furthermore, he recommended describing each category by indicating the concrete parts of the text that serve as its typical examples, also known as ‘anchor samples’ (Mayring, 2000, 2014). The current authors believe that these suggestions help clarify directed QCA and enhance its trustworthiness.
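A coding guideline in this sense can be represented as a small data structure holding, for each category, a definition, anchor samples and coding rules. The sketch below is only one convenient layout, with entries loosely adapted from the CPR/expectancy example used later in this paper; the assignment check is a hypothetical simplification.

```python
# Sketch of a Mayring-style coding guideline: each main category carries a
# theoretical definition, anchor samples (typical text passages) and coding
# rules stating when a passage may be assigned to it. Entries are
# illustrative, loosely based on the CPR/expectancy example in the text.

coding_guideline = {
    "expectancy": {
        "definition": ("Perceived probability that effort leads to good "
                       "performance or to the desired outcome"),
        "anchor_samples": [
            "... I do not envision a successful resuscitation for him.",
        ],
        "coding_rules": [
            "The passage expresses a subjective probability in the rescuer's mind",
            "The probability concerns an effort-performance or effort-outcome link",
        ],
    },
}

def can_assign(category: str, rule_checks: list) -> bool:
    """A passage is assigned to a category only if every coding rule is met."""
    rules = coding_guideline[category]["coding_rules"]
    return len(rule_checks) == len(rules) and all(rule_checks)

print(can_assign("expectancy", [True, True]))   # True
print(can_assign("expectancy", [True, False]))  # False
```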

When using the term ‘preliminary coding’, however, Mayring (2000, 2014) did not clarify whether these codes are created inductively or deductively. In addition, Mayring implicitly leaned towards a quantitative approach in steps 5 and 7, which is incongruent with the qualitative paradigm. Furthermore, nothing was stated about the possibility of developing new categories from the textual material: ‘… theoretical considerations can lead to a further categories or rephrasing of categories from previous studies, but the categories are not developed out of the text material like in inductive category formation …’ (Mayring, 2014: 97).

Integration and clarification of methods for directed QCA

Directed QCA takes different paths depending on whether the categorisation matrix contains concepts with higher or lower levels of abstraction. In matrices with low abstraction levels, linking raw data to predetermined categories is not difficult, and the methods suggested in the international nursing literature seem appropriate and helpful. For instance, Elo and Kyngäs (2008) introduced ‘mental well-being threats’ based on the categories of ‘dependence’, ‘worries’, ‘sadness’ and ‘guilt’. Hsieh and Shannon (2005) developed the categories of ‘denial’, ‘anger’, ‘bargaining’, ‘depression’ and ‘acceptance’ when elucidating the stages of grief. Such low-level abstractions could easily be linked to raw data. The predicament of directed QCA begins when the categorisation matrix contains concepts with high levels of abstraction. The gap regarding how to connect highly abstracted categories to raw data should be bridged by a transparent and comprehensive analysis strategy. Therefore, the authors of this paper integrated the methods of directed QCA outlined in the international literature and elaborated them using the phases of ‘preparation’, ‘organisation’ and ‘reporting’ proposed by Elo and Kyngäs (2008). Also, the experiences of the current authors in the field of qualitative research were incorporated into their suggested stepwise method of data analysis. The method is presented using the example of team members' motivation for cardiopulmonary resuscitation (CPR), based on Victor Vroom's expectancy theory (Assarroudi et al., 2017). In this example, interview transcriptions were considered the unit of analysis, because interviews are the most common method of data collection in qualitative studies (Gill et al., 2008).

Suggested method of directed QCA by the authors of this paper

This method consists of 16 steps and three phases, described below: preparation phase (steps 1–7), organisation phase (steps 8–15), and reporting phase (step 16).

The preparation phase:

  • The acquisition of general skills . In the first step, qualitative researchers should develop skills including self-critical thinking, analytical abilities, continuous self-reflection, sensitive interpretive skills, creative thinking, scientific writing, data gathering and self-scrutiny ( Elo et al., 2014 ). Furthermore, they should attain sufficient scientific and content-based mastery of the method chosen for directed QCA. In the proposed example, qualitative researchers can achieve this mastery through conducting investigations in original sources related to Victor Vroom's expectancy theory. Main categories pertaining to Victor Vroom's expectancy theory were ‘expectancy’, ‘instrumentality’ and ‘valence’. This theory defined ‘expectancy’ as the perceived probability that efforts could lead to good performance. ‘Instrumentality’ was the perceived probability that good performance led to desired outcomes. ‘Valence’ was the value that the individual personally placed on outcomes ( Vroom, 1964 , 2005 ).
  • Selection of the appropriate sampling strategy . Qualitative researchers need to select the proper sampling strategies that facilitate an access to key informants on the study phenomenon ( Elo et al., 2014 ). Sampling methods such as purposive, snowball and convenience methods ( Coyne, 1997 ) can be used with the consideration of maximum variations in terms of socio-demographic and phenomenal characteristics ( Sandelowski, 1995 ). The sampling process ends when information ‘redundancy’ or ‘saturation’ is reached. In other words, it ends when all aspects of the phenomenon under study are explored in detail and no additional data are revealed in subsequent interviews ( Cleary et al., 2014 ). In line with this example, nurses and physicians who are the members of the CPR team should be selected, given diversity in variables including age, gender, the duration of work, number of CPR procedures, CPR in different patient groups and motivation levels for CPR.
  • Deciding on the analysis of manifest and/or latent content . Qualitative researchers decide whether the manifest and/or latent contents should be considered for analysis based on the study's aim. The manifest content is limited to the transcribed interview text, but latent content includes both the researchers' interpretations of available text, and participants' silences, pauses, sighs, laughter, posture, etc. ( Elo and Kyngäs, 2008 ). Both types of content are recommended to be considered for data analysis, because a deep understanding of data is preferred for directed QCA ( Thomas and Magilvy, 2011 ).
  • Developing an interview guide . The interview guide contains open-ended questions based on the study's aims, followed by directed questions about main categories extracted from the existing theory or previous research ( Hsieh and Shannon, 2005 ). Directed questions guide how to conduct interviews when using directed or conventional methods. The following open-ended and directed questions were used in this example: An open-ended question was ‘What is in your mind when you are called for performing CPR?’ The directed question for the main category of ‘expectancy’ could be ‘How does the expectancy of the successful CPR procedure motivate you to resuscitate patients?’
  • Conducting and transcribing interviews . An interview guide is used to conduct interviews for directed QCA. After each interview session, the entire interview is transcribed verbatim immediately ( Poland, 1995 ) and with utmost care ( Seidman, 2013 ). Two recorders should be used to ensure data backup ( DiCicco-Bloom and Crabtree, 2006 ). (For more details concerning skills required for conducting successful qualitative interviews, see Edenborough, 2002 ; Kramer, 2011 ; Schostak, 2005 ; Seidman, 2013 ).
  • Specifying the unit of analysis . The unit of analysis may be a person, a program, an organisation, a class, a community, a state, a country, an interview, or a diary written by the researchers ( Graneheim and Lundman, 2004 ). Interview transcriptions are usually considered the unit of analysis when data are collected through interviews. In this example, interview transcriptions and field notes are considered the units of analysis.
  • Immersion in data . The transcribed interviews are read and reviewed several times with the consideration of the following questions: ‘Who is telling?’, ‘Where is this happening?’, ‘When did it happen?’, ‘What is happening?’, and ‘Why?’ ( Elo and Kyngäs, 2008 ). These questions help researchers get immersed in data and become able to extract related meanings ( Elo and Kyngäs, 2008 ; Elo et al., 2014 ).
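The interview-guide step above can be made concrete as a minimal structure pairing one open-ended question with directed questions per main category. The questions are those quoted in the text; the dict layout itself is only an illustrative assumption, and the empty slots mark questions not stated in the example.

```python
# Sketch of an interview guide for directed QCA: one open-ended question,
# followed by directed questions tied to the predetermined main categories
# from Vroom's expectancy theory. Questions are from the CPR example in the
# text; the layout is just one convenient representation.

interview_guide = {
    "open_ended": [
        "What is in your mind when you are called for performing CPR?",
    ],
    "directed": {
        "expectancy": [
            "How does the expectancy of the successful CPR procedure "
            "motivate you to resuscitate patients?",
        ],
        "instrumentality": [],  # to be filled analogously (not given in the text)
        "valence": [],          # to be filled analogously (not given in the text)
    },
}

# Every main category from the theory should have a directed-question slot.
main_categories = ["expectancy", "instrumentality", "valence"]
assert all(cat in interview_guide["directed"] for cat in main_categories)
print(sorted(interview_guide["directed"]))
# ['expectancy', 'instrumentality', 'valence']
```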

The organisation phase:

Table 2. The categorisation matrix of the team members' motivation for CPR.

Motivation for CPR:
 Expectancy | Instrumentality | Valence | Other inductively emerged categories

CPR: cardiopulmonary resuscitation.

  • Theoretical definition of the main categories and subcategories . Derived from the existing theory or previous research, the theoretical definitions of categories should be accurate and objective ( Mayring, 2000 , 2014 ). As for this example, ‘expectancy’ as a main category could be defined as the “subjective probability that the efforts by an individual led to an acceptable level of performance (effort–performance association) or to the desired outcome (effort–outcome association)” ( Van Eerde and Thierry, 1996 ; Vroom, 1964 ).
  • – Expectancy in CPR is a subjective probability formed in the rescuer's mind.
  • – This subjective probability relates to the effort–performance or effort–outcome association perceived by the rescuer.
  • The pre-testing of the categorisation matrix . The categorisation matrix should be tested using a pilot study. This is an essential step, particularly if more than one researcher is involved in the coding process. In this step, qualitative researchers should independently and tentatively encode the text, and discuss the difficulties in the use of the categorisation matrix and differences in the interpretations of the unit of analysis. The categorisation matrix may be further modified as a result of such discussions ( Elo et al., 2014 ). This also can increase inter-coder reliability ( Vaismoradi et al., 2013 ) and the trustworthiness of the study.
  • Choosing and specifying the anchor samples for each main category . An anchor sample is an explicit and concise exemplification, or the identifier of a main category, selected from meaning units ( Mayring, 2014 ). An anchor sample for ‘expectancy’ as the main category of this example could be as follows: ‘… the patient with advanced metastatic cancer who requires CPR … I do not envision a successful resuscitation for him.’
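The pre-testing step can be operationalised with a simple agreement check between two coders: both independently code the same units, and low agreement signals that definitions or coding rules need discussion and revision. The sketch below uses plain percent agreement as a deliberately simple proxy (more robust statistics, such as Cohen's kappa, exist), and the coded data are hypothetical.

```python
# Sketch of pre-testing the categorisation matrix with two coders. Units on
# which the coders disagree are flagged for discussion before the main
# analysis. Data are hypothetical; percent agreement is a simple measure.

coder_a = ["expectancy", "valence", "expectancy", "instrumentality"]
coder_b = ["expectancy", "valence", "valence", "instrumentality"]

def percent_agreement(a: list, b: list) -> float:
    assert len(a) == len(b), "both coders must code the same units"
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

agreement = percent_agreement(coder_a, coder_b)
print(agreement)  # 0.75

# Flag the units to discuss when revising definitions and coding rules.
disagreements = [i for i, (x, y) in enumerate(zip(coder_a, coder_b)) if x != y]
print(disagreements)  # [2]
```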

Table 3. An example of steps taken for the abstraction of the phenomenon of expectancy (main category).

Main category: Expectancy. Generic category: Estimation of the chances of successful CPR. Subcategory: Scientific estimation of life capacity. Group of codes: Estimation of the functional capacity of vital organs.

Meaning unit → summarised meaning unit → preliminary code:
  • ‘The patient with advanced heart failure: I do not envisage a successful resuscitation for him’ → No expectation for the resuscitation of those with advanced heart failure → Cardiovascular conditions that decrease the chance of successful resuscitation.
  • ‘Patients are rarely resuscitated, especially those who experience a cardiogenic shock following a heart attack’ → Low possibility of resuscitation of patients with a cardiogenic shock → Cardiovascular conditions that decrease the chance of successful resuscitation.
  • ‘When ventricular fibrillation is likely, a chance of resuscitation still exists even after performing CPR for 30 minutes’ → The higher chance of resuscitation among patients with ventricular fibrillation → Cardiovascular conditions that increase the chance of successful resuscitation.
  • ‘Patients with sudden cardiac arrest are more likely to be resuscitated through CPR’ → The higher chance of resuscitation among patients with sudden cardiac arrest → Cardiovascular conditions that increase the chance of successful resuscitation.

Further entries under the main category of expectancy include: estimation of the severity of the patient's complications; estimation of remaining life span; intuitive estimation of the chances of successful resuscitation; uncertainty in the estimation; time considerations in resuscitation; and estimation of self-efficacy.

CPR: cardiopulmonary resuscitation.

  • The inductive abstraction of main categories from preliminary codes . Preliminary codes are grouped and categorised according to their meanings, similarities and differences. The products of this categorisation process are known as ‘generic categories’ ( Elo and Kyngäs, 2008 ) ( Table 3 ).
  • The establishment of links between generic categories and main categories . The constant comparison of generic categories and main categories results in the development of a conceptual and logical link between generic and main categories, nesting generic categories into the pre-existing main categories and creating new main categories. The constant comparison technique is applied to data analysis throughout the study ( Zhang and Wildemuth, 2009 ) ( Table 3 ).
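The two steps above can be sketched as two small operations: grouping preliminary codes into a generic category, then nesting that generic category under a pre-existing main category (or creating a new one) via constant comparison. The groupings below are illustrative, echoing the expectancy example.

```python
# Sketch of inductive abstraction (grouping preliminary codes into generic
# categories) and of linking generic categories to main categories. The
# groupings below are illustrative, echoing the expectancy example.

# Inductive abstraction: preliminary codes grouped by meaning.
generic_categories = {
    "Estimation of the chances of successful CPR": [
        "Cardiovascular conditions that decrease the chance of successful resuscitation",
        "Cardiovascular conditions that increase the chance of successful resuscitation",
    ],
}

# Linking: constant comparison nests each generic category under an
# existing main category, or creates a new main category if none fits.
main_categories = {"expectancy": [], "instrumentality": [], "valence": []}
links = {"Estimation of the chances of successful CPR": "expectancy"}

for generic, main in links.items():
    main_categories.setdefault(main, []).append(generic)

print(main_categories["expectancy"])
# ['Estimation of the chances of successful CPR']
```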

The reporting phase:

  • Reporting all steps of directed QCA and findings . This includes a detailed description of the data analysis process and the enumeration of findings ( Elo and Kyngäs, 2008 ). Findings should be systematically presented in such a way that the association between the raw data and the categorisation matrix is clearly shown and easily followed. Detailed descriptions of the sampling process, data collection, analysis methods and participants' characteristics should be presented. The trustworthiness criteria adopted along with the steps taken to fulfil them should also be outlined. Elo et al. (2014) developed a comprehensive and specific checklist for reporting QCA studies.

Trustworthiness

Multiple terms are used in the international literature regarding the validation of qualitative studies ( Creswell, 2013 ). The terms ‘validity’, ‘reliability’, and ‘generalizability’ in quantitative studies are equivalent to ‘credibility’, ‘dependability’, and ‘transferability’ in qualitative studies, respectively ( Polit and Beck, 2013 ). These terms, along with the additional concept of confirmability, were introduced by Lincoln and Guba (1985) . Polit and Beck added the term ‘authenticity’ to the list. Collectively, they are the different aspects of trustworthiness in all types of qualitative studies ( Polit and Beck, 2013 ).

To enhance the trustworthiness of a directed QCA study, researchers should thoroughly delineate the three phases of ‘preparation’, ‘organisation’ and ‘reporting’ ( Elo et al., 2014 ). Such phases are needed to show in detail how categories are developed from data ( Elo and Kyngäs, 2008 ; Graneheim and Lundman, 2004 ; Vaismoradi et al., 2016 ). To accomplish this, appendices, tables and figures may be used to depict the reduction process ( Elo and Kyngäs, 2008 ; Elo et al., 2014 ). Furthermore, an honest account of different realities during data analysis should be provided ( Polit and Beck, 2013 ). The authors of this paper believe that adopting this 16-step method can enhance the trustworthiness of directed QCA.

Directed QCA is used to validate, refine and/or extend a theory or theoretical framework in a new context ( Elo and Kyngäs, 2008 ; Hsieh and Shannon, 2005 ). The purpose of this paper is to provide a comprehensive, systematic, yet simple and applicable method for directed QCA to facilitate its use by novice qualitative researchers.

Despite the current misconceptions regarding the simplicity of QCA and directed QCA, knowledge development is required for conducting them ( Elo and Kyngäs, 2008 ). Directed QCA is often performed on a considerable amount of textual data ( Pope et al., 2000 ). Nevertheless, few studies have discussed the multiple steps that need to be taken to conduct it. In this paper, we have integrated and elaborated the essential steps pointed to by international qualitative researchers on directed QCA, such as ‘preliminary coding’, ‘theoretical definition’ ( Mayring, 2000 , 2014 ), ‘coding rule’, ‘anchor sample’ ( Mayring, 2014 ), ‘inductive analysis in directed qualitative content analysis’ ( Elo and Kyngäs, 2008 ), and ‘pretesting the categorization matrix’ ( Elo et al., 2014 ). Moreover, the authors have added a detailed discussion regarding ‘the use of inductive abstraction’ and ‘linking between generic categories and main categories’.

The importance of directed QCA has increased owing to the development of knowledge and theories derived from inductive QCA and the growing need to test those theories. The directed QCA method proposed in this paper is reliable, transparent and comprehensive; it may increase the rigour of data analysis, allow the comparison of the findings of different studies, and yield practical results.

Abdolghader Assarroudi (PhD, MScN, BScN) is Assistant Professor in Nursing, Department of Medical‐Surgical Nursing, School of Nursing and Midwifery, Sabzevar University of Medical Sciences, Sabzevar, Iran. His main areas of research interest are qualitative research, instrument development study and cardiopulmonary resuscitation.

Fatemeh Heshmati Nabavi (PhD, MScN, BScN) is Assistant Professor in nursing, Department of Nursing Management, School of Nursing and Midwifery, Mashhad University of Medical Sciences, Mashhad, Iran. Her main areas of research interest are medical education, nursing management and qualitative study.

Mohammad Reza Armat (MScN, BScN) graduated from the Mashhad University of Medical Sciences in 1991 with a Bachelor of Science degree in nursing. He completed his Master of Science degree in nursing at Tarbiat Modarres University in 1995. He is an instructor in North Khorasan University of Medical Sciences, Bojnourd, Iran. Currently, he is a PhD candidate in nursing at the Mashhad School of Nursing and Midwifery, Mashhad University of Medical Sciences, Iran.

Abbas Ebadi (PhD, MScN, BScN) is professor in nursing, Behavioral Sciences Research Centre, School of Nursing, Baqiyatallah University of Medical Sciences, Tehran, Iran. His main areas of research interest are instrument development and qualitative study.

Mojtaba Vaismoradi (PhD, MScN, BScN) is a doctoral nurse researcher at the Faculty of Nursing and Health Sciences, Nord University, Bodø, Norway. He works in Nord's research group ‘Healthcare Leadership’ under the supervision of Prof. Terese Bondas. The team currently focuses on conducting meta-synthesis studies in collaboration with international qualitative research experts. His main areas of research interest are patient safety, elderly care and methodological issues in qualitative descriptive approaches. Mojtaba is an associate editor of BMC Nursing and of the journal SAGE Open in the UK.

Key points for policy, practice and/or research

  • In this paper, essential steps pointed to by international qualitative researchers in the field of directed qualitative content analysis were described and integrated.
  • A detailed discussion regarding the use of inductive abstraction, and linking between generic categories and main categories, was presented.
  • A 16-step method of directed qualitative content analysis proposed in this paper is a reliable, transparent, comprehensive, systematic, yet simple and applicable method. It can increase the rigour of data analysis and facilitate its use by novice qualitative researchers.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

The author(s) received no financial support for the research, authorship, and/or publication of this article.

  • Assarroudi A, Heshmati Nabavi F, Ebadi A, et al. (2017) Professional rescuers' experiences of motivation for cardiopulmonary resuscitation: A qualitative study. Nursing & Health Sciences 19(2): 237–243.
  • Berelson B (1952) Content Analysis in Communication Research. Glencoe, IL: Free Press.
  • Cleary M, Horsfall J, Hayter M (2014) Data collection and sampling in qualitative research: Does size matter? Journal of Advanced Nursing 70(3): 473–475.
  • Coyne IT (1997) Sampling in qualitative research. Purposeful and theoretical sampling; merging or clear boundaries? Journal of Advanced Nursing 26(3): 623–630.
  • Creswell JW (2013) Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th edn. Thousand Oaks, CA: SAGE Publications.
  • DiCicco-Bloom B, Crabtree BF (2006) The qualitative research interview. Medical Education 40(4): 314–321.
  • Downe-Wamboldt B (1992) Content analysis: Method, applications, and issues. Health Care for Women International 13(3): 313–321.
  • Edenborough R (2002) Effective Interviewing: A Handbook of Skills and Techniques, 2nd edn. London: Kogan Page.
  • Elo S, Kyngäs H (2008) The qualitative content analysis process. Journal of Advanced Nursing 62(1): 107–115.
  • Elo S, Kääriäinen M, Kanste O, et al. (2014) Qualitative content analysis: A focus on trustworthiness. SAGE Open 4(1): 1–10.
  • Gill P, Stewart K, Treasure E, et al. (2008) Methods of data collection in qualitative research: Interviews and focus groups. British Dental Journal 204(6): 291–295.
  • Graneheim UH, Lundman B (2004) Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness. Nurse Education Today 24(2): 105–112.
  • Hsieh H-F, Shannon SE (2005) Three approaches to qualitative content analysis. Qualitative Health Research 15(9): 1277–1288.
  • Kramer EP (2011) 101 Successful Interviewing Strategies. Boston, MA: Course Technology, Cengage Learning.
  • Lincoln YS, Guba EG (1985) Naturalistic Inquiry. Beverly Hills, CA: SAGE Publications.
  • Mayring P (2000) Qualitative content analysis. Forum: Qualitative Social Research 1(2). Available at: http://www.qualitative-research.net/fqs-texte/2-00/02-00mayring-e.htm (accessed 10 March 2005).
  • Mayring P (2014) Qualitative Content Analysis: Theoretical Foundation, Basic Procedures and Software Solution. Klagenfurt: Monograph. Available at: http://nbn-resolving.de/urn:nbn:de:0168-ssoar-395173 (accessed 10 May 2015).
  • Poland BD (1995) Transcription quality as an aspect of rigor in qualitative research. Qualitative Inquiry 1(3): 290–310.
  • Polit DF, Beck CT (2013) Essentials of Nursing Research: Appraising Evidence for Nursing Practice, 7th edn. China: Lippincott Williams & Wilkins.
  • Pope C, Ziebland S, Mays N (2000) Analysing qualitative data. BMJ 320(7227): 114–116.
  • Sandelowski M (1995) Sample size in qualitative research. Research in Nursing & Health 18(2): 179–183.
  • Schostak J (2005) Interviewing and Representation in Qualitative Research. London: McGraw-Hill/Open University Press.
  • Schreier M (2014) Qualitative content analysis. In: Flick U (ed.) The SAGE Handbook of Qualitative Data Analysis. Thousand Oaks, CA: SAGE Publications, pp. 170–183.
  • Seidman I (2013) Interviewing as Qualitative Research: A Guide for Researchers in Education and the Social Sciences, 3rd edn. New York: Teachers College Press.
  • Thomas E, Magilvy JK (2011) Qualitative rigor or research validity in qualitative research. Journal for Specialists in Pediatric Nursing 16(2): 151–155.
  • Vaismoradi M, Jones J, Turunen H, et al. (2016) Theme development in qualitative content analysis and thematic analysis. Journal of Nursing Education and Practice 6(5): 100–110.
  • Vaismoradi M, Turunen H, Bondas T (2013) Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & Health Sciences 15(3): 398–405.
  • Van Eerde W, Thierry H (1996) Vroom's expectancy models and work-related criteria: A meta-analysis. Journal of Applied Psychology 81(5): 575.
  • Vroom VH (1964) Work and Motivation. New York: Wiley.
  • Vroom VH (2005) On the origins of expectancy theory. In: Smith KG, Hitt MA (eds) Great Minds in Management: The Process of Theory Development. Oxford: Oxford University Press, pp. 239–258.
  • Zhang Y, Wildemuth BM (2009) Qualitative analysis of content. In: Wildemuth B (ed.) Applications of Social Research Methods to Questions in Information and Library Science. Westport, CT: Libraries Unlimited, pp. 308–319.


A Qualitative-Content-Analytical Approach to the Quality of Primary Students’ Questions: Testing a Competence Level Model and Exploring Selected Influencing Factors


1. Introduction

1.1. Students’ Questions: Importance in Educational Contexts and Research Findings

1.2. Classification Systems for Assessing the Quality of (Students’) Questions

1.3. Brinkmann’s Competence Level Model for Analyzing the Level of Abstraction of Students’ Questions

1.4. Aim of the Study

2. Materials and Methods

2.1. Research Design and Sample

2.2. Instruments and Data Analysis

  • Prior knowledge [characteristics: not visible | visible]: Does the question reveal any prior knowledge that goes beyond everyday knowledge?
  • Focus of attention [characteristics: narrow | broad]: Is the focus of attention narrow or broad in terms of the expected response? Does the question relate to a specific detail (narrow focus of attention) or is it necessary to explore many partial aspects to answer it (broad focus of attention)?
  • Intention of conceptual understanding [characteristics: not visible | visible]: Does the question express the intention to fathom causes, discover connections, or understand modes of operation?
  • Philosophical horizon [characteristics: not visible | visible]: Is there a clear answer to this question? Does it touch on topics whose answers cannot be derived from largely established bodies of knowledge? Must we struggle to interpret reality for ourselves?
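The four distinguishing criteria above can be thought of as a coding scheme that maps each question to a competence level. The following sketch is purely illustrative: the `CriteriaProfile` type and the `competence_level` mapping are hypothetical simplifications read off the patterns in the question-type tables, not the authors' actual procedure, and they do not cover every question type (for example, some level-5 types show no philosophical horizon).

```python
# Illustrative sketch only: a simplified reading of the criteria patterns in
# the question-type tables. Not the authors' coding algorithm.

from dataclasses import dataclass


@dataclass(frozen=True)
class CriteriaProfile:
    prior_knowledge: bool        # prior knowledge visible?
    broad_focus: bool            # focus of attention broad (True) vs narrow (False)
    conceptual_intention: bool   # intention of conceptual understanding visible?
    philosophical_horizon: bool  # philosophical horizon visible?


def competence_level(p: CriteriaProfile) -> int:
    """Map a criteria profile to a competence level (simplified, illustrative)."""
    if p.philosophical_horizon:
        return 5
    if p.conceptual_intention and p.broad_focus:
        return 4 if p.prior_knowledge else 3
    if p.prior_knowledge:
        return 2
    return 1


# "How many spines do hedgehogs get?" shows none of the criteria.
hedgehog = CriteriaProfile(False, False, False, False)
# "What is a Faraday cage?" shows prior knowledge, broad focus, conceptual intention.
faraday = CriteriaProfile(True, True, True, False)

print(competence_level(hedgehog))  # 1
print(competence_level(faraday))   # 4
```

Under this simplification, the two worked examples from the article's coding tables (levels 1 and 4) come out as expected, but borderline profiles would need the full question-type catalogue to code correctly.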

3.1. To What Extent Is Brinkmann’s Competence Level Model [ 26 ] Suitable for Analyzing Questions from a Different Sample? What Modifications Are Necessary? (RQ 1)

3.2. Are There Any Indications of Connections between the Identified Competence Levels of the Questions and the Students’ Grade Level? (RQ 2)

  • Grade level 1 = 72 questions
  • Grade level 2 = 53 questions
  • Grade level 3 = 151 questions
  • Grade level 4 = 158 questions

3.3. Are There Any Indications of Connections between the Identified Competence Levels of the Questions and the Subject Matter? (RQ 3)

  • data set “Space_4-a” | grade level 4 | 0 questions
  • data set “Space_4-b” | grade level 4 | 33 questions
  • data set “Space_3” | grade level 3 | 67 questions
  • Brinkmann’s data set [ 26 ] | grade level 3 | 137 questions

4. Discussion

4.1. To What Extent Is Brinkmann’s Competence Level Model [ 26 ] Suitable for Analyzing Questions from a Different Sample? What Modifications Are Necessary? (RQ 1)

4.2. Are There Any Indications of Connections between the Identified Competence Levels of the Questions and the Students’ Grade Level? (RQ 2)

4.3. Are There Any Indications of Connections between the Identified Competence Levels of the Questions and the Subject Matter? (RQ 3)

4.4. Further Aspects of Research Design and Methods

4.5. Implications

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

Data set | subject matter | grade level | designation of the data set [*] | number of questions:

  • 1 | Space | 4 | Space_4-a | 0
  • 2 | Human senses | 3 | Human senses_3 | 5
  • 3 | Human skeleton | 3 | Human skeleton_3 | 6
  • 4 | Water | 4 | Water_4 | 8
  • 5 | Stick insects | 2 | Stick insects_2 | 11
  • 6 | Hedgehogs | 1 | Hedgehogs_1-b | 13
  • 7 | Christmas | 3 | Christmas_3 | 14
  • 8 | Animals | 2 | Animals_2 | 14
  • 9 | Birds | 2 | Birds_2 | 18
  • 10 | Fire | 3 | Fire_3 | 23
  • 11 | Animals | 3 | Animals_3 | 23
  • 12 | Volcanoes | 4 | Volcanoes_4 | 25
  • 13 | Bats | 3 | Bats_3 | 26
  • 14 | Mobility | 2 | Mobility_2 | 28
  • 15 | Hedgehogs | 1 | Hedgehogs_1-a | 28
  • 16 | Rome | 4 | Rome_4 | 28
  • 17 | Electricity | 4 | Electricity_4-b | 31
  • 18 | Animals | 1 | Animals_1 | 33
  • 19 | Space | 4 | Space_4-b | 33
  • 20 | Electricity | 4 | Electricity_4-a | 43
  • 21 | Space | 3 | Space_3 | 67
Question types at competence level 1 (criteria order: prior knowledge | focus of attention | intention of conceptual understanding | philosophical horizon):

  • 1.1 Quartet questions to capture the diversity of the world in a certain system of order (e.g., “How big is the earth?”) | not visible | narrow | not visible | not visible
  • 1.2 Record questions to capture dimensions (superlatives) (e.g., “Which planet is the largest in the entire universe?”) | not visible | narrow | not visible | not visible
  • 1.3 Questions about the geographical classification or spatial differentiation of one’s personal living environment (e.g., “Where are the airports in North Rhine-Westphalia?”) | not visible | narrow | not visible | not visible
  • 1.4 Verification questions (e.g., “Is it possible to land on the sun?”) | not visible | narrow | not visible | not visible
  • 1.5 Questions about names or linguistic derivations to expand knowledge of the world (e.g., “Why is the water called water?”) | not visible | narrow | not visible | not visible
  • 1.6 * Questions on the reconstruction of foreign or historical living environments based on categories of one’s personal living environment (e.g., “How did the Romans live?”) | not visible | narrow | not visible | not visible
  • 1.7 * Questions about (historical) events, personalities, facts or origins (e.g., “When was the war?”) | not visible | narrow | not visible | not visible
  • 1.8 * Questions with the intention of being able to (visually) imagine a concept or phenomenon (e.g., “What does a volcano look like?”) | not visible | narrow | not visible | not visible
Question types at competence level 2 (criteria order: prior knowledge | focus of attention | intention of conceptual understanding | philosophical horizon):

  • 2.1 Quartet questions for advanced learners (e.g., “How big are sunspots?”) | visible | narrow | not visible | not visible
  • 2.2 Expert record questions (e.g., “What is the second most poisonous animal after the poison dart frog?”) | visible | broad | not visible | not visible
  • 2.3 Verification questions (e.g., “Does Uranus have a ring?”) | visible | narrow | not visible | not visible
  • 2.4 Comparison questions to differentiate prior knowledge by comparing two elements (e.g., “Is the sun further away from our earth than the moon?”) | visible | narrow | not visible | not visible
  • 2.5 Decision questions to differentiate prior knowledge against the background of possible cases/scenarios (e.g., “Is the moon light or dark?”) | visible | narrow | not visible | not visible
  • 2.6 Definition questions to understand terms (e.g., “What exactly is a sickle?”) | visible | narrow | not visible | not visible
  • 2.7 Time-and-space questions to further develop the ability to orient oneself in time (e.g., “When did the Middle Ages begin?”) | visible | narrow | not visible | not visible
  • 2.8 Collection questions to gather the most diverse and comprehensive information possible on an aspect (e.g., “What are all the rivers in North Rhine-Westphalia called?”) | not visible | broad | not visible | not visible
  • 2.9 * Questions about (historical) events, personalities, facts, or origins (e.g., “How was Caesar killed?”) | visible | narrow | not visible | not visible
Question types at competence level 3 (criteria order: prior knowledge | focus of attention | intention of conceptual understanding | philosophical horizon):

  • 3.1 Why questions that have a generalizing character and are aimed at regularities (e.g., “Why does the moon always look different?”) | not visible | broad | visible | not visible
  • 3.2 How questions to break down modalities and modes of operation (e.g., “How did the sun come into being and how did the moon and the earth come into being?”) | not visible | broad | visible | not visible
  • 3.3 Questions about the nature of things (e.g., “What is the moon made of?”) | not visible | broad | visible | not visible
  • 3.4 Questions about consequences (e.g., “What is the gravitational pull like when you fly over a planet?”) | not visible | broad | visible | not visible
  • 3.5 Verification questions (e.g., “Did the moon and the sun look different in the past?”) | not visible | broad | visible | not visible
  • 3.6 Time-and-space questions to expand orientation knowledge (e.g., “What have people traded with in the past?”) | not visible | broad | visible | not visible
Question types at competence level 4 (criteria order: prior knowledge | focus of attention | intention of conceptual understanding | philosophical horizon):

  • 4.1 Why questions that have a generalizing character (e.g., “Why does the earth revolve around itself?”) | visible | broad | visible | not visible
  • 4.2 Questions to break down modalities and modes of operation (e.g., “How did the urexplosion go?”) | visible | broad | visible | not visible
  • 4.3 Decision questions (e.g., “Where is the moon? Behind or in front of the earth?”) | visible | broad | visible | not visible
  • 4.4 Expert verification questions (e.g., “Is one half dark because the sun doesn’t shine on it?”) | visible | broad | visible | not visible
  • 4.5 Expert definition questions to understand complex terms or phenomena (e.g., “What does light years mean?”) | visible | broad | visible | not visible
  • 4.6 Time-and-space questions regarding a complex phenomenon in connection with a temporal structure (e.g., “When is there always a new moon?”) | visible | broad | visible | not visible
  • 4.7 Consequence questions for advanced learners to better understand the course of a particular scenario (e.g., “If the sun ever explodes, how will it explode?”) | visible | broad | visible | not visible
Question types at competence level 5 (criteria order: prior knowledge | focus of attention | intention of conceptual understanding | philosophical horizon):

  • 5.1 Questions based on understood technical terms requiring a complex conclusion to answer (e.g., “Why is oxygen only on Earth?”) | visible | broad | visible | not visible
  • 5.2 Questions about the meaning of the nature of the living environment that focus on the “why” of a phenomenon (e.g., “Why does a planet exist if you can’t stand on it?”) | visible | broad | visible | visible
  • 5.3 Questions from a particular perspective (future significance, evaluations, etc.) that seek clarity about connections or patterns of interpretation in order to understand and categorize processes (e.g., “What happens if the rainforest is destroyed?”) | visible | broad | visible | not visible
  • 5.4 Questions about the whence and whither of humankind or of a philosophical nature (e.g., “Will people live on the other planets in the future?”) | not visible | broad | visible | visible
  • Wu, L.; Liu, Y.; How, M.-L.; He, S. Investigating Student-Generated Questioning in a Technology-Enabled Elementary Science Classroom: A Case Study. Educ. Sci. 2023 , 13 , 158. [ Google Scholar ] [ CrossRef ]
  • Niegemann, H. Lernen und Fragen: Bilanz und Perspektiven der Forschung. Unterrichtswissenschaft 2004 , 32 , 345–356. [ Google Scholar ] [ CrossRef ]
  • Neber, H. Fragenstellen. In Handbuch Lernstrategien ; Mandl, H., Friedrich, H.F., Eds.; Hogrefe: Göttingen, Germany, 2006; pp. 50–58. ISBN 978-3-8017-1813-8. [ Google Scholar ]
  • Aflalo, E. Students generating questions as a way of learning. Act. Learn. High. Educ. 2021 , 22 , 63–75. [ Google Scholar ] [ CrossRef ]
  • Pallesen, H.; Hörnlein, M. Warum Schüler*innen keine Fragen stellen.: Unterricht zwischen Sozialisation zur Fraglosigkeit und Bildungsanspruch. In Kinderperspektiven im Unterricht: Zur Ambivalenz der Anschaulichkeit ; Rumpf, D., Winter, S., Eds.; Springer VS: Wiesbaden, Germany, 2019; pp. 3–10. ISBN 978-3-658-22432-5. [ Google Scholar ]
  • Kultusministerkonferenz. Empfehlungen zur Arbeit in der Grundschule. Available online: https://www.kmk.org/fileadmin/pdf/PresseUndAktuelles/2015/Empfehlung_350_KMK_Arbeit_Grundschule_01.pdf (accessed on 29 May 2024).
  • Department for Education. The National Curriculum in England: Key Stages 1 and 2 Framework Document. Available online: https://assets.publishing.service.gov.uk/media/5a81a9abe5274a2e8ab55319/PRIMARY_national_curriculum.pdf (accessed on 29 May 2024).
  • OECD. The Future of Education and Skills. Education. 2023. Available online: https://www.oecd.org/en/about/projects/future-of-education-and-skills-2030.html (accessed on 29 May 2024).
  • Lombardi, L.; Mednick, F.J.; Backer, F.D.; Lombaerts, K. Fostering Critical Thinking across the Primary School’s Curriculum in the European Schools System. Educ. Sci. 2021 , 11 , 505. [ Google Scholar ] [ CrossRef ]
  • Spencer, A.G.; Causey, C.B.; Ernest, J.M.; Barnes, G.F. Using Student Generated Questions to Foster Twenty-First Century Learning: International Collaboration in Uganda. Excell. Educ. J. 2020 , 9 , 57–84. [ Google Scholar ]
  • Chin, C.; Osborne, J. Students’ questions: A potential resource for teaching and learning science. Stud. Sci. Educ. 2008 , 44 , 1–39. [ Google Scholar ] [ CrossRef ]
  • Miller, S.; Brinkmann, V. SchülerInnenfragen im Mittelpunkt des Sachunterrichts. In Sachunterricht in der Grundschule entwickeln—Gestalten—Reflektieren ; Gläser, E., Schönknecht, G., Eds.; Grundschulverband: Frankfurt am Main, Germany, 2013; pp. 226–241. ISBN 9783941649095. [ Google Scholar ]
  • Schilling, Y.; Kuckuck, M. Das Anregen und Berücksichtigen von Schüler * innenfragen im Sachunterricht: Impulse für eine vielperspektivische Professionalisierungsgelegenheit im Studium. Widerstreit Sachunterricht 2024 , 28 , 1–10. [ Google Scholar ] [ CrossRef ]
  • Schmeinck, D.; Kidman, G. The Integrated Nature of Geography Education in German and Australian Primary Schools. In Teaching Primary Geography: Setting the Foundation ; Kidman, G., Schmeinck, D., Eds.; Springer Nature AG: Cham, Switzerland, 2022; pp. 15–27. ISBN 978-3-030-99970-4. [ Google Scholar ]
  • Schomaker, C.; Tänzer, S. Sachunterrichtsdidaktik: Bestandsaufnahme und Forschungsperspektiven. In Lernen im Fach und über das Fach hinaus: Bestandsaufnahmen und Forschungsperspektiven aus 17 Fachdidaktiken im Vergleich , 2nd ed.; Rothgangel, M., Abraham, U., Bayrhuber, H., Frederking, V., Jank, W., Vollmer, H.J., Eds.; Waxmann: Münster, Germany; New York, NY, USA, 2021; pp. 363–390. ISBN 9783830993070. [ Google Scholar ]
  • Meschede, N.; Hartinger, A.; Möller, K. Sachunterricht in der Lehrerinnen- und Lehrerbildung: Rahmenbedingungen, Befunde und Perspektiven. In Handbuch Lehrerinnen- und Lehrerbildung ; Cramer, C., König, J., Rothland, M., Blömeke, S., Eds.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2020; pp. 541–548. ISBN 9783838554730. [ Google Scholar ]
  • Gesellschaft für Didaktik des Sachunterrichts. Perspektivrahmen Sachunterricht , 2nd ed.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2013; ISBN 978-3-7815-1992-3. [ Google Scholar ]
  • Schilling, Y.; Beudels, M.; Kuckuck, M.; Preisfeld, A. Sachunterrichtsbezogene Teilstudiengänge aus NRW auf dem Prüfstand: Eine Dokumentenanalyse der Bachelor-und Masterprüfungsordnungen. Herausford. Lehr. *Innenbildung 2021 , 4 , 178–195. [ Google Scholar ] [ CrossRef ]
  • Beudels, M.M.; Damerau, K.; Preisfeld, A. Effects of an Interdisciplinary Course on Pre-Service Primary Teachers’ Content Knowledge and Academic Self-Concepts in Science and Technology–A Quantitative Longitudinal Study. Educ. Sci. 2021 , 11 , 744. [ Google Scholar ] [ CrossRef ]
  • Kahlert, J.; Fölling-Albers, M.; Götz, M.; Hartinger, A.; Miller, S.; Wittkowske, S. (Eds.) Handbuch Didaktik des Sachunterrichts , 3rd ed.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2022; ISBN 978-3-8385-8801-8. [ Google Scholar ]
  • Peschel, M.; Mammes, I. Der Sachunterricht und die Didaktik des Sachunterrichts als besondere Herausforderung für die Professionalisierung von Grundschullehrkräften. In Professionalisierung von Grundschullehrkräften: Kontext, Bedingungen und Herausforderungen ; Mammes, I., Rotter, C., Eds.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2022; pp. 188–203. ISBN 978-3-7815-5949-3. [ Google Scholar ]
  • Schröer, F.; Tenberge, C. Theorien und Konzeptionen inklusiven Sachunterrichts. In Inklusive (Fach-)Didaktik in der Primarstufe: Ein Lehrbuch ; Dexel, T., Ed.; Waxmann: Münster, Germany; New York, NY, USA, 2022; pp. 158–185. ISBN 9783838556864. [ Google Scholar ]
  • Simon, T. Vielperspektivität und Partizipation als interdependente und konstitutive Merkmale einer inklusionsorientierten Sachunterrichtsdidaktik. In Ich und Welt verknüpfen: Allgemeinbildung, Vielperspektivität, Partizipation und Inklusion im Sachunterricht ; Siebach, M., Simon, J., Simon, T., Eds.; Schneider Verlag Hohengehren GmbH: Baltmannsweiler, Germany, 2019; pp. 66–76. ISBN 9783834019516. [ Google Scholar ]
  • Praetorius, A.-K.; Martens, M.; Brinkmann, M. Unterrichtsqualität aus Sicht der quantitativen und qualitativen Unterrichtsforschung. In Handbuch Schulforschung ; Hascher, T., Idel, T.-S., Helsper, W., Eds.; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2020; pp. 1–20. ISBN 978-3-658-24734-8. [ Google Scholar ]
  • Wuttke, E. Unterrichtskommunikation und Wissenserwerb: Zum Einfluss von Kommunikation auf den Prozess der Wissensgenerierung ; Lang: Frankfurt am Main, Germany, 2005; ISBN 3-631-53832-4. [ Google Scholar ]
  • Brinkmann, V. Fragen Stellen an die Welt: Eine Untersuchung zur Kompetenzentwicklung in Einem an den Schülerfragen Orientierten Sachunterricht ; Schneider Verlag Hohengehren: Baltmannsweiler, Germany, 2019; ISBN 9783834019233. [ Google Scholar ]
  • Chin, C.; Brown, D.E.; Bruce, B.C. Student-generated questions: A meaningful aspect of learning in science. Int. J. Sci. Educ. 2002 , 24 , 521–549. [ Google Scholar ] [ CrossRef ]
  • van der Meij, H.; Karabenick, S.A. The great divide between teacher and student questioning. In Strategic Help Seeking: Implications for Learning and Teaching ; Karabenick, S.A., Ed.; L. Erlbaum Associates: Mahwah, NJ, USA, 1998; pp. 195–218. ISBN 9780805823844. [ Google Scholar ]
  • Levin, A. Lernen durch Fragen: Wirkung von strukturierenden Hilfen auf das Generieren von Studierendenfragen als Begleitende Lernstrategie ; Waxmann: Münster, Germany, 2005; ISBN 9783830914730. [ Google Scholar ]
  • Otero, J.; Graesser, A.C. PREG: Elements of a Model of Question Asking. Cogn. Instr. 2001 , 19 , 143–175. [ Google Scholar ] [ CrossRef ]
  • Levin, A.; Arnold, K.-H. Aktives Fragenstellen im Hochschulunterricht: Effekte des Vorwissens auf den Lernerfolg. Unterrichtswissenschaft 2004 , 32 , 295–307. [ Google Scholar ] [ CrossRef ]
  • Aguiar, O.G.; Mortimer, E.F.; Scott, P. Learning from and responding to students’ questions: The authoritative and dialogic tension. J. Res. Sci. Teach. 2010 , 47 , 174–193. [ Google Scholar ] [ CrossRef ]
  • Ritz-Fröhlich, G. Kinderfragen im Unterricht ; Klinkhardt: Bad Heilbrunn, Germany, 1992; ISBN 3781507114. [ Google Scholar ]
  • Graesser, A.C.; Person, N.K. Question Asking During Tutoring. Am. Educ. Res. J. 1994 , 31 , 104–137. [ Google Scholar ] [ CrossRef ]
  • Moore, S.; Nguyen, H.A.; Bier, N.; Domadia, T.; Stamper, J. Assessing the Quality of Student-Generated Short Answer Questions Using GPT-3. In Educating for a New Future: Making Sense of Technology-Enhanced Learning Adoption ; Hilliger, I., Muñoz-Merino, P.J., Laet, T.d., Ortega-Arranz, A., Farrell, T., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 243–257. ISBN 978-3-031-16289-3. [ Google Scholar ]
  • Niegemann, H.; Stadler, S. Hat noch jemand eine Frage? Systematische Unterrichtsbeobachtung zu Häufigkeit und kognitivem Niveau von Fragen im Unterricht. Unterrichtswissenschaft 2001 , 29 , 171–192. [ Google Scholar ] [ CrossRef ]
  • Graesser, A.C.; Person, N.K.; Huber, J. Mechanisms that Generate Questions. In Questions and Information Systems ; Lauer, T.W., Peacock, E., Graesser, A.C., Eds.; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1992; pp. 167–187. ISBN 9780805810189. [ Google Scholar ]
  • Scardamalia, M.; Bereiter, C. Text-Based and Knowledge-Based Questioning by Children. Cogn. Instr. 1992 , 9 , 177–199. [ Google Scholar ] [ CrossRef ]
  • Marton, F.; Booth, S. Learning and Awareness ; Routledge: New York, NY, USA, 1997; ISBN 9780805824551. [ Google Scholar ]
  • Creswell, J.W. Research design: Qualitative, Quantitative, and Mixed Methods Approaches , 3rd ed.; SAGE: Los Angeles, CA, USA, 2009; ISBN 9781412965569. [ Google Scholar ]
  • Pajo, B. Introduction to Research Methods: A Hands-on Approach , 2nd ed.; SAGE: Los Angeles, CA, USA, 2018; ISBN 9781483386959. [ Google Scholar ]
  • Statistisches Bundesamt. Statistischer Bericht. Allgemeinbildende Schulen. Available online: https://www.destatis.de/DE/Themen/Gesellschaft-Umwelt/Bildung-Forschung-Kultur/Schulen/Publikationen/Downloads-Schulen/statistischer-bericht-allgemeinbildende-schulen-2110100237005.xlsx?__blob=publicationFile (accessed on 29 May 2024).
  • Döring, N. Forschungsmethoden und Evaluation in den Sozial-und Humanwissenschaften , 6th ed.; Springer: Berlin/Heidelberg, Germany, 2023; ISBN 978-3-662-64762-2. [ Google Scholar ]
  • Ministerium für Schule und Bildung des Landes Nordrhein-Westfalen. Die Grundschule in Nordrhein-Westfalen. Informationen für Eltern. Available online: https://broschuerenservice.nrw.de/msb-duesseldorf/files?download_page=0&product_id=293&files=3/a/3a2910637f9ff37401346e40aea0aa5b.pdf (accessed on 29 May 2024).
  • Flick, U. (Ed.) The SAGE Handbook of Qualitative Research Design ; SAGE Publications Limited: London, UK, 2022; ISBN 9781529766943. [ Google Scholar ]
  • Kuckartz, U.; Rädiker, S. Qualitative Content Analysis: Methods, Practice and Software , 2nd ed.; SAGE: Los Angeles, CA, USA, 2023; ISBN 978-1-5296-0913-4. [ Google Scholar ]
  • Mayring, P. Einführung in die Qualitative Sozialforschung: Eine Anleitung zu Qualitativem Denken , 7th ed.; Beltz: Weinheim, Germany; Basel, Switzerland, 2023; ISBN 9783407296016. [ Google Scholar ]
  • Wirtz, M.A.; Caspar, F. Beurteilerübereinstimmung und Beurteilerreliabilität: Methoden zur Bestimmung und Verbesserung der Zuverlässigkeit von Einschätzungen mittels Kategoriensystemen und Ratingskalen ; Hogrefe: Göttingen, Germany; Bern, Switzerland; Toronto, ON, Canada; Seattle, DC, USA, 2002; ISBN 3801716465. [ Google Scholar ]
  • Landis, J.R.; Koch, G.G. The Measurement of Observer Agreement for Categorical Data. Biometrics 1977 , 33 , 159. [ Google Scholar ] [ CrossRef ]
  • Vogl, S. Quantifizierung. Köln Z Soziol 2017 , 69 , 287–312. [ Google Scholar ] [ CrossRef ]
  • Kuckartz, U. Mixed Methods: Methodologie, Forschungsdesigns und Analyseverfahren ; Springer VS: Wiesbaden, Germany, 2014; ISBN 978-3-531-93267-5. [ Google Scholar ]
  • Kelle, U.; Erzberger, C. Qualitative und quantitative Methoden: Kein Gegensatz. In Qualitative Forschung: Ein Handbuch , 14th ed.; Flick, U., Kardorff, E., von Steinke, I., Eds.; Rowohlts Enzyklopädie im Rowohlt Taschenbuch Verlag: Reinbek bei Hamburg, Germany, 2022; pp. 299–309. ISBN 9783499556289. [ Google Scholar ]
  • Kaptein, M.; van den Heuvel, E. Statistics for Data Scientists: An Introduction to Probability, Statistics, and Data Analysis ; Springer International Publishing: Cham, Switzerland, 2022; ISBN 978-3-030-10531-0. [ Google Scholar ]
  • Raithel, J. Quantitative Forschung: Ein Praxiskurs , 2nd ed.; VS Verlag für Sozialwissenschaften: Wiesbaden, Germany, 2008; ISBN 978-3-531-91148-9. [ Google Scholar ]
  • Cohen, J. Statistical Power Analysis. Curr Dir Psychol Sci 1992 , 1 , 98–101. [ Google Scholar ] [ CrossRef ]
  • Schilling, Y.; Molitor, A.-L.; Ritter, R.; Schellenbach-Zell, J. Anregung von Wissensvernetzung bei Lehramtsstudierenden mithilfe von Core Practices. In Vernetzung von Wissen bei Lehramtsstudierenden—Eine Black-Box für die Professionalisierungsforschung? Wehner, A., Masanek, N., Hellmann, K., Grospietsch, F., Heinz, T., Glowinski, I., Eds.; Klinkhardt: Bad Heilbrunn, Germany, 2024. [ Google Scholar ]
  • Krauss, S. Expertise-Paradigma in der Lehrerinnen- und Lehrerbildung. In Handbuch Lehrerinnen- und Lehrerbildung ; Cramer, C., König, J., Rothland, M., Blömeke, S., Eds.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2020; pp. 154–162. ISBN 9783838554730. [ Google Scholar ]
  • Gruber, H.; Stöger, H. Experten-Novizen-Paradigma. In Unterrichtsgestaltung als Gegenstand der Wissenschaft ; Kiel, E., Ed.; Schneider Hohengehren: Baltmannsweiler, Germany, 2011; pp. 247–264. ISBN 9783834008923. [ Google Scholar ]
  • Helmke, A. Unterrichtsqualität und Professionalisierung: Diagnostik von Lehr-Lern-Prozessen und evidenzbasierte Unterrichtsentwicklung ; Klett Kallmeyer: Hannover, Germany, 2022; ISBN 9783772716850. [ Google Scholar ]
  • Prengel, A. Didaktische Diagnostik als Element alltäglicher Lehrarbeit—“Formatives Assessment” im inklusiven Unterricht. In Diagnostik im Kontext inklusiver Bildung: Theorien, Ambivalenzen, Akteure, Konzepte ; Amrhein, B., Ed.; Verlag Julius Klinkhardt: Bad Heilbrunn, Germany, 2016; pp. 49–63. ISBN 9783781554610. [ Google Scholar ]
  • Ernst, K. Den Fragen der Kinder nachgehen. Die Grund. 1996 , 98 , 7–11. [ Google Scholar ]
  • Brinkmann, V. “Werden die Pflanzen trotzdem angebaut, auch wenn es der Umwelt schadet, sie zu pflegen?”—Schülerfragen zum Thema Landwirtschaft. In Landwirtschaft im Sachunterricht: Mehr als ein Ausflug auf den Bauernhof?! Schneider, K., Queisser, U., Eds.; wbv Media GmbH & Co. KG: Bielefeld, Germany, 2022; pp. 53–73. ISBN 9783763967209. [ Google Scholar ]
  • Mueller, R.G.W. Making Them Fit: Examining Teacher Support for Student Questioning. Soc. Stud. Res. Pract. 2016 , 11 , 40–55. [ Google Scholar ] [ CrossRef ]
  • Rothstein, D.; Santana, L. Make Just One Change: Teach Students to Ask Their Own Questions ; Harvard Education Press: Cambridge, MA, USA, 2011; ISBN 9781612500997. [ Google Scholar ]
  • Godinho, S.; Wilson, J. Helping Your Pupils to Ask Questions ; Routledge: London, UK, 2016; ISBN 9780415447270. [ Google Scholar ]


Coding example: Question “How Many Spines Do Hedgehogs Get?”

  • Prior knowledge: ☒ not visible ☐ visible
  • Focus of attention: ☒ narrow ☐ broad
  • Intention of conceptual understanding: ☒ not visible ☐ visible
  • Philosophical horizon: ☒ not visible ☐ visible
  • Competence level: ☐ 0 ☒ 1 ☐ 2 ☐ 3 ☐ 4 ☐ 5
  • Question type: 1.1 Quartet questions to capture the diversity of the world in a certain system of order

Coding example: Question “What Is a Faraday Cage?”

  • Prior knowledge: ☐ not visible ☒ visible
  • Focus of attention: ☐ narrow ☒ broad
  • Intention of conceptual understanding: ☐ not visible ☒ visible
  • Philosophical horizon: ☒ not visible ☐ visible
  • Competence level: ☐ 0 ☐ 1 ☐ 2 ☐ 3 ☒ 4 ☐ 5
  • Question type: 4.5 Expert definition questions to understand complex terms or phenomena

Schilling, Y.; Hillebrand, L.; Kuckuck, M. A Qualitative-Content-Analytical Approach to the Quality of Primary Students’ Questions: Testing a Competence Level Model and Exploring Selected Influencing Factors. Educ. Sci. 2024 , 14 , 1003. https://doi.org/10.3390/educsci14091003


Library Support for Qualitative Research


Qualitative data analysis methods should flow from, or align with, the methodological paradigm chosen for your study, whether that paradigm is interpretivist, critical, positivist, or participative in nature (or a combination of these). Some established methods include Content Analysis, Critical Analysis, Discourse Analysis, Gestalt Analysis, Grounded Theory Analysis, Interpretive Analysis, Narrative Analysis, Normative Analysis, Phenomenological Analysis, Rhetorical Analysis, and Semiotic Analysis, among others. The following resources should help you navigate your methodological options and put into practice methods for coding, themeing, interpreting, and presenting your data.
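One of the methods listed above, Content Analysis, often combines coding with frequency counts. The sketch below is a minimal, deductive-coding illustration: text segments are assigned to predefined categories via keyword rules and the category frequencies are tabulated. The category names, keywords, and example segments are invented for illustration; real codebooks are developed from theory or the data and applied by human coders or QDA software.

```python
# Minimal sketch of deductive coding for content analysis.
# Codebook categories and keywords here are invented for illustration.

from collections import Counter

codebook = {
    "employment": {"unemployment", "jobs", "work"},
    "economy": {"economy", "inflation", "growth"},
}


def code_segment(segment: str) -> list[str]:
    """Return every codebook category whose keywords appear in the segment."""
    tokens = {word.strip(".,!?").lower() for word in segment.split()}
    return [cat for cat, keywords in codebook.items() if tokens & keywords]


segments = [
    "Unemployment is falling and new jobs are being created.",
    "The economy needs growth before anything else.",
    "We will work for every family.",
]

# Tabulate how often each category was applied across all segments.
counts = Counter(cat for seg in segments for cat in code_segment(seg))
print(counts["employment"])  # 2
print(counts["economy"])     # 1
```

Keyword matching like this only approximates what a human coder does; in practice each coding decision is checked in context, and intercoder agreement is assessed before frequencies are interpreted.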

  • Users can browse content by topic, discipline, or format type (reference works, book chapters, definitions, etc.). SRM offers several research tools as well: a methods map, user-created reading lists, a project planner, and advice on choosing statistical tests.  
  • Abductive Coding: Theory Building and Qualitative (Re)Analysis by Vila-Henninger, et al.  The authors recommend an abductive approach to guide qualitative researchers who are oriented towards theory-building. They outline a set of tactics for abductive analysis, including the generation of an abductive codebook, abductive data reduction through code equations, and in-depth abductive qualitative analysis.  
  • Analyzing and Interpreting Qualitative Research: After the Interview by Charles F. Vanover, Paul A. Mihas, and Johnny Saldana (Editors)   Providing insight into the wide range of approaches available to the qualitative researcher and covering all steps in the research process, the authors utilize a consistent chapter structure that provides novice and seasoned researchers with pragmatic, "how-to" strategies. Each chapter author introduces the method, uses one of their own research projects as a case study of the method described, shows how the specific analytic method can be used in other types of studies, and concludes with three questions/activities to prompt class discussion or personal study.   
  • "Analyzing Qualitative Data." Theory Into Practice 39, no. 3 (2000): 146-54 by Margaret D. LeCompte   This article walks readers through rules for unbiased data analysis and provides guidance for getting organized, finding items, creating stable sets of items, creating patterns, assembling structures, and conducting data validity checks.  
  • "Coding is Not a Dirty Word" in Chapter 1 (pp. 1–30) of Enhancing Qualitative and Mixed Methods Research with Technology by Shalin Hai-Jew (Editor)   Current discourses in qualitative research, especially those situated in postmodernism, represent coding and the technology that assists with coding as reductive, lacking complexity, and detached from theory. In this chapter, the author presents a counter-narrative to this dominant discourse in qualitative research. The author argues that coding is not necessarily devoid of theory, nor does the use of software for data management and analysis automatically render scholarship theoretically lightweight or barren. A lack of deep analytical insight is a consequence not of software but of epistemology. Using examples informed by interpretive and critical approaches, the author demonstrates how NVivo can provide an effective tool for data management and analysis. The author also highlights ideas for critical and deconstructive approaches in qualitative inquiry while using NVivo. By troubling the positivist discourse of coding, the author seeks to create dialogic spaces that integrate theory with technology-driven data management and analysis, while maintaining the depth and rigor of qualitative research.   
  • The Coding Manual for Qualitative Researchers by Johnny Saldana   An in-depth guide to the multiple approaches available for coding qualitative data. Clear, practical and authoritative, the book profiles 32 coding methods that can be applied to a range of research genres from grounded theory to phenomenology to narrative inquiry. For each approach, Saldaña discusses the methods, origins, a description of the method, practical applications, and a clearly illustrated example with analytic follow-up. Essential reading across the social sciences.  
  • Flexible Coding of In-depth Interviews: A Twenty-first-century Approach by Nicole M. Deterding and Mary C. Waters The authors suggest steps in data organization and analysis to better utilize qualitative data analysis technologies and support rigorous, transparent, and flexible analysis of in-depth interview data.  
  • From the Editors: What Grounded Theory is Not by Roy Suddaby Walks readers through common misconceptions that hinder grounded theory studies, reinforcing the two key concepts of the grounded theory approach: (1) constant comparison of data gathered throughout the data collection process and (2) the determination of which kinds of data to sample in succession based on emergent themes (i.e., "theoretical sampling").  
  • “Good enough” methods for life-story analysis, by Wendy Luttrell. In Quinn N. (Ed.), Finding culture in talk (pp. 243–268). Demonstrates for researchers of culture and consciousness who use narrative how to concretely document reflexive processes in terms of where, how and why particular decisions are made at particular stages of the research process.   
  • The Ethnographic Interview by James P. Spradley  “Spradley wrote this book for the professional and student who have never done ethnographic fieldwork (p. 231) and for the professional ethnographer who is interested in adapting the author’s procedures (p. iv) ... Steps 6 and 8 explain lucidly how to construct a domain and a taxonomic analysis” (excerpted from book review by James D. Sexton, 1980). See also:  Presentation slides on coding and themeing your data, derived from Saldana, Spradley, and LeCompte Click to request access.  
  • Qualitative Data Analysis by Matthew B. Miles; A. Michael Huberman   A practical sourcebook for researchers who make use of qualitative data, presenting the current state of the craft in the design, testing, and use of qualitative analysis methods. Strong emphasis is placed on data displays (matrices and networks) that go beyond ordinary narrative text. Each method of data display and analysis is described and illustrated.  
  • "A Survey of Qualitative Data Analytic Methods" in Chapter 4 (pp. 89–138) of Fundamentals of Qualitative Research by Johnny Saldana   Provides an in-depth introduction to coding as a heuristic, particularly focusing on process coding, in vivo coding, descriptive coding, values coding, dramaturgical coding, and versus coding. Includes advice on writing analytic memos, developing categories, and themeing data.   
  • "Thematic Networks: An Analytic Tool for Qualitative Research." Qualitative Research : QR, 1(3), 385–405 by Jennifer Attride-Stirling Details a technique for conducting thematic analysis of qualitative material, presenting a step-by-step guide of the analytic process, with the aid of an empirical example. The analytic method presented employs established, well-known techniques; the article proposes that thematic analyses can be usefully aided by and presented as thematic networks.  
  • Using Thematic Analysis in Psychology by Virginia Braun and Victoria Clarke Walks readers through the process of reflexive thematic analysis, step by step. The method may be adapted in fields outside of psychology as relevant. Pair this with One Size Fits All? What Counts as Quality Practice in Reflexive Thematic Analysis? by Virginia Braun and Victoria Clarke

Data visualization can be employed formatively, to aid your data analysis, or summatively, to present your findings. Many qualitative data analysis (QDA) software platforms, such as NVivo , feature search functionality and data visualization options within them to aid data analysis during the formative stages of your project.

For expert assistance creating data visualizations to present your research, Harvard Library offers Visualization Support . Get help and training with data visualization design and tools—such as Tableau—for the Harvard community. Workshops and one-on-one consultations are also available.

The quality of your data analysis depends on how you situate what you learn within a wider body of knowledge. Consider the following advice:

A good literature review has many obvious virtues. It enables the investigator to define problems and assess data. It provides the concepts on which percepts depend. But the literature review has a special importance for the qualitative researcher. This consists of its ability to sharpen his or her capacity for surprise (Lazarsfeld, 1972b). The investigator who is well versed in the literature now has a set of expectations the data can defy. Counterexpectational data are conspicuous, readable, and highly provocative data. They signal the existence of unfulfilled theoretical assumptions, and these are, as Kuhn (1962) has noted, the very origins of intellectual innovation. A thorough review of the literature is, to this extent, a way to manufacture distance. It is a way to let the data of one's research project take issue with the theory of one's field.

- McCracken, G. (1988), The Long Interview, Sage: Newbury Park, CA, p. 31

Once you have coalesced around a theory, realize that a theory should  reveal  rather than  color  your discoveries. Allow your data to guide you to what's most suitable. Grounded theory  researchers may develop their own theory where current theories fail to provide insight.  This guide on Theoretical Models  from Alfaisal University Library provides a helpful overview on using theory.

If you'd like to supplement what you learned about relevant theories through your coursework and literature review, try these sources:

  • Annual Reviews   Review articles sum up the latest research in many fields, including social sciences, biomedicine, life sciences, and physical sciences. These are timely collections of critical reviews written by leading scientists.  
  • HOLLIS - search for resources on theories in your field   Modify this example search by entering the name of your field in place of "your discipline," then hit search.  
  • Oxford Bibliographies   Written and reviewed by academic experts, every article in this database is an authoritative guide to the current scholarship in a variety of fields, containing original commentary and annotations.  
  • ProQuest Dissertations & Theses (PQDT)   Indexes dissertations and master's theses from most North American graduate schools as well as some European universities. Provides full text for most indexed dissertations from 1990-present.  
  • Very Short Introductions   Launched by Oxford University Press in 1995, Very Short Introductions offer concise introductions to a diverse range of subjects from Climate to Consciousness, Game Theory to Ancient Warfare, Privacy to Islamic History, Economics to Literary Theory.

Except where otherwise noted, this work is subject to a Creative Commons Attribution 4.0 International License , which allows anyone to share and adapt our material as long as proper attribution is given. For details and exceptions, see the Harvard Library Copyright Policy ©2021 Presidents and Fellows of Harvard College.


Qualitative Content Analysis Coding: A Step-by-Step Guide

Insight7


Textual Data Analysis serves as a powerful tool for understanding what lies beneath the surface of communication. In today’s information-driven world, organizations generate vast amounts of textual data, often stemming from customer interactions or internal discussions. Effectively analyzing this data can provide critical insights that drive decision-making and improve strategies.

This process involves systematically coding qualitative content to identify patterns, themes, and trends. By breaking down text into manageable parts, researchers can gain a deeper understanding of sentiments and motivations. Ultimately, mastering textual data analysis is essential for any organization aiming to convert raw data into actionable information, fostering data-informed decision-making across various domains.

Understanding Qualitative Content Analysis Coding

Qualitative content analysis coding is a systematic approach to interpreting textual data. This method enables researchers to dissect large volumes of text, identifying patterns, themes, and meaning in qualitative information. By organizing and coding this data, researchers can unearth insights that inform decision-making, enhance understanding, and illuminate complex social phenomena.

In this process, researchers typically follow several key steps. First, they familiarize themselves with the textual data, reading and re-reading to understand its context deeply. Next, they begin coding by tagging segments of text with labels. These codes can be descriptive or analytical, facilitating deeper analysis. As patterns emerge, researchers can categorize these codes into themes, ultimately leading to a more nuanced understanding of the collected data. By utilizing qualitative content analysis coding, researchers can transform raw data into actionable insights, driving effective conclusions and informed strategies.
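The tagging-then-themeing workflow described above can be sketched in a few lines of Python. The segments, codes, and themes here are invented for illustration; in real projects a human analyst assigns the codes, usually inside QDA software.

```python
from collections import defaultdict

# Hypothetical interview excerpts (segments) standing in for real data.
segments = [
    "I worry about losing my job every quarter.",
    "My manager never explains why decisions are made.",
    "Retraining programs helped me feel more secure.",
    "Leadership communicates only through memos.",
]

# Step 2: tag each segment with one or more descriptive codes.
# An analyst assigns these; the labels below are illustrative.
coded = [
    (segments[0], ["job insecurity"]),
    (segments[1], ["communication breakdown"]),
    (segments[2], ["job insecurity", "coping"]),
    (segments[3], ["communication breakdown"]),
]

# Step 3: as patterns emerge, group codes into broader themes.
themes = {
    "employment anxiety": {"job insecurity", "coping"},
    "organizational communication": {"communication breakdown"},
}

# Collect segments under each theme for review.
by_theme = defaultdict(list)
for text, codes in coded:
    for theme, members in themes.items():
        if members & set(codes):
            by_theme[theme].append(text)

for theme, texts in by_theme.items():
    print(f"{theme}: {len(texts)} segments")
```

The useful property of this structure is that codes and themes stay separate: themes can be reorganized repeatedly without re-tagging the underlying segments.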

Setting Up for Textual Data Analysis

Preparing for textual data analysis is crucial for conducting effective qualitative content analysis. Start by gathering the textual data you need. This may include interviews, focus group discussions, or other written material relevant to your study. Once you have collected the data, ensure it is well-organized and formatted for analysis, allowing you easy access during the coding process.

Next, develop a coding framework. This consists of categories and subcategories that represent the themes or patterns you intend to study. You can create this framework based on preliminary readings of your text or established theories relevant to your research. Engaging other team members during this phase can provide additional perspectives and help refine your categories.

Overall, setting up for textual data analysis involves careful preparation of your data and thoughtful planning of your coding strategy. These initial steps are fundamental in guiding your analysis and ensuring meaningful insights emerge from your research.

Choosing Your Textual Data Sources

When choosing your textual data sources, consider the type of information that aligns with your research objectives. Start by identifying the context and subject matter relevant to your analysis. Primary sources such as interviews, focus groups, or raw transcripts can provide rich insights directly from participants. Alternatively, secondary sources like articles, reports, or social media discussions can offer broader perspectives on your topic of interest. Each source has its strengths and limitations, making careful selection crucial.

Next, assess the reliability and authenticity of your chosen data sources. Reliable sources contribute to the credibility of your findings, so prioritize those backed by established research or expert opinions. It’s also important to consider the diversity of your texts, as this can enrich your analysis and provide multiple viewpoints. By making informed choices about your textual data sources, you lay a solid foundation for a thorough and meaningful qualitative content analysis.

Defining Your Research Questions

Defining your research questions is crucial for effective qualitative content analysis coding. Start by determining the specific objectives of your study. These objectives will guide the development of your research questions, ensuring they are clear and focused. Well-defined questions lead to meaningful data that can be analyzed to yield valuable insights.

Consider the broader context of your research. What are the key themes or issues you wish to explore? Ideally, your questions should address these themes directly. Aim for open-ended inquiries that allow for the exploration of various perspectives. This approach maximizes the potential of your textual data analysis, as it encourages a deeper understanding of the content. Remember, the clarity of your research questions significantly impacts the reliability and validity of your findings.

Steps in Qualitative Content Analysis Coding

Qualitative content analysis coding involves a systematic approach that helps researchers interpret textual data effectively. To begin, it’s essential to define your research questions clearly, as this will guide your entire coding process. Next, familiarize yourself with the textual data, identifying key themes and patterns that emerge during your initial review.

After thoroughly understanding the data, you can start the coding process. This involves categorizing the data based on identified themes. Use a coding framework to ensure consistency, and apply codes to segments of text that reflect the themes of your research. Subsequently, review the coded data regularly to refine your categories and ensure accuracy. Finally, synthesize your findings by analyzing the data through the lens of your research questions, allowing you to draw comprehensive conclusions. These steps in qualitative content analysis coding will lead to insightful understanding and clearer interpretation of your textual data analysis efforts.

Initial Steps in Textual Data Analysis Coding

Initial steps in textual data analysis coding lay the foundation for effective qualitative content analysis. Begin by immersing yourself in the textual data, familiarizing yourself with its nuances and themes. This comprehensive understanding becomes pivotal as you transition into the coding phase, where you categorize segments of text based on emerging patterns and insights.

Next, develop a coding scheme that reflects your fundamental themes. This scheme acts as a map, guiding your analysis by ensuring a structured approach. As you code the data, continuously compare new insights with existing themes to maintain relevance. This iterative process enhances the reliability of your analysis while allowing flexibility for new concepts to emerge. By following these steps, you position your qualitative content analysis for success, generating meaningful insights that truly reflect the depth of your data.

Familiarizing Yourself with the Data

Familiarizing yourself with the data is a crucial initial step in the qualitative content analysis process. It involves delving into the textual data to understand its context, themes, and nuances. Start by reading through the data thoroughly, absorbing the material rather than rushing through it. This immersion helps you grasp key concepts and identify recurring patterns that will inform your analysis later. It's essential to maintain an open mind, as initial perceptions may shift once you engage deeply with the content.

To facilitate this understanding, consider the following key points:

  • Read with Purpose : Focus on extracting meaning rather than merely scanning for keywords.
  • Identify Key Themes : Take note of significant ideas that emerge throughout the text.
  • Contextual Understanding : Grasp the circumstances surrounding the data to enhance your analysis.
  • Documentation : Annotate your findings to reference them easily during the coding process.

Engaging with the data in this structured manner will enhance your ability to code and interpret the findings effectively as you move forward.

Creating a Preliminary Codebook

Creating a preliminary codebook is an essential step in textual data analysis. This document acts as your guide, detailing the codes or themes you intend to use throughout your qualitative content analysis. Start by reviewing your data thoroughly and identifying initial categories that emerge. These categories should be broad enough to encompass various responses yet specific enough to provide meaningful insights.

Next, organize these codes into a structured format within the codebook. This may include definitions, examples from the data, and inclusion/exclusion criteria for each code. Creating a codebook not only helps establish consistency in coding but also serves as a reference point for your analysis team. Refining the codebook is an ongoing process; as you delve deeper into the data, new themes may arise, necessitating revisions. Remember, the quality of your textual data analysis hinges on a well-constructed preliminary codebook.
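As a rough illustration, a preliminary codebook can be modeled as a small data structure. The field names and the sample entry below are hypothetical conventions, not a format required by any QDA tool.

```python
from dataclasses import dataclass, field

@dataclass
class CodebookEntry:
    """One code in the preliminary codebook: definition, an example
    from the data, and inclusion/exclusion criteria."""
    code: str
    definition: str
    example: str
    include_when: list = field(default_factory=list)
    exclude_when: list = field(default_factory=list)

# An illustrative entry; content is invented for the sketch.
codebook = {
    "job insecurity": CodebookEntry(
        code="job insecurity",
        definition="Participant expresses fear or uncertainty "
                   "about their continued employment.",
        example="I worry about losing my job every quarter.",
        include_when=["mentions layoffs, contract non-renewal, redundancy"],
        exclude_when=["general economic worry not tied to own employment"],
    ),
}

print(codebook["job insecurity"].definition)
```

Keeping entries in one shared structure like this makes the consistency benefit concrete: every coder applies the same definition and criteria, and revisions happen in one place.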

Advanced Coding Techniques for Textual Data Analysis

Advanced coding techniques for textual data analysis empower researchers to derive meaningful insights from qualitative data. Effective coding strategies can significantly enhance the analysis process, allowing for the identification of themes, patterns, and trends within large volumes of text. Researchers can adopt several advanced techniques to improve their engagement with textual data, enabling a more comprehensive interpretation of qualitative content.

Theme-Based Coding : This involves identifying broad themes within the data and then categorizing the finer nuances and variations within them.

In Vivo Coding : This technique uses the exact phrases and terms from participants, ensuring authenticity and grounding the analysis in the original context.

Deductive Coding : In this approach, pre-defined categories based on existing theories or literature guide the coding process, helping to confirm or challenge prior understanding.

Digital Text Analysis : Utilizing software tools to automate the identification of prominent themes can streamline the process, making data analysis faster and less prone to human error.

By mastering these techniques, researchers can enhance their qualitative content analysis, transforming raw textual data into actionable insights that inform decision-making and strategy development.
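To make the deductive and digital-text ideas concrete, here is a minimal Python sketch of a keyword-based first pass. The categories and patterns are invented, and pattern matching can only surface candidate segments; it is no substitute for an analyst reading each hit in context.

```python
import re

# Pre-defined deductive categories drawn from (hypothetical) prior
# theory, each with regex patterns that suggest the code may apply.
framework = {
    "job insecurity": [r"\blayoff", r"\blos(e|ing) my job\b", r"\binsecure\b"],
    "communication": [r"\bmemo\b", r"\bexplain", r"\bcommunicat"],
}

def deductive_pass(segment, framework):
    """Return the codes whose patterns match the segment (candidates
    only; a human reviews every hit)."""
    hits = []
    for code, patterns in framework.items():
        if any(re.search(p, segment, re.IGNORECASE) for p in patterns):
            hits.append(code)
    return hits

print(deductive_pass("I'm losing my job and no one will explain why.",
                     framework))
```

A pass like this pairs naturally with inductive work: segments that match nothing are exactly the ones most likely to hold themes the framework did not anticipate.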

Applying Codes to Textual Data

To apply codes to textual data, begin by carefully reading through the content to identify key themes and patterns. This initial stage involves highlighting significant phrases or statements that resonate with your research questions. Understanding the context and the intent behind these phrases helps in coding them accurately.

Next, assign specific codes to these highlighted texts. Codes serve as labels that categorize chunks of data, simplifying the subsequent analysis. You might consider using a mix of deductive and inductive coding strategies. Deductive coding applies existing theories or frameworks to guide your analysis, while inductive coding allows themes to emerge naturally from the data itself.

Lastly, refine your codes through continuous comparison and adjustment. This iterative process ensures that the coding framework evolves alongside your understanding of the data, ultimately promoting a thorough analysis. By being systematic in applying codes, your textual data analysis becomes more meaningful and insightful.
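One simple aid to the comparison-and-adjustment step is tallying how often each code has been applied: rarely used codes are candidates for merging, while very frequent ones may be over-broad. The coded data below is illustrative.

```python
from collections import Counter

# (segment id, code) pairs as they might come out of a coding pass.
applications = [
    ("seg-01", "job insecurity"),
    ("seg-02", "communication breakdown"),
    ("seg-03", "job insecurity"),
    ("seg-04", "coping"),
    ("seg-05", "job insecurity"),
]

# Tally applications per code to spot outliers worth revisiting.
code_counts = Counter(code for _, code in applications)
print(code_counts.most_common())
```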

Reviewing and Refining Codes

Reviewing and refining codes is a critical step in the process of qualitative content analysis coding. After initial coding, it is essential to revisit your codes systematically to ensure they accurately represent the underlying data. This review helps you identify any duplicates or gaps in your coding scheme. As you assess your codes, consider whether they adequately capture the themes and patterns that emerge from your textual data analysis.

Next, refining codes involves adjusting them based on insights gained during the review. You might merge similar codes or split complex ones into more specific categories. Continuous iteration enhances the depth and clarity of your analysis. The objective is to create a nuanced coding framework that reflects the richness of your data. By consistently reviewing and refining your codes, you enhance the reliability and validity of your findings, ultimately leading to more actionable insights in your research.
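The merge step can be done mechanically once the analyst has decided which codes to retire. A sketch, with invented code names:

```python
# Map retired codes to their replacements, then rewrite the coded
# data in a single pass. Code names are illustrative.
merge_map = {
    "job worry": "job insecurity",       # near-duplicate, merged
    "fear of layoffs": "job insecurity",
}

coded_segments = [
    ("seg-01", ["job worry"]),
    ("seg-02", ["fear of layoffs", "coping"]),
    ("seg-03", ["job insecurity"]),
]

# Apply the merges; the set() collapses duplicates created by a merge.
refined = [
    (seg_id, sorted({merge_map.get(c, c) for c in codes}))
    for seg_id, codes in coded_segments
]
print(refined)
```

Recording merges in an explicit map, rather than editing codes in place, leaves an audit trail of how the coding framework evolved, which supports the transparency this section calls for.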

Conclusion: Refining Your Skills in Textual Data Analysis

Refining your skills in textual data analysis is an ongoing journey that can lead to profound insights. As you engage with qualitative content analysis, practice becomes essential. Each coded document offers an opportunity to enhance your understanding, enabling you to draw clearer conclusions from complex data sets.

To further develop your capabilities, consider collaborating with peers and sharing your findings. This exchange of ideas not only enriches your perspective but also fosters a deeper comprehension of analyzable content. Ultimately, refining these skills will empower you to extract valuable insights that drive informed decisions in your field. Embrace the learning process as a pathway to success in textual data analysis.

Socio-technical Grounded Theory for Qualitative Data Analysis

  • First Online: 10 June 2024



  • Rashina Hoda (ORCID: 0000-0001-5147-8096)

Qualitative data analysis forms the heart of socio-technical grounded theory (STGT). In this chapter, we will start by learning about the basics of qualitative data analysis and preparing for qualitative data analysis as it applies in STGT, including the mindset, approach, and worldview applied to analysis. This is followed by a visual note-taking analogy to help explain the general idea of qualitative data analysis using STGT procedures. Then we will learn about the STGT procedure of open coding using hashtags, the zoom out–zoom in technique, and how to draw out analytical and socio-technical codes as well as considering options and revising codes. Next, we will learn about the procedure of constant comparison, through which data is condensed and raised in levels of abstraction, and the procedure of memoing to draw insights and relationships. This is followed by tips for data analysis, including tips for creating codes, coding for richness, context filling, the role of the research paradigm, and improving theoretical sensitivity. Next, we will consider team-based coding, including guidelines for achieving alignment in a coding team. Finally, the chapter ends with explaining the expected outcomes of applying STGT for data analysis.



Author information

Rashina Hoda, Faculty of Information Technology, Monash University, Melbourne, VIC, Australia


© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Hoda, R. (2024). Socio-technical Grounded Theory for Qualitative Data Analysis. In: Qualitative Research with Socio-Technical Grounded Theory. Springer, Cham. https://doi.org/10.1007/978-3-031-60533-8_10


Published : 10 June 2024

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-60532-1

Online ISBN : 978-3-031-60533-8


Barriers and facilitators to implementing imaging-based diagnostic artificial intelligence-assisted decision-making software in hospitals in China: a qualitative study using the updated Consolidated Framework for Implementation Research

BMJ Open, Volume 14, Issue 9

Xiwen Liao 1,2 (http://orcid.org/0000-0002-2349-8775), Chen Yao 1,2, Feifei Jin 3,4 (http://orcid.org/0000-0002-4991-0158), Jun Zhang 5, Larry Liu 6,7

1 Peking University First Hospital, Beijing, China
2 Clinical Research Institute, Institute of Advanced Clinical Medicine, Peking University, Beijing, China
3 Trauma Medicine Center, Peking University People's Hospital, Beijing, China
4 Key Laboratory of Trauma Treatment and Neural Regeneration, Peking University, Ministry of Education, Beijing, China
5 MSD R&D (China) Co., Ltd, Beijing, China
6 Merck & Co Inc, Rahway, New Jersey, USA
7 Weill Cornell Medical College, New York City, New York, USA

Correspondence to Chen Yao; yaochen.pucri@foxmail.com

Objectives To identify the barriers and facilitators to the successful implementation of imaging-based diagnostic artificial intelligence (AI)-assisted decision-making software in China, using the updated Consolidated Framework for Implementation Research (CFIR) as a theoretical basis to develop strategies that promote effective implementation.

Design This qualitative study involved semistructured interviews with key stakeholders from both clinical settings and industry. Interview guide development, coding, analysis and reporting of findings were thoroughly informed by the updated CFIR.

Setting Four healthcare institutions in Beijing and Shanghai and two vendors of AI-assisted decision-making software for lung nodule detection and diabetic retinopathy screening were selected based on purposive sampling.

Participants A total of 23 healthcare practitioners, 6 hospital informatics specialists, 4 hospital administrators and 7 vendors of the selected AI-assisted decision-making software were included in the study.

Results Within the 5 CFIR domains, 10 constructs were identified as barriers, 8 as facilitators and 3 as both barriers and facilitators. Major barriers included unsatisfactory clinical performance (innovation); lack of a collaborative network between primary and tertiary hospitals, lack of information security measures and certification (outer setting); suboptimal data quality, misalignment between software functions and goals of healthcare institutions (inner setting); and unmet clinical needs (individuals). Key facilitators were strong empirical evidence of effectiveness, improved clinical efficiency (innovation); national guidelines related to AI, deployment of AI software in peer hospitals (outer setting); integration of AI software into existing hospital systems (inner setting); and involvement of clinicians (implementation process).

Conclusions The study findings contributed to the ongoing exploration of AI integration in healthcare from the perspective of China, emphasising the need for a comprehensive approach considering both innovation-specific factors and the broader organisational and contextual dynamics. As China and other developing countries continue to advance in adopting AI technologies, the derived insights could further inform healthcare practitioners, industry stakeholders and policy-makers, guiding policies and practices that promote the successful implementation of imaging-based diagnostic AI-assisted decision-making software in healthcare for optimal patient care.

  • Clinical Decision-Making
  • Implementation Science
  • Information technology

Data availability statement

Data are available on reasonable request. Study protocol and interview transcripts are available on request by contacting the corresponding author.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjopen-2024-084398


STRENGTHS AND LIMITATIONS OF THIS STUDY

Used the updated Consolidated Framework for Implementation Research to systematically identify barriers and facilitators.

Conducted semistructured interviews with a wide range of key stakeholders, both from clinical settings and industry.

Potential generalisability limitations due to purposive sampling of artificial intelligence software and the cluster of healthcare institutions and study participants in big cities.

Perspectives of patients were not included and should be addressed in future research.

Introduction

Clinical decision-making (CDM) is a challenging and complex process. 1 Effective and informed CDM requires a delicate balance between the best available evidence, environmental and organisational factors, knowledge of the patient and comprehensive professional capabilities, such as clinical skills and experiences. 2 3 However, the ability of healthcare professionals to make such decisions is often restricted by the dynamic and uncertain nature of clinical practice. 3 In response, decision-making tools have been developed to enhance and streamline CDM for optimal healthcare outcomes. Traditional decision-making tools depend heavily on computerised clinical knowledge bases, supporting CDM by matching individual patient data with the knowledge base to provide patient-specific assessments or recommendations. 4 5 However, relying solely on knowledge-based tools has become insufficient to fulfil the growing need for accessible, efficient and personalised healthcare services, due to their inherent limitations such as time-consuming processes, disruptions to routine clinical workflow and challenges in constructing complex queries. 6 7

Non-knowledge-based decision support tools, on the other hand, harness artificial intelligence (AI) algorithms to analyse large and complex datasets and learn continuously to produce more accurate and individualised recommendations. 5 8 AI technology has advanced rapidly since 2000, unleashing substantial potential to revolutionise the conventional CDM process and driving a fundamental shift in the healthcare paradigm. 9 The development and extensive growth of clinical real-world data (RWD) have made integrating AI technology into the healthcare sector a priority for both the healthcare industry and regulatory agencies.

The US Food and Drug Administration (FDA) has recognised the use of AI techniques combined with clinical RWD for both drug and medical device development. In drug development, the FDA has reported a marked increase in the number of drug and biological application submissions with AI components across different stages of the life cycle. 10 AI algorithms have been actively integrated into biomarker discovery, eligible population identification and prescreening, clinical drug repurposing, and adverse event (AE) detection, spanning drug discovery, premarket clinical studies and postmarket safety surveillance. 11 In particular, the number of studies on AE detection using natural language processing increased from 1 between 2005 and 2018 to 32 between 2017 and 2020. 11

In addition to the pharmaceutical industry, AI medical devices, whether intended for decision-making or other purposes, experienced rapid development between 2000 and 2018. 12 During this period, the growing sophistication of imaging equipment, such as CT scanners, led to an exponential increase in the volume of high-dimensional imaging data. This surge gradually shifted the focus of AI medical devices towards imaging-based diagnostic decision-making software, making lung nodule detection and diabetic retinopathy screening popular areas of research. 13 These AI systems, classified as software as a medical device, are designed to support diagnostic decision-making using clinical RWD, particularly imaging data generated by medical devices, leveraging AI technology to perform functions independently of hardware medical devices. 14–16 Notably, the FDA’s approval of the first imaging-based AI-assisted medical device for detecting diabetes-related eye diseases in 2018 marked major progress towards the implementation of imaging-based diagnostic AI-assisted decision-making software. 17 In China, a major milestone was achieved with the approval of the first AI-related software for coronary artery fractional flow reserve by the National Medical Products Administration (NMPA) in 2020, 18 highlighting the continuous development and integration of AI technologies in healthcare practice. Following this breakthrough, AI-assisted decision-making software has gained popularity in China: the number of regulatory approvals for AI software grew from 1 in 2020 to 62 in 2022. 12 Key functions included disease identification, lesion segmentation, risk prediction and risk classification, covering therapeutic areas from cardiovascular diseases to various types of cancers. 12 19

Evidence from randomised controlled trials has demonstrated the safety and effectiveness of AI-assisted decision-making software across various therapeutic areas. For disease detection, these tools have facilitated the early identification of patients with low ejection fraction, improved detection rates for actionable lung nodules and increased the identification of easily missed polyps. 20–23 Furthermore, AI software has significantly reduced diagnostic times compared with senior consultants in diagnosing childhood cataracts, improving clinical efficiency. 24 In disease management, AI-assisted decision-making software has decreased treatment delays for cardiovascular diseases and lowered in-hospital mortality for severe sepsis, 25 26 contributing to improved healthcare quality. Additionally, studies have indicated that AI tools are effective across diverse populations, significantly enhancing follow-up rates for diabetic retinopathy in paediatric populations and improving referral adherence in adult patients within low-resource settings. 27 28 However, a prior scoping review found a disparity between substantial research investment and limited real-world clinical implementation. 19 Further, a study employing stratified cluster sampling from six provinces across China revealed that only 23.75% of the surveyed hospitals had implemented AI-assisted decision-making software. 29 Accordingly, existing literature has emphasised the deficiency in implementation science expertise for understanding AI implementation efforts in clinical settings. 30 To bridge this gap, the current study aimed to explore the barriers and facilitators to implementing existing imaging-based diagnostic AI-assisted decision-making software in China through qualitative interviews, using the updated Consolidated Framework for Implementation Research (CFIR).
The key strength of using the updated CFIR lay in its adaptability to capture both the breadth and depth of qualitative data, as well as its applicability to exploring the implementation of technology across various healthcare settings, further enhancing the rigour and comprehensiveness of the analysis. 31 32 In addition, the study provided tailored implementation strategies to address key barriers, exploiting the full potential of imaging-based diagnostic AI-assisted decision-making software to improve the quality of care in China.

Innovation selection

A recent scoping review identified imaging-based diagnostic decision-making software using medical imaging data as the predominant AI-assisted decision-making software in China, especially software designed for lung nodule detection and diabetic retinopathy screening. 19 To manage the increasing submissions in these areas, the Center for Medical Device Evaluation of the NMPA issued two corresponding guidelines in 2022, delineating specific regulatory requirements. 33 34 Therefore, the current study purposively chose to investigate AI-assisted decision-making software for lung nodule detection and diabetic retinopathy screening, serving as two representative imaging-based diagnostic AI applications. Two lists were obtained from the NMPA database, from which one software product for lung nodule detection and another for diabetic retinopathy screening were ultimately selected ( online supplemental tables 1 and 2 ). The vendors were selected through convenience sampling, based on their voluntary consent to participate. Characteristics of the selected AI-assisted decision-making software are shown in table 1 .

Supplemental material


Table 1 Characteristics of the selected AI-assisted decision-making software

Study setting and participants

With the aim of reflecting diversified perspectives from a wide range of stakeholders, participants from both clinical settings and industry were included. Specifically, clinical stakeholders consisted of three different roles: healthcare practitioners, hospital informatics specialists and hospital administrators. Industry stakeholders were vendors of the selected AI-assisted decision-making software, who were further divided into three subroles: data scientists, database experts and algorithm engineers. While the perspectives of patients are valuable, the current study aimed to gather in-depth insights regarding practical and systemic challenges from stakeholders who are directly involved in the implementation, deployment and development of the selected AI-assisted decision-making software. The included stakeholders possessed either the operational or technical knowledge necessary to identify specific barriers and facilitators related to the implementation of the selected AI software in clinical settings. Thus, patients were not included as a stakeholder group in this study.

The selection of study participants involved a two-stage process, wherein initial screening occurred at the institutional level, followed by individual-level selection. For clinical stakeholders, two lists of hospitals that had implemented the selected AI-assisted decision-making software were acquired from the corresponding software vendors. Employing stratified purposive sampling, one tertiary hospital and one primary or secondary hospital were selected from each of the lists. All selected healthcare institutions were located in big cities, including the Cancer Hospital of the Chinese Academy of Medical Sciences, Beijing Hospital, Beijing Shuili Hospital and the community healthcare centre of Qingpu, Shanghai. The snowball sampling technique was subsequently used to select study participants by asking the Scientific Research Division to identify relevant clinical departments and related healthcare practitioners, health informatics specialists and hospital administrators. 35 Additionally, healthcare practitioners were stratified by their professional titles: junior, intermediate and senior. A similar two-stage process was applied to select stakeholders from the AI-assisted software vendors. CY was responsible for contacting the hospitals and AI-assisted software vendors for study participation and interview arrangements.

Eligibility criteria for participant selection were as follows:

Inclusion criteria

Participants from clinical settings should either have user experience with the selected AI-assisted decision-making software or have experience in deploying or managing such software at the hospital level.

Participants from the industry should have working experience in the development of AI-assisted CDM software.

Participants should be formal staff at the stakeholder’s institution.

Participants should be at least 18 years old.

Participants should be able to sign the informed consent form voluntarily.

Exclusion criteria

Participants were excluded from the study if:

Participants could not sign the informed consent form.

Participants could not provide at least 15 min for the interview.

No sensitive information was collected during the interviews. To maintain confidentiality, all qualitative data were anonymised, and each participant who signed the informed consent form voluntarily was assigned a unique identification number. Deidentified transcriptions and audio recordings were stored securely on a protected research drive with access restricted to the research team. The research data will be destroyed 5 years after the study's conclusion.

Theoretical framework: the CFIR

The CFIR, a well-established conceptual framework in implementation science, was originally developed in 2009 to systematically assess complex and multilevel implementation contexts for the identification of determinants impacting the successful implementation of an innovation. 36 The original CFIR is an exhaustive and standardised meta-theoretical framework synthesised from 19 pre-existing implementation theories, models and frameworks, which was modified in 2022 in response to user feedback and critiques. 37 The updated CFIR consists of 48 constructs and 19 subconstructs across 5 domains including innovation, outer setting, inner setting, individual and implementation process. 37 It provided a fundamental structure for the exploratory evaluation of the barriers and facilitators to implementing imaging-based diagnostic AI-assisted decision-making software in China. Studies employing the CFIR in healthcare settings extensively explored various technological areas, including the implementation of electronic health record systems, 38 telemedicine 39 and various innovative tools, such as the frailty identification tool and decision-support systems in emergency care settings. 40 41 The wide adoption of the CFIR across diverse healthcare contexts emphasised its value in capturing the complex dynamics involved in implementing technology innovations. The thorough and flexible application of the updated CFIR in data collection, analysis and reporting within the current study aimed to increase study efficiency, produce generalisable research findings to inform AI implementation practice and build a scientifically sound evidence base for tailoring implementation strategies to address key barriers.

Data collection procedures

Semistructured in-person interviews were conducted. Study-related data were collected subsequent to obtaining informed consent from the participants. Guided by the updated CFIR, different interview guides were developed for four distinct stakeholder roles, including healthcare practitioners, hospital informatics specialists, hospital administrators and vendors of AI-assisted decision-making software ( online supplemental appendix 1 ). The interview guides were designed specifically to elicit participants’ perspectives, experiences and insights in implementing or delivering AI-assisted decision-making software (CY, XL and FJ). Prior to initiating data collection, the interview guides were pilot tested with four non-study participants to ensure clarity and reliability. Necessary modifications were made based on the feedback. The interviews were conducted by two interviewers with extensive training and experience in qualitative interviews (XL and FJ). Interview time, location, stakeholder role and basic demographic information were collected. Interviews continued until constructs of the updated CFIR were adequately represented in the data, indicating data saturation. 42

Data analysis

The interviews were audio recorded, transcribed verbatim in Chinese and coded independently by two coders (XL and FJ). Deductive content analysis was primarily used for data analysis. As a systematic and objective qualitative approach, content analysis is used to describe and quantify phenomena by deriving replicable and reliable inferences from qualitative data within the relevant context. 43 In deductive content analysis, data are coded against an existing theory or framework defined a priori. However, the current study also allowed new themes that did not fit into any of the pre-existing CFIR constructs to emerge through inductive analysis of the data.
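The "deductive first, inductive fallback" logic can be sketched in a few lines of Python. This is an illustrative sketch only: the construct names and cue words below are hypothetical stand-ins, not the study's actual codebook, and real coders apply judgment rather than keyword matching.

```python
# Hypothetical mini-codebook: CFIR-style constructs mapped to cue words.
# The real study used a full codebook adapted from the updated CFIR.
CODEBOOK = {
    "innovation: evidence base": ["evidence", "trial", "peer-reviewed"],
    "outer setting: partnerships": ["referral", "tertiary", "network"],
    "inner setting: compatibility": ["workflow", "integration", "system"],
}

def code_unit(meaning_unit: str) -> str:
    """Deductive step: assign a meaning unit to the first matching
    a-priori construct; unmatched units fall through to inductive
    analysis as emergent themes."""
    text = meaning_unit.lower()
    for construct, keywords in CODEBOOK.items():
        if any(kw in text for kw in keywords):
            return construct
    return "emergent theme (inductive)"

print(code_unit("The software did not fit our existing reporting workflow."))
# → inner setting: compatibility
```

The sketch only mirrors the control flow the paragraph describes: units are first tested against pre-existing constructs, and anything left over is set aside for inductive theme development.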

Steps of deductive data analysis were as follows 44 :

Selecting the unit of analysis

Each interview was selected as the unit of analysis, wherein conversational turns that contributed to the understanding of the research questions were identified as meaning units. A turn consisted of an uninterrupted segment, which could be a single word or a few sentences. Following independent transcription of the audio recordings by the two coders (XL and FJ), CY reviewed and compared the transcriptions, finalising the transcript to be analysed. To become immersed in the data, the two coders (XL and FJ) engaged in a thorough reading of the transcripts and made relevant annotations.

Developing structured codebook

Before coding, a standardised, publicly available codebook template based on the original CFIR was employed and adapted to the study context collectively (XL, FJ and CY) ( online supplemental appendix 2 ). 45 Adaptations were multifaceted, which included aligning the original CFIR domains and constructs with those of the updated CFIR, tailoring language specific to the implementation of imaging-based diagnostic AI-assisted decision-making software in China, refining operational definitions and developing eligibility criteria for each construct.

Data coding

Two coders (XL and FJ), who were trained rigorously in using the codebook, performed coding independently. To ensure reliability, 10% of the transcripts were randomly selected for pilot testing. The two coders independently applied the codebook to generate preliminary codes for each meaning unit and subsequently categorised them within the updated CFIR framework. On completion, a group discussion with CY or JZ was held, involving a comprehensive review and comparison of coding discrepancies to ensure consistency in the interpretation and categorisation of units. Disagreements were resolved through consensus, and any necessary adjustments to the operational definitions and eligibility criteria were promptly made.

The main coding process was then structured into several iterative rounds to ensure coding consistency. In each round, each coder coded four distinct transcripts individually and a fifth transcript collaboratively, addressing any inconsistencies through comprehensive discussion until a consensus was reached. ATLAS.ti (V.23.1.1) was used to identify, label and categorise themes and patterns within the qualitative data. 46 Additionally, it facilitated data management, ensuring the storage, systematic organisation and retrieval of interview transcripts.

Reporting the data by category

Identified categories across the five domains of the updated CFIR were reported descriptively with direct quotes from participants.

Trustworthiness

The study employed several methodological strategies to ensure rigour and reliability. Multiple data sources and perspectives were incorporated to achieve triangulation, including distinct stakeholders directly involved in the implementation, deployment and development of the selected AI-assisted decision-making software. Throughout the data coding process, peer debriefing was employed. Two coders independently analysed the transcripts and collaboratively discussed the interpretations and coding decisions to reach a consensus. Moreover, an audit trail was conducted to ensure transparency, with thorough documentation of study processes such as the prespecified research protocol, deidentified transcriptions, informed consent forms, interview codebooks, typed notes, audio recordings and analyses of qualitative data. External audits were further performed to validate credibility. Experts independent of the study reviewed the study protocol, interview guides and findings, providing objective suggestions.

Patient and public involvement

There was no patient or public involvement in this study. Participants were only invited to participate in qualitative interviews.

Characteristics of study setting and participants

Interviews were conducted between May and August 2023. Table 2 provides an overview of the characteristics of the selected healthcare institutions. A total of 43 participants were invited for study enrolment, and 40 (93.0%) agreed to participate, including 23 healthcare practitioners, 6 hospital informatics specialists, 4 hospital administrators and 7 vendors of the selected AI-assisted decision-making software ( table 3 ). Non-participants included two senior healthcare practitioners and one vendor of AI-assisted decision-making software. Most participants held at least a master’s degree, and 57.5% of them were male.

Table 2 Characteristics of the selected healthcare institutions

Table 3 Demographic characteristics of study participants

Barriers and facilitators to implementing imaging-based diagnostic AI-assisted decision-making software

Among the 48 CFIR constructs and 19 subconstructs, 21 of them across 5 domains were found to be relevant in the context of implementing imaging-based diagnostic AI-assisted decision-making software in China ( figure 1 ). Specifically, 10 were identified as barriers, 8 as facilitators and 3 as both barriers and facilitators ( tables 4 and 5 ).
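Once each construct carries a domain and a valence, summary counts like those above follow mechanically. A minimal sketch using a small hypothetical subset of the 21 identified constructs (the full, authoritative lists are in the article's tables 4 and 5):

```python
from collections import Counter

# Hypothetical subset of coded constructs: name -> (CFIR domain, valence),
# where '-' marks a barrier, '+' a facilitator and '±' both.
constructs = {
    "evidence base": ("innovation", "+"),
    "relative advantage": ("innovation", "±"),
    "cost": ("innovation", "-"),
    "partnerships and connections": ("outer setting", "-"),
}

by_valence = Counter(v for _, v in constructs.values())
by_domain = Counter(d for d, _ in constructs.values())
print(dict(by_valence))  # → {'+': 1, '±': 1, '-': 2}
print(dict(by_domain))   # → {'innovation': 3, 'outer setting': 1}
```

Tallying by valence reproduces the barrier/facilitator split, and tallying by domain shows how the constructs distribute across the five CFIR domains.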


Figure 1 Identified CFIR constructs and their impact on the implementation of imaging-based diagnostic AI-assisted decision-making software in China. ‘−’ indicated barriers; ‘+’ indicated facilitators. AI, artificial intelligence; CFIR, Consolidated Framework for Implementation Research.

Table 4 Barriers to implementing imaging-based diagnostic AI-assisted decision-making software using the updated CFIR

Table 5 Facilitators to implementing imaging-based diagnostic AI-assisted decision-making software using the updated CFIR

Innovation evidence base (+): strong empirical evidence of effectiveness

The innovation evidence base was suggested to be a key determinant facilitating implementation. Participants in this study cited evidence showing that the clinical performance of the AI software was comparable to, or even surpassed, that of humans, leading to decreased diagnostic time, reduced risk of medical errors and enhanced patient outcomes. As the AI software supported healthcare practitioners in making critical judgements regarding patient care, robust clinical findings of efficacy and accuracy were instrumental in fostering trust and acceptance among participants towards implementation.

Before we started using the AI software in our department, I checked out articles published in some highly respected peer-reviewed journals. They reported that clinical performance of AI was comparable to human performance. This encourages me to start using the software.—intermediate clinician

Innovation relative advantage (±)

Improved clinical efficiency (+).

One of the crucial benefits gained by using AI-assisted decision-making software was improved clinical efficiency. Key functions of the AI-assisted software included the detection of anomalies and lesions at risk, automated volumetric measurements and classification of disease severity. Study participants noted that the average interpretation time for a human reader was markedly longer than that of the AI-assisted software alone or of concurrent reading with the software, regardless of the level of clinical experience and the complexity of diseases.

The AI software makes decisions very quickly, as compared to my decision-making time. It greatly supports my clinical judgement and improves routine clinical efficacy.—junior clinician

Unsatisfactory clinical performance (−)

However, the real-world clinical performance of the AI-assisted decision-making software remained suboptimal. Despite a strong evidence base, compromised accuracies, high false-positive rates, overestimation of lesion size and misclassification of lesion types were commonly reported by study participants. The participating healthcare practitioners highlighted the need to improve clinical performance of the AI-assisted software, particularly under complex real-world clinical conditions.

In my daily practice, I consider my clinical judgment as the gold standard. The AI software’s performance, especially in distinguishing between part-solid and solid lung nodules, doesn’t meet my expectations. The performance of the software should be improved for better usability.—intermediate clinician

Innovation adaptability (–): lack of adaptability for generated report

The generation of a diagnostic report was recognised as a pivotal function of AI-assisted decision-making software, providing diagnostic recommendations based on the analysis of patient data. The direct and automatic integration of findings into the diagnostic report was a time-saving aspect for healthcare practitioners in terms of medical documentation. However, study participants perceived that diagnostic reports generated by the AI software lacked customisation options necessary to align with the standard documentation practices of healthcare practitioners. This limitation, along with the software’s insufficient flexibility to fully comply with the hospital’s documentation standards, hindered its seamless incorporation into clinical workflow.

Personally, I don’t use the reports generated by the software. The automatically generated reports don’t align with my documentation style or the hospital’s requirements, and it doesn’t allow me to change any elements within the report. I prefer to write the reports by myself.—senior clinician

Innovation trialability (+): AI software trialability

The ability to test or experiment with AI-assisted decision-making software before full implementation was determined as a pivotal factor facilitating successful implementation. Trialability allowed participating healthcare practitioners to assess the AI software on a smaller scale, supporting their familiarity with the new innovation. More importantly, a trial period enabled evaluation of the software’s compatibility with existing workflows, identification of potential implementation barriers, and assessment of the clinical performance and reliability in real-world settings.

As we prepared for the official implementation, our department pilot tested the AI software for several weeks. This allowed me to personally experience the software, making comparisons with our standard clinical practices.—intermediate clinician

Innovation complexity (+): ease of use

The perceived ease of use promoted the successful integration of AI-assisted decision-making software into healthcare institutions. According to the participants, the AI software featured user-friendly interfaces designed in a straightforward manner for easy navigation. In addition, the AI software generated automated, clear and comprehensible output to support medical decisions, contributing to a smooth learning curve that was conducive to quick adoption and acceptance by study participants.

The good thing is that AI software is straightforward and easy to use. I learned how to use it with minimal hassle because it provided clear and understandable output with just a few mouse clicks.—intermediate clinician

Innovation cost (−): financial burden of AI software

Currently, the cost of AI-assisted decision-making software is not covered by any insurance plans. Healthcare institutions sometimes face financial constraints when acquiring the software and managing ongoing maintenance costs. Participating hospital administrators, especially those from primary and secondary hospitals, expressed the need to reallocate budgetary resources from other areas, such as staff resources and infrastructure, to accommodate the high cost associated with AI software. The perceived lack of cost-effectiveness discouraged further investment.

The insurance plans don’t cover the cost of AI software now, and patients are not paying for it either. Cost-effectiveness is one of our top priorities, and we won’t spend a lot of money on the software.—hospital administrator

Outer setting

Partnerships and connections (−): lack of a collaborative network between primary/secondary and tertiary hospitals.

AI-assisted decision-making software was valuable in early disease detection and intervention. However, study participants from primary care reported that the absence of partnerships and communication channels with tertiary hospitals created challenges for patients who received a diagnosis. These challenges included delays in receiving informed referrals to tertiary hospitals, potentially resulting in late medical intervention and discontinuity of care. Further, the lack of established connections impeded the sharing and exchange of patient data between hospitals. Tertiary hospitals received incomplete or insufficient patient profiles from primary hospitals, contributing to an inadequate understanding of the patient’s condition and history. In such scenarios, patients might undergo redundant diagnostic tests at different facilities, leading to both patient inconvenience and increased healthcare costs.

For the efficient and effective utilisation of AI-assisted decision-making software in healthcare settings and optimal patient care, participants highlighted the importance of establishing a mechanism to refer and follow up with patients who have positive or indeterminate disease findings from primary hospitals to tertiary hospitals.

Patients diagnosed at our hospital with positive or indeterminate results usually need to be referred to a tertiary or specialized hospital for further treatment. However, ensuring patient compliance is a challenge. Partnering with those hospitals and establishing some referral and follow-up mechanisms will be beneficial.—intermediate clinician

Policies and laws (±)

National guidelines related to AI (+).

With the rapid advancement of AI technology, China released a series of national policies and guidelines to vigorously promote the interdisciplinary integration of AI into the healthcare sector. 47–49 In response, clinical institutions took proactive steps to incorporate AI-assisted decision-making software into conventional healthcare practices. To date, well-established regulatory frameworks have clearly outlined and regulated the development, approval and classification of AI-assisted decision-making software as a medical device. Compliance with these regulations increased the confidence of study participants in the implementation of AI-assisted software.

We decided to bring this software into our hospital because our country is promoting the widespread adoption of AI, and it’s also the trend across different economic sectors nationwide. There are several national guidelines supporting its development and use in the healthcare system, which increased our confidence in implementing the software.—hospital administrator

Lack of information security measures and certification (−)

Conversely, to ensure data security and protect patient privacy, legislation such as the cybersecurity law mandated a multilevel protection scheme. Accordingly, the ‘Information Security Technology—Baseline for Classified Protection of Cybersecurity’ defined four levels of security requirements, which provided baseline guidance for securing platforms and systems handling personal information. Information systems in healthcare institutions must comply with level 3 standards, the highest for non-banking institutions, given the sensitivity of patient electronic data. Consequently, participating vendors of AI software seeking collaboration with hospitals were required to have robust information security measures and level 3 security certification as prerequisites to fulfil safety obligations. The absence of such measures and certification, not uncommon among innovative technology companies, posed barriers to successful implementation.

In order to ensure the confidentiality of patient electronic data and comply with cybersecurity protection requirements, our hospital can’t implement AI decision-making software without robust data security measures or Level 3 security certification.—hospital informatics specialist

External pressure–market pressure (+): deployment of AI software in peer hospitals

Study participants noted that the implementation of AI-assisted decision-making software in peer hospitals, such as those within the same academic affiliation, fostered a competitive atmosphere and exerted a form of peer pressure, facilitating its widespread implementation. This was especially evident in the context of China’s medical informatisation development, where hospitals without AI software implementation felt compelled to stay competitive with their peers to gain a strategic advantage.

We’ve learned that some peer hospitals have already been using such software for quite some time. We are late adopters.—hospital informatics specialist

Inner setting

Relational connections (−): lack of collaboration between specialised and non-specialised clinical departments.

When patient care involved multiple clinical departments, ambiguity arose regarding the authorisation of reports generated by AI-assisted decision-making software. These reports were intended to complement the clinical decisions made by human clinicians who ultimately held the responsibility. However, non-specialists faced challenges in endorsing automated reports due to differences in clinical expertise, varying criteria for report validation, and concerns regarding liability. On the other hand, specialists often regarded AI-generated reports as less reliable than their own specialised assessments, potentially leading to reluctance in signing the reports.

Study participants emphasised the importance of establishing an interdepartmental collaborative network between specialty and non-specialty clinical departments and providing clear definitions of the roles and responsibilities within these departments to address this barrier.

As doctors without specialized expertise in ophthalmology, my colleague and I may not be authorized to sign the clinical report produced by the AI software. A collaborative network or mechanism with the department of ophthalmology will be helpful.—intermediate clinician

Communications (+): regular communication channels within department

Participants suggested that establishing regular communication channels, like weekly meetings, ensured that all members of the clinical department stayed informed about AI-assisted decision-making software. It also provided a platform for educational opportunities, such as workshops, to keep healthcare practitioners well informed and up to date. Open communication effectively addressed concerns and questions related to the implementation of AI software, fostering confidence and competence among healthcare practitioners.

I’m glad that I have the opportunity to discuss personal experience with my colleague during our weekly meetings, where case studies are shared and insights are exchanged. The open dialogue enhances our knowledge and improves my proficiency and confidence.—junior clinician

Compatibility (±)

Suboptimal data quality (−).

The effective performance of AI-assisted decision-making software relied on the availability of high-quality and accurate source data. During software development, the machine learning and deep learning algorithms used to analyse and interpret imaging data were trained on datasets that underwent meticulous cleaning and curation, ensuring the removal of poor-quality data containing imaging noise and artefacts before analysis. However, in real-world clinical settings, various factors, such as equipment limitations, patient motion and the varying proficiency of technicians, potentially introduced imperfections in imaging data. As a result, participants pointed out that AI-assisted software was neither highly compatible with nor adequately trained on data collected during routine clinical practice.

The real-world imaging quality is often less than optimal, which can lead to inaccuracy or failure of AI diagnosis. We can’t always ask the patient to redo the examination for better quality data, in consideration of their time and healthcare cost.—senior clinician

Integration of AI software into existing hospital systems (+)

In contrast, the integration of AI-assisted software into established hospital systems, such as the picture archiving and communication system (PACS), streamlined clinical workflows and facilitated effective implementation. Compatibility with PACS enabled interoperability between the AI-assisted software and healthcare information systems, providing healthcare practitioners with a familiar working environment and mitigating interruptions to their workflow.

Our AI software integrates with the PACS. Clinicians don’t have to learn a new standalone system; instead, they can access AI-generated insights directly within their existing PACS environment, minimizing any disruptions to their workflow.—vendor of AI-assisted software

Mission alignment (−): misalignment between software functions and goals of healthcare institution

A misalignment between the functions of AI-assisted decision-making software and core hospital missions, especially in comprehensive tertiary hospitals, was revealed by study participants. Currently, diagnostic AI-assisted software predominantly supports the diagnosis of general and non-complicated diseases, which diverges from the main strategic objectives of tertiary hospitals dedicated to managing complex medical conditions and delivering high-level care through specialised expertise. Alternatively, AI software appeared more suitable for primary hospitals, where it could be used for general disease diagnosis and population-level screening. Tertiary hospitals prioritised other initiatives perceived to be more critical to their mission. This prioritisation further contributed to reluctance among healthcare practitioners to embrace AI-assisted software, as they regarded the introduction of AI software as a distraction from the hospital’s core mission.

At times, it’s difficult for us to establish collaborations with high-level tertiary hospitals. These hospitals often have highly experienced clinicians, focusing on the improvement of care quality for complex diseases and rare conditions. They have the perception that our AI software may not perform well in their setting. Instead, they suggest that our software may be better suited for primary hospitals where initial diagnoses take place.—vendor of AI-assisted decision-making software

Available resources–materials and equipment (–): lack of necessary medical supplies

The availability of essential medical supplies was integral to the successful implementation of AI-assisted decision-making software that relied on medical imaging data as the primary data source for accurate assessment. In primary and secondary hospitals, where resources were relatively scarce, the limited access to equipment, like CT scanners, hindered the implementation of AI-assisted software.

The implementation of AI decision-making software is not possible at hospitals without necessary medical supplies like CT scanners.—hospital informatics specialist

Access to knowledge and information (–): lack of adequate training

For advanced technologies like AI-assisted decision-making software, participating healthcare practitioners sometimes lacked the knowledge and information required for effective use. Inadequate training possibly contributed to a reluctance to adopt the technology, owing to unfamiliarity with the software’s complete functionalities and challenges in its practical application.

I believe that I haven’t received thorough training on using the AI software. In fact, I’ve explored it on my own, and I’m not completely aware of all its functions.—intermediate clinician

Individuals

High-level and mid-level leaders (+)

Engagement of hospital administrator (+).

Participants in the study indicated that effective implementation of AI-assisted decision-making software was facilitated by the hospital’s active leadership engagement and promotional initiatives from the hospital to the department level. Hospital administrators took proactive steps to align the AI-assisted software with the institution’s long-term strategic goals through the initiation and oversight of pilot programmes. The endorsement and active support at the hospital level greatly fostered a collaborative environment among the clinical department, information technology department and vendor of AI-assisted decision-support software, positioning the AI-assisted software as an integral component of the hospital.

Strong support from the top level, especially from our hospital administrators, really makes a difference in introducing AI software and running it smoothly. They ensure its fit with the hospital through a pilot program and rigorously and effectively promote multi-stakeholder communication and collaboration.—hospital informatics specialist

Engagement of department head (+)

At the departmental level, leaders, such as department heads, who supported AI-assisted software, actively championed its implementation. They cultivated an atmosphere of support and knowledge-sharing within the department through the organisation of workshops and seminars, stressing the prospective clinical benefits of implementation. Beyond intradepartmental communication, they facilitated efficient interdepartmental communication with the information technology department to ensure seamless integration of AI-assisted decision-making software into the real-world clinical setting.

Our department head actively supports the implementation of AI software by integrating discussions about relevant knowledge and experiences into our weekly meetings, shedding light on the potential clinical benefits. In fact, they play a very important role in facilitating the integration of AI software into the existing PACS, making the entire implementation process much more efficient and effective.—junior clinician

Need (–): unmet clinical needs

Study participants revealed that AI-assisted decision-making software failed to meet the diverse clinical needs of healthcare practitioners. Currently, the underlying AI algorithm was predominantly designed and trained to address general and non-personalised clinical needs. Clinicians perceived AI-assisted software as insufficient in cases that were complex and multifaceted, requiring a comprehensive approach and an in-depth understanding of the patient’s medical history. Incorporating customisation options, enhancing adaptability in AI algorithms and demonstrating a commitment to ongoing improvement were essential to ensure that AI-assisted decision-making software aligned with the disparate needs of healthcare practitioners across various specialties and clinical settings.

The AI software we have is good for the basics, but we definitely expect more. Currently, its functions are too simplified, and it struggles in tricky and complex situations where you need a deep dive into the patient’s history.—senior clinician

Capability (–): incompetence in understanding AI reasoning mechanism

Participating healthcare practitioners faced challenges in implementing AI-assisted decision-making software in clinical practice due to a limited capability to understand AI algorithms. This deficiency in knowledge and expertise led to difficulties in comprehending the rationale behind the AI’s recommendations and decisions, and the resulting lack of clarity contributed to distrust of and reluctance towards the implementation of AI-assisted software. Participants suggested that addressing case-specific reasoning and providing global transparency, such as the algorithm’s functionality, strengths and limitations, would help open the ‘black box’ of AI technology.

I sometimes find it hard to trust and embrace the software’s recommendations. I struggle with the complexity of the underlying rationale, since the software provides recommendations based on these algorithms. It’s not clear to me what’s inside the black box, like how it works, what its weakness and strengths are, etc. Clarifications on those factors would be helpful.—senior clinician

Implementation process

Engaging–innovation recipient (+): involvement of clinicians.

It was reported that active engagement substantially facilitated the implementation of AI-assisted decision-making software, particularly through the active involvement of key stakeholders during the implementation process. Specifically, the pilot testing phase was conducted to collect valuable insights and suggestions from users, determining limitations and identifying areas for improvement that closely aligned with their clinical needs and workflow. The healthcare practitioners, in turn, were empowered to actively shape the software’s functionality and streamline the implementation process.

During the pilot testing phase, we collaborated with the entire department to answer any questions and gather suggestions. This active engagement of the clinicians was helpful not only for us to continuously improve the software, but also for the clinicians to feel involved and make an impact; it’s a win-win situation.—vendor of AI-assisted decision-making software

Reflecting and evaluating (–): lack of feedback incorporation

Reflecting and evaluating were central components of the continuous feedback-driven improvements that promoted the seamless integration of AI-assisted decision-making software into clinical settings. However, study participants noted that their suggestions and qualitative feedback, shared during the pilot testing phase, were not adequately reflected and implemented for process enhancement. Furthermore, there was a notable absence of quantitative assessment of the clinical performance of the AI-assisted software following its implementation. The absence of informative reflection on provided feedback and a structured evaluation process contributed to unaddressed challenges and the frustration of participating healthcare practitioners who felt that their inputs were not sufficiently valued.

My colleague and I provided feedback and suggestions about this AI software during the pilot testing phase, but we see no corresponding actions taken by the vendors, which is disappointing.—intermediate clinician

I don’t think there is any systematic evaluation mechanism related to the clinical performance of AI software at our hospital. It is, however, important to periodically and systematically evaluate the performance of the software to make it more accurate and usable.—hospital informatics specialist

Comparison of stakeholder perspectives

It should be noted that the perceptions regarding the selected AI-assisted decision-making software varied considerably among different stakeholder roles. Recognising these unique perspectives is essential to the development of effective implementation strategies that address the varied concerns and priorities of each stakeholder group.

Clinicians, as the primary users, weighed the potential benefits and limitations of the software. Junior clinicians, who had limited clinical experience, generally held positive attitudes towards the implementation, highlighting the software’s ability to support clinical judgement and enhance routine clinical efficiency. While recognising the value of AI implementation, intermediate clinicians, who used the software with greater insight, offered practical perspectives and emphasised the need for strong interdepartmental collaboration, adequate training and referral mechanisms to tertiary or specialised hospitals for patients with positive or indeterminate disease findings. Senior clinicians provided the most critical feedback, expecting higher standards and improved performance in clinical effectiveness, reliability and transparency, particularly in complex clinical scenarios. Hospital administrators, on the other hand, focused on financial implications, such as cost-effectiveness and budgetary constraints, while informatics specialists highlighted the importance of robust information security measures. Moreover, the selected vendors underscored the necessity of aligning AI functions with the mission of the healthcare institution to ensure successful implementation.

To the best of our knowledge, this study was the first qualitative assessment to leverage a well-established implementation framework to systematically guide the identification of barriers to and facilitators of AI-assisted decision-making software in China’s healthcare system. The implementation of AI-assisted decision-making software in clinical practice is characterised by the inherent complexity and dynamic nature of both AI technology and the healthcare environment. Previous literature attempted to synthesise and understand relevant determinants, but with minimal application of theories or frameworks from implementation science, particularly in developing countries. 30 50 The use of the updated CFIR played a fundamental role in understanding the context of implementation and establishing a strategic roadmap, consistently and efficiently producing collective and generalisable knowledge for the development of context-specific implementation strategies tailored to China’s healthcare system. 51 The dynamic and continuous interaction among the five domains of the updated CFIR collectively shaped the outcome and effectiveness of imaging-based diagnostic AI-assisted software implementation. 36 52 The current study validated several barriers identified in prior research across diverse clinical settings, including suboptimal clinical performance, 53–55 compromised real-world data (RWD) quality, 56–58 insufficient training, 54 59 60 deficits in transparency and trust, 60–62 financial constraints, 54 57 insufficiency of necessary equipment, 59 60 and limited interdepartmental communication. 54 More importantly, the study findings contributed novel insights to the continuous exploration of the implementation of imaging-based diagnostic AI-assisted decision-making software from the unique perspective of China’s healthcare system, establishing a theoretical foundation to guide the development of practical recommendations and implementation strategies for future improvement efforts.

Given the different perspectives of various stakeholder roles, the prioritisation of barriers, as well as the feasibility and cost-effectiveness of recommendations, the following three barriers and their corresponding suggestions were discussed in further detail ( figure 2 ).

Barriers and suggestions for implementing imaging-based diagnostic AI-assisted decision-making software in China. AI, artificial intelligence.

Barrier: misalignment between software functions and goals of healthcare institutions.

Suggestion: shift the focus of imaging-based diagnostic AI-assisted decision-making software implementation towards primary and secondary healthcare settings, where the AI software’s strengths in diagnosing generalised and non-complex conditions can be leveraged effectively.

The AI-assisted decision-making software has been disproportionately implemented in tertiary hospitals in China. 29 However, a notable misalignment between the functionality of imaging-based diagnostic AI-assisted decision-making software and the strategic goals of tertiary hospitals was found in the current study. Specifically, the software demonstrated its effectiveness in diagnosing generalised and non-complex medical conditions. Tertiary hospitals, in contrast, mainly served as hubs providing specialised and advanced healthcare services, particularly for complex medical conditions. Despite the great potential of AI technologies to revolutionise healthcare, it has become evident that the complexity of conditions frequently encountered by high-level healthcare institutions has not been adequately addressed by existing AI capabilities. 55 60 Given the pivotal role that tertiary hospitals play in China’s healthcare system, imaging-based diagnostic AI-assisted decision-making software must advance further to meet their multifaceted clinical needs in the near future. On the other hand, to effectively promote the implementation of existing imaging-based diagnostic AI-assisted software, shifting the focus of implementation towards the primary or secondary level of healthcare, such as primary hospitals, physical examination centres or secondary hospitals, would offer a more cohesive fit. This shift would create a more suitable context in which to implement the software effectively, leveraging its strengths while accommodating the unique challenges faced by healthcare institutions at the primary or secondary level. Primary healthcare in China typically addresses a broader spectrum of clinical needs and medical cases. However, it is often not the initial point of medical contact due to the suboptimal quality of care. 63 With substantial disparities between primary and tertiary care, residents in China perceived primary healthcare as being of poor quality, reflected in a low doctor-to-patient ratio in tertiary care. 64 65 Various contributing factors were reported, including insufficient knowledge among healthcare professionals, a gap between knowledge and practice, disproportionate distribution of the health workforce and inadequate continuity of care across the entire healthcare system. 63 66 Implementing imaging-based diagnostic AI-assisted decision-making software at the primary level, aligning its functionality with the overarching goals of primary care, holds promise for addressing these challenges and bridging gaps, thereby potentially diverting patients with common medical needs towards primary healthcare facilities. AI technologies have the potential to facilitate a diagnosis at least equivalent to that of an intermediate-level clinician, complement clinical expertise and optimise medical resource allocation, enhancing early disease detection and ultimately promoting the quality of patient care across the healthcare hierarchy in China. 67–69

Barrier: lack of a collaborative network between primary/secondary and tertiary hospitals.

Suggestion: establish an integrated healthcare ecosystem driven by a hub-and-spoke model to promote the sharing of clinical data and improve patient referrals, ensuring seamless coordination between primary/secondary and tertiary healthcare institutions.

As mentioned above, one of the key challenges in China’s healthcare system lay in the fragmentation of healthcare delivery. The integration of imaging-based diagnostic AI-assisted decision-making software into primary care underscored the absence of a comprehensive collaborative network connecting primary and tertiary healthcare institutions. This deficiency exacerbated inefficiencies in patient referrals, with positive or indeterminate AI-assisted diagnoses at primary hospitals not being effectively referred to tertiary hospitals. As a result, patients could experience delays in receiving specialised treatment. Furthermore, the scattered and isolated electronic medical systems in China posed substantial challenges to joint healthcare initiatives. 63 Sharing and transfer of clinical data related to disease diagnosis were hindered by the heterogeneity of systems, potentially leading to unnecessary and duplicated medical examinations. To address this issue, establishing an integrated healthcare ecosystem driven by the hub-and-spoke model would be a promising solution to promote the sharing of clinical data and medical knowledge, as well as to facilitate best medical practices. This model, when applied in a healthcare system, enhances peripheral services by connecting them with resource-replete centres. 70 In this context, basic medical needs are met through spokes, such as primary healthcare institutions, while medical resources and investments are centralised at hubs, such as tertiary healthcare institutions. The utilisation of imaging-based diagnostic AI-assisted decision-making software in a hub-and-spoke network of stroke care showed improved clinical efficiency, including decreased time to notification and transfer time from spokes to hubs, leading to a shorter length of stay. 71 72 Currently, a similar network tailored to the healthcare system in China has yet to be implemented, and its potential clinical benefits remain unclear. To fully leverage the capabilities of imaging-based diagnostic AI-assisted decision-making software, it is essential to seamlessly refer patients with positive or indeterminate diagnoses at spoke sites to hub sites for specialised care, minimising delays in early treatment.

Barrier: lack of information security measures and certification.

Suggestion: establish an independent information platform with robust data security measures to ensure the protection of clinical data privacy and facilitate the integration and data exchange across primary/secondary and tertiary healthcare institutions.

The cybersecurity and data protection regulations in China are undergoing rapid changes, positioning them among the most stringent globally. The ‘Information security technology—baseline for classified protection of cybersecurity (GB/T 22239-2019)’, jointly issued by the State Market Regulatory Administration and the Standardization Administration of China, came into effect in December 2019. 73 The standard defined level 1 to level 4 security requirements and specified baseline guidance for information security technology to protect information platforms and systems responsible for collecting, storing, transmitting and processing personal information. 73 As China’s healthcare informatisation continues to advance, the security of hospital information infrastructures is becoming increasingly critical, given that any disruptions could have substantial consequences for individuals and society as a whole. To address this concern, the National Health Commission released the ‘Guidelines for Information Security Level Protection in the Healthcare Industry’, stipulating that core systems in healthcare institutions should adhere to level 3 information security protection standards, the highest level for non-banking institutions. 74 Level 3 security protection mainly covers five aspects of technical security requirements and five aspects of security management requirements. The multidimensional assessment involves 73 categories, with nearly 300 specific requirements, covering aspects such as information protection, security auditing and communication confidentiality. 73 While AI technology software may not be explicitly covered by this specific requirement, it is often a mandatory administrative step for healthcare institutions, particularly tertiary hospitals, to request level 3 security certification to ensure the protection of clinical data privacy during the integration and data exchange of AI software.
As an integrated approach to the unique challenges faced by China’s healthcare system, an independent information platform with robust data security measures or level 3 security qualification should be established to facilitate the implementation of imaging-based diagnostic AI-assisted decision-making software. This platform would act as a vital link connecting the AI software with primary and tertiary healthcare institutions. It is designed to collect basic demographic information and medical history while transmitting deidentified imaging data to AI-assisted software for an initial diagnosis in primary care settings. In cases where patients receive positive or indeterminate reports, referral to the collaborating tertiary hospital within the hub-and-spoke network is warranted. Relevant clinical information, including collected demographic data, medical history, clinical reports and referral forms, is seamlessly transferred to enhance overall efficiency. The successful establishment of this platform requires multistakeholder engagement for proficient and collective design and management, addressing interinstitutional data sharing, security and governance challenges stemming from legal, technical and operational requirements. 70 75

Implications for policy-makers and healthcare practitioners

Globally, there are marked differences in the implementation of AI in healthcare systems. These variations are associated with factors including the type of AI software, healthcare infrastructure, existing policies and technological advancements. Despite ongoing criticism and multiple implementation challenges, AI-assisted decision-making software and health information technologies have demonstrated substantial potential for enhancing diagnostic procedures in primary care, especially with strong regulatory support, in both resource-rich and under-resourced settings. 76–79 Primary care is an ideal setting for AI tools to improve clinical efficiency and reduce medical errors due to its role in managing a large number of patients and making decisions under uncertainty. 76

In Germany, the Federal Ministry of Health has been proactive in supporting AI integration in healthcare. The ‘Smart Physician Portal for Patients with Unclear Disease’ project provided ongoing support to general practitioners (GPs) using an AI-based tool to diagnose uncertain cases. 80 This user-centred decision support tool was designed specifically to address GPs’ essential clinical needs through interviews and workshops, ensuring a seamless fit into the routine workflow while allowing for more efficient patient diagnosis. Similarly, the National Health Service in the UK has actively incorporated AI technologies into primary care through the use of Babylon’s Triage and Diagnostic system, streamlining the diagnostic process. 81 Furthermore, the European Union’s 2021 Proposal sought to establish a global standard for safe, reliable and ethical AI by creating a comprehensive legal framework designed to enhance trust and encourage broad implementation, ensuring that AI systems are both technically and ethically sound. 82 83 Therefore, regulatory support to increase trust, robust technical infrastructure, strong ethical standards and user-centred design are crucial for the extensive integration of AI in healthcare, ultimately improving patient outcomes and clinical efficiency.

In contrast, China faces particular difficulties due to its large and heterogeneous healthcare system, regional disparities in healthcare infrastructure and a rapidly evolving regulatory environment. The successful integration of AI is hindered by the disparity in healthcare resources and the critical need for interoperability among various hospital information systems, especially in primary care settings. In particular, primary healthcare facilities in rural areas, constrained by financial and resource limitations, often lack access to the advanced AI technologies that are more readily available in tertiary hospitals in big cities. 84 More importantly, while electronic medical record (EMR) systems are widely adopted in primary hospitals, their average level and functionality are typically lower than those in tertiary hospitals. 85 According to the National Health Commission of the People’s Republic of China, as of 2022, the average level of EMR systems in tertiary public hospitals was 4.0, indicating a medium stage of EMR development that enables basic clinical decision support. 86 However, to fully facilitate intelligent clinical decision support, EMR systems need to reach at least level 5, posing an even greater challenge to the systematic integration of AI. 87

Financial incentives and policy support for EMR infrastructure facilitating the use of AI in primary healthcare settings could drive broader implementation and improve care quality. Guidelines for the thorough assessment of AI-assisted decision-making software for cost-effectiveness, efficacy and safety are also urgently needed. 88 To improve care coordination across different levels of healthcare institutions, policies should also support collaborative networks and data-sharing platforms. To increase healthcare practitioners’ familiarity with and confidence in AI-assisted decision-making software, major implementation barriers must be addressed and overall trust in AI technologies must be increased through thorough training and continued regulatory support.

Strengths and limitations

The current study has several strengths and limitations. To ensure scientific rigour and validity, the updated CFIR was employed throughout, from the design of the interview guides to the reporting of results. Although primarily descriptive, the study extended beyond identifying barriers and facilitators by providing practical suggestions tailored to China’s healthcare system. To capture a broad spectrum of perspectives, a wide range of key stakeholders, from healthcare practitioners to industry vendors, were involved, allowing for a qualitative exploration of various roles and the provision of comprehensive insights. However, future research should include patients’ perspectives, particularly in the context of doctor–patient shared decision-making. Given the extensive impact of AI technology on the professional autonomy of healthcare practitioners, existing literature has suggested a negative perception among patients towards physicians using AI-assisted software. 89 90 While the current study specifically focused on two representative AI applications, namely imaging-based diagnostic AI-assisted decision-making software for lung nodules and diabetic retinopathy, the findings were further generalised to diagnostic AI-assisted decision-making software using medical imaging as source data more broadly. Despite employing the updated CFIR as a systematic approach to understanding barriers and facilitators in the implementation process for enhanced generalisability, it is important to acknowledge potential variations across different types of diagnostic AI software. The current study might not fully capture certain software-specific differences and contextual factors associated with implementation. Moreover, purposive sampling was adopted, and all selected healthcare institutions were located in well-resourced areas, potentially limiting the generalisability of findings beyond the selected healthcare institutions and software.
The results should be interpreted considering this context, emphasising the strong need for cross-comparisons of the findings and the validation of recommendations in other settings, particularly in rural areas.

The rapid advancement of AI techniques is fuelling a global shift in the conventional medical decision-making paradigm. By using the updated CFIR, the current study contributes to a comprehensive understanding of the barriers and facilitators in implementing imaging-based diagnostic AI-assisted decision-making software in China’s evolving healthcare landscape. The findings serve as a solid theoretical foundation, providing a possible roadmap for future efforts aimed at optimising the effective implementation of such software. The tangible suggestions could further inform healthcare practitioners, industry stakeholders and policy-makers in both China and other developing countries, helping to unleash the full potential of imaging-based diagnostic AI-assisted decision-making software for optimal patient care.

Ethics statements

Patient consent for publication

Not applicable.

Ethics approval

This study involves human participants and was approved by the Institutional Review Board of Peking University (IRB00001052-22138). Participants gave informed consent to participate in the study before taking part.

Acknowledgments

The authors thank all individuals who took the time to participate in the interviews and those who provided constructive suggestions on the manuscript.


Contributors CY, JZ and LL designed the study and developed the eligibility criteria. CY contacted the respondents. CY, XL and FJ designed the interview guides. JZ and LL reviewed and made critical comments. XL and FJ conducted interviews and analysed the data. JZ and CY contributed to the review of qualitative analysis through discussion with XL and FJ. XL completed the first draft of the manuscript. CY, JZ and LL reviewed and revised the manuscript. All authors read and approved the final manuscript. CY is the guarantor, has full access to all data in the study and has final responsibility for the decision to submit for publication.

Funding This work was supported by Merck Sharp & Dohme, a subsidiary of Merck & Co., Rahway, New Jersey, USA. The sponsor participated in the design and development of the study, as well as the revision and editing of this manuscript.

Competing interests JZ is an employee of MSD R&D (China). LL is affiliated with Merck Sharp & Dohme, a subsidiary of Merck & Co., Rahway, New Jersey, USA, which funded the study and monitored the conduct of the study.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.


  • Open access
  • Published: 13 September 2024

Physicians and nurses experiences of providing care to patients within a mobile care unit – a qualitative interview study

  • Christofer Teske 1,2,
  • Ghassan Mourad 1 &
  • Micha Milovanovic 1,3

BMC Health Services Research volume 24, Article number: 1065 (2024)


Introduction

There is a growing need for alternative forms of care to address citizen demands and ensure a competent healthcare workforce across municipalities and regions. One of these forms of care is the use of mobile care units. The aim of the current study was to describe physicians’ and nurses’ experiences of providing care to patients within a mobile care unit in Sweden.

Data were collected between March 2022 and January 2023 through qualitative interviews with 14 physicians and nurses employed in various mobile care units in different regions in Sweden. These interviews were transcribed verbatim and subjected to content analysis, with the study adhering to the Standards for Reporting Qualitative Research (SRQR).

The analysis resulted in two main categories, “Unlocking the potential of mobile care” and “The challenges of moving hospitals to patients’ homes”, and seven subcategories. The respondents viewed mobile care at home as highly advantageous, positively impacting both patients and caregivers. They believed their contributions enhanced patients’ well-being, fostering a welcoming atmosphere. They also noted receiving more quality time for each patient, enabling thorough assessments and promoting a person-centered approach, which resulted in more gratifying mutual relationships. However, they also experienced challenges with mobile care, such as geographical limitations, limited opening hours and logistical complexity, which can lead to less equitable and efficient care.

Conclusions

Physicians and nurses in mobile care units emphasized positive outcomes, contributing to patient well-being through a person-centered approach. They highlighted increased quality time, comprehensive assessments, and overall satisfaction, praising the mobile care unit’s unique continuity for enhancing safety and fostering meaningful relationships in the patient’s home environment. In order for mobile care to develop and become a natural part of healthcare, challenges such as geographical limitations and logistics need to be addressed.

Peer Review reports

Shifts in population demographics and the present structure of the healthcare system prompt inquiries about the optimal care for frail older people [ 1 , 2 , 3 , 4 ]. The multifaceted health conditions and diverse requirements of these individuals result in increased healthcare appointments and recurrent hospital stays, putting pressure on the current health infrastructure [ 5 , 6 ]. In Sweden, the state oversees general healthcare policy, with supervision by the Inspectorate for Health and Care. While regions ensure that all citizens have access to quality care, municipalities look after long-term health and social care for frail older people. Primary care serves as the initial point of contact in the healthcare system, providing basic services either at facilities or at home. Primary care providers also guide patients to the appropriate level of care as required [ 7 ].

The transition towards accessible and qualitative healthcare is underway in municipalities and regions [ 8 ]. This transition is important because some individuals may experience problems accessing healthcare due to distance, severe illness and immobility [ 9 , 10 , 11 ]. This change, however, demands long-term commitment and perseverance, not only from the regions and municipalities, but also from the government [ 12 ]. The goal is to develop person-centered, efficient, and purposeful methods that cater to patient needs. This also means that different healthcare stakeholders, specialties, and professions need to collaborate more effectively [ 6 ]. To respond to citizens’ demands for accessible care, there is a need for alternative forms of care, for example, mobile care, that can offer prompt and appropriate care within the available resources [ 12 ].

The terminology for mobile care varies from country to country, but the care provided and its purpose are the same: to provide highly specialized care, mainly by physicians and nurses, for conditions that normally require hospital admission [ 13 , 14 ]. Examples of mobile care are geriatric “Hospital at home” programs that offer treatments typically exclusive to hospitals in patients’ homes, including monitoring, drug administration, nursing, and rehabilitation processes [ 13 , 15 ]. Hospital at home is defined as “a service that provides active treatment, by health care professionals, in the patient’s home for a condition that otherwise would require acute hospital in-patient care, always for a limited time period”, unlike home nursing care [ 13 , 16 ]. Patients are evaluated in various settings, including by their general practitioners or in emergency rooms, before being directed to these services. This model can also support those discharged early from hospital [ 17 , 18 ]. The target group for mobile care varies, but the mobile units in the current study focus on frail older people. The National Board of Health and Welfare defines frail older people as people over 65 years of age with several chronic diseases and extensive needs for both outpatient and inpatient medical care [ 17 ].

Transitioning the care of patients from hospitals to their homes poses a formidable challenge, primarily due to concerns regarding patient safety and the constraints inherent to a patient’s home environment. Previous studies show that many patients are sent to hospitals instead of being assessed for mobile care due to various circumstances, e.g., for reasons of convenience [ 14 , 19 ]. In cases where the assessment is performed, the mobile care team often rejects the patient due to lack of time, logistical reasons or because the patient is deemed unsuitable [ 13 , 14 , 20 ]. More knowledge is needed about physicians’ and nurses’ experiences of mobile care to provide an improved and developed perspective on how it can be incorporated into the healthcare system. The aim of the current study was to describe physicians’ and nurses’ experiences of providing care to patients within a mobile care unit in Sweden.

We employed a qualitative, inductive approach and used a content analysis methodology as outlined by Hsieh & Shannon [ 21 ]. In this approach, coding and theme development were driven by the shared meaning found within the data. The design’s primary objective was to discern, analyze, and interpret patterns within the qualitative data. The study adhered to the Standards for Reporting Qualitative Research (SRQR) [ 22 ] and was approved by the Ethics Review Authority, Uppsala, Sweden (reg. number: 2020–06986).

Sampling and setting

The interviews were conducted between March 2022 and January 2023 in four Swedish cities in four different regions, with populations varying between 61,000 and 160,000 inhabitants. All cities were equipped with mobile care units. Five units were found through an internet search, after which contact was made with the regional management. Of these, four teams agreed to participate. These units specialize in mobile care as their primary field, delivering direct care to patients and offering indirect support to other physicians and nurses involved in providing such care. Mobile care units primarily offer home-based and inpatient care, with the number of patients receiving home care varying from 5 to 15. To be eligible for inclusion, participants had to meet the following criteria: active employment in a specialized field related to internal medicine or geriatric care, a minimum of 2 years of professional experience in the domain of the mobile care unit, and proficiency in the Swedish language. Invitations to participate in the study were issued by either the department head or a senior supervising physician within the healthcare facility. All physicians and nurses working in the included mobile care units who fulfilled the inclusion criteria were invited to participate, and all agreed to participate (Table  1 ).

Data collection

Due to the COVID-19 pandemic, interviews were performed by telephone ( n  = 11) or via Microsoft Teams© (Microsoft Corporation, Redmond, Washington, USA) ( n  = 3). Participants were given the opportunity to propose a suitable time for the interview. Each interview began with the participant introducing themselves and describing their experience with mobile care. The semi-structured interview guide was created by the authors with open-ended questions, followed up by probing questions (see supplementary file). One pilot interview was conducted; as it did not result in any changes to the interview guide, it was included in the analysis. All interviews were performed by the first author (CT). CT is a registered nurse working within the field of emergency care with previous experience in qualitative interviewing. CT had no prior care relationship with the study participants. Participants were encouraged to engage in open discussion, with occasional probing queries aimed at enhancing clarity, such as requests for further elaboration, explanations, and exploration of the how and why aspects. The interviews lasted between 25 and 55 min, were audio-recorded and then transcribed verbatim by CT. Before the study commenced, physicians and nurses were briefed on the study through both verbal and written communication. The participants were assured of confidentiality, and only the researchers associated with the project could access the data, in line with The Swedish Research Council’s protocols [ 23 ].

Data analysis

The analysis of the transcribed interviews was conducted according to conventional content analysis based on Hsieh & Shannon [ 21 ]. All authors individually read four transcripts to gain both depth and breadth in understanding the material. Units of meaning in the text that were perceived to capture key thoughts or concepts were marked directly in the text, and notes describing first impressions were made in the margins, constituting an initial analysis. To increase the trustworthiness of the study, all authors individually coded four transcripts and then mutually discussed the findings to establish a consistent coding scheme. Based on this coding scheme, CT coded the rest of the transcripts. The codes were then grouped into meaningful clusters based on how they related and linked to each other; these clusters formed the emerging subcategories. Depending on how the subcategories related to each other, they were then grouped into a smaller number of main categories. These steps were mutually discussed by all authors. The findings were strengthened and clarified by using specific quotations, derived directly from the initial dataset and eventually translated into English. Table  2 provides examples representing different stages of the analysis.
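As a purely illustrative sketch, the grouping workflow just described (meaning units → codes → subcategories → main categories) can be modelled as two simple mappings. All code and subcategory labels below are invented placeholders that loosely echo the category names reported in the study; they are not the study’s actual coding scheme:

```python
from collections import defaultdict

# Hypothetical codes mapped to subcategories (labels are illustrative only).
code_to_subcategory = {
    "observing patient at home": "Person-centered: the right way to care",
    "shared care planning": "Safer care through increased patient activity",
    "travel time limits visits": "Mobile care availability varies",
}

# Subcategories roll up into the two main categories reported in the study.
subcategory_to_category = {
    "Person-centered: the right way to care": "Unlocking the potential of mobile care",
    "Safer care through increased patient activity": "Unlocking the potential of mobile care",
    "Mobile care availability varies": "The challenges of moving hospitals to patients' homes",
}

def group_codes(codes):
    """Group a list of codes under their main categories via subcategories."""
    grouped = defaultdict(list)
    for code in codes:
        sub = code_to_subcategory[code]
        grouped[subcategory_to_category[sub]].append((sub, code))
    return dict(grouped)

result = group_codes(list(code_to_subcategory))
```

In the actual study this grouping was done interpretively through discussion among the authors; the mapping above only makes the hierarchical structure of the analysis explicit.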

The results are derived from interviews with physicians and nurses, who were actively employed in specialized fields related to internal medicine or geriatric care. Each participant had at least two years of professional experience in the mobile care unit and was proficient in the Swedish language. Analysis of the interviews resulted in two main categories and seven subcategories according to Table  3 . The main categories were: Unlocking the potential of mobile care and The challenges of moving hospitals to patients’ homes.

Unlocking the potential of mobile care

Physicians and nurses described that mobile care promotes person-centered care based on mutual equality. Caring for the patient in their home increases transparency and safety for patients. Cooperation with different treatment units ensures comprehensive and safe care. They described a healthy work environment that fosters professional pride.

Person-centered: the right way to care

Physicians and nurses described that it was rewarding to observe the patient in their natural environment. Those who had previously worked in a hospital setting experienced a shift in the balance of power when care took place in the patient’s home. The healthcare staff described that they felt that they were not in a position of power, calling it “mutual equality”, and that this led to patients being more inclined to open up and share their opinions. This contributed to a more accurate assessment that aligned with a person-centered care approach. When assessing the patient in their living environment, physicians and nurses were able to identify potential obstacles and complications more effectively. Such obstacles might be, for example, thresholds in the dwelling that could pose a fall risk, or a significant distance between the toilet and bedroom that might result in the patient avoiding diuretics due to concerns about incontinence. Physicians and nurses described that it is of central importance not only to identify existing shortcomings but also to anticipate potential vulnerabilities that might arise while the patient is enrolled in the mobile care unit. Proactively working on prevention was essential to ensure the patient’s overall well-being.

“I find it very rewarding to enter their home environment. You sort of get on the same wavelength , and it feels , what should I say , more human to sit with them at home. You get a sense of how this patient operates in their home environment , and it’s important information that we lack when the patient is in their hospital room” [ 7 ].

Safer care through increased patient activity

Physicians and nurses described that patients are satisfied with being cared for at home. The care can be planned collaboratively to a greater extent, ensuring continuous patient involvement. It facilitates conducting examinations and treatments at home rather than requiring transportation. Physicians and nurses shared their experiences of care safety, noting that developing a care plan together with the patients was a factor that increased safety. They expressed that this form of care offers a different type of continuity compared to hospital care, where there is variability in the staff. Knowing the patient and their history increased the safety of care. According to physicians and nurses, communication was a key factor. It was essential to inform patients about the reason for the unit’s visit and the necessary treatments. Additionally, informing relatives was highlighted as an important aspect of care. Physicians and nurses described that relatives need to be involved and aware of the plan for the patients, especially since this form of care might be new to some. Furthermore, it was important for physicians and nurses to provide information to both relatives and the patient on how to contact healthcare if required, as this leads to increased security for them.

“Sometimes , they may need an injection to reduce fluid retention for a week , and then the nurse will work together with the patient to develop a plan so that they feel confident in saying , ‘Yes , now we’re going to do it like this” [ 10 ].

Good care requires good collaboration

To ensure high-quality care, collaboration within different healthcare organizations was essential according to physicians and nurses. They conveyed that frequent interaction between various healthcare entities and professions enhanced the sense of security for physicians and nurses, which in turn positively affected the patients. When the mobile unit was aware that home care services assisted or that home healthcare was responsible for the patient at night, the unit felt an increased sense of security in providing care in the patient’s home.

“But the idea and the goal are that patients who do not require inpatient care should be able to stay with our assistance and in collaboration with home healthcare , as well as with , for example , occupational therapists and physiotherapists” [ 11 ].

The perceived benefit of collaborating with hospital specialists, who are not directly part of the mobile unit, was perceived to facilitate the unit’s care delivery. A contributing factor to effective collaboration was that the facility was a smaller hospital, and the mobile unit was stationed close to the hospital’s departments.

“We are a very small hospital , so we have the advantage of being close at hand. We have cooperation among all in.” [ 9 ].

Making a difference gives a sense of professional pride

Physicians and nurses experienced that they were doing something good for frail older people. They provided good healthcare in a place where the patient wanted to be. Physicians and nurses believed that care in a patient’s home environment surpassed the care provided in hospitals. They felt that they had a meaningful profession, that they had an impact on the patients’ lives and that they contributed to the patients’ well-being. Physicians and nurses described that they had more time for each patient and did not have to move between patients as they did in the hospital. This led to less stress. It also allowed for a thorough assessment and promoted the establishment of a more rewarding mutual relationship.

“I believe that it’s necessary for us to fulfill a role and make a contribution for the elderly. I see that the unit is needed and that we serve a purpose” [ 3 ].

The challenges of moving hospitals to patients’ homes

Physicians and nurses describe that geographical differences and the limited operating hours of mobile care teams lead to unequal care. They face logistical challenges, such as transporting equipment and navigating different administrative systems, which need improvement. Additionally, maintaining good hygiene in less clean home environments can be difficult.

Mobile care availability varies among different populations

Physicians and nurses emphasized the limitations of a mobile care unit compared to traditional hospital care. They often used expressions such as: “compared to the hospital or the emergency room”.

Some of the physicians and nurses highlighted that this type of care is limited by geographical boundaries. Within a municipality, there is often a higher concentration of resources and opportunities compared to areas outside its central parts. Physicians and nurses described that patients who live within the unit’s area are offered this type of care while others are not, leading to inequality in care. Furthermore, mobile care was perceived as insufficient because the number of scheduled visits must be reduced if the travel time becomes too long; physicians and nurses may need to travel up to 60 minutes for a single visit.

“There are still quite significant differences in the care one receives when living inside the city as opposed to living outside the municipality.” [1]. “The furthest locations. It’s travel time and such. Considering that, we are not very efficient.” [3].

Limitations due to the unit size and working hours

According to physicians and nurses, the mobile unit usually consists of a fixed number of employees who are not replaced when illness occurs, making the unit fragile. The unit’s operations include both scheduled and emergency visits, and emergency visits can be limited by a lack of necessary resources, e.g. illness among unit members. In such situations, the common alternative is to call for ambulance transportation that brings the patient to the nearest hospital for an emergency assessment.

Another aspect is that the mobile unit is only available during office hours. If patients experience a health emergency outside office hours, they can speak to a healthcare professional who works in the hospital. Some physicians and nurses perceived this arrangement as positive, providing extra security for patients connected to mobile care, while others were more negative about the limited opening hours compared to the hospital.

“We work regular office hours, Monday to Friday. Then during other times, they can call us, and we leave a brochure. And if we don’t answer the phone, they are redirected to the department, so they can get in touch with the doctor. It has never really become a problem.” [8].

The importance of equipment and logistics

Physicians and nurses described that conducting home visits required extensive preparation, especially concerning the equipment that needed to be brought along. Technical complications can arise, which may be difficult to address in the patient’s home, underscoring the importance of reliable equipment. Another challenge highlighted by physicians and nurses was the incompatibility of record-keeping systems across different forms of care; standardizing these systems could optimize the workflow. Moreover, physicians and nurses emphasized that some medical equipment cannot be easily implemented in the home environment, for example monitoring equipment for tracking vital functions and infusion systems for administering intravenous drugs safely.

“It requires quite a bit of logistics. You have to bring things with you. I realized it now when I was about to leave. It demands logistics, and you have to be organized.” [9].

Some physicians and nurses made it clear that not all patients are suitable for a specific treatment at home. In situations where the patient’s condition requires intravenous treatment but the patient lacks supervision or municipal interventions, the unit needs to make an assessment. If the unit can be present during the entire treatment period, it is safe for the patient to receive the treatment at home; otherwise, the alternative is to go to the hospital.

Another issue was the hygiene problems experienced by physicians and nurses. For example, when dressing wounds, it is difficult to maintain cleanliness if the home is already dirty, which is normally not a problem in the hospital environment.

“First, it’s about how the home looks and what possibilities there are. If the home is in disarray, it’s impossible to keep it clean. I know, I was sewing today, and when I compare it to the healthcare center, it’s quite sterile in comparison to a bedroom” [2].

To our knowledge, this is the first study describing physicians’ and nurses’ experiences of providing care to patients within a mobile care unit in Sweden. The study contributes valuable knowledge and insights into how physicians and nurses experience this type of highly specialized care in patients’ homes, which differs from home care nursing, which mainly offers basic medical treatment such as health monitoring, medication administration, wound dressing, and overall patient health support. Physicians and nurses considered that mobile care in the home environment offers advantages that have a positive impact on both the patients and the physicians and nurses themselves. However, they also expressed some challenges connected with mobile care.

Physicians and nurses described mobile care as a person-centered approach, where caring for patients in their own homes has several positive aspects that benefit not only the patients but also the physicians and nurses. They perceived it as gratifying to witness patients in their natural surroundings and noted a power shift during home care, fostering mutual equality, which they felt was difficult to achieve when they worked in hospitals. Physicians and nurses described that patients experience satisfaction when they receive care at home, and they emphasized that mobile care is characterized by collaborative planning, which ensures continuous patient participation. Although person-centered care emphasizes the importance of patient involvement in decision-making [24], earlier research has shown that not all patients prefer active participation [25, 26]. This is mainly due to health-related limitations, lack of support from physicians and nurses, or unfamiliarity with the possibility of participating actively. However, when patients do want to participate actively, they can feel opposed by physicians and nurses. In those moments, they might feel that they have little say or control, which can make them feel less powerful and independent [27, 28]. This suggests that physicians and nurses should pay attention to patients’ needs and wishes for participation in their care. It is also valuable to address non-active participation through targeted efforts such as patient education and empowerment initiatives to facilitate a smooth transition to acceptance of person-centered care in the home environment [26]. Through these efforts, we believe that it is possible to further promote and implement a person-centered approach in mobile care.

Physicians and nurses described that they had more quality time for each patient, enabling a more comprehensive assessment and fostering more satisfying person-centered care. Specifically, they believed that their contributions had a substantial impact on patients’ overall well-being, and they perceived a consistent sense of welcome, receiving affirmative responses regarding their endeavors. Physicians and nurses experienced that the mobile care unit provides a unique continuity compared to hospital care, where staff turnover can introduce variability. Getting to know the patient and their medical history contributes to enhanced safety in care delivery. Previous research [11, 12, 13] has shown that building and maintaining relationships between frail older people and physicians and nurses can be challenging due to the specialized and fragmented healthcare system. A limited number of staff meeting patients in their home environment usually means consistent contact that promotes the quality of care, affecting patients’ feelings of safety and comfort. Moreover, studies show that patients receiving medical care at home tend to report higher levels of satisfaction with their treating physician compared to patients receiving care in a traditional acute hospital environment [12, 13, 14]. Physicians and nurses in this study advocate for the mobile care unit, citing its unique continuity compared to hospitals. We therefore assume that consistent contact with a limited number of staff promotes relational continuity, positively impacting patient satisfaction.

Healthcare professionals described that they provided good healthcare in a place where the patient wanted to be. They believed that care in a patient’s home environment surpassed the care provided in hospitals and had an impact on patients’ lives. Physicians and nurses also described having more time for each individual patient, which allowed for a thorough assessment and promoted the establishment of a more rewarding mutual relationship. This suggests that physicians and nurses appreciated the work environment in the mobile care team. Previous studies have shown a positive correlation between a healthy work environment and better patient experiences [29, 30]. This implies that a positive work environment in mobile care has far-reaching implications that extend beyond the well-being of physicians and nurses: it also positively influences patient satisfaction, quality of care, staff engagement, and the overall efficiency of healthcare delivery in the mobile setting.

Physicians and nurses also described challenges in the work environment, including unsanitary living conditions that can worsen a patient’s medical condition and make infection control more difficult. Previous studies confirm that there is an increased risk associated with certain types of treatment at home and that it is important to carefully assess whether the patient and the environment are suitable for care [14]. On the other hand, being hospitalized increases the risk of nosocomial infections [31, 32, 33].

Physicians and nurses described that the mobile care units have limitations in terms of accessibility. This mostly concerns geographic accessibility, where patients in rural areas do not have the same opportunity for mobile care as those in cities. They also described the units’ working hours and travelling distances as limiting factors. The availability of mobile care, both geographically and in terms of restricted opening hours, is not in line with the Healthcare Act in Sweden [17], which stipulates that healthcare should be provided on equal terms for the entire population. Geographical accessibility can, however, be challenging to fulfill, as Sweden is sparsely populated compared to many other European countries [34]. Proximity to patients in rural areas is a crucial factor affecting access to primary care [19, 20]. To address this issue, the Ministry of Social Affairs has been tasked by the government to investigate and propose changes to increase access to healthcare in rural areas [21]. Global observations indicate a variety of essential approaches for enhancing accessibility to primary healthcare services in rural areas. These encompass reinforcing the healthcare financing system, enhancing the availability of medicines and supplies, collaborating with diverse partners and communities, implementing a robust evaluation system, and fostering dedicated leadership [35, 36, 37]. This indicates that, for follow-up healthcare appointments, digital solutions may become more relevant in the future to minimize transportation for the mobile care units.

Strengths and limitations

Mobile care is not yet widely adopted as a working method in Sweden. Consequently, a geographic spread could not be achieved, and the number of participants was limited. Nevertheless, the findings in this study are based on data collected from a relatively high number of physicians and nurses in Sweden with experience of working in different mobile care units. According to Malterud [38], this indicates that the study has achieved sufficient information power, as all physicians and nurses working in the mobile care units participated in this study. They contributed their unique experiences and provided valuable knowledge to answer the aim of the study. Furthermore, the interviews yielded consistent data, since no new information appeared in the last interviews, and the data were analyzed using an established analysis strategy by Hsieh & Shannon [21].

Eleven of the interviews were carried out via telephone and three via video using Microsoft Teams©. This might be considered a limitation, as telephone interviews may have reduced the richness of the interview content compared with video interviews. However, research shows that the difference between telephone and video interviews is modest [39, 40]. One strength is that the first author (CT) conducted all interviews, which may have positively influenced the quality of the interviews, as the interviewer’s technique improved with each interview. Another strength is that all authors individually coded four transcripts and mutually discussed the findings to establish a consistent coding scheme, which CT then used to code the rest of the transcripts. Furthermore, all authors participated in forming subcategories and categories to ensure credibility. Dependability was established by maintaining a comprehensive audit trail, ensuring consistent coding procedures, and involving multiple analysts to verify the stability and reliability of the findings, with every step of the research process thoroughly documented in the methods section. Variations were discussed among the authors during the data analysis meetings to enhance the confirmability of the study. The authors have different backgrounds and expertise, i.e. nursing, medicine, and biomedicine, which can be seen as investigator triangulation and thus a strength [41].

Although other mobile care units may work differently and have other experiences, our findings may be transferable to similar contexts where care is delivered to patients in their homes, even though the content and delivery mode could differ.

Physicians and nurses experience mobile care as a person-centered approach that promotes holistic care and collaborative planning, emphasizes ongoing patient participation, and eliminates transportation needs. On the other hand, mobile care poses challenges such as inequality of care for patients living outside the units’ areas, incompatible record-keeping systems, and difficulty implementing certain medical devices in the home. Despite this, mobile care is considered a good alternative to traditional hospital care, and physicians and nurses felt they had a meaningful profession that positively affects the lives and well-being of patients, thus fostering rewarding mutual relationships. The challenge for the future is to engage at a national level with physicians, managers, and politicians to achieve improvements. Failing to come together to develop care pathways relevant to rural communities, for example, could mean missing an opportunity to improve the nation’s health.

Data availability

The datasets used and/or analysed during the current study cannot be shared openly but are available on request from the authors.

Atella V, Piano Mortari A, Kopinska J, Belotti F, Lapi F, Cricelli C, Fontana L. Trends in age-related disease burden and healthcare utilization. Aging Cell. 2019;18(1):e12861.


Nilsson P. Vården ur befolkningens perspektiv, 65 år och äldre - International Health Policy Survey (IHP). Vård- och omsorgsanalys; 2022. Rapport 2022:2.

World Health Organization (WHO). Ageing and health. https://www.who.int/news-room/fact-sheets/detail/ageing-and-health ; 2022.

Institute of Medicine (US) Committee on the Long-Run Macroeconomic Effects of the Aging U.S. Population. Aging and the macroeconomy: long-term implications of an older population. Washington (DC): National Academies Press (US); 2012.


Harrison KL, Leff B, Altan A, Dunning S, Patterson CR, Ritchie CS. What’s happening at home: a claims-based Approach to Better Understand Home Clinical Care received by older adults. Med Care. 2020;58(4):360–7.


Lee G, Pickstone N, Facultad J, Titchener K. The future of community nursing: hospital in the home. Br J Community Nurs. 2017;22(4):174–80.


Johansson L, Long H, Parker MG. Informal caregiving for elders in Sweden: an analysis of current policy developments. J Aging Soc Policy. 2011;23(4):335–53.

Socialdepartementet och Sveriges Kommuner och Regioner. Överenskommelse mellan staten och Sveriges Kommuner och Regioner om God och nära vård 2023. In: Socialdepartementet, editor. 2023.

Socialdepartementet. Uppdrag att möjliggöra bättre tillgång till hälso- och sjukvård i hela landet genom främjande av etablering i glesbygd. In: Socialdepartementet, editor. https://www.regeringen.se/contentassets/dd4eebcd5b574e6a87ac750c76323ad9/uppdrag-att-mojliggora-battre-tillgang-till-halso--och-sjukvard-i-hela-landet-genom-framjande-av-etablering-i-glesbygd-ds-202323--.pdf ; 2023.

Brundisini F, Giacomini M, DeJean D, Vanstone M, Winsor S, Smith A. Chronic disease patients’ experiences with accessing health care in rural and remote areas: a systematic review and qualitative meta-synthesis. Ont Health Technol Assess Ser. 2013;13(15):1–33.


Ranzani OT, Kalra A, Di Girolamo C, Curto A, Valerio F, Halonen JI, et al. Urban-rural differences in hypertension prevalence in low-income and middle-income countries, 1990–2020: a systematic review and meta-analysis. PLoS Med. 2022;19(8):e1004079.

Socialdepartementet. God och nära vård 2023: En omställning av hälso- och sjukvården med primärvården som nav. In: Socialdepartementet, editor. https://www.regeringen.se/contentassets/f814b739c29447efb04753bfa37baa11/overenskommelse-mellan-staten-och-sveriges-kommuner-och-regioner-om-god-och-nara-vard-2023.pdf : Regeringskansliet; 2023.

Teske C, Mourad G, Milovanovic M. Mobile care - a possible future for emergency care in Sweden. BMC Emerg Med. 2023;23(1):80.

Patel HY, West DJ Jr. Hospital at home: an Evolving Model for Comprehensive Healthcare. Glob J Qual Saf Healthc. 2021;4(4):141–6.

Leff B, Montalto M. Home hospital-toward a tighter definition. J Am Geriatr Soc. 2004;52(12):2141.

Shepperd S, Iliffe S. Hospital-at-home versus in-patient hospital care. Cochrane Database Syst Rev. 2000(2):Cd000356.

Socialstyrelsen. Mest sjuka äldre och nationella riktlinjer. Hur riktlinjerna kan anpassas till mest sjuka äldres särskilda förutsättningar och behov. In: Socialdepartementet, editor. 2021.

Yao X, Paulson M, Maniaci MJ, Dunn AN, Nelson CR, Behnken EM, et al. Effect of hospital-at-home vs. traditional brick-and-mortar hospital care in acutely ill adults: study protocol for a pragmatic randomized controlled trial. Trials. 2022;23(1):503.

Klein S. Hospital at home programs improve outcomes, lower costs but face resistance from providers and payers. Commonwealth Fund. 2021.

Cerdan de Las Heras J, Andersen SL, Matthies S, Sandreva TV, Johannesen CK, Nielsen TL et al. Hospitalisation at home of patients with COVID-19: a qualitative study of user experiences. Int J Environ Res Public Health. 2023;20(2).

Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Vetenskapsrådet. God forskningssed. 2017.

Coulter A, Oldham J. Person-centred care: what is it and how do we get there? Future Hosp J. 2016;3(2):114–6.

Driever EM, Stiggelbout AM, Brand PLP. Patients’ preferred and perceived decision-making roles, and observed patient involvement in videotaped encounters with medical specialists. Patient Educ Couns. 2022;105(8):2702–7.

Nilsen ER, Hollister B, Söderhamn U, Dale B. What matters to older adults? Exploring person-centred care during and after transitions between hospital and home. J Clin Nurs. 2022;31(5–6):569–81.

Mishra SR, Haldar S, Pollack AH, Kendall L, Miller AD, Khelifi M, Pratt W. Not just a receiver: understanding patient behavior in the Hospital Environment. Proc SIGCHI Conf Hum Factor Comput Syst. 2016;2016:3103–14.


Oudshoorn A, Ward-Griffin C, McWilliam C. Client-nurse relationships in home-based palliative care: a critical analysis of power relations. J Clin Nurs. 2007;16(8):1435–43.

Aiken LH, Sermeus W, Van den Heede K, Sloane DM, Busse R, McKee M, et al. Patient safety, satisfaction, and quality of hospital care: cross sectional surveys of nurses and patients in 12 countries in Europe and the United States. BMJ. 2012;344:e1717.

McHugh MD, Kutney-Lee A, Cimiotti JP, Sloane DM, Aiken LH. Nurses’ widespread job dissatisfaction, burnout, and frustration with health benefits signal problems for patient care. Health Aff (Millwood). 2011;30(2):202–10.

Forster AJ, Clark HD, Menard A, Dupuis N, Chernish R, Chandok N, et al. Adverse events among medical patients after discharge from hospital. CMAJ. 2004;170(3):345–9.

Sharek PJ, Parry G, Goldmann D, Bones K, Hackbarth A, Resar R, et al. Performance characteristics of a methodology to quantify adverse events over time in hospitalized patients. Health Serv Res. 2011;46(2):654–78.

Allegranzi B, Bagheri Nejad S, Combescure C, Graafmans W, Attar H, Donaldson L, Pittet D. Burden of endemic health-care-associated infection in developing countries: systematic review and meta-analysis. Lancet. 2011;377(9761):228–41.

Eurostat. Population density. https://ec.europa.eu/eurostat/databrowser/view/tps00003/default/table?lang=en ; 2023.

Chen H, Ignatowicz A, Skrybant M, Lasserson D. An integrated understanding of the impact of hospital at home: a mixed-methods study to articulate and test a programme theory. BMC Health Serv Res. 2024;24(1):163.

Lin L, Cheng M, Guo Y, Cao X, Tang W, Xu X, et al. Early discharge hospital at home as alternative to routine hospital care for older people: a systematic review and meta-analysis. BMC Med. 2024;22(1):250.

Truong TT, Siu AL. The Evolving Practice of Hospital at Home in the United States. Annu Rev Med. 2024;75:391–9.

Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by Information Power. Qual Health Res. 2016;26(13):1753–60.

Krouwel M, Jolly K, Greenfield S. Comparing Skype (video calling) and in-person qualitative interview modes in a study of people with irritable bowel syndrome - an exploratory comparative analysis. BMC Med Res Methodol. 2019;19(1):219.

Janghorban R, Latifnejad Roudsari R, Taghipour A. Skype interviewing: the new generation of online synchronous interview in qualitative research. Int J Qual Stud Health Well-being. 2014;9:24152.

Polit DF, Beck CT. Essentials of nursing research: appraising evidence for nursing practice. Wolters Kluwer Health/Lippincott Williams & Wilkins; 2010.


Acknowledgements

We thank all the physicians and nurses for sharing their experiences in the interviews for this study.

No funding was received for conducting this study.

Open access funding provided by Linköping University.

Author information

Authors and affiliations.

Department of Health, Medicine, and Caring Sciences, Linköping University, Linköping, Sweden

Christofer Teske, Ghassan Mourad & Micha Milovanovic

Department of Emergency medicine in Norrköping, Department of Biomedical and Clinical Sciences, Linköping University, Linköping, Sweden

Christofer Teske

Department of Internal Medicine in Norrköping, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden

Micha Milovanovic


Contributions

CT collected and analyzed all data and contributed to writing, review, and editing of the manuscript. GM contributed to the study design, analysis of data via triangulation, and review and editing of the manuscript. MM contributed to discussion regarding all data of the study and to writing, review, and editing of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Christofer Teske .

Ethics declarations

Ethics approval and consent to participate.

The study was approved by the Swedish Ethical Review Authority, Uppsala, Sweden (reg. number: 2020–06986). The study has been carried out in accordance with the Declaration of Helsinki. Informed written consent was obtained from all participants involved in the study.

Consent for publication

Not Applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Teske, C., Mourad, G. & Milovanovic, M. Physicians and nurses experiences of providing care to patients within a mobile care unit – a qualitative interview study. BMC Health Serv Res 24 , 1065 (2024). https://doi.org/10.1186/s12913-024-11517-8


Received : 27 February 2024

Accepted : 30 August 2024

Published : 13 September 2024

DOI : https://doi.org/10.1186/s12913-024-11517-8


  • Mobile care
  • Mobile unit
  • Qualitative interviews

BMC Health Services Research

ISSN: 1472-6963


  19. Qualitative Content Analysis

    Qualitative content analysis is one of the several qualita-tive methods currently available for analyzing data and inter-preting its meaning (Schreier, 2012). As a research method, it represents a systematic and objective means of describing and quantifying phenomena (Downe-Wamboldt, 1992; Schreier, 2012).

  20. Qualitative Content Analysis: Theoretical Background and Procedures

    From an analysis of common qualitative oriented text analysis techniques (cf. Mayring 2010a, b) we can show that they can be reduced to three fundamental forms of interpreting: summary (text reduction), explication and structuring:. Reducing procedures: The object of the analysis is to reduce the material such that the essential contents remain, in order to create through abstraction a ...

  21. Three approaches to qualitative content analysis

    Abstract. Content analysis is a widely used qualitative research technique. Rather than being a single method, current applications of content analysis show three distinct approaches: conventional, directed, or summative. All three approaches are used to interpret meaning from the content of text data and, hence, adhere to the naturalistic ...

  22. Directed qualitative content analysis: the description and elaboration

    Qualitative content analysis (QCA) is a research approach for the description and interpretation of textual data using the systematic process of coding. The final product of data analysis is the identification of categories, themes and patterns ( Elo and Kyngäs, 2008 ; Hsieh and Shannon, 2005 ; Zhang and Wildemuth, 2009 ).

  23. Education Sciences

    Using qualitative content analysis, she examined 711 students' questions that were recorded in writing by the researcher over a period of five years (2003-2008) in five different classes (grade levels 2-4) at a primary school in Germany. ... Creswell, J.W. Research design: Qualitative, Quantitative, and Mixed Methods Approaches, 3rd ed ...

  24. Data Analysis

    Data visualization can be employed formatively, to aid your data analysis, or summatively, to present your findings. Many qualitative data analysis (QDA) software platforms, such as NVivo, feature search functionality and data visualization options within them to aid data analysis during the formative stages of your project.. For expert assistance creating data visualizations to present your ...

  25. Qualitative Content Analysis Coding: A Step-by-Step Guide

    Qualitative content analysis coding is a systematic approach to interpreting textual data. This method enables researchers to dissect large volumes of text, identifying patterns, themes, and meaning in qualitative information. ... you lay a solid foundation for a thorough and meaningful qualitative content analysis. Defining Your Research ...

  26. Qualitative research

    Content analysis is an important building block in the conceptual analysis of qualitative data. It is frequently used in sociology. For example, content analysis has been applied to research on such diverse aspects of human life as changes in perceptions of race over time, [36] the lifestyles of contractors, [37] and even reviews of automobiles ...

  27. A hands-on guide to doing content analysis

    A common starting point for qualitative content analysis is often transcribed interview texts. The objective in qualitative content analysis is to systematically transform a large amount of text into a highly organised and concise summary of key results. Analysis of the raw data from verbatim transcribed interviews to form categories or themes ...

  28. Socio-technical Grounded Theory for Qualitative Data Analysis

    Qualitative data analysis forms the heart of socio-technical grounded theory (STGT). In this chapter, we will start by learning about the basics of qualitative data analysis and preparing for qualitative data analysis as it applies in STGT, including the mindset, approach, and worldview applied to analysis. This is followed by a visual notes-taking analogy to help explain the general idea of ...

  29. Barriers and facilitators to implementing imaging-based diagnostic

    Deductive content analysis was primarily used for data analysis. As a systematic and objective qualitative approach, content analysis is used to describe and quantify phenomena by deriving replicable and reliable inferences from qualitative data within relevant context. 43 For deductive content analysis, data were coded based on existing ...

  30. Physicians and nurses experiences of providing care to patients within

    These interviews were transcribed verbatim and subjected to content analysis, with the study adhering to the Standards for Reporting Qualitative Research (SRQR). The analysis resulted in two main categories: "Unlocking the potential of mobile care", and "The challenges of moving hospitals to patients' homes"; and seven subcategories.