
How To Write The Results/Findings Chapter

For qualitative studies (dissertations & theses).

By: Jenna Crossley (PhD). Expert Reviewed By: Dr. Eunice Rautenbach | August 2021

So, you’ve collected and analysed your qualitative data, and it’s time to write up your results chapter. But where do you start? In this post, we’ll guide you through the qualitative results chapter (also called the findings chapter), step by step. 

Overview: Qualitative Results Chapter

  • What (exactly) the qualitative results chapter is
  • What to include in your results chapter
  • How to write up your results chapter
  • A few tips and tricks to help you along the way
  • Free results chapter template

What exactly is the results chapter?

The results chapter in a dissertation or thesis (or any formal academic research piece) is where you objectively and neutrally present the findings of your qualitative analysis (or analyses if you used multiple qualitative analysis methods). This chapter can sometimes be combined with the discussion chapter (where you interpret the data and discuss its meaning), depending on your university’s preference. We’ll treat the two chapters as separate, as that’s the most common approach.

In contrast to a quantitative results chapter that presents numbers and statistics, a qualitative results chapter presents data primarily in the form of words. But this doesn’t mean that a qualitative study can’t have quantitative elements – you could, for example, present the number of times a theme or topic pops up in your data, depending on the analysis method(s) you adopt.

Adding a quantitative element to your study can add some rigour, which strengthens your results by providing more evidence for your claims. This is particularly common when using qualitative content analysis. Keep in mind, though, that qualitative research aims to achieve depth and richness and to identify nuances, so don’t get tunnel vision by focusing on the numbers. They’re just the cream on top in a qualitative analysis.
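If you do want to add a light quantitative element of this kind, a simple tally of how often each theme appears in your coded data is usually enough. Here’s a minimal sketch in Python (the theme labels and extracts are purely hypothetical, and it assumes you’ve already coded your transcript extracts with theme labels):

```python
from collections import Counter

# Hypothetical theme codes assigned to transcript extracts during analysis
coded_extracts = [
    "work-life balance", "burnout", "work-life balance",
    "peer support", "burnout", "work-life balance",
]

# Tally how many extracts were coded under each theme
theme_counts = Counter(coded_extracts)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} extract(s)")
```

However you produce them, treat counts like these as supporting context for your themes, not as the findings themselves.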

So, to recap, the results chapter is where you objectively present the findings of your analysis, without interpreting them (you’ll save that for the discussion chapter). With that out of the way, let’s take a look at what you should include in your results chapter.

What should you include in the results chapter?

As we’ve mentioned, your qualitative results chapter should purely present and describe your results, not interpret them in relation to the existing literature or your research questions. Any speculations or discussion about the implications of your findings should be reserved for your discussion chapter.

In your results chapter, you’ll want to talk about your analysis findings and whether or not they support your hypotheses (if you have any). Naturally, the exact contents of your results chapter will depend on which qualitative analysis method (or methods) you use. For example, if you were to use thematic analysis, you’d detail the themes identified in your analysis, using extracts from the transcripts or text to support your claims.

While you do need to present your analysis findings in some detail, you should avoid dumping large amounts of raw data in this chapter. Instead, focus on presenting the key findings and using a handful of select quotes or text extracts to support each finding. The reams of data and analysis can be relegated to your appendices.

While it’s tempting to include every last detail you found in your qualitative analysis, it is important to make sure that you report only that which is relevant to your research aims, objectives and research questions. Always keep these three components, as well as your hypotheses (if you have any), front of mind when writing the chapter and use them as a filter to decide what’s relevant and what’s not.

How do I write the results chapter?

Now that we’ve covered the basics, it’s time to look at how to structure your chapter. Broadly speaking, the results chapter needs to contain three core components – the introduction, the body and the concluding summary. Let’s take a look at each of these.

Section 1: Introduction

The first step is to craft a brief introduction to the chapter. This intro is vital as it provides some context for your findings. In your introduction, you should begin by reiterating your problem statement and research questions and highlighting the purpose of your research. Make sure that you spell this out for the reader so that the rest of your chapter is well contextualised.

The next step is to briefly outline the structure of your results chapter. In other words, explain what’s included in the chapter and what the reader can expect. In the results chapter, you want to tell a story that is coherent, flows logically, and is easy to follow, so plan your structure out well and convey that structure (at a high level) so that your reader is well oriented.

The introduction section shouldn’t be lengthy. Two or three short paragraphs should be more than adequate. It is merely an introduction and overview, not a summary of the chapter.

Pro Tip – To help you structure your chapter, it can be useful to set up an initial draft with (sub)section headings so that you’re able to easily (re)arrange parts of your chapter. This will also help your reader to follow your results and give your chapter some coherence. Be sure to use level-based heading styles (e.g. Heading 1, 2, 3 styles) to help the reader differentiate between levels visually. You can find these options in Word’s Styles gallery.

Section 2: Body

Before we get started on what to include in the body of your chapter, it’s vital to remember that a results section should be completely objective and descriptive, not interpretive. So, be careful not to use words such as “suggests” or “implies”, as these usually accompany some form of interpretation – that’s reserved for your discussion chapter.

The structure of your body section is very important, so make sure that you plan it out well. When planning out your qualitative results chapter, create sections and subsections so that you can maintain the flow of the story you’re trying to tell. Be sure to systematically and consistently describe each portion of your results. Try to adopt a standardised structure for each portion so that you achieve a high level of consistency throughout the chapter.

For qualitative studies, results chapters tend to be structured according to themes, which makes it easier for readers to follow. However, keep in mind that not all results chapters have to be structured in this manner. For example, if you’re conducting a longitudinal study, you may want to structure your chapter chronologically. Similarly, you might structure this chapter based on your theoretical framework. The exact structure of your chapter will depend on the nature of your study, especially your research questions.

As you work through the body of your chapter, make sure that you use quotes to substantiate every one of your claims. You can present these quotes in italics to differentiate them from your own words. A general rule of thumb is to use at least two pieces of evidence per claim, and these should be linked directly to your data. Also, remember that you need to include all relevant results, not just the ones that support your assumptions or initial leanings.

In addition to including quotes, you can also link your claims to the data by using appendices, which you should reference throughout your text. When you reference, make sure that you include both the name/number of the appendix and the line(s) from which you drew your data.

As referencing styles can vary greatly, be sure to look up the appendix referencing conventions of your university’s prescribed style (e.g. APA, Harvard, etc.) and keep this consistent throughout your chapter.

Section 3: Concluding summary

The concluding summary is very important because it summarises your key findings and lays the foundation for the discussion chapter. Keep in mind that some readers may skip directly to this section (from the introduction section), so make sure that it can be read and understood well in isolation.

In this section, you need to remind the reader of the key findings. That is, the results that directly relate to your research questions and that you will build upon in your discussion chapter. Remember, your reader has digested a lot of information in this chapter, so you need to use this section to remind them of the most important takeaways.

Importantly, the concluding summary should not present any new information and should only describe what you’ve already presented in your chapter. Keep it concise – you’re not summarising the whole chapter, just the essentials.

Tips for writing an A-grade results chapter

Now that you’ve got a clear picture of what the qualitative results chapter is all about, here are some quick tips and reminders to help you craft a high-quality chapter:

  • Your results chapter should be written in the past tense. You’ve done the work already, so you want to tell the reader what you found, not what you are currently finding.
  • Make sure that you review your work multiple times and check that every claim is adequately backed up by evidence. Aim for at least two examples per claim, and make use of an appendix to reference these.
  • When writing up your results, make sure that you stick to only what is relevant. Don’t waste time on data that are not relevant to your research objectives and research questions.
  • Use headings and subheadings to create an intuitive, easy-to-follow piece of writing. Make use of Microsoft Word’s “heading styles” and be sure to use them consistently.
  • When referring to numerical data, tables and figures can provide a useful visual aid. When using these, make sure that they can be read and understood independent of your body text (i.e. that they can stand alone). To this end, use clear, concise labels for each of your tables or figures and make use of colours to indicate differences or hierarchy.
  • Similarly, when you’re writing up your chapter, it can be useful to highlight topics and themes in different colours. This can help you to differentiate between themes if you get a bit overwhelmed and will also help you to ensure that your results flow logically and coherently.

If you have any questions, leave a comment below and we’ll do our best to help. If you’d like 1-on-1 help with your results chapter (or any chapter of your dissertation or thesis), check out our private dissertation coaching service here or book a free initial consultation to discuss how we can help you.

21 Comments

David Person

This was extremely helpful. Thanks a lot guys

Aditi

Hi, thanks for the great research support platform created by the gradcoach team!

I wanted to ask: while “suggests” or “implies” are interpretive terms, what terms could we use for the results chapter? Could you share some examples of descriptive terms?

TcherEva

I think that instead of saying ‘The data suggested’ or ‘The data implied’, you can say ‘The data showed or revealed, or illustrated or outlined’… If it’s interview data, you may say Jane Doe illuminated or elaborated, or Jane Doe described… or Jane Doe expressed or stated.

Llala Phoshoko

I found this article very useful. Thank you very much for the outstanding work you are doing.

Oliwia

What if I have 3 different interviewees answering the same interview questions? Should I then present the results in the form of a table divided by the 3 perspectives, or rather give the results in the form of text and highlight who said what?

Rea

I think this tabular representation of results is a great idea. I am doing it too along with the text. Thanks

Nomonde Mteto

That was helpful. I was struggling to separate the discussion from the findings.

Esther Peter.

This was very useful, thank you.

tendayi

Very helpful, I am confident to write my results chapter now.

Sha

It is so helpful! It is a good job. Thank you very much!

Nabil

Very useful, well explained. Many thanks.

Agnes Ngatuni

Hello, I appreciate the way you provided supportive tips about presenting qualitative results.

Carol Ch

I loved this! It explains everything needed, and it has helped me better organize my thoughts. What words should I not use while writing my results section, other than subjective ones?

Hend

Thanks a lot, it is really helpful

Anna milanga

Thank you so much dear, I really appreciate your nice explanations about this.

Wid

Thank you so much for this! I was wondering if anyone could help with how to properly integrate quotations (excerpts) from interviews in the findings chapter of a qualitative study. Please GradCoach, address this issue and provide examples.

nk

What if I’m not doing any interviews myself and all the information is coming from case studies that have already done the research?

FAITH NHARARA

Very helpful thank you.

Philip

This was very helpful as I was wondering how to structure this part of my dissertation, to include the quotes… Thanks for this explanation

Aleks

This is very helpful, thanks! I am required to write up my results chapters with the discussion in each of them – any tips and tricks for this strategy?

Wei Leong YONG

For qualitative studies, can the findings be structured according to the Research questions? Thank you.

Structuring a qualitative findings section

Reporting the findings from a qualitative study in a way that is interesting, meaningful, and trustworthy can be a struggle. Those new to qualitative research often find themselves trying to quantify everything to make it seem more “rigorous,” or asking themselves, “Do I really need this much data to support my findings?” Length requirements and word limits imposed by academic journals can also make the process difficult because qualitative data takes up a lot of room! In this post, I’m going to outline a few ways to structure qualitative findings, and a few tips and tricks to develop a strong findings section.

There are A LOT of different ways to structure a qualitative findings section. I’m going to focus on the following:

Tables (but not ONLY tables)

Themes/Findings as Headings

Research Questions as Headings

Vignettes

Anchoring Quotations

Anchoring Excerpts from Field Notes

Before I get into each of those, however, here is a bit of general guidance. First, make sure that you are providing adequate direct evidence for your findings. Second, be sure to integrate that direct evidence into the narrative. In other words, if, for example, you were using quotes from a participant to support one of your themes, you should present and explain the theme (akin to a thesis statement), introduce the supporting quote, present it, explain the quote, and connect it to your finding. Below is an example of what I mean from one of my articles on implementation challenges in personalized learning (Bingham, Pane, Steiner, & Hamilton, 2018). The finding supported by this paragraph was: “Inadequate Teacher Preparation, Development, and Support.”

To mitigate the difficulties of enacting personalized learning in their classrooms, teachers wanted a model from which they could extrapolate practices that might serve them well in their own classrooms. As one teacher explained, “the ideas and the implementation is what’s lacking I think. I don’t feel like I know what I’m doing. I need to see things modeled and I need to know what it is. I need to be able to touch it. Show me a model, model for me.” Unfortunately, teachers had little to draw on for effective practices. Professional development was not as helpful as teachers had hoped, outside training on using the digital content or learning platforms fell short, and few examples or best practices existed for teachers to use in their own classrooms. As a result, teachers had to work harder to address gaps in their own knowledge. 

Finally, you should not leave quotations to speak for themselves and you should not have quotations as standalone paragraphs or sentences, with no introduction or explanation. Don’t make the reader do the analytic work for you.

Now, on to some specific ways to structure your findings section.

1). Tables (but not ONLY tables)

Tables can be used to give an overview of what you’re about to present in your findings, including the themes, some supporting evidence, and the meaning/explanation of the theme. Tables can be a useful way to give readers a quick reference for what your findings are. However, tables should not be used as your ONLY means of presenting those findings.

If you are choosing to use a table to present qualitative findings, you must also describe the findings in context, and provide supporting evidence in a narrative format (as in the paragraph outlined in the previous section).

2). Themes/Findings as Headings

Another option is to present your themes/findings as general or specific headings in your findings section. Here are some examples of findings as general headings:

Importance of Data Utilization and Analysis in the Classroom

The Role of Student Discipline and Accountability

Differences in the Experiences of Teachers

As you can see, these headings do not describe precisely what the finding is, but they give the general idea/subject of the finding. You can have sub-headings within these findings that are more specific if you would like.

Another way to do this would be to be a bit more specific. For example:

School Infrastructure and Available Technology Do Not Yet Fully Align with Teachers’ Needs

  • Structural support for high levels of technology use is not fully developed
  • Using multiple sources of digital content led to alignment issues

Measures of School and Student Success are Misaligned

  • Traditional methods of measuring student progress conflict with personalized learning
  • Difficulties communicating new measures of student success to colleges and universities

As you can see, here the findings are shown as headings, but are structured as specific sentences, with sub-themes included as well.

3). Research Questions as Headings

You can also present your findings using your research questions as the headings in the findings section. This is a useful strategy that ensures you’re answering your research questions and also allows the reader to quickly ascertain where the answers to your research questions are. Often, you will also need to present themes within each research question to keep yourself organized and to adequately flesh out your findings. The example below presents a research question from my study of blended learning at a charter high school (Bingham, 2016), and an excerpt from my findings that answered that research question. I have also included the associated theme.

Research Question 1: What challenges, if any, do teachers face in implementing a blended model in a school’s first year?

Theme: TROUBLESHOOTING AND TASK-MANAGING: TECHNOLOGY USE IN THE CLASSROOM

In the original vision for instruction at Blended Academy, technology was to be an integral part of students’ learning, meant to allow students to find their own answers to their questions, to explore their personal interests, and to provide multiple opportunities for learning. The use of iPods in the classroom was partially intended to serve the social-emotional component of the model, allowing students to enjoy music and to “tune out” from other classroom activities when working on Digital X. Further, the iPods would allow students to listen to podcasts or teacher-created content at any time, in any location. However, prior to the school’s opening, little attention was paid to the management of these devices, and their potential for misuse. As a result, teachers spent much of their time managing students’ technology use, troubleshooting, and developing classroom procedures to ensure that technology use was relevant to learning. For example, in Ms. L’s classroom, she attempted to ensure learning was happening by instituting “Technology-Free” periods in the classroom. When students had to be working on their laptops in order to complete lessons or quizzes, the majority of her time was spent walking from student to student, watching for off-task behavior, and calling out students for how long they were “logged in” to the digital curriculum. In one typical interaction, Ms. L admonished one student, saying “It says you only logged in for one minute . . . when are you going to finish your English if you only logged in one minute today?” The difficulties around ensuring students were using technology productively resulted in teachers “hovering” over students, making it difficult to provide targeted instructional help. Teachers often responded to off-task behavior/technology use by confiscating computers and devices or restricting their use, in order to ensure that students were working. However, because the majority of tasks were meant to be delivered online or through technological devices, this was not a productive or effective solution.

4). Vignettes

Vignettes can be a strategy to spark interest in your study, add narrative context, and provide a descriptive overview of your study/site/participants. They can also be used as a strategy to introduce themes. You can place them at the beginning of a paper, or at the start of the findings section, or in your discussion of each theme. They wouldn’t typically be the only representation of your findings that you present, but you can use them to hook the reader and provide a story that exemplifies findings, themes, contexts, participants, etc. Below is an example from one of my recent studies.

The Role of Pilot Teachers in Schoolwide Technology Integration

Blended High School is a lot like many other charter schools. Students wear uniforms, and as you walk through the halls, there is almost always a teacher issuing a demerit to a student who is not wearing the right shoes, or who hasn’t tucked in their shirt. In this school, however, teachers use technology in almost every facet of their instruction, operating in a school model that blends face-to-face and online learning in the classroom in order to personalize students’ learning experiences. It has, however, been a long road to this level of technology use. BHS’s first year of operation was, arguably, disastrous. Teachers were overwhelmed and students didn’t progress as expected. In one staff meeting toward the end of the school’s first year, teachers and administrators expressed frustration with each other and with the school model, with several teachers arguing that technology was hurting, not helping. The atmosphere was tense, with one teacher finally shrugging anxiously and saying “Maybe need to ask ourselves, ‘Is this the best model to use with some of our kids?’” Ultimately, by the end of the first year, technology was not a regular classroom practice. In BHS’s second year, the administration again pushed for full technology integration, but they wanted to start slow. In a fall semester staff meeting, the principal and the assistant principal ran what the principal referred to as a “technology therapy session,” where teachers could share their struggles with using technology to engage in PL. During the session, one of the new teachers mentions that she is having a difficult time letting go – changing her focus from lecturing to computer-based work. Another teacher worries about finding good online resources. Most of the teachers, new and veteran, are alarmed by the time it is taking for them to design lessons that integrate technology. Some admit only engaging in technology use in a shallow way – uploading worksheets to Google Docs, recording Powerpoints, etc.

A few months after the discussion in which teachers aired their fears and struggles, the principal leads the teachers in analyzing student data from that week and spends a bit of time highlighting the work of a few teachers whose students are doing particularly well and who have been able to use technology in everyday classroom practice. Those teachers are part of a small group of “pilot teachers,” each of whom has been experimenting with various technology-based practices, including testing new learning management systems, designing their own online modules with personalized student objectives, providing students with technology-facilitated immediate feedback, and using up-to-the-minute data to develop technology-guided small-group instruction.

Over the course of the next several months, administrators encouraged teachers to continue to be transparent about their concerns and share those concerns in regular staff meetings. Administrators conferred with the pilot teachers, and administrators and teachers together set incremental goals based on the pilot teachers’ recommendations. In weekly staff meetings, the pilot teachers shared their progress, including concerns and challenges. They collaborated with the other teachers to find solutions and worked with the administration to get what they needed to enact those solutions.
For example, after a push from the pilot teachers, administration increased funding for technology purchases and introduced shifts in the school schedule to allow for planning in order to help teachers manage the demands of a high-tech classroom. Because the pilot teachers emphasized how much time meaningful technology integration took, and knew what worked and what didn’t, they were able to train other teachers in high-tech practices and to make the case to administration for needed changes. By BHS’s third year, teachers schoolwide were able to fully integrate technology in their classrooms. All teachers were using the same learning management system, which had been initially chosen and tested by a pilot teacher. In every classroom, teachers were also engaging with online modules, technology-facilitated breakout groups, and real-time technology-based data analysis – all of which were practices the pilot teachers had tested and shared in the second year. The consistent collaboration between administration and pilot teachers, and between pilot teachers and other teachers, helped calibrate classroom changes to manage the conflict between existing practices and new high-tech practices. By focusing on student learning data, creating the room for experimentation, collaborating consistently, and distributing the leadership for technology integration, teachers and administrators felt comfortable with the increasing reliance on tech-heavy practices.

I developed this vignette as a composite from my field notes and interviews and used it to set the stage for the rest of the findings section.

5). Anchoring Quotes

Using exemplar quotes from your participants is another way to structure your findings. In the following, which also comes from Bingham et al. (2018), the finding itself is used as the heading, and the anchoring quotes come directly after the heading, prior to the rest of the narrative discussion of the finding. These quotations help provide some initial evidence and set the stage for what’s to come.

School Infrastructure and Available Technology Do Not Yet Fully Align With Teachers’ Needs

“I know that computer problems are an issue almost daily.” (Middle school personalized learning teacher)

“If the data was exactly what we needed, it would be easier. I think a lot of times we’re not using it enough because the way we’re using the data is not as effective as it should be.” (High school personalized learning teacher)

You can note the source next to or after the quote. This can be done with your chosen pseudonyms, or with a general description, as I've done above.

6). Anchoring Excerpts from Field Notes

Similarly, excerpts from field notes can be used to start your discussion of a finding. Again, the finding itself is used as the heading, and the excerpt from field notes supporting that finding comes directly after the heading, prior to the rest of the narrative discussion of the finding. The example below comes from a study in which I explored how a personalized learning model evolved over the course of three years (Bingham, 2017). I used excerpts from my field notes to open the discussion of each year.

Year 1: Navigating the disconnect between vision and practice

Walking into the large classroom space shared by Ms. Z and Ms. H, it is not immediately evident that these are high-tech PL classrooms. At first, there are no laptops out in either class. Both Ms. Z’s and Ms. H’s students are completing warm-up activities that are projected on each teacher’s white board. After a few minutes, Ms. Z’s students get up and get laptops. Ms. Z walks around to students and asks them what lesson from the digital curriculum they will be working on today. As Ms. Z speaks to a table of students, other students in the room listen to their iPods, sometimes singing loudly. Some students are on YouTube, watching music videos; others are messaging friends on GChat or Facebook. As Ms. Z makes her way around, students toggle back to the screen devoted to the digital curriculum. Sometimes, Ms. Z notices that students are off-task and she redirects them. Other times, she is too busy unlocking an online quiz for a student, or confiscating a student’s iPod.

This excerpt from my field notes provided an overview of what teacher practice looked like in the first year of the school, so that I could then discuss several themes that were representative of how practice evolved over that first year.

The key takeaway here is that there are many ways to structure your findings section. You have to choose the method that best supports your study, and best represents your data and participants. No matter what you choose, the findings section itself should be constructed to answer your research questions, while also providing context and thick description, and, of course, telling a story.

Qualitative Data Analysis

23 Presenting the Results of Qualitative Analysis

Mikaila Mariel Lemonik Arthur

Qualitative research is not finished just because you have determined the main findings or conclusions of your study. Indeed, disseminating the results is an essential part of the research process. By sharing your results with others, whether in written form as a scholarly paper or an applied report or in some alternative format like an oral presentation, an infographic, or a video, you ensure that your findings become part of the ongoing conversation of scholarship in your field, forming part of the foundation for future researchers. This chapter provides an introduction to writing about qualitative research findings. It will outline how writing continues to contribute to the analysis process, what concerns researchers should keep in mind as they draft their presentations of findings, and how best to organize qualitative research writing.

As you move through the research process, it is essential to keep yourself organized. Organizing your data, memos, and notes aids both the analytical and the writing processes. Whether you use electronic or physical, real-world filing and organizational systems, these systems help make sense of the mountains of data you have and ensure that you focus your attention on the themes and ideas you have determined are important (Warren and Karner 2015). Be sure that you have kept detailed notes on all of the decisions you have made and procedures you have followed in carrying out research design, data collection, and analysis, as these will guide your ultimate write-up.

First and foremost, researchers should keep in mind that writing is in fact a form of thinking. Writing is an excellent way to discover ideas and arguments and to further develop an analysis. As you write, more ideas will occur to you, things that were previously confusing will start to make sense, and arguments will take a clear shape rather than being amorphous and poorly-organized. However, writing-as-thinking cannot be the final version that you share with others. Good-quality writing does not display the workings of your thought process. It is reorganized and revised (more on that later) to present the data and arguments important in a particular piece. And revision is totally normal! No one expects the first draft of a piece of writing to be ready for prime time. So write rough drafts and memos and notes to yourself and use them to think, and then revise them until the piece is the way you want it to be for sharing.

Bergin (2018) lays out a set of key concerns for appropriate writing about research. First, present your results accurately, without exaggerating or misrepresenting. It is very easy to overstate your findings by accident if you are enthusiastic about what you have found, so it is important to take care and use appropriate cautions about the limitations of the research. You also need to work to ensure that you communicate your findings in a way people can understand, using clear and appropriate language that is adjusted to the level of those you are communicating with. And you must be clear and transparent about the methodological strategies employed in the research. Remember, the goal is, as much as possible, to describe your research in a way that would permit others to replicate the study. There are a variety of other concerns and decision points that qualitative researchers must keep in mind, including the extent to which to include quantification in their presentation of results, ethics, considerations of audience and voice, and how to bring the richness of qualitative data to life.

Quantification, as you have learned, refers to the process of turning data into numbers. It can indeed be very useful to count and tabulate quantitative data drawn from qualitative research. For instance, if you were doing a study of dual-earner households and wanted to know how many had an equal division of household labor and how many did not, you might want to count those numbers up and include them as part of the final write-up. However, researchers need to take care when they are writing about quantified qualitative data. Qualitative data is not as generalizable as quantitative data, so quantification can be very misleading. Thus, qualitative researchers should strive to use raw numbers instead of the percentages that are more appropriate for quantitative research. Writing, for instance, “15 of the 20 people I interviewed prefer pancakes to waffles” is a simple description of the data; writing “75% of people prefer pancakes” suggests a generalizable claim that is not likely supported by the data. Note that mixing numbers with qualitative data is really a type of mixed-methods approach. Mixed-methods approaches are good, but sometimes they seduce researchers into focusing on the persuasive power of numbers and tables rather than capitalizing on the inherent richness of their qualitative data.

A variety of issues of scholarly ethics and research integrity are raised by the writing process. Some of these are unique to qualitative research, while others are more universal concerns for all academic and professional writing. For example, it is essential to avoid plagiarism and misuse of sources. All quotations that appear in a text must be properly cited, whether with in-text and bibliographic citations to the source or with an attribution to the research participant (or the participant’s pseudonym or description in order to protect confidentiality) who said those words. Where writers will paraphrase a text or a participant’s words, they need to make sure that the paraphrase they develop accurately reflects the meaning of the original words. Thus, some scholars suggest that participants should have the opportunity to read (or to have read to them, if they cannot read the text themselves) all sections of the text in which they, their words, or their ideas are presented to ensure accuracy and enable participants to maintain control over their lives.

Audience and Voice

When writing, researchers must consider their audience(s) and the effects they want their writing to have on these audiences. The designated audience will dictate the voice used in the writing, or the individual style and personality of a piece of text. Keep in mind that the potential audience for qualitative research is often much more diverse than that for quantitative research because of the accessibility of the data and the extent to which the writing can be accessible and interesting. Yet individual pieces of writing are typically pitched to a more specific subset of the audience.

Let us consider one potential research study, an ethnography involving participant-observation of the same children both when they are at a daycare facility and when they are at home with their families to try to understand how daycare might impact behavior and social development. The findings of this study might be of interest to a wide variety of potential audiences: academic peers, whether at your own academic institution, in your broader discipline, or multidisciplinary; people responsible for creating laws and policies; practitioners who run or teach at day care centers; and the general public, including both people who are interested in child development more generally and those who are themselves parents making decisions about child care for their own children. And the way you write for each of these audiences will be somewhat different. Take a moment and think through what some of these differences might look like.

If you are writing to academic audiences, using specialized academic language and working within the typical constraints of scholarly genres, as will be discussed below, can be an important part of convincing others that your work is legitimate and should be taken seriously. Your writing will be formal. Even if you are writing for students and faculty you already know—your classmates, for instance—you are often asked to imitate the style of academic writing that is used in publications, as this is part of learning to become part of the scholarly conversation. When speaking to academic audiences outside your discipline, you may need to be more careful about jargon and specialized language, as disciplines do not always share the same key terms. For instance, in sociology, scholars use the term diffusion to refer to the way new ideas or practices spread from organization to organization. In the field of international relations, scholars often used the term cascade to refer to the way ideas or practices spread from nation to nation. These terms are describing what is fundamentally the same concept, but they are different terms—and a scholar from one field might have no idea what a scholar from a different field is talking about! Therefore, while the formality and academic structure of the text would stay the same, a writer with a multidisciplinary audience might need to pay more attention to defining their terms in the body of the text.

It is not only other academic scholars who expect to see formal writing. Policymakers tend to expect formality when ideas are presented to them, as well. However, the content and style of the writing will be different. Much less academic jargon should be used, and the most important findings and policy implications should be emphasized right from the start rather than initially focusing on prior literature and theoretical models as you might for an academic audience. Long discussions of research methods should also be minimized. Similarly, when you write for practitioners, the findings and implications for practice should be highlighted. The reading level of the text will vary depending on the typical background of the practitioners to whom you are writing—you can make very different assumptions about the general knowledge and reading abilities of a group of hospital medical directors with MDs than you can about a group of case workers who have a post-high-school certificate. Consider the primary language of your audience as well. The fact that someone can get by in spoken English does not mean they have the vocabulary or English reading skills to digest a complex report. But the fact that someone’s vocabulary is limited says little about their intellectual abilities, so try your best to convey the important complexity of the ideas and findings from your research without dumbing them down—even if you must limit your vocabulary usage.

When writing for the general public, you will want to move even further towards emphasizing key findings and policy implications, but you also want to draw on the most interesting aspects of your data. General readers will read sociological texts that are rich with ethnographic or other kinds of detail—it is almost like reality television on a page! And this is a contrast to busy policymakers and practitioners, who probably want to learn the main findings as quickly as possible so they can go about their busy lives. But also keep in mind that there is a wide variation in reading levels. Journalists at publications pegged to the general public are often advised to write at about a tenth-grade reading level, which would leave most of the specialized terminology we develop in our research fields out of reach. If you want to be accessible to even more people, your vocabulary must be even more limited. The excellent exercise of trying to write using the 1,000 most common English words, available at the Up-Goer Five website (https://www.splasho.com/upgoer5/), does a good job of illustrating this challenge (Sanderson n.d.).

Another element of voice is whether to write in the first person. While many students are instructed to avoid the use of the first person in academic writing, this advice needs to be taken with a grain of salt. There are indeed many contexts in which the first person is best avoided, at least as long as writers can find ways to build strong, comprehensible sentences without its use, including most quantitative research writing. However, if the alternative to using the first person is crafting a sentence like “it is proposed that the researcher will conduct interviews,” it is preferable to write “I propose to conduct interviews.” In qualitative research, in fact, the use of the first person is far more common. This is because the researcher is central to the research project. Qualitative researchers can themselves be understood as research instruments, and thus eliminating the use of the first person in writing is in a sense eliminating information about the conduct of the researchers themselves.

But the question really extends beyond the issue of first-person or third-person. Qualitative researchers have choices about how and whether to foreground themselves in their writing, not just in terms of using the first person, but also in terms of whether to emphasize their own subjectivity and reflexivity, their impressions and ideas, and their role in the setting. In contrast, conventional quantitative research in the positivist tradition really tries to eliminate the author from the study—which indeed is exactly why typical quantitative research avoids the use of the first person. Keep in mind that emphasizing researchers’ roles and reflexivity and using the first person does not mean crafting articles that provide overwhelming detail about the author’s thoughts and practices. Readers do not need to hear, and should not be told, which database you used to search for journal articles, how many hours you spent transcribing, or whether the research process was stressful—save these things for the memos you write to yourself. Rather, readers need to hear how you interacted with research participants, how your standpoint may have shaped the findings, and what analytical procedures you carried out.

Making Data Come Alive

One of the most important parts of writing about qualitative research is presenting the data in a way that makes its richness and value accessible to readers. As the discussion of analysis in the prior chapter suggests, there are a variety of ways to do this. Researchers may select key quotes or images to illustrate points, write up specific case studies that exemplify their argument, or develop vignettes (little stories) that illustrate ideas and themes, all drawing directly on the research data. Researchers can also write more lengthy summaries, narratives, and thick descriptions.

Nearly all qualitative work includes quotes from research participants or documents to some extent, though ethnographic work may focus more on thick description than on relaying participants’ own words. When quotes are presented, they must be explained and interpreted—they cannot stand on their own. This is one of the ways in which qualitative research can be distinguished from journalism. Journalism presents what happened, but social science needs to present the “why,” and the why is best explained by the researcher.

So how do authors go about integrating quotes into their written work? Julie Posselt (2017), a sociologist who studies graduate education, provides a set of instructions. First of all, authors need to remain focused on the core questions of their research, and avoid getting distracted by quotes that are interesting or attention-grabbing but not so relevant to the research question. Selecting the right quotes, those that illustrate the ideas and arguments of the paper, is an important part of the writing process. Second, not all quotes should be the same length (just like not all sentences or paragraphs in a paper should be the same length). Include some quotes that are just phrases, others that are a sentence or so, and others that are longer. We call longer quotes, generally those more than about three lines long, block quotes, and they are typically indented on both sides to set them off from the surrounding text. For all quotes, be sure to summarize what the quote should be telling or showing the reader, connect this quote to other quotes that are similar or different, and provide transitions in the discussion to move from quote to quote and from topic to topic. Especially for longer quotes, it is helpful to do some of this writing before the quote to preview what is coming and other writing after the quote to make clear what readers should have come to understand. Remember, it is always the author’s job to interpret the data. Presenting excerpts of the data, like quotes, in a form the reader can access does not minimize the importance of this job. Be sure that you are explaining the meaning of the data you present.

A few more notes about writing with quotes: avoid patchwriting, whether in your literature review or the section of your paper in which quotes from respondents are presented. Patchwriting is a writing practice wherein the author lightly paraphrases original texts but stays so close to those texts that there is little the author has added. Sometimes, this even takes the form of presenting a series of quotes, properly documented, with nothing much in the way of text generated by the author. A patchwriting approach does not build the scholarly conversation forward, as it does not represent any kind of new contribution on the part of the author. It is of course fine to paraphrase quotes, as long as the meaning is not changed. But if you use direct quotes, do not edit the text of the quotes unless how you edit them does not change the meaning and you have made clear through the use of ellipses (…) and brackets ([]) what kinds of edits have been made. For example, consider this exchange from Matthew Desmond’s (2012:1317) research on evictions:

The thing was, I wasn’t never gonna let Crystal come and stay with me from the get go. I just told her that to throw her off. And she wasn’t fittin’ to come stay with me with no money…No. Nope. You might as well stay in that shelter.

A paraphrase of this exchange might read “She said that she was going to let Crystal stay with her if Crystal did not have any money.” Paraphrases like that are fine. What is not fine is rewording the statement but treating it like a quote, for instance writing:

The thing was, I was not going to let Crystal come and stay with me from beginning. I just told her that to throw her off. And it was not proper for her to come stay with me without any money…No. Nope. You might as well stay in that shelter.

But as you can see, the change in language and style removes some of the distinct meaning of the original quote. Instead, writers should leave as much of the original language as possible. If some text in the middle of the quote needs to be removed, as in this example, ellipses are used to show that this has occurred. And if a word needs to be added to clarify, it is placed in square brackets to show that it was not part of the original quote.

Data can also be presented through the use of data displays like tables, charts, graphs, diagrams, and infographics created for publication or presentation, as well as through the use of visual material collected during the research process. Note that if visuals are used, the author must have the legal right to use them. Photographs or diagrams created by the author themselves, or by research participants who have signed consent forms for their work to be used, are fine. But photographs, and sometimes even excerpts from archival documents, may be owned by others from whom researchers must get permission in order to use them.

A large percentage of qualitative research does not include any data displays or visualizations. Therefore, researchers should carefully consider whether the use of data displays will help the reader understand the data. One of the most common types of data displays used by qualitative researchers are simple tables. These might include tables summarizing key data about cases included in the study; tables laying out the characteristics of different taxonomic elements or types developed as part of the analysis; tables counting the incidence of various elements; and 2×2 tables (two columns and two rows) illuminating a theory. Basic network or process diagrams are also commonly included. If data displays are used, it is essential that researchers include context and analysis alongside data displays rather than letting them stand by themselves, and it is preferable to continue to present excerpts and examples from the data rather than just relying on summaries in the tables.
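As a rough sketch of the kind of simple incidence or 2×2 table described above (the case data and column names here are made up for illustration, and the example assumes the pandas library is available), counts of cases could be tabulated like this:

```python
import pandas as pd

# Hypothetical summary of coded cases from a qualitative study
cases = pd.DataFrame({
    "school_type": ["charter", "charter", "district", "district", "charter"],
    "tech_integration": ["full", "partial", "partial", "full", "full"],
})

# A simple 2x2 table counting how many cases fall into each combination
incidence_table = pd.crosstab(cases["school_type"], cases["tech_integration"])
print(incidence_table)
```

As the paragraph above notes, a table like this should sit alongside excerpts and narrative interpretation rather than standing on its own.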

If you will be using graphs, infographics, or other data visualizations, it is important that you attend to making them useful and accurate (Bergin 2018). Think about the viewer or user as your audience and ensure the data visualizations will be comprehensible. You may need to include more detail or labels than you might think. Ensure that data visualizations are laid out and labeled clearly and that you make visual choices that enhance viewers’ ability to understand the points you intend to communicate using the visual in question. Finally, given the ease with which it is possible to design visuals that are deceptive or misleading, it is essential to make ethical and responsible choices in the construction of visualization so that viewers will interpret them in accurate ways.

The Genre of Research Writing

As discussed above, the style and format in which results are presented depends on the audience they are intended for. These differences in styles and format are part of the genre of writing. Genre is a term referring to the rules of a specific form of creative or productive work. Thus, the academic journal article—and student papers based on this form—is one genre. A report or policy paper is another. The discussion below will focus on the academic journal article, but note that reports and policy papers follow somewhat different formats. They might begin with an executive summary of one or a few pages, include minimal background, focus on key findings, and conclude with policy implications, shifting methods and details about the data to an appendix. But both academic journal articles and policy papers share some things in common, for instance the necessity for clear writing, a well-organized structure, and the use of headings.

So what factors make up the genre of the academic journal article in sociology? While there is some flexibility, particularly for ethnographic work, academic journal articles tend to follow a fairly standard format. They begin with a “title page” that includes the article title (often witty and involving scholarly inside jokes, but more importantly clearly describing the content of the article); the authors’ names and institutional affiliations; an abstract; and sometimes keywords designed to help others find the article in databases. An abstract is a short summary of the article that appears both at the very beginning of the article and in search databases. Abstracts are designed to aid readers by giving them the opportunity to learn enough about an article that they can determine whether it is worth their time to read the complete text. They are written about the article, and thus not in the first person, and clearly summarize the research question, methodological approach, main findings, and often the implications of the research.

After the abstract comes an “introduction” of a page or two that details the research question, why it matters, and what approach the paper will take. This is followed by a literature review of about a quarter to a third the length of the entire paper. The literature review is often divided, with headings, into topical subsections, and is designed to provide a clear, thorough overview of the prior research literature on which a paper has built—including prior literature the new paper contradicts. At the end of the literature review it should be made clear what researchers know about the research topic and question, what they do not know, and what this new paper aims to do to address what is not known.

The next major section of the paper is the section that describes research design, data collection, and data analysis, often referred to as “research methods” or “methodology.” This section is an essential part of any written or oral presentation of your research. Here, you tell your readers or listeners “how you collected and interpreted your data” (Taylor, Bogdan, and DeVault 2016:215). Taylor, Bogdan, and DeVault suggest that the discussion of your research methods include the following:

  • The particular approach to data collection used in the study;
  • Any theoretical perspective(s) that shaped your data collection and analytical approach;
  • When the study occurred, over how long, and where (concealing identifiable details as needed);
  • A description of the setting and participants, including sampling and selection criteria (if an interview-based study, the number of participants should be clearly stated);
  • The researcher’s perspective in carrying out the study, including relevant elements of their identity and standpoint, as well as their role (if any) in research settings; and
  • The approach to analyzing the data.

After the methods section comes a section, variously titled but often called “data,” that takes readers through the analysis. This section is where the thick description narrative; the quotes, broken up by theme or topic, with their interpretation; the discussions of case studies; most data displays (other than perhaps those outlining a theoretical model or summarizing descriptive data about cases); and other similar material appears. The idea of the data section is to give readers the ability to see the data for themselves and to understand how this data supports the ultimate conclusions. Note that all tables and figures included in formal publications should be titled and numbered.

At the end of the paper come one or two summary sections, often called “discussion” and/or “conclusion.” If there is a separate discussion section, it will focus on exploring the overall themes and findings of the paper. The conclusion clearly and succinctly summarizes the findings and conclusions of the paper, the limitations of the research and analysis, any suggestions for future research building on the paper or addressing these limitations, and implications, be they for scholarship and theory or policy and practice.

After the end of the textual material in the paper comes the bibliography, typically called “works cited” or “references.” The references should appear in a consistent citation style—in sociology, we often use the American Sociological Association format (American Sociological Association 2019), but other formats may be used depending on where the piece will eventually be published. Care should be taken to ensure that in-text citations also reflect the chosen citation style. In some papers, there may be an appendix containing supplemental information such as a list of interview questions or an additional data visualization.

Note that when researchers give presentations to scholarly audiences, the presentations typically follow a format similar to that of scholarly papers, though given time limitations they are compressed. Abstracts and works cited are often not part of the presentation, though in-text citations are still used. The literature review presented will be shortened to only focus on the most important aspects of the prior literature, and only key examples from the discussion of data will be included. For long or complex papers, sometimes only one of several findings is the focus of the presentation. Of course, presentations for other audiences may be constructed differently, with greater attention to interesting elements of the data and findings as well as implications and less to the literature review and methods.

Concluding Your Work

After you have written a complete draft of the paper, be sure you take the time to revise and edit your work. There are several important strategies for revision. First, put your work away for a little while. Even waiting a day to revise is better than nothing, but it is best, if possible, to take much more time away from the text. This helps you forget what your writing looks like and makes it easier to find errors, mistakes, and omissions. Second, show your work to others. Ask them to read your work and critique it, pointing out places where the argument is weak, where you may have overlooked alternative explanations, where the writing could be improved, and what else you need to work on. Finally, read your work out loud to yourself (or, if you really need an audience, try reading to some stuffed animals). Reading out loud helps you catch wrong words, tricky sentences, and many other issues. But as important as revision is, try to avoid perfectionism in writing (Warren and Karner 2015). Writing can always be improved, no matter how much time you spend on it. Those improvements, however, have diminishing returns, and at some point the writing process needs to conclude so the writing can be shared with the world.

Of course, the main goal of writing up the results of a research project is to share them with others. Thus, researchers should consider how they intend to disseminate their results. What conferences might be appropriate? Where can the paper be submitted? Note that if you are an undergraduate student, there is a wide variety of journals that accept and publish research conducted by undergraduates. Some publish across disciplines, while others are discipline-specific. Other work, such as reports, may be best disseminated by publication online on relevant organizational websites.

After a project is completed, be sure to take some time to organize your research materials and archive them for longer-term storage. Some Institutional Review Board (IRB) protocols require that original data, such as interview recordings, transcripts, and field notes, be preserved for a specific number of years in a protected (locked for paper or password-protected for digital) form and then destroyed, so be sure that your plans adhere to the IRB requirements. Be sure you keep any materials that might be relevant for future related research or for answering questions people may ask later about your project.

And then what? Well, then it is time to move on to your next research project. Research is a long-term endeavor, not a one-time-only activity. We build our skills and our expertise as we continue to pursue research. So keep at it.

  • Find a short article that uses qualitative methods. The sociological magazine Contexts is a good place to find such pieces. Write an abstract of the article.
  • Choose a sociological journal article on a topic you are interested in that uses some form of qualitative methods and is at least 20 pages long. Rewrite the article as a five-page research summary accessible to non-scholarly audiences.
  • Choose a concept or idea you have learned in this course and write an explanation of it using the Up-Goer Five Text Editor ( https://www.splasho.com/upgoer5/ ), a website that restricts your writing to the 1,000 most common English words. What was this experience like? What did it teach you about communicating with people who have a more limited English-language vocabulary—and what did it teach you about the utility of having access to complex academic language?
  • Select five or more sociological journal articles that all use the same basic type of qualitative methods (interviewing, ethnography, documents, or visual sociology). Using what you have learned about coding, code the methods sections of each article, and use your coding to figure out what is common in how such articles discuss their research design, data collection, and analysis methods.
  • Return to an exercise you completed earlier in this course and revise your work. What did you change? How did revising impact the final product?
  • Find a quote from the transcript of an interview, a social media post, or elsewhere that has not yet been interpreted or explained. Write a paragraph that includes the quote along with an explanation of its sociological meaning or significance.


Social Data Analysis Copyright © 2021 by Mikaila Mariel Lemonik Arthur is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Presenting Findings (Qualitative) – Topic 1: Chapter 4

  • Your findings should provide sufficient evidence from your data to support the conclusions you have made. Evidence takes the form of quotations from interviews and excerpts from observations and documents.
  • Ethically, you must have confidence in your findings, account for counter-evidence (evidence that contradicts your primary finding), and not report anything that lacks sufficient evidence to back it up.
  • Your findings should be related back to your conceptual framework.
  • Your findings should respond to the problem presented (as defined by the research questions) and should be the “solution” or “answer” to those questions.
  • You should focus on data that enables you to answer your research questions, not simply on offering raw data.
  • Qualitative research presents “best examples” of raw data to demonstrate an analytic point, not simply to display data.
  • Numbers (descriptive statistics) help your reader understand how prevalent or typical a finding is. Numbers are helpful and should not be avoided simply because this is a qualitative dissertation.



How to Write the Dissertation Findings or Results – Steps & Tips

Published by Grace Graffin on August 11th, 2021; revised on June 11, 2024

Each part of the dissertation is unique, and some general and specific rules must be followed. The dissertation’s findings section presents the key results of your research without interpreting their meaning.

Theoretically, this is an exciting section of a dissertation because it involves writing about what you have observed and found. However, it can become tricky if there is so much information that it risks confusing the readers.

The goal is to include only the essential and relevant findings in this section. The results must be presented in an orderly sequence to provide clarity to the readers.

This section of the dissertation should be easy for the readers to follow, so you should avoid going into a lengthy debate over the interpretation of the results.

It is vitally important to focus only on clear and precise observations. The findings chapter of the dissertation is theoretically the easiest to write.

Where statistical analysis is involved, it includes the analysis and a brief write-up about whether or not the results emerging from the analysis are significant. This segment should be written in the past tense, as you are describing work that has already been done.

This article will provide detailed information about  how to   write the findings of a dissertation .

When to Write Dissertation Findings Chapter

As soon as you have gathered and analysed your data, you can start to write up the findings chapter of your dissertation paper. Remember that it is your chance to report the most notable findings of your research work and relate them to the research hypothesis  or  research questions set out in  the introduction chapter of the dissertation .

You will be required to separately report your study’s findings before moving on to the discussion chapter  if your dissertation is based on the  collection of primary data  or experimental work.

However, you may not be required to have an independent findings chapter if your dissertation is purely descriptive and focuses on the analysis of case studies or interpretation of texts.

  • Always report the findings of your research in the past tense.
  • The dissertation findings chapter varies from one project to another, depending on the data collected and analyzed.
  • Avoid reporting results that are not relevant to your research questions or research hypothesis.


1. Reporting Quantitative Findings

The best way to present your quantitative findings is to structure them around the research hypotheses or questions you intend to address as part of your dissertation project.

Report the relevant findings for each research question or hypothesis, focusing on how you analyzed them.

Analysis of your findings will help you determine how they relate to the different research questions and whether they support the hypothesis you formulated.

While you must highlight meaningful relationships, variances, and tendencies, it is important not to guess their interpretations and implications because this is something to save for the discussion  and  conclusion  chapters.

Any findings not directly relevant to your research questions or explanations concerning the data collection process  should be added to the dissertation paper’s appendix section.

Use of Figures and Tables in Dissertation Findings

Suppose your dissertation is based on quantitative research. In that case, it is important to include charts, graphs, tables, and other visual elements to help your readers understand the emerging trends and relationships in your findings.

Refer to all charts, illustrations, and tables in your writing, but avoid repeating in the text the information they already present; such repetition gives the impression that you are short on ideas.

The text should be used only to elaborate and summarize certain parts of your results. On the other hand, illustrations and tables are used to present multifaceted data.

It is recommended to give descriptive labels and captions to all illustrations used so the readers can figure out what each refers to.

How to Report Quantitative Findings

Here is an example of how to report quantitative results in your dissertation findings chapter:

Two hundred seventeen participants completed both the pretest and the posttest, and a paired-samples t-test was used for the analysis. The analysis revealed a statistically significant difference between the mean scores on the pretest and posttest scales from the Teachers Discovering Computers course. The pretest mean was 29.00 with a standard deviation of 7.65, while the posttest mean was 26.50 with a standard deviation of 9.74 (Table 1). The difference was significant at p < .001, indicating a strong treatment effect (see Table 3). The correlation between the two sets of scores was .448, indicating only a modest relationship between pretest and posttest scores (Table 2). This led the researcher to conclude that the course had a dramatic impact on the educators’ perception and integration of technology into the curriculum.

Table 1. Paired Samples Statistics

            Mean    N     Std. Deviation   Std. Error Mean
PRESCORE    29.00   217   7.65             .519
PSTSCORE    26.50   217   9.74             .661

Table 2. Paired Samples Correlations

                       N     Correlation   Sig.
PRESCORE & PSTSCORE    217   .448          .000

Table 3. Paired Samples Test

Pair 1 (PRESCORE - PSTSCORE): mean difference = 2.50, SD = 9.31, SE = .632, 95% CI of the difference [1.26, 3.75], t = 3.967, df = 216, Sig. (2-tailed) = .000
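If you are curious how output like the tables above is produced, the following is a minimal, hypothetical sketch of a paired-samples t-test in Python using scipy; the score arrays are invented stand-ins, not the data reported in the example above.

# Minimal sketch of a paired-samples t-test (illustrative data only).
import numpy as np
from scipy import stats

# Hypothetical pretest and posttest scores for the same participants.
pretest = np.array([31, 28, 25, 33, 29, 27, 30, 26])
posttest = np.array([27, 26, 24, 30, 28, 22, 29, 25])

result = stats.ttest_rel(pretest, posttest)  # paired-samples t-test
diff = pretest - posttest

print(f"Mean difference = {diff.mean():.2f} (SD = {diff.std(ddof=1):.2f})")
print(f"t({len(diff) - 1}) = {result.statistic:.3f}, p = {result.pvalue:.3f}")

Statistical packages such as SPSS report the same quantities (mean difference, standard deviation, confidence interval, t, df, and p) in the tabular form shown above.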


2. Reporting Qualitative Findings

A notable issue with reporting qualitative findings is that not all results directly relate to your research questions or hypothesis.

The best way to present the results of qualitative research is to frame your findings around the most critical areas or themes you obtained after you examined the data.

In-depth data analysis will help you observe what the data shows for each theme. Any developments, relationships, patterns, and independent responses directly relevant to your research question or hypothesis should be mentioned to the readers.

Additional information not directly relevant to your research can be included in the appendix .

How to Report Qualitative Findings

Here is an example of how to report qualitative results in your dissertation findings chapter:

The last question of the interview focused on the need for improvement in Thai ready-to-eat products and the industry at large, emphasizing the need for enhancement in the current products being offered in the market. When asked if there was any particular need for Thai ready-to-eat meals to be improved and how to improve them in case of ‘yes,’ the males replied mainly by saying that the current products need improvement in terms of the use of healthier raw materials and preservatives or additives. There was an agreement amongst all males concerning the need to improve the industry for ready-to-eat meals and the use of more healthy items to prepare such meals. The females were also of the opinion that the fast-food items needed to be improved in the sense that more healthy raw materials such as vegetable oil and unsaturated fats, including whole-wheat products, to overcome risks associated with trans fat leading to obesity and hypertension should be used for the production of RTE products. The frozen RTE meals and packaged snacks included many preservatives and chemical-based flavouring enhancers that harmed human health and needed to be reduced. The industry is said to be aware of this fact and should try to produce RTE products that benefit the community in terms of healthy consumption.


What to Avoid in Dissertation Findings Chapter

  • Avoid using interpretive and subjective phrases and terms such as “confirms,” “reveals,” “suggests,” or “validates.” These terms are more suitable for the discussion chapter , where you will be expected to interpret the results in detail.
  • Only briefly explain findings in relation to the key themes, hypothesis, and research questions. You don’t want to write a detailed subjective explanation for any research questions at this stage.

The Do’s of Writing the Findings or Results Section

  • Ensure you are not presenting results from other research studies in your findings.
  • Make clear whether your hypotheses were tested and whether your research questions were answered.
  • Label all illustrations and tables that present data so your readers understand what they relate to.
  • Use software such as Excel, Stata, or SPSS to analyse results and identify important trends.

Essential Guidelines on How to Write Dissertation Findings

The dissertation findings chapter should provide the context for understanding the results. The research problem should be repeated, and the research goals should be stated briefly.

This approach helps focus the reader’s attention on the research problem. The first step towards writing the findings is identifying which results will be presented in this section.

The results relevant to the questions must be presented, considering whether they support the hypothesis. You do not need to include every result in the findings section. The next step is to ensure the data are accurate and appropriately organized.

A basic understanding of how findings chapters are written will help you arrange your data in a logical order.

Start each paragraph with the most important results and conclude the section with the least important ones.

A short paragraph can conclude the findings section, summarising the findings so readers will remember them as they transition to the next chapter. This is especially important if the findings are unexpected, unfamiliar, or have a significant impact on the study.


Be Impartial in your Writing

When crafting your findings, it is important to know how you will organize the work. The findings are the story you tell in response to the research questions you set out to answer.

Therefore, the story needs to be organized in a way that makes sense to you and the reader. The findings must be compelling and clearly linked to the research questions they answer.

Always report the size and direction of any changes, including percentage changes, and include the details of p values or confidence intervals and limits where relevant.

The findings section should mention only the relevant parts of the primary evidence; it is good practice, however, to include the full primary evidence in an appendix that can be referred to later.

The results should always be written neutrally, without speculation or implication. The statement of the results must not contain any form of evaluation or interpretation.

Negative results should also be reported in the findings section; including them demonstrates neutrality and strengthens the credibility of your work.

The length of the dissertation findings chapter is an important consideration. It is directly related to the total word count of your dissertation paper.

The writer should use their discretion in deciding the length of the findings section, or refer to the dissertation handbook or structure guidelines.

It should be neither too long nor too short, but concise and comprehensive enough to highlight the main findings for the reader.

Ethically, you should be confident in your findings and account for counter-evidence. Anything that does not have sufficient evidence to support it should be discarded. The findings should respond to the problem presented and provide an “answer” to the research questions.

Structure of the Findings Chapter

The chapter should use appropriate words and phrases to present the results to the readers. Logical sentences should be used, while paragraphs should be linked to produce cohesive work.

You must ensure all the significant results have been added in the section. Recheck after completing the section to ensure no mistakes have been made.

Be deliberate about the structure of the findings section, because it provides the basis for the discussion chapter and ensures that the discussion can be written clearly and proficiently.

One way to arrange the results is to provide a brief synopsis and then explain the essential findings. However, there should be no speculation or explanation of the results, as this will be done in the discussion section.

Another way to arrange the section is to present and then explain each result in turn, concluding the section with an overall synopsis.

This is the preferred method for longer dissertations, and it can be helpful when multiple results are equally significant. A brief conclusion should link all the results and provide a transition to the discussion section.

Numerous data analysis dissertation examples are available on the Internet, which will help you improve your understanding of writing the dissertation’s findings.

Problems to Avoid When Writing Dissertation Findings

One problem to avoid when writing the dissertation findings is reporting background information, which belongs in the introduction section, or explaining the findings, which belongs in the discussion chapter.

You can always revise the introduction chapter based on the data you have collected if that seems an appropriate thing to do.

Raw data or intermediate calculations should not be added in the findings section. Always ask your professor if raw data needs to be included.

If the data is to be included, then use an appendix or a set of appendices referred to in the text of the findings chapter.

Do not use vague or non-specific phrases in the findings section. It is important to be factual and concise for the reader’s benefit.

The findings section presents the crucial data collected during the research process. It should be presented concisely and clearly to the reader. There should be no interpretation, speculation, or analysis of the data.

The significant results should be categorized systematically, with the text supported by charts, figures, and tables. Furthermore, it is essential to avoid vague and non-specific words in this section.

It is essential to label the tables and visual material properly. You should also check and proofread the section to avoid mistakes.

The dissertation findings chapter is a critical part of your overall dissertation paper.

FAQs About Findings of a Dissertation

How do I report quantitative findings?

The best way to present your quantitative findings is to structure them around the research hypothesis or research questions you intended to address as part of your dissertation project. Report the relevant findings for each of the research questions or hypotheses, focusing on how you analyzed them.

How do I report qualitative findings?

The best way to present the qualitative research results is to frame your findings around the most important areas or themes that you obtained after examining the data.

An in-depth analysis of the data will help you observe what the data is showing for each theme. Any developments, relationships, patterns, and independent responses that are directly relevant to your research question or hypothesis should be clearly mentioned for the readers.

Can I use interpretive phrases like ‘it confirms’ in the finding chapter?

No. It is highly advisable to avoid using interpretive and subjective phrases in the findings chapter. These terms are more suitable for the discussion chapter, where you will be expected to provide your interpretation of the results in detail.

Can I report the results from other research papers in my findings chapter?

No. You must not present results from other research studies in your findings chapter.


How to write the analysis and discussion chapters in qualitative (SSAH) research

By Charlesworth Author Services | 11 November 2021

While it is more common for Science, Technology, Engineering and Mathematics (STEM) researchers to write separate, distinct chapters for their data/results and analysis/discussion, the same sections can feel less clearly defined for a researcher in Social Sciences, Arts and Humanities (SSAH). This article will look specifically at some useful approaches to writing the analysis and discussion chapters in qualitative/SSAH research.

Note : Most of the differences in approaches to research, writing, analysis and discussion come down, ultimately, to differences in epistemology – how we approach, create and work with knowledge in our respective fields. However, this is a vast topic that deserves a separate discussion.

Look for emerging themes and patterns

The ‘results’ of qualitative research can sometimes be harder to pinpoint than in quantitative research. You’re not dealing with definitive numbers and results in the same way as, say, a scientist conducting experiments that produce measurable data. Instead, most qualitative researchers explore prominent, interesting themes and patterns emerging from their data – that could comprise interviews, textual material or participant observation, for example. 

You may find that your data presents a huge number of themes, issues and topics, all of which you might find equally significant and interesting. In fact, you might find yourself overwhelmed by the many directions that your research could take, depending on which themes you choose to study in further depth. You may even discover issues and patterns that you had not expected , that may necessitate having to change or expand the research focus you initially started off with.

It is crucial at this point not to panic. Instead, try to enjoy the many possibilities that your data is offering you. It can be useful to remind yourself at each stage of exactly what you are trying to find out through this research.

What exactly do you want to know? What knowledge do you want to generate and share within your field?

Then, spend some time reflecting upon each of the themes that seem most interesting and significant, and consider whether they are immediately relevant to your main, overarching research objectives and goals.

Suggestion: Don’t worry too much about structure and flow at the early stages of writing your discussion . It would be a more valuable use of your time to fully explore the themes and issues arising from your data first, while also reading widely alongside your writing (more on this below). As you work more intimately with the data and develop your ideas, the overarching narrative and connections between those ideas will begin to emerge. Trust that you’ll be able to draw those links and craft the structure organically as you write.

Let your data guide you

A key characteristic of qualitative research is that the researchers allow their data to ‘speak’ and guide their research and their writing. Instead of insisting too strongly upon the prominence of specific themes and issues and imposing their opinions and beliefs upon the data, a good qualitative researcher ‘listens’ to what the data has to tell them.

Again, you might find yourself having to address unexpected issues or your data may reveal things that seem completely contradictory to the ideas and theories you have worked with so far. Although this might seem worrying, discovering these unexpected new elements can actually make your research much richer and more interesting. 

Suggestion: Allow yourself to follow those leads and ask new questions as you work through your data. These new directions could help you to answer your research questions in more depth and with greater complexity; or they could even open up other avenues for further study, either in this or future research.

Work closely with the literature

As you analyse and discuss the prominent themes, arguments and findings arising from your data, it is very helpful to maintain a regular and consistent reading practice alongside your writing. Return to the literature that you’ve already been reading so far or begin to check out new texts, studies and theories that might be more appropriate for working with any new ideas and themes arising from your data.

Reading and incorporating relevant literature into your writing as you work through your analysis and discussion will help you to consistently contextualise your research within the larger body of knowledge. It will be easier to stay focused on what you are trying to say through your research if you can simultaneously show what has already been said on the subject and how your research and data supports, challenges or extends those debates. By drawing from existing literature , you are setting up a dialogue between your research and prior work, and highlighting what this research has to add to the conversation.

Suggestion : Although it might sometimes feel tedious to have to blend others’ writing in with yours, this is ultimately the best way to showcase the specialness of your own data, findings and research . Remember that it is more difficult to highlight the significance and relevance of your original work without first showing how that work fits into or responds to existing studies. 

In conclusion

The discussion chapters form the heart of your thesis, and this is where your unique contribution comes to the forefront. This is where your data takes centre stage and where you get to showcase your original arguments, perspectives and knowledge. Doing this effectively requires you to explore the original themes and issues arising from and within the data, while simultaneously contextualising these findings within the larger, existing body of knowledge in your field of specialisation. By striking this balance, you demonstrate the two most important qualities of excellent qualitative research: keen awareness of your field and a firm understanding of your place in it.


Presenting Your Qualitative Analysis Findings: Tables to Include in Chapter 4

The earliest stages of developing a doctoral dissertation—most specifically the topic development  and literature review  stages—require that you immerse yourself in a ton of existing research related to your potential topic. If you have begun writing your dissertation proposal, you have undoubtedly reviewed countless results and findings sections of studies in order to help gain an understanding of what is currently known about your topic. 


In this process, we’re guessing that you observed a distinct pattern: Results sections are full of tables. Indeed, the results chapter for your own dissertation will need to be similarly packed with tables. So, if you’re preparing to write up the results of your statistical analysis or qualitative analysis, it will probably help to review your APA editing  manual to brush up on your table formatting skills. But, aside from formatting, how should you develop the tables in your results chapter?

In quantitative studies, tables are a handy way of presenting the variety of statistical analysis results in a form that readers can easily process. You’ve probably noticed that quantitative studies present descriptive results like mean, mode, range, and standard deviation, as well as the inferential results that indicate whether significant relationships or differences were found through the statistical analysis. These are pretty standard tables that you probably learned about in your pre-dissertation statistics courses.

But, what if you are conducting qualitative analysis? What tables are appropriate for this type of study? This is a question we hear often from our dissertation assistance clients, and with good reason. University guidelines for results chapters often contain vague instructions that guide you to include “appropriate tables” without specifying what exactly those are. To help clarify this point, we asked our qualitative analysis experts to share their recommendations for tables to include in your Chapter 4.

Demographics Tables

As with studies using quantitative methods , presenting an overview of your sample demographics is useful in studies that use qualitative research methods. The standard demographics table in a quantitative study provides aggregate information for what are often large samples. In other words, such tables present totals and percentages for demographic categories within the sample that are relevant to the study (e.g., age, gender, job title). 


If conducting qualitative research  for your dissertation, however, you will use a smaller sample and obtain richer data from each participant than in quantitative studies. To enhance thick description—a dimension of trustworthiness—it will help to present sample demographics in a table that includes information on each participant. Remember that ethical standards of research require that all participant information be deidentified, so use participant identification numbers or pseudonyms for each participant, and do not present any personal information that would allow others to identify the participant (Blignault & Ritchie, 2009). Table 1 provides participant demographics for a hypothetical qualitative research study exploring the perspectives of persons who were formerly homeless regarding their experiences of transitioning into stable housing and obtaining employment.

Participant Demographics

Participant ID  Gender Age Current Living Situation
P1 Female 34 Alone
P2 Male 27 With Family
P3 Male 44 Alone
P4 Female 46 With Roommates
P5 Female 25 With Family
P6 Male 30 With Roommates
P7 Male 38 With Roommates
P8 Male 51 Alone

Tables to Illustrate Initial Codes

Most of our dissertation consulting clients who are conducting qualitative research choose a form of thematic analysis . Qualitative analysis to identify themes in the data typically involves a progression from (a) identifying surface-level codes to (b) developing themes by combining codes based on shared similarities. As this process is inherently subjective, it is important that readers be able to evaluate the correspondence between the data and your findings (Anfara et al., 2002). This supports confirmability, another dimension of trustworthiness .

A great way to illustrate the trustworthiness of your qualitative analysis is to create a table that displays quotes from the data that exemplify each of your initial codes. Providing a sample quote for each of your codes can help the reader to assess whether your coding was faithful to the meanings in the data, and it can also help to create clarity about each code’s meaning and bring the voices of your participants into your work (Blignault & Ritchie, 2009).


Table 2 is an example of how you might present information regarding initial codes. Depending on your preference or your dissertation committee’s preference, you might also present percentages of the sample that expressed each code. Another common piece of information to include is which actual participants expressed each code. Note that if your qualitative analysis yields a high volume of codes, it may be appropriate to present the table as an appendix.

Initial Codes

Initial code | n of participants contributing (N = 8) | n of transcript excerpts assigned | Sample quote
Daily routine of going to work enhanced sense of identity | 7 | 12 | “It’s just that good feeling of getting up every day like everyone else and going to work, of having that pattern that’s responsible. It makes you feel good about yourself again.” (P3)
Experienced discrimination due to previous homelessness | 2 | 3 | “At my last job, I told a couple other people on my shift I used to be homeless, and then, just like that, I get put into a worse job with less pay. The boss made some excuse why they did that, but they didn’t want me handling the money is why. They put me in a lower level job two days after I talk to people about being homeless in my past. That’s no coincidence if you ask me.” (P6)
Friends offered shared housing | 3 | 3 | “My friend from way back had a spare room after her kid moved out. She let me stay there until I got back on my feet.” (P4)
Mental health services essential in getting into housing | 5 | 7 | “Getting my addiction treated was key. That was a must. My family wasn’t gonna let me stay around their place without it. So that was a big help for getting back into a place.” (P2)
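If you record each coded excerpt as a row in a spreadsheet or data frame, the counts in a table like the one above can be tallied rather than computed by hand. Below is a minimal, hypothetical sketch using Python and pandas; the column names and the handful of rows are illustrative assumptions, not the study’s actual data.

# Tally excerpts and contributing participants per initial code (illustrative data).
import pandas as pd

# Hypothetical coded excerpts: one row per transcript excerpt, recording the
# initial code applied and the participant who provided the excerpt.
coded = pd.DataFrame({
    "code": [
        "Daily routine of going to work enhanced sense of identity",
        "Daily routine of going to work enhanced sense of identity",
        "Friends offered shared housing",
        "Experienced discrimination due to previous homelessness",
    ],
    "participant": ["P3", "P5", "P4", "P6"],
})

# Count distinct participants and total excerpts per code.
code_table = coded.groupby("code").agg(
    participants=("participant", "nunique"),
    excerpts=("participant", "size"),
)
print(code_table)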

Tables to Present the Groups of Codes That Form Each Theme

As noted previously, most of our dissertation assistance clients use a thematic analysis approach, which involves multiple phases of qualitative analysis  that eventually result in themes that answer the dissertation’s research questions. After initial coding is completed, the analysis process involves (a) examining what different codes have in common and then (b) grouping similar codes together in ways that are meaningful given your research questions. In other words, the common threads that you identify across multiple codes become the theme that holds them all together—and that theme answers one of your research questions.

As with initial coding, grouping codes together into themes involves your own subjective interpretations, even when aided by qualitative analysis software such as NVivo  or MAXQDA. In fact, our dissertation assistance clients are often surprised to learn that qualitative analysis software does not complete the analysis in the same ways that statistical analysis software such as SPSS does. While statistical analysis software completes the computations for you, qualitative analysis software does not have such analysis capabilities. Software such as NVivo provides a set of organizational tools that make the qualitative analysis far more convenient, but the analysis itself is still a very human process (Burnard et al., 2008).


Because of the subjective nature of qualitative analysis, it is important to show the underlying logic behind your thematic analysis in tables—such tables help readers to assess the trustworthiness of your analysis. Table 3 provides an example of how to present the codes that were grouped together to create themes, and you can modify the specifics of the table based on your preferences or your dissertation committee’s requirements. For example, this type of table might be presented to illustrate the codes associated with themes that answer each research question. 

Grouping of Initial Codes to Form Themes

Theme (initial codes grouped to form the theme are listed beneath it) | n of participants contributing (N = 8) | n of transcript excerpts assigned
Assistance from friends, family, or strangers was instrumental in getting back into stable housing | 6 | 10
     Family member assisted them to get into housing
     Friends offered shared housing
     Stranger offered shared housing
Obtaining professional support was essential for overcoming the cascading effects of poverty and homelessness | 7 | 19
     Financial benefits made obtaining housing possible
     Mental health services essential in getting into housing
     Social services helped navigate housing process
Stigma and concerns about discrimination caused them to feel uncomfortable socializing with coworkers | 6 | 9
     Experienced discrimination due to previous homelessness
     Feared negative judgment if others learned of their pasts
Routine productivity and sense of making a contribution helped to restore self-concept and positive social identity | 8 | 21
     Daily routine of going to work enhanced sense of identity
     Feels good to contribute to society/organization
     Seeing products of their efforts was rewarding
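Once you have decided which codes belong together, the same kind of tally can be rolled up from codes to themes. The sketch below is again a hypothetical Python/pandas example; the code-to-theme groupings and the data are assumed for illustration only.

# Roll initial codes up into themes and tally per theme (illustrative data).
import pandas as pd

coded = pd.DataFrame({
    "participant": ["P2", "P3", "P4", "P6"],
    "code": [
        "Mental health services essential in getting into housing",
        "Daily routine of going to work enhanced sense of identity",
        "Friends offered shared housing",
        "Experienced discrimination due to previous homelessness",
    ],
})

# Assumed mapping of each initial code to the theme it was grouped under.
code_to_theme = {
    "Mental health services essential in getting into housing": "Professional support was essential",
    "Daily routine of going to work enhanced sense of identity": "Routine productivity restored self-concept",
    "Friends offered shared housing": "Assistance from others was instrumental",
    "Experienced discrimination due to previous homelessness": "Stigma and concerns about discrimination",
}
coded["theme"] = coded["code"].map(code_to_theme)

# Distinct participants and excerpts contributing to each theme.
print(coded.groupby("theme").agg(
    participants=("participant", "nunique"),
    excerpts=("participant", "size"),
))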

Tables to Illustrate the Themes That Answer Each Research Question

Creating alignment throughout your dissertation is an important objective, and to maintain alignment in your results chapter, the themes you present must clearly answer your research questions. Conducting qualitative analysis is an in-depth process of immersion in the data, and many of our dissertation consulting  clients have shared that it’s easy to lose your direction during the process. So, it is important to stay focused on your research questions during the qualitative analysis and also to show the reader exactly which themes—and subthemes, as applicable—answered each of the research questions.


Below, Table 4 provides an example of how to display the thematic findings of your study in table form. Depending on your dissertation committee’s preference or your own, you might present all research questions and all themes and subthemes in a single table. Or, you might provide separate tables to introduce the themes for each research question as you progress through your presentation of the findings in the chapter.

Emergent Themes and Research Questions

Research question (with the themes that address it listed beneath)
RQ1. How do adults who have previously experienced homelessness describe their transitions to stable housing?
     Theme 1: Assistance from friends, family, or strangers was instrumental in getting back into stable housing
     Theme 2: Obtaining professional support was essential for overcoming the cascading effects of poverty and homelessness
RQ2. How do adults who have previously experienced homelessness describe returning to paid employment?
     Theme 3: Self-perceived stigma caused them to feel uncomfortable socializing with coworkers
     Theme 4: Routine productivity and sense of making a contribution helped to restore self-concept and positive social identity

Bonus Tip! Figures to Spice Up Your Results

Although dissertation committees most often wish to see tables such as the above in qualitative results chapters, some also like to see figures that illustrate the data. Qualitative software packages such as NVivo offer many options for visualizing your data, such as mind maps, concept maps, charts, and cluster diagrams. A common choice for this type of figure among our dissertation assistance clients is a tree diagram, which shows the connections between specified words and the words or phrases that participants shared most often in the same context. Another common choice of figure is the word cloud, as depicted in Figure 1. The word cloud simply reflects frequencies of words in the data, which may provide an indication of the importance of related concepts for the participants.

[Figure 1. Word cloud reflecting the relative frequencies of words in the interview data.]
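If your qualitative software does not produce a word cloud directly, one can be generated from your transcript text with a few lines of code. The sketch below is a hypothetical example using the open-source wordcloud and matplotlib packages; the input string is a stand-in for your combined, de-identified transcripts.

# Generate a simple word cloud from transcript text (illustrative input only).
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Hypothetical stand-in for the combined text of your interview transcripts.
transcript_text = (
    "housing work identity support family friends housing work "
    "routine stigma discrimination services housing support work"
)

cloud = WordCloud(width=800, height=400, background_color="white").generate(transcript_text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("figure1_word_cloud.png", dpi=300, bbox_inches="tight")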

As you move forward with your qualitative analysis and the development of your results chapter, we hope that this brief overview of useful tables and figures helps you to decide on an ideal presentation to showcase the trustworthiness of your findings. Completing a rigorous qualitative analysis for your dissertation requires many hours of careful interpretation of your data, and your end product should be a rich and detailed results presentation that you can be proud of. Reach out if we can help in any way, as our dissertation coaches would be thrilled to assist as you move through this exciting stage of your dissertation journey!

Anfara, V. A., Jr., Brown, K. M., & Mangione, T. L. (2002). Qualitative analysis on stage: Making the research process more public. Educational Researcher, 31(7), 28–38. https://doi.org/10.3102/0013189X031007028

Blignault, I., & Ritchie, J. (2009). Revealing the wood and the trees: Reporting qualitative research. Health Promotion Journal of Australia, 20(2), 140–145. https://doi.org/10.1071/HE09140

Burnard, P., Gill, P., Stewart, K., Treasure, E., & Chadwick, B. (2008). Analysing and presenting qualitative data. British Dental Journal, 204(8), 429–432. https://doi.org/10.1038/sj.bdj.2008.292

What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari. Revised on June 22, 2023.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, and history. Typical qualitative research questions include:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?


Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography , action research , phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Qualitative research approaches
Approach | What does it involve?
Grounded theory | Researchers collect rich data on a topic of interest and develop theories.
Ethnography | Researchers immerse themselves in groups or organizations to understand their cultures.
Action research | Researchers and participants collaboratively link theory to practice to drive social change.
Phenomenological research | Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences.
Narrative research | Researchers examine how stories are told to understand how participants perceive and make sense of their experiences.

Note that qualitative research is at risk for certain research biases including the Hawthorne effect , observer bias , recall bias , and social desirability bias . While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Each of these research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews:  personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys : distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.

For example, to research the culture of a company, you might combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps (a brief sketch illustrating them follows the list):

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.
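As a rough illustration of steps 3 to 5, the sketch below (in Python) tags a few hypothetical survey responses with codes based on keyword matches and then maps those codes onto themes. Real qualitative coding is an interpretive judgment made by the researcher, so the keyword matching here is only a toy stand-in for the bookkeeping involved; all responses, codes, and keywords are assumptions.

# Toy illustration of applying a coding system and rolling codes up into themes.
# Keyword matching is NOT a substitute for interpretive coding by the researcher.
responses = [
    "The flexible hours let me balance work and family.",
    "I left because my manager never gave any feedback.",
    "Pay was fine, but I felt my work had no real impact.",
]

codebook = {          # code -> keywords that suggest it (assumed)
    "flexibility": ["flexible", "hours", "balance"],
    "management": ["manager", "feedback", "supervisor"],
    "meaningful work": ["impact", "purpose", "meaning"],
}
themes = {            # code -> theme it was grouped under (assumed)
    "flexibility": "Working conditions",
    "management": "Leadership and support",
    "meaningful work": "Sense of purpose",
}

for i, text in enumerate(responses, start=1):
    matched = [c for c, kws in codebook.items()
               if any(kw in text.lower() for kw in kws)]
    matched_themes = sorted({themes[c] for c in matched})
    print(f"Response {i}: codes={matched}, themes={matched_themes}")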

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Qualitative data analysis
Approach | When to use | Example
Content analysis | To describe and categorize common words, phrases, and ideas in qualitative data. | A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps.
Thematic analysis | To identify and interpret patterns and themes in qualitative data. | A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity.
Textual analysis | To examine the content, structure, and design of texts. | A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade.
Discourse analysis | To study communication and how language is used to achieve effects in specific contexts. | A political scientist could use discourse analysis to study how politicians generate trust in election campaigns.

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.

Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated . The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population .

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

Frequently asked questions about qualitative research
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.

How to Write an Impressive Thesis Results Section


After collecting and analyzing your research data, it’s time to write the results section. This article explains how to write and organize the thesis results section, how reporting differs between qualitative and quantitative data and across fields, and best practices for tables and figures.

What is the thesis results section?

The thesis results section factually and concisely describes what was observed and measured during the study but does not interpret the findings. It presents the findings in a logical order.

What should the thesis results section include?

  • Include all relevant results as text, tables, or figures
  • Report the results of subject recruitment and data collection
  • For quantitative research, present the data from all statistical analyses, whether or not the results are significant
  • For qualitative research, present the data by coding or categorizing themes and topics
  • Present all secondary findings (e.g., subgroup analyses)
  • Include all results, even if they do not fit in with your assumptions or support your hypothesis

What should the thesis results section not include?

  • If the study involves the thematic analysis of an interview, don’t include complete transcripts of all interviews. Instead, add these as appendices
  • Don’t present raw data. These may be included in appendices
  • Don’t include background information (this should be in the introduction section )
  • Don’t speculate on the meaning of results that do not support your hypothesis. This will be addressed later in the discussion and conclusion sections.
  • Don’t repeat results that have been presented in tables and figures. Only highlight the pertinent points or elaborate on specific aspects

How should the thesis results section be organized?

The opening paragraph of the thesis results section should briefly restate the thesis question. Then, present the results objectively as text, figures, or tables.

Quantitative research presents the results from experiments and statistical tests, usually in the form of tables and figures (graphs, diagrams, and images), with any pertinent findings emphasized in the text. The results are structured around the thesis question. Demographic data are usually presented first in this section.

For each statistical test used, the following information must be mentioned:

  • The type of analysis used (e.g., Mann–Whitney U test or multiple regression analysis)
  • A concise summary of each result, including descriptive statistics (e.g., means, medians, and modes) and inferential statistics (e.g., correlation, regression, and p values), and whether the results are significant
  • Any trends or differences identified through comparisons
  • How the findings relate to your research and whether they support or contradict your hypothesis (a brief reporting sketch follows this list)
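To make the list above concrete, here is a small, hedged sketch of how one of the analyses named there (a Mann–Whitney U test) might be run and summarized in a single reporting sentence. The data, group labels, and significance threshold are invented for illustration.

```python
from scipy.stats import mannwhitneyu

# Hypothetical scores for two groups (e.g., intervention vs. control).
group_a = [12, 15, 14, 10, 18, 16, 13]
group_b = [22, 19, 24, 20, 17, 25, 21]

res = mannwhitneyu(group_a, group_b, alternative="two-sided")
u, p = res.statistic, res.pvalue

# One sentence covering the elements listed above: the test used,
# the statistic, the p value, and whether the result is significant.
label = "significant" if p < 0.05 else "non-significant"
print(f"A Mann-Whitney U test indicated a statistically {label} "
      f"difference between the groups (U = {u:.1f}, p = {p:.3f}).")
```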

Qualitative research presents results around key themes or topics identified from your data analysis and explains how these themes evolved. The data are usually presented as text because it is hard to present the findings as figures.

For each theme presented, describe:

  • General trends or patterns observed
  • Significant or representative responses
  • Relevant quotations from your study subjects

  • Relevant characteristics of your study subjects

Differences among the results section in different fields of research


The presentation of results varies considerably across disciplines. For example, a thesis documenting how a particular population interprets a specific event and a thesis investigating customer service may both have collected data using interviews and analyzed them using similar methods, yet the presentation of the results will differ greatly because they answer different thesis questions. A science thesis may have used experiments to generate data, and these would be presented differently again, probably involving statistics. Nevertheless, results should be presented logically across all disciplines and reflect the thesis question and any hypotheses that were tested.

Differences between reporting thesis results in the Sciences and the Humanities and Social Sciences (HSS) domains

In the Sciences domain (quantitative and experimental research), the results and discussion sections are considered separate entities, and the results from experiments and statistical tests are presented. In the HSS domain (qualitative research), the results and discussion sections may be combined.

There are two approaches to presenting results in the HSS field:

  • If you want to highlight important findings, first present a synopsis of the results and then explain the key findings.
  • If you have multiple results of equal significance, present one result and explain it. Then present another result and explain that, and so on. Conclude with an overall synopsis.

Best practices for using tables and figures

The use of figures and tables is highly encouraged because they provide a standalone overview of the research findings that is much easier to grasp than dense text listing one result after another. The text in the results section should not repeat the information presented in figures and tables. Instead, it should focus on the pertinent findings or elaborate on specific points.

Some popular software programs that can be used for the analysis and presentation of statistical data include Statistical Package for the Social Sciences (SPSS), R, MATLAB, Microsoft Excel, Statistical Analysis Software (SAS), GraphPad Prism, and Minitab.

The easiest way to construct tables is to use the  Table function in Microsoft Word . Microsoft Excel can also be used; however, Word is the easier option.

General guidelines for figures and tables

  • Figures and tables must be interpretable independent from the text
  • Number tables and figures consecutively (in separate lists) in the order in which they are mentioned in the text
  • All tables and figures must be cited in the text
  • Provide clear, descriptive titles for all figures and tables
  • Include a legend to concisely describe what is presented in the figure or table

Figure guidelines

  • Label figures so that the reader can easily understand what is being shown
  • Use a consistent font type and font size for all labels in figure panels
  • All abbreviations used in the figure artwork should be defined in the figure legend

Table guidelines

  • All table columns should have a heading
  • All abbreviations used in tables should be defined in the table footnotes
  • All numbers and text presented in tables must be consistent with the data presented in the manuscript body

Quantitative results example: Figure 3 presents the characteristics of unemployed subjects and their rate of criminal convictions. A statistically significant association was observed between unemployment and being <20 years old, male, and without household income.


Qualitative results example: Table 5 shows the themes identified during the face-to-face interviews about the application that we developed to anonymously report corruption in the workplace. There was positive feedback on the app layout and ease of use. Concerns that emerged from the interviews included breaches of confidentiality and the inability to report incidents because of unstable cellphone network coverage.

  • Ease of use of the app: "The app was easy to use, and I did not have to contact the helpdesk"; "I wish all apps were so user-friendly!"
  • App layout: "The screen was not cluttered. The text was easy to read"; "The icons on the screen were easy to understand"
  • Confidentiality: "I am scared that the app developers will disclose my name to my employer"
  • Unstable network coverage: "I was unable to report an incident that occurred at one of our building sites because there was no cellphone reception"; "I wanted to report the incident immediately, but I had to wait until I was home, where the cellphone network signal was strong"

Table 5. Themes and selected quotes from the evaluation of our app designed to anonymously report workplace corruption.
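If the themes and quotes are kept in a small data frame while drafting, a table like the one above can be assembled and exported in a few lines. The sketch below is an assumption about workflow rather than part of the original example; it uses pandas with a subset of the quotes shown above.

```python
import pandas as pd

# A subset of the themes and selected quotes from the example above.
table5 = pd.DataFrame(
    [
        ("Ease of use of the app",
         "The app was easy to use, and I did not have to contact the helpdesk"),
        ("Ease of use of the app", "I wish all apps were so user-friendly!"),
        ("App layout", "The screen was not cluttered. The text was easy to read"),
        ("Confidentiality",
         "I am scared that the app developers will disclose my name to my employer"),
    ],
    columns=["Theme", "Selected quote"],
)

# Export the table for inclusion in the thesis (e.g., to import into Word).
table5.to_csv("table5_themes_quotes.csv", index=False)
```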

Tips for writing the thesis results section

  • Do not state that a difference was present between the two groups unless this can be supported by a significant p-value.
  • Present the findings only. Do not comment or speculate on their interpretation.
  • Every result included must have a corresponding method in the methods section. Conversely, all methods must have associated results presented in the results section.
  • Do not explain commonly used methods. Instead, cite a reference.
  • Be consistent with the units of measurement used in your thesis study. If you start with kg, then use the same unit all throughout your thesis. Also, be consistent with the capitalization of units of measurement. For example, use either “ml” or “mL” for milliliters, but not both.
  • Never manipulate measurement outcomes, even if the result is unexpected. Remain objective.

Results vs. discussion vs. conclusion

Results are presented in three sections of your thesis: the results, discussion, and conclusion.

  • In the results section, the data are presented simply and objectively. No speculation or interpretation is given.
  • In the discussion section, the meaning of the results is interpreted and put into context (e.g., compared with other findings in the literature ), and its importance is assigned.
  • In the conclusion section, the results and the main conclusions are summarized.

A thesis is the most crucial document that you will write during your academic studies.


Review Checklist

Have you completed all data collection procedures and analyzed all results?

Have you included all results relevant to your thesis question, even if they do not support your hypothesis?

Have you reported the results objectively, with no interpretation or speculation?

For quantitative research, have you included both descriptive and inferential statistical results and stated whether they support or contradict your hypothesis?

Have you used tables and figures to present all results?

In your thesis body, have you presented only the pertinent results and elaborated on specific aspects that were presented in the tables and figures?

Are all tables and figures correctly labeled and cited in numerical order in the text?


Writing a qualitative results section

This is a missive from the trenches of research. I’m trying to write up half of the results section of a qualitative paper from the outline I’ve drafted. In qualitative research, the writing is not just reporting results but part of the research itself.

I’m sharing this example because I’ve been doing qualitative research for 15 years at this point, and I still need to find ways to manage the different mind traps of writing. This is one of two qualitative papers I'm writing up this year, and one of many I've written thus far in my career. With time and experience, I’m getting faster at identifying the mind trap and having strategies to get out of it. Maybe someday I’ll even avoid them altogether. But if you are newer to qualitative research, I want you to know you are not alone, and to give you ideas for how you can manage your own writing process.

I need to confirm prior iterations of the analysis and write it up in a way that’s not just a list. I’m also trying not to overwrite by 1,000 words or more. I’m aiming for the proverbial “crappy first draft” that I will improve over time and with the help of my (many, and interprofessional) coauthors.

In my attempt not to overwrite the length, I give myself some boundaries based on word count. Of the 3,500 words for the ultimate draft for a clinical journal, I’ll probably use 300 for the intro, 500-800 for methods, 1-2k for results, and 800-1,200 for the discussion and conclusion. If I aim for 2k for results right now (knowing I could edit things down), that leaves 1k or less for the first section of the results, which reports challenges. We identified four types (themes? subthemes?) of challenges, so that’s 250 words per “flavor.” Within each type of challenge, like disease-related challenges, there are usually 5-6 sub-elements, so each element basically gets a sentence. Some elements can get quotes, but not all.
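The budgeting above is simple arithmetic, but writing it out once can keep the targets visible while drafting; here is a throwaway sketch using the numbers from the paragraph above.

```python
# Rough word budget for a 3,500-word clinical-journal draft.
results_budget = 2000          # upper end; will be trimmed in revision
challenges_section = 1000      # roughly half of the results section
challenge_types = 4

per_type = challenges_section / challenge_types
print(f"~{per_type:.0f} words per challenge type")               # ~250

sub_elements = 5.5             # each type has about 5-6 sub-elements
print(f"~{per_type / sub_elements:.0f} words per sub-element")   # roughly one sentence
```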

I started by skimming the coded data to confirm the take-aways we had outlined, but then I kept finding different awesome quotes, my brain tried to re-adjudicate the analysis, and I wrote 500 words where I needed 100.

So I stopped and checked in with a coauthor and peer qualitative expert. She validated this stage of the process, and agreed with the following plan:

Close the data and quotes.

Write a bare bones generic description of the section.

Go back to Atlas, skim each code to “check” my analytic summary, add specificity.

Add 1-3 high-value (surprising, pithy, unusual) quotes to paragraphs.

Choose 1-2 longer quotes on different themes for the table.

As a result, I finished my task of writing the generic description (800 words so far) in the same time that it took me to overwrite the first half of the first challenge type.

Sometimes in the process of doing this you realize you don’t have the story straight yet. This also happened to me recently. Though ideally I’d do this before trying to write the results, I realized I needed to go back, review the data, and do some memoing to figure out the story.

I’m working with data coded in Atlas.ti, and for each code, I’m reviewing the data and summarizing each quote with a bullet point in a memo. Then I re-organize the bullets by type (however my brain wants to group them), write headers, and re-write those headers until they are phrases that can be compiled into sentences.
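A small script can also do the mechanical half of this memoing step (grouping quote summaries by code) before the interpretive work of naming and re-naming headers. The sketch below assumes the coded quotations have been exported to a CSV file; the file name and column names are hypothetical.

```python
import csv
from collections import defaultdict

# Assumed export: one row per coded quotation, with hypothetical
# columns "code" and "quote_summary".
memo = defaultdict(list)
with open("coded_quotations.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        memo[row["code"]].append(row["quote_summary"])

# Write a memo skeleton: one header per code, one bullet per summary,
# ready to be re-grouped and re-worded by hand.
with open("memo_skeleton.txt", "w", encoding="utf-8") as out:
    for code, summaries in sorted(memo.items()):
        out.write(code.upper() + "\n")
        for summary in summaries:
            out.write("  - " + summary + "\n")
        out.write("\n")
```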

What other ideas do you use to get unstuck?


Research Methodology – Types, Examples and Writing Guide

Research Methodology

Definition:

Research Methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data to answer research questions or solve research problems. Research methodologies also encompass the philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis )
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

Types of Research Methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An Example of Research Methodology could be the following:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18-65 years old who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.

Intervention :

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
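As a brief, illustrative sketch of one piece of this analysis plan (not the full mixed-model ANOVA), the snippet below compares invented post-intervention BDI-II scores between the two groups with an independent-samples t test.

```python
from scipy.stats import ttest_ind

# Hypothetical post-intervention BDI-II scores.
cbt_group = [12, 9, 15, 11, 8, 14, 10, 13]
control_group = [21, 18, 24, 20, 22, 19, 23, 25]

res = ttest_ind(cbt_group, control_group)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```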

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable insights into the mechanisms underlying the relationship between CBT and depression. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it’s important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods : Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research : Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability.
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories : Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies : Research methodology is used to evaluate the effectiveness of programs and policies by collecting data and analyzing it.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach : Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity : Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability : Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability : Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity : Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency : Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility : Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.

Research Methodology Vs Research Methods

  • Research methodology refers to the philosophical and theoretical frameworks that guide the research process; research methods refer to the techniques and procedures used to collect and analyze data.
  • Methodology is concerned with the underlying principles and assumptions of research; methods are concerned with the practical aspects of research.
  • Methodology provides a rationale for why certain research methods are used; methods determine the specific steps that will be taken to conduct the research.
  • Methodology is broader in scope and involves understanding the overall approach to research; methods are narrower in scope and focus on the specific techniques and tools used in research.
  • Methodology is concerned with identifying research questions, defining the research problem, and formulating hypotheses; methods are concerned with collecting data, analyzing data, and interpreting results.
  • Methodology is concerned with the validity and reliability of research; methods are concerned with the accuracy and precision of data.
  • Methodology is concerned with the ethical considerations of research; methods are concerned with the practical considerations of research.


George Mason University

Quantitative Analysis & Statistics


On this page

  • Structuring a Paper : Learn about IMRAD and what goes in each section.
  • Writing About Data : Understand what people need to know when you write about data.
  • Writing about Results : See examples of reporting tests with statistics in APA style.
  • More APA Style : See other elements of APA style that are relevant. 
  • JARS-Quant: General pdf 3pgs - Outlines of the content expected within each section of a paper.
  • See more instructions for specific designs at Quantitative Research and Mixed methods research
  • See also the advice from Grad Coach: Dissertation Results Chapter / Video (~25 min)


With many examples (including "Poor", "Better", and "Best" versions), this book shows how to understand your data as well as take the perspective of the reader. Learn to explain everything from a single number to the results of multiple logistic regressions in plain words (though not every type of relationship or test is covered). See also her Supplementary Materials  including  Podcasts (Video) presentations of slides.

  • Tips and examples for newer researchers reporting descriptive and basic inferential statistics.
  • A lengthy, tip-filled guide for graduate or advanced students using regressions or other modeling.

Writing About Results

Always be careful using these templates for writing up results. They are specific to the way the data were coded and to the specific research question. You may need to include more, less, or different information in your field or for particular journals. The best model comes from your advisor, a colleague, or another paper in your field.


Organized by test, with additional FAQs, explanations, and commentary, this book is a well-organized compilation with complete examples of reporting each test, including a description of the findings and the test results in APA style. It includes descriptive statistics, reliability, and standard tests up to ANOVA and multiple regression, plus a chapter on tables.

  • With both general guidelines and examples for basic parametric AND non-parametric statistics up to Mixed ANOVA and Multiple linear regression.  
  • See also their SPSS 'How to' Guides with APA write up examples also including screenshots of the setup in SPSS with the same tests but different examples.
  • A worked example with calculations and a complete textual write-up of the displayed results in APA Style. PLUS, step-by-step instructions AND one-page annotated output for SPSS, jamovi, JASP, R (base and EASI package). Most common parametric tests, but NOT Regressions or Chi-Square.
  • 9 15-25 min videos using Jamovi output, but relevant to any software.

If none of the above cover your situation, check the Psychology Resource Archive by University of Nebraska, which has a huge collection of short pdfs on specific analyses, many (but not all) with example write-ups. Also with instructions and output from SPSS.

Using APA Style for Numbers and Statistics

  • APA Style Numbers and Statistics Guide (7th Edition) - avoid common errors (see p. 2).
  • Table Guidelines , including sample tables.
  • Data Set References
  • Common Statistical Abbreviations and Symbols in APA 7th pdf from James Cook University



Commentary: Writing and Evaluating Qualitative Research Reports

Yelena P. Wu

1 Division of Public Health, Department of Family and Preventive Medicine, University of Utah,

2 Cancer Control and Population Sciences, Huntsman Cancer Institute,

Deborah Thompson

3 Department of Pediatrics-Nutrition, USDA/ARS Children’s Nutrition Research Center, Baylor College of Medicine,

Karen J. Aroian

4 College of Nursing, University of Central Florida,

Elizabeth L. McQuaid

5 Department of Psychiatry and Human Behavior, Brown University, and

Janet A. Deatrick

6 School of Nursing, University of Pennsylvania

Objective  To provide an overview of qualitative methods, particularly for reviewers and authors who may be less familiar with qualitative research. Methods  A question and answer format is used to address considerations for writing and evaluating qualitative research. Results and Conclusions  When producing qualitative research, individuals are encouraged to address the qualitative research considerations raised and to explicitly identify the systematic strategies used to ensure rigor in study design and methods, analysis, and presentation of findings. Increasing capacity for review and publication of qualitative research within pediatric psychology will advance the field’s ability to gain a better understanding of the specific needs of pediatric populations, tailor interventions more effectively, and promote optimal health.

The Journal of Pediatric Psychology (JPP) has a long history of emphasizing high-quality, methodologically rigorous research in social and behavioral aspects of children’s health ( Palermo, 2013 , 2014 ). Traditionally, research published in JPP has focused on quantitative methodologies. Qualitative approaches are of interest to pediatric psychologists given the important role of qualitative research in developing new theories ( Kelly & Ganong, 2011 ), illustrating important clinical themes ( Kars, Grypdonck, de Bock, & van Delden, 2015 ), developing new instruments ( Thompson, Bhatt, & Watson, 2013 ), understanding patients’ and families’ perspectives and needs ( Bevans, Gardner, Pajer, Riley, & Forrest, 2013 ; Lyons, Goodwin, McCreanor, & Griffin, 2015 ), and documenting new or rarely examined issues ( Haukeland, Fjermestad, Mossige, & Vatne, 2015 ; Valenzuela et al., 2011 ). Further, these methods are integral to intervention development ( Minges et al., 2015 ; Thompson et al., 2007 ) and understanding intervention outcomes ( de Visser et al., 2015 ; Hess & Straub, 2011 ). For example, when designing an intervention, qualitative research can identify patient and family preferences for and perspectives on desirable intervention characteristics and perceived needs ( Cassidy et al., 2013 ; Hess & Straub, 2011 ; Thompson, 2014 ), which may lead to a more targeted, effective intervention.

Both qualitative and quantitative approaches are concerned with issues such as generalizability of study findings (e.g., to whom the study findings can be applied) and rigor. However, qualitative and quantitative methods have different approaches to these issues. The purpose of qualitative research is to contribute knowledge or understanding by describing phenomena within certain groups or populations of interest. As such, the purpose of qualitative research is not to provide generalizable findings. Instead, qualitative research has a discovery focus and often uses an iterative approach. Thus, qualitative work is often foundational to future qualitative, quantitative, or mixed-methods studies.

At the time of this writing, three of six current calls for papers for special issues of JPP specifically note that manuscripts incorporating qualitative approaches would be welcomed. Despite apparent openness to broadening JPP’s emphasis beyond its traditional quantitative approach, few published articles have used qualitative methods. For example, of 232 research articles published in JPP from 2012 to 2014 (excluding commentaries and reviews), only five used qualitative methods (2% of articles).

The goal of the current article is to present considerations for writing and evaluating qualitative research within the context of pediatric psychology to provide a framework for writing and reviewing manuscripts reporting qualitative findings. The current article may be especially useful to reviewers and authors who are less familiar with qualitative methods. The tenets presented here are grounded in the well-established literature on reporting and evaluating qualitative research, including guidelines and checklists ( Eakin & Mykhalovskiy, 2003 ; Elo et al., 2014 ; Mays & Pope, 2000 ; Tong, Sainsbury, & Craig, 2007 ). For example, the Consolidated Criteria for Reporting Qualitative Research checklist describes essential elements for reporting qualitative findings ( Tong et al., 2007 ). Although the considerations presented in the current manuscript have broad applicability to many fields, examples were purposively selected for the field of pediatric psychology.

Our goal is that this article will stimulate publication of more qualitative research in pediatric psychology and allied fields. More specifically, the goal is to encourage high-quality qualitative research by addressing key issues involved in conducting qualitative studies, and the process of conducting, reporting, and evaluating qualitative findings. Readers interested in more in-depth information on designing and implementing qualitative studies, relevant theoretical frameworks and approaches, and analytic approaches are referred to the well-developed literature in this area ( Clark, 2003 ; Corbin & Strauss, 2008 ; Creswell, 1994 ; Eakin & Mykhalovskiy, 2003 ; Elo et al., 2014 ; Mays & Pope, 2000 ; Miles, Huberman, & Saldaña, 2013 ; Ritchie & Lewis, 2003 ; Saldaña, 2012 ; Sandelowski, 1995 , 2010 ; Tong et al., 2007 ; Yin, 2015 ). Researchers new to qualitative research are also encouraged to obtain specialized training in qualitative methods and/or to collaborate with a qualitative expert in an effort to ensure rigor (i.e., validity).

We begin the article with a definition of qualitative research and an overview of the concept of rigor. While we recognize that qualitative methods comprise multiple and distinct approaches with unique purposes, we present an overview of considerations for writing and evaluating qualitative research that cut across qualitative methods. Specifically, we present basic principles in three broad areas: (1) study design and methods, (2) analytic considerations, and (3) presentation of findings (see Table 1 for a summary of the principles addressed in each area). Each area is addressed using a “question and answer” format. We present a brief explanation of each question, options for how one could address the issue raised, and a suggested recommendation. We recognize, however, that there are no absolute “right” or “wrong” answers and that the most “right” answer for each situation depends on the specific study and its purpose. In fact, our strongest recommendation is that authors of qualitative research manuscripts be explicit about their rationale for design, analytic choices, and strategies so that readers and reviewers can evaluate the rationale and rigor of the study methods.

Summary of Overarching Principles to Address in Qualitative Research Manuscripts

1. Research question identification
 a. Describe a clear and feasible research question that focuses on discovery or exploration
 b. Hypotheses: Avoid providing hypotheses
2. Rigor and transparency
 a. Rigor: Describe how rigor (e.g., credibility, dependability, confirmability, transferability) was documented throughout the research process
 b. Transparency: Clearly articulate study procedures and data analysis strategies
3. Study design and methods
 a. Theory: Describe how theory informed the study, including research question, design, analysis, and/or interpretation
 i. Use methodological congruence as a guiding principle
 ii. If divergence from theory occurs, explain and justify how and why theory was modified
 b. Sampling and sample size: Following the concept of transferability, clearly describe sample selection methods and sample descriptive characteristics, and provide evidence of data saturation and depth of categories
 c. Describe any changes to data collection methods made over the course of the study (e.g., modifications to interview guide)
4. Data analysis
 a. Implement, document, and describe a systematic analytic process (e.g., use of code book, development of codes—a priori codes, emergent codes, how codes were collapsed, methods used for coding, memos, coding process)
 b. Coding reliability: Provide information on who comprised the coding team (if multiple coders were used), and coding training and process, with emphasis on systematic methods, including strategies for resolving differences between coders (a brief agreement-check sketch follows this table)
 c. Method of organizing data (e.g., computer software, manually): Describe how data were organized. If qualitative computer software was used, provide name and version number of software used.
5. Presentation of findings
 a. Results and discussion: Provide summaries and interpretations of the data (e.g., themes, conceptual models) and select illustrative quotes. Present the findings in the context of the relevant literature.
 b. Quantification of results: Consider whether quantification of findings is appropriate. If quantification is used, provide justification for its use.
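One common (though not universally required) way to document agreement between coders, as mentioned in point 4b of the table above, is Cohen's kappa. The minimal sketch below uses invented code assignments for ten shared excerpts; it is offered only as one possible check, not as a requirement of the approach described in this article.

```python
from sklearn.metrics import cohen_kappa_score

# Codes assigned independently by two coders to the same ten excerpts.
coder_1 = ["barrier", "facilitator", "barrier", "emotion", "barrier",
           "facilitator", "emotion", "barrier", "facilitator", "emotion"]
coder_2 = ["barrier", "facilitator", "emotion", "emotion", "barrier",
           "facilitator", "emotion", "barrier", "barrier", "emotion"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.2f}")
# Disagreements (here, excerpts 3 and 9) would then be discussed and resolved.
```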

What Is Qualitative Research?

Qualitative methods are used across many areas of health research, including health psychology ( Gough & Deatrick, 2015 ), to study the meaning of people’s lives in their real-world roles, represent their views and perspectives, identify important contextual conditions, discover new or additional insights about existing social and behavioral concepts, and acknowledge the contribution of multiple perspectives ( Yin, 2015 ). Qualitative research is a family of approaches rather than a single approach. There are multiple and distinct qualitative methodologies or stances (e.g., constructivism, post-positivism, critical theory), each with different underlying ontological and epistemological assumptions ( Lincoln, Lynham, & Guba, 2011 ). However, certain features are common to most qualitative approaches and distinguish qualitative research from quantitative research ( Creswell, 1994 ).

Key to all qualitative methodologies is that multiple perspectives about a phenomenon of interest are essential, and that those perspectives are best inductively derived or discovered from people with personal experience regarding that phenomenon. These perspectives or definitions may differ from “conventional wisdom.” Thus, meanings need to be discovered from the population under study to ensure optimal understanding. For instance, in a recent qualitative study about texting while driving, adolescents said that they did not approve of texting while driving. The investigators, however, discovered that the respondents did not consider themselves driving while a vehicle was stopped at a red light. In other words, the respondents did approve of texting while stopped at a red light. In addition, the adolescents said that they highly valued being constantly connected via texting. Thus, what is meant by “driving” and the value of “being connected” need to be considered when approaching the issue of texting while driving with adolescents ( McDonald & Sommers, 2015 ).

Qualitative methods are also distinct from a mixed-method approach (i.e., integration of qualitative and quantitative approaches; Creswell, 2013b ). A mixed-methods study may include a first phase of quantitative data collection that provides results that inform a second phase of the study that includes qualitative data collection, or vice versa. A mixed-methods study may also include concurrent quantitative and qualitative data collection. The timing, priority, and stage of integration of the two approaches (quantitative and qualitative) are complex and vary depending on the research question; they also dictate how to attend to differing qualitative and quantitative principles ( Creswell et al., 2011 ). Understanding the basic tenets of qualitative research is preliminary to integrating qualitative research with another approach that has different tenets. A full discussion of the integration of qualitative and quantitative research approaches is beyond the scope of this article. Readers interested in the topic are referred to one of the many excellent resources on the topic ( Creswell, 2013b ).

What Are Typical Qualitative Research Questions?

Qualitative research questions are typically open-ended and are framed in the spirit of discovery and exploration and to address existing knowledge gaps. The current manuscript provides exemplar pediatric qualitative studies that illustrate key issues that arise when reporting and evaluating qualitative studies. Example research questions that are contained in the studies cited in the current manuscript are presented in Table 2 .

Example Qualitative Research Questions From the Pediatric Literature

Study purpose or research question
“How do parents who no longer live together make treatment decisions for their children with cancer?”
“(a) How parents gained insight into their child’s perspective [when the child had incurable cancer]; (b) to elucidate the parental diversity in acknowledging the ‘voice of the child’; and (c) to gain insight into the factors that underlie the diversity in the parents’ ability to take into account their child’s perspective.”
Instrument development: “The [PROMIS Pediatric Stress] instruments were developed successively with guidance from developmental, cultural, and linguistic experts and based on input from an international group of youth…This article describes the qualitative development of the PROMIS Pediatric Stress Response item banks.”
“The study objective was to explore the emotional experiences of siblings as expressed by participants during group sessions, and to identify relevant themes for interventions targeted at siblings [of children with rare disorders].”
“We describe here the development and components of a pilot school-based health care transition education program implemented in 2005 in a large urban county in central Florida. We then present [qualitative] data on program acceptability (report of relevance and satisfaction) and feasibility (ease of implementation, integration, and expansion).”
“What are the various components of a successful health care transition for adolescents and young adults with Type 1 Diabetes?”

What Are Rigor and Transparency in Qualitative Research?

There are several overarching principles with unique application in qualitative research, including definitions of scientific rigor and the importance of transparency. Quantitative research generally uses the terms reliability and validity to describe the rigor of research, while in qualitative research, rigor refers to the goal of seeking to understand the tacit knowledge of participants’ conception of reality ( Polanyi, 1958 ). For example, Haukeland and colleagues (2015) used qualitative analysis to identify themes describing the emotional experiences of a unique and understudied population—pediatric siblings of children with rare medical conditions such as Turner syndrome and Duchenne muscular dystrophy. Within this context, the authors’ rendering of the diverse and contradictory emotions experienced by siblings of children with these rare conditions represents “rigor” within a qualitative framework.

While debate exists regarding the terminology for, and strategies to strengthen, scientific rigor in qualitative studies (Guba, 1981; Morse, 2015a, 2015b; Sandelowski, 1993a; Whittemore, Chase, & Mandle, 2001), there is little debate about the importance of explaining the strategies used to strengthen rigor. Such strategies should be appropriate for the specific study, so it is wise to clearly describe what is relevant for each one. For example, in terms of strengthening credibility, or the plausibility of data analysis and interpretation, prolonged engagement with participants is appropriate when conducting an observational study (e.g., observations of parent–child mealtime interactions; Hughes et al., 2011; Power et al., 2015). For an interview-only study, however, it would be more practical to strengthen credibility through other strategies (e.g., keeping detailed field notes about the interviews included in the analysis).

Dependability is the stability of a data analysis protocol. For instance, stepwise development of a coding system from an a priori list of codes based on the underlying conceptual framework or existing literature (e.g., creating initial codes for potential barriers to medication adherence based on prior studies) may be essential for analysis of data from semi-structured interviews using multiple coders. But this may not be the ideal strategy if the purpose is to inductively derive all possible coding categories directly from data in an area where little is known.

For some research questions, the strategy may be to strengthen confirmability, or to verify a specific phenomenon of interest using different sources of data before generating conclusions. This process, commonly referred to as triangulation, may also include collecting different types of data (e.g., interview data, observational data), using multiple coders to incorporate different ways of interpreting the data, or using multiple theories (Krefting, 1991; Ritchie & Lewis, 2003). Alternatively, an investigator may use triangulation to provide complementary data (Krefting, 1991), garnering additional information to deepen understanding. Because the purpose of qualitative research is to discover multiple perspectives about a phenomenon, it is not necessarily appropriate to expect concordance across studies or investigators who independently analyze data. Some qualitative experts also believe that it is inappropriate to use triangulation to confirm findings, a debate that has not been resolved within the field (Ritchie & Lewis, 2003; Tobin & Begley, 2004). More agreement exists, however, regarding the value of triangulation to complement, deepen, or expand understanding of a particular topic or issue (Ritchie & Lewis, 2003).

Finally, instead of basing a study on a sample that allows statistical results to be generalized to other populations, qualitative investigators focus on designing the study and conveying the results so that the reader can judge the transferability of the findings. Strategies for transferability may include explaining how the sample was selected and describing the characteristics of study participants, which provides a context for the results and enables readers to decide whether other samples share critical attributes. A study is deemed transferable if relevant contextual features are common to both the study sample and the larger population.

Strategies to enhance rigor should be used systematically across each phase of a study. That is, rigor needs to be identified, managed, and documented throughout the research process: during the preparation phase (data collection and sampling), organization phase (analysis and interpretation), and reporting phase (manuscript or final report; Elo et al., 2014 ). From this perspective, the strategies help strengthen the trustworthiness of the overall study (i.e., to what extent the study findings are worth heeding; Eakin & Mykhalovskiy, 2003 ; Lincoln & Guba, 1985 ).

A good example of managing and documenting rigor and trustworthiness can be found in a study of family treatment decisions for children with cancer (Kelly & Ganong, 2011). The researchers describe how they promoted the rigor of the study and strengthened its credibility by triangulating data sources (e.g., obtaining data from children’s custodial parents, stepparents, etc.), debriefing (e.g., holding detailed conversations with colleagues about the data and interpretations of the data), member checking (i.e., presenting preliminary findings to participants to obtain their feedback and interpretation), and reviewing study procedure decisions and analytic procedures with a second party.

Transparency is another key concept in written reports of qualitative research: enough detail should be provided for the reader to understand what was done and why (Ritchie & Lewis, 2003). Examples of information that should be included are a clear rationale for selecting a particular population or people with certain characteristics, the research question being investigated, and a meaningful explanation of why this research question was selected (i.e., the gap in knowledge or understanding being investigated; Ritchie & Lewis, 2003). Clearly describing recruitment, enrollment, data collection, and data analysis or extraction methods is equally important (Dixon-Woods, Shaw, Agarwal, & Smith, 2004). Coherence among methods and transparency about research decisions add to the robustness of qualitative research (Tobin & Begley, 2004) and provide a context for understanding the findings and their implications.

Study Design and Methods

Is Qualitative Research Hypothesis Driven?

In contrast to quantitative research, qualitative research is not typically hypothesis driven (Creswell, 1994; Ritchie & Lewis, 2003). A risk of using hypotheses in qualitative research is that the findings could be biased by them. Instead, qualitative research is exploratory and typically guided by a research question or conceptual framework rather than by hypotheses (Creswell, 1994; Ritchie & Lewis, 2003). As previously stated, the goal of qualitative research is to increase understanding in areas where little is known by developing deeper insight into complex situations or processes. According to Richards and Morse (2013), “If you know what you are likely to find, …  you should not be working qualitatively” (p. 28). Thus, we do not recommend stating a hypothesis in manuscripts presenting qualitative data.

What Is the Role of Theory in Qualitative Research?

Consistent with the exploratory nature of qualitative research, one particular qualitative method, grounded theory, is used specifically for discovering substantive theory (i.e., working theories of action or processes developed for a specific area of concern; Bryant & Charmaz, 2010; Glaser & Strauss, 1967). This method uses a series of structured steps to break down qualitative data into codes, organize the codes into conceptual categories, and link the categories into a theory that explains the phenomenon under study. For example, Kelly and Ganong (2011) used grounded theory methods to produce a substantive theory about how single and re-partnered parents (e.g., households with a step-parent) made treatment decisions for their children with cancer. The theory of decision making developed in this study included “moving to place,” which described the ways in which parents from different family structures (e.g., single and re-partnered parents) were involved in the child’s treatment decision-making. The resulting theory also delineated the causal conditions, context, and intervening factors that contributed to the strategies used for moving to place.

Theories may be used in other types of qualitative research as well, serving as the impetus or organizing framework for the study ( Sandelowski, 1993b ). For example, Izaguirre and Keefer (2014) used Social Cognitive Theory ( Bandura, 1986 ) to investigate self-efficacy among adolescents with inflammatory bowel disease. The impetus for selecting the theory was to inform the development of a self-efficacy measure for adolescent self-management. In another study on health care transition in youth with Type 1 Diabetes ( Pierce, Wysocki, & Aroian, 2016 ), the investigators adapted a social-ecological model—the Socio-ecological Model of Adolescent and Young Adult Transition Readiness (SMART) model ( Schwartz, Tuchman, Hobbie, & Ginsberg, 2011 )—to their study population ( Pierce & Wysocki, 2015 ). Pierce et al. (2016) are currently using the adapted SMART model to focus their data collection and structure the preliminary analysis of their data about diabetes health care transition.

Regardless of whether theory is induced from data or selected in advance to guide the study, consistent with the principle of transparency , its role should be clearly identified and justified in the research publication ( Bradbury-Jones, Taylor, & Herber, 2014 ; Kelly, 2010 ). Methodological congruence is an important guiding principle in this regard ( Richards & Morse, 2013 ). If a theory frames the study at the outset, it should guide and direct all phases. The resulting publication(s) should relate the phenomenon of interest and the research question(s) to the theory and specify how the theory guided data collection and analysis. The publication(s) should also discuss how the theory fits with the finished product. For instance, authors should describe how the theory provided a framework for the presentation of the findings and discuss the findings in context with the relevant theoretical literature.

A study examining parents’ motivations to promote vegetable consumption in their children ( Hingle et al., 2012 ) provides an example of methodological congruence. The investigators adapted the Model of Goal Directed Behavior ( Bagozzi & Pieters, 1998 ) for parenting practices relevant to vegetable consumption (Model of Goal Directed Vegetable Parenting Practices; MGDVPP). Consistent with the adapted theoretical model and in keeping with the congruence principle, interviews were guided by the theoretical constructs contained within the MGDVPP, including parents’ attitudes, subjective norms, and perceived behavioral control related to promoting vegetable consumption in children ( Hingle et al., 2012 ). The study discovered that the adapted model successfully identified parents’ motivations to encourage their children to eat more vegetables.

The use of the theory should be consistent with the basic goal of qualitative research, which is discovery. Alternatively stated, theories should be used as broad orienting frameworks for exploring topical areas without imposing preconceived ideas and biases. The theory should be consistent with the study findings and not be used to force-fit the researcher’s interpretation of the data ( Sandelowski, 1993b ). Divergence from the theory when it does not fit the study findings is illustrated in a qualitative study of hypertension prevention beliefs in Hispanics ( Aroian, Peters, Rudner, & Waser, 2012 ). This study used the Theory of Planned Behavior as a guiding theoretical framework but found that coding separately for normative and control beliefs was not the best organizing schema for presenting the study findings. When divergence from the original theory occurs, the research report should explain and justify how and why the theory was modified ( Bradbury-Jones et al., 2014 ).

What Are Typical Sampling Methods in Qualitative Studies?

Qualitative sampling methods should be “purposeful” ( Coyne, 1997 ; Patton, 2015 ; Tuckett, 2004 ). Purposeful sampling is based on the study purpose and investigator judgments about which people and settings will provide the richest information for the research questions. The logic underlying this type of sampling differs from the logic underlying quantitative sampling ( Patton, 2015 ). Quantitative research strives for empirical generalization. In qualitative studies, generalizability beyond the study sample is typically not the intent; rather, the focus is on deriving depth and context-embedded meaning for the relevant study population.

Purposeful sampling is a broad term. Theoretical sampling is one particular type of purposeful sampling unique to grounded theory methods (Coyne, 1997). In theoretical sampling, study participants are chosen according to theoretical categories that emerge from ongoing data collection and analyses (Bryant & Charmaz, 2010). Data collection and analysis are conducted concurrently so that hypotheses emerging from incoming data can be generated and tested. The following example from the previously mentioned qualitative interview study about transition from pediatric to adult care in adolescents with type 1 diabetes (Pierce et al., 2016) illustrates the process of theoretical sampling: An adolescent study participant stated that he was “turned off” by the “childish” posters in his pediatrician’s office. He elaborated that he welcomed transitioning to adult care because his diabetes was discovered when he was 18, an age at which he reportedly felt more “mature” than most pediatric patients. These data were coded as “developmental misfit” and prompted a tentative hypothesis about developmental stage at entry into pediatric diabetes care and readiness for health care transition. Pursuing this hypothesis led the researchers to seek participants who varied in age or developmental stage at the time of diagnosis, in order to examine the theoretical relevance of an emerging theme about developmental fit.

Not all purposeful sampling, however, is “theoretical.” For example, ethnographic studies typically seek to understand a group’s cultural beliefs and practices ( Creswell, 2013a ). Consistent with this purpose, researchers conducting an ethnographic study might purposefully select study participants according to specific characteristics that reflect the social roles and positions in a given group or society (e.g., socioeconomic status, education; Johnson, 1990 ).

Random sampling is generally not used in qualitative research. Random selection requires a sufficiently large sample for chance to yield a representative sample and, as discussed below, sample sizes are intentionally small in qualitative studies. However, random sampling may be used to verify or clarify findings (Patton, 2015). Validating study findings with a randomly selected subsample can address the possibility that a researcher is inadvertently giving greater attention to cases that reinforce his or her preconceived ideas.

Regardless of the sampling method used, qualitative researchers should clearly describe the sampling strategy and justify how it fits the study when reporting study findings (transparency). A common error is to refer to theoretical sampling when the cases were not chosen according to emerging theoretical concepts. Another common error is to apply sampling principles from quantitative research (e.g., cluster sampling) to convince skeptical reviewers of the rigor or validity of qualitative research. Rigor is best achieved by being purposeful, making sound decisions, and articulating the rationale for those decisions. As mentioned earlier in the discussion of transferability, qualitative researchers are encouraged to describe their methods of sample selection and the descriptive characteristics of their sample so that readers and reviewers can judge how the current sample may differ from others. Understanding the characteristics of each qualitative study sample is essential to the iterative nature of qualitative research, whereby qualitative findings inform the development of future qualitative, quantitative, or mixed-methods studies. Reviewers should evaluate sampling decisions based on how they fit the study purpose and how they influence the quality of the end product.

What Sample Size Is Needed for Qualitative Research?

No definitive rules exist about sample size in qualitative research. However, sample sizes are typically smaller than those in quantitative studies ( Patton, 2015 ). Small samples often generate a large volume of data and information-rich cases, ultimately leading to insight regarding the phenomenon under study ( Patton, 2015 ; Ritchie & Lewis, 2003 ). Sample sizes of 20–30 cases are typical, but a qualitative sample can be even smaller under some circumstances ( Mason, 2010 ).

Sample size adequacy is evaluated based on the quality of the study findings, specifically the full development of categories and inter-relationships or the adequacy of information about the phenomenon under study ( Corbin & Strauss, 2008 ; Ritchie & Lewis, 2003 ). Small sample sizes are of concern if they do not result in these outcomes. Data saturation (i.e., the point at which no new information, categories, or themes emerge) is often used to judge informational adequacy ( Morgan, 1998 ; Ritchie & Lewis, 2003 ). Although enough participants should be included to obtain saturation ( Morgan, 1998 ), informational adequacy pertains to more than sample size. It is also a function of the quality of the data, which is influenced by study participant characteristics (e.g., cognitive ability, knowledge, representativeness) and the researcher’s data-gathering skills and analytical ability to generate meaningful findings ( Morse, 2015b ; Patton, 2015 ).
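Saturation judgments are ultimately interpretive, but it can help to document when new codes stop appearing as interviews accumulate. Below is a minimal Python sketch of that bookkeeping; the interviews and code labels are hypothetical and are not drawn from any study cited here.

```python
# Sketch: record how many previously unseen codes each successive interview adds.
# All interview contents and code labels below are invented for illustration.

coded_interviews = [
    {"cost", "stigma", "side_effects"},        # interview 1
    {"cost", "forgetting", "side_effects"},    # interview 2
    {"stigma", "family_support"},              # interview 3
    {"cost", "family_support"},                # interview 4
    {"forgetting", "stigma"},                  # interview 5 (no new codes)
]

seen = set()
for i, codes in enumerate(coded_interviews, start=1):
    new_codes = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new_codes)} new code(s): {sorted(new_codes)}")

# A run of interviews that contribute no new codes is one signal that
# informational adequacy may have been reached; as noted above, it does not
# replace judgments about data quality, participant characteristics, or the
# researcher's analytic skill.
```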

Sample size is also influenced by type of qualitative research, the study purpose, the sample, the depth and complexity of the topic investigated, and the method of data collection. In general, the more heterogeneous the sample, the larger the sample size, particularly if the goal is to investigate similarities and differences by specific characteristics ( Ritchie & Lewis, 2003 ). For instance, in a study to conduct an initial exploration of factors underlying parents’ motivations to use good parenting practices, theoretical saturation (i.e., the point at which no new information, categories, or themes emerge) was obtained with a small sample ( n  = 15), most likely because the study was limited to parents of young children ( Hingle et al., 2012 ). If the goal of the study had been, for example, to identify racial/ethnic, gender, or age differences in food parenting practices, a larger sample would likely be needed to obtain saturation or informational adequacy.

Studies that seek to understand maximum variation in a phenomenon might also need a larger sample than studies seeking to understand extreme or atypical cases. For example, a qualitative study of diet and physical activity in young Australian men conducted focus groups to identify perceived motivators and barriers to healthy eating and physical activity and to examine the influence of body weight on their perceptions. Examining the influence of body weight status required 10 focus groups to allow for group assignment based on body mass index (Ashton et al., 2015). More specifically, 61 men were assigned to healthy-weight focus groups (n = 3 groups), overweight/obese focus groups (n = 3 groups), or mixed-weight focus groups (n = 4 groups). Had the researchers not been interested in whether facilitators and barriers differed by weight status, it is likely that theoretical saturation could have been obtained with fewer groups. Depth of inquiry also influences sample size (Sandelowski, 1995). For instance, an in-depth analysis of an intervention for children with cancer and their families included 16 family members from three families. Study data comprised 52 hours of videotaped intervention sessions and 10 interviews (West, Bell, Woodgate, & Moules, 2015). Depth was obtained through multiple data points and types of data, which justified sampling only a few families.

Authors of publications describing qualitative findings should show evidence that the data were “saturated” by a sample with sufficient variation to permit detailing shared and divergent perspectives, meanings, or experiences about the topic of inquiry. Decisions related to the sample (e.g., targeted recruitment) should be detailed in publications so that peer reviewers have the context for evaluating the sample and determining how the sample influenced the study findings ( Patton, 2015 ).

Qualitative Data Analysis

When conducting qualitative research, voluminous amounts of data are gathered and must be prepared (i.e., transcribed) and managed. During the analytic process, the data are systematically transformed by identifying, defining, interpreting, and describing findings that comprehensively capture the phenomenon or the abstract qualities the data have in common. The process should be systematic (dependability) and well documented in the analysis section of a qualitative manuscript. For example, Kelly and Ganong (2011), in their study of medical treatment decisions made by families of children with cancer, described their analytic procedure by outlining their approach to coding and their use of memoing (e.g., keeping careful notes about emerging ideas about the data throughout the analytic process), comparative analysis (e.g., comparing data against one another and looking for similarities and differences), and diagram drawing (e.g., pictorially representing the data structure, including relationships between codes).

How Should Researchers Document Coding Reliability?

Because the intent of qualitative research is to account for multiple perspectives, the goal of qualitative analysis is to comprehensively incorporate those perspectives into discernible findings. Researchers accustomed to quantitative studies may expect authors to quantify interrater reliability (e.g., a kappa statistic), but this is not typical in qualitative research. Rather, the emphasis is on (1) training those gathering data to be rigorous and produce high-quality data and (2) using systematic processes to document key decisions (e.g., a code book), provide clear direction, and maintain open communication among team members during data analysis. The goal is to make the most of the collective insight of the investigative team to triangulate or complement each other’s efforts to process and interpret the data. Instead of evaluating whether two independent raters came to the same numeric rating, reviewers of qualitative manuscripts should judge the extent to which the overall process of coding, data management, and data interpretation was systematic and rigorous. Authors of qualitative reports should articulate their coding procedures so that others can evaluate them. Together, these strategies promote the trustworthiness of the study findings.

An example of how these processes are described in the report of a qualitative study is as follows:

The first two authors independently applied the categories to a sample of two interviews and compared their application of the categories to identify lack of clarity and overlap in categories. The investigators created a code book that contained a definition of categories, guidelines for their application, and excerpts of data exemplifying the categories. The first two authors independently coded the data and compared how they applied the categories to the data and resolved any differences during biweekly meetings. ATLAS.ti, version 6.2, was used to document and accommodate ongoing changes and additions to the coding structure ( Palma et al., 2015 , p. 224).
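To make the kind of process Palma et al. describe more concrete, here is a minimal sketch of a code book and an intercoder comparison. The categories, definitions, excerpt IDs, and coder assignments are hypothetical illustrations, not material from the cited study or from any particular software package.

```python
# Sketch: a simple code book plus a comparison of two coders' category
# assignments, flagging disagreements to resolve at a coding meeting.
# All categories, definitions, and excerpt IDs are hypothetical.

code_book = {
    "caregiving_demand": {
        "definition": "References to the tasks or burdens of caring for the child.",
        "guideline": "Apply only when the parent describes their own workload.",
        "exemplar": "I drive him to every appointment and manage all the meds.",
    },
    "uncertainty": {
        "definition": "Expressions of not knowing what the future holds.",
        "guideline": "Apply to both medical and non-medical uncertainty.",
        "exemplar": "We never know what the next scan will show.",
    },
}

coder_a = {"excerpt_01": "caregiving_demand",
           "excerpt_02": "uncertainty",
           "excerpt_03": "uncertainty"}
coder_b = {"excerpt_01": "caregiving_demand",
           "excerpt_02": "caregiving_demand",
           "excerpt_03": "uncertainty"}

# Excerpts on which the coders disagree become agenda items for discussion,
# rather than inputs to a reliability coefficient.
disagreements = [eid for eid in coder_a if coder_a[eid] != coder_b.get(eid)]
print("Excerpts to discuss at the next coding meeting:", disagreements)
```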

Do I Need to Use a Specialized Qualitative Data Software Program for Analysis?

Multiple computer software packages for qualitative data analysis are currently available ( Silver & Lewins, 2014 ; Yin, 2015 ). These packages allow the researcher to import qualitative data (e.g., interview transcripts) into the software program and organize data segments (e.g., delineate which interview excerpts are relevant to particular themes). Qualitative analysis software can be useful for organizing and sorting through data, including during the analysis phase. Some software programs also offer sophisticated coding and visualization capabilities that facilitate and enhance interpretation and understanding. For example, if data segments are coded by specific characteristics (e.g., gender, race/ethnicity), the data can be sorted and analyzed by these characteristics, which may contribute to an understanding of whether and/or how a particular phenomenon may vary by these characteristics.

The strength of computer software packages for qualitative data analysis is their potential to contribute to methodological rigor by organizing the data for systematic analyses (John & Johnson, 2000; MacMillan & Koenig, 2004). However, the programs do not replace the researchers’ analyses. The researcher or research team is ultimately responsible for analyzing the data, identifying the themes and patterns, and placing the findings within the context of the literature. In other words, qualitative data analysis software contributes to, but does not ensure, scientific rigor or “objectivity” in the analytic process. In fact, using a software program for analysis is not essential if the researcher demonstrates the use of alternative tools and procedures for rigor.
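Dedicated QDA packages handle the sorting described above interactively, but the underlying idea, coded segments that carry participant attributes and can be grouped and compared, is simple. The sketch below uses invented data and is not tied to any particular software product.

```python
# Sketch: group coded interview excerpts by a participant characteristic to see
# whether a theme surfaces differently across subgroups. All data are invented.
from collections import defaultdict

segments = [
    {"participant": "P01", "gender": "female", "theme": "barriers",
     "text": "There is never time after my shift ends."},
    {"participant": "P02", "gender": "male", "theme": "barriers",
     "text": "The clinic is too far from where I work."},
    {"participant": "P03", "gender": "female", "theme": "facilitators",
     "text": "My sister reminds me every morning."},
]

# Index participants by (theme, characteristic) so the analyst can pull up the
# corresponding excerpts for each subgroup.
by_group = defaultdict(list)
for seg in segments:
    by_group[(seg["theme"], seg["gender"])].append(seg["participant"])

for (theme, gender), participants in sorted(by_group.items()):
    print(f"{theme} / {gender}: {participants}")
```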

Presentation of Findings

Should There Be Overlap Between the Presentation of Themes in the Results and Discussion Sections?

Qualitative papers sometimes combine results and discussion into one section to provide a cohesive presentation of the findings along with meaningful linkages to the existing literature ( Burnard, 2004 ; Burnard, Gill, Stewart, Treasure, & Chadwick, 2008 ). Although doing so is an acceptable method for reporting qualitative findings, some journals prefer the two sections to be distinct.

When the journal style is to distinguish the two sections, the results section should describe the findings (i.e., the themes), while the discussion section should pull the themes together to make larger-level conclusions and place the findings within the context of the existing literature. For instance, in a study of how rural African American adolescents, parents, and community leaders perceived obesity and topics for a proposed obesity prevention program, the findings section contained a description of themes about adolescent eating patterns, body shape, and feedback on the proposed weight gain prevention program according to each subset of participants (i.e., adolescents, parents, community leaders). The discussion section then placed these themes within the context of findings from prior qualitative and intervention studies in related populations (Cassidy et al., 2013). When making linkages to the existing literature in the discussion, it is important to avoid the temptation to extrapolate beyond the findings or to over-interpret them (Burnard, 2004). Linkages between the findings and the existing literature should be supported by ample evidence to avoid spurious or misleading connections (Burnard, 2004).

What Should I Include in the Results Section?

The results section of a qualitative research report is likely to contain more material than is customary in quantitative research reports. Findings in a qualitative research paper typically include researcher interpretations of the data as well as data exemplars and the logic that led to those interpretations (Sandelowski & Barroso, 2002). Interpretation refers to the researcher breaking down and recombining the data to create new meanings (e.g., abstract categories, themes, conceptual models). Select quotes from interviews or other types of data (e.g., participant observation, focus groups) are presented to illustrate or support researcher interpretations. Researchers trained in the quantitative tradition, where interpretation is restricted to the discussion section, may find this surprising; however, in qualitative methods, researcher interpretations represent an important component of the study results. The presentation of the findings, including researcher interpretations (e.g., themes) and the data (e.g., quotes) supporting those interpretations, adds to the trustworthiness of the study (Elo et al., 2014).

The Results section should contain a balance between data illustrations (i.e., quotes) and researcher interpretations ( Lofland & Lofland, 2006 ; Sandelowski, 1998 ). Because interpretation arises out of the data, description and interpretation should be combined. Description should be sufficient to support researcher interpretations, and quotes should be used judiciously ( Morrow, 2005 ; Sandelowski, 1994 ). Not every theme needs to be supported by multiple quotes. Rather, quotes should be carefully selected to provide “voice” to the participants and to help the reader understand the phenomenon from the participant’s perspective within the context of the researcher’s interpretation ( Morrow, 2005 ; Ritchie & Lewis, 2003 ). For example, researchers who developed a grounded theory of sexual risk behavior of urban American Indian adolescent girls identified desire for better opportunities as a key deterrent to neighborhood norms for early sexual activity. They illustrated this theme with the following quote: “I don’t want to live in the ‘hood and all that…My sisters are stuck there because they had babies. That isn’t going to happen to me” ( Saftner, Martyn, Momper, Loveland-Cherry, & Low, 2015 , p. 372).

There is no precise formula for the proportion of description to interpretation. Both descriptive and analytic excess should be avoided (Lofland & Lofland, 2006). The former refers to presenting unedited field notes or interview transcripts rather than selecting and connecting data to analytic concepts that explain or summarize them. The latter refers to focusing on the mechanics of analysis and interpretation without substantiating researcher interpretations with quotes. Reviewer requests for methodological rigor can result in researchers writing qualitative research papers that suffer from analytic excess (Sandelowski & Barroso, 2002). The page limitations of most journals provide a safeguard against descriptive excess, but they should not prevent researchers from providing the basis for their interpretations.

Additional potential problems with qualitative results sections include under-elaboration, where themes are too few and not clearly defined. The opposite problem, over-elaboration, pertains to too many analytic distinctions that could be collapsed under a higher level of abstraction. Quotes can also be under- or over-interpreted. Care should be taken to ensure the quote(s) selected clearly support the theme to which they are attached. And finally, findings from a qualitative study should be interesting and make clear contributions to the literature ( Lofland & Lofland, 2006 ; Morse, 2015b ).

Should I Quantify My Results? (e.g., Frequency With Which Themes Were Endorsed)

There is controversy over whether to quantify qualitative findings, such as providing counts of the frequency with which particular themes are endorsed by study participants ( Morgan, 1993 ; Sandelowski, 2001 ). Qualitative papers usually report themes and patterns that emerge from the data without quantification ( Dey, 1993 ). However, it is possible to quantify qualitative findings, as in qualitative content analysis. Qualitative content analysis is a method through which a researcher identifies the frequency with which a phenomenon, such as specific words, phrases, or concepts, is mentioned ( Elo et al., 2014 ; Morgan, 1993 ). Although this method may appeal to quantitative reviewers, it fits only specific study purposes, such as studies that investigate the language used by a particular group when communicating about a specific topic. In addition, results may be quantified to indicate whether themes appeared to be common or atypical; when doing so, authors should avoid imprecise language such as “some participants” or “many participants.” A good example of quantifying results to illustrate more or less typical themes comes from a manuscript describing a qualitative study of school nurses’ perceived barriers to addressing obesity with students and their families. The authors reported that all but one nurse described not having the resources they needed to discuss weight with students and families, whereas one-quarter of the nurses reported not feeling competent to discuss weight issues ( Steele et al., 2011 ). If quantification of findings is used, authors should provide justification that explains how quantification is consistent with the aims or goals of the study ( Sandelowski, 2001 ).
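When quantification is justified, the counting itself is simple; the harder part is reporting it precisely (e.g., “all but one nurse” rather than “most nurses”). Here is a minimal sketch with invented participants and theme labels, not data from the cited study.

```python
# Sketch: count how many participants endorsed each theme so that reports can
# use precise language ("all but one participant") instead of vague terms.
# Participant IDs and theme labels are hypothetical.
from collections import Counter

endorsements = {
    "N01": {"lack_of_resources", "low_confidence"},
    "N02": {"lack_of_resources"},
    "N03": {"lack_of_resources", "time_constraints"},
    "N04": {"lack_of_resources", "time_constraints"},
}

counts = Counter(theme for themes in endorsements.values() for theme in themes)
total = len(endorsements)
for theme, n in counts.most_common():
    print(f"{theme}: endorsed by {n} of {total} participants")
```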

Conclusions

This article highlighted key theoretical and logistical considerations that arise in designing, conducting, and reporting qualitative research studies (see Table 1 for a summary). This type of research is vital for obtaining patient, family, community, and other stakeholder perspectives about their needs and interests, and it will become increasingly critical as our models of health care delivery evolve. For example, qualitative research could contribute to the study of health care providers and systems with the goal of optimizing our health care delivery models. Given the increasing diversity of the populations we serve, qualitative research will also be critical in providing guidance on how to tailor health interventions to key characteristics and increase the likelihood of acceptable, effective treatment approaches. For example, applying qualitative research methods could enhance our understanding of refugee experiences in our health care system, clarify treatment preferences for emerging adults in the midst of health care transitions, examine satisfaction with health care delivery, and evaluate the applicability of our theoretical models of health behavior change across racial and ethnic groups. Incorporating patient perspectives into treatment is essential to meeting this nation’s priority on patient-centered health care ( Institute of Medicine Committee on Quality of Health Care in America, 2001 ). Authors of qualitative studies who attend to the methodological choices addressed in this review will make important contributions to the field of pediatric psychology. Qualitative findings will lead to a more informed field that addresses the needs of a wide range of patient populations and produces effective and acceptable population-specific interventions to promote health.

Acknowledgments

The authors thank Bridget Grahmann for her assistance with manuscript preparation.

This work was supported by the National Cancer Institute of the National Institutes of Health (K07CA196985 to Y.W.). This work is a publication of the United States Department of Agriculture/Agricultural Research Service (USDA/ARS), Children’s Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, Texas, and was funded in part with federal funds from the USDA/ARS under Cooperative Agreement No. 58‐6250‐0‐008 (to D.T.). The contents of this publication do not necessarily reflect the views or policies of the USDA, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. government. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Conflicts of interest : None declared.

  • Aroian K. J., Peters R. M., Rudner N., Waser L. (2012). Hypertension prevention beliefs of Hispanics . Journal of Transcultural Nursing , 23 , 134–142. doi:10.1177/1043659611433871. [ PubMed ] [ Google Scholar ]
  • Ashton L. M., Hutchesson M. J., Rollo M. E., Morgan P. J., Thompson D. I., Collins C. E. (2015). Young adult males’ motivators and perceived barriers towards eating healthily and being active: A qualitative study . The International Journal of Behavioral Nutrition and Physical Activity , 12 , 93 doi:10.1186/s12966‐015‐0257‐6. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Bagozzi R., Pieters R. (1998). Goal-directed emotions . Cognition & Emotion , 12 ( 1 ), 1–26. [ Google Scholar ]
  • Bandura A. (1986). Social foundations of thought and action: A social cognitive theory . Englewood Cliffs, NJ: Prentice-Hall Inc. [ Google Scholar ]
  • Bevans K. B., Gardner W., Pajer K., Riley A. W., Forrest C. B. (2013). Qualitative development of the PROMIS ® pediatric stress response item banks . Journal of Pediatric Psychology , 38 , 173–191. doi:10.1093/jpepsy/jss107. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Bradbury-Jones C., Taylor J., Herber O. (2014). How theory is used and articulated in qualitative research: Development of a new typology . Social Science and Medicine , 120 , 135–141. doi:10.1016/j.socscimed.2014.09.014. [ PubMed ] [ Google Scholar ]
  • Bryant A., Charmaz K. (2010). The Sage handbook of grounded theory . Thousand Oaks, CA: Sage. [ Google Scholar ]
  • Burnard P. (2004). Writing a qualitative research report . Nurse Education Today , 24 , 174–179. doi:10.1016/j.nedt.2003.11.005. [ PubMed ] [ Google Scholar ]
  • Burnard P., Gill P., Stewart K., Treasure E., Chadwick B. (2008). Analysing and presenting qualitative data . British Dental Journal , 204 , 429–432. doi:10.1038/sj.bdj.2008.292. [ PubMed ] [ Google Scholar ]
  • Cassidy O., Sbrocco T., Vannucci A., Nelson B., Jackson-Bowen D., Heimdal J., Mirza N., Wilfley D. E., Osborn R., Shomaker L. B., Young J. F., Waldron H., Carter M., Tanofsky-Kraff M. (2013). Adapting interpersonal psychotherapy for the prevention of excessive weight gain in rural African American girls . Journal of Pediatric Psychology , 38 , 965–977. doi:10.1093/jpepsy/jst029. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Clark J. (2003). How to peer review a qualitative manuscript . Peer Review in Health Sciences , 2 , 219–235. [ Google Scholar ]
  • Corbin S., Strauss A. (2008). Basics of qualitative research (3rd ed.). Los Angeles, CA: Sage Publications. [ Google Scholar ]
  • Coyne I. T. (1997). Sampling in qualitative research. Purposeful and theoretical sampling; merging or clear boundaries? Journal of Advanced Nursing , 26 , 623–630. doi:10.1046/j.1365‐2648.1997.t01‐25‐00999.x. [ PubMed ] [ Google Scholar ]
  • Creswell J. W. (1994). Research design: Qualitative & quantitative approaches . Journal of Marketing Research , 33 , 252 doi:10.2307/3152153. [ Google Scholar ]
  • Creswell J. W. (2013a). Qualitative inquiry and research design: Choosing among five approaches . Thousand Oaks, CA: Sage Publications. [ Google Scholar ]
  • Creswell J. W. (2013b). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks, CA: Sage Publications. [ Google Scholar ]
  • Creswell J. W., Klassen A. C., Plano Clark V. L., Smith K. C.;for the Office of Behavioral and Social Sciences Research. (2011). Best practices for mixed methods research in the health sciences . Retrieved from National Institutes of Health: http://obssr.od.nih.gov/mixed_methods_research .
  • de Visser R. O., Graber R., Hart A., Abraham C., Scanlon T., Watten P., Memon A. (2015). Using qualitative methods within a mixed-methods approach to developing and evaluating interventions to address harmful alcohol use among young people . Health Psychology , 34 , 349–360. doi:10.1037/hea0000163. [ PubMed ] [ Google Scholar ]
  • Dey I. (1993). Qualitative data analysis: A user-friendly guide for social scientists . New York, NY: Routledge. [ Google Scholar ]
  • Dixon-Woods M., Shaw R. L., Agarwal S., Smith J. A. (2004). The problem of appraising qualitative research . Quality and Safety in Health Care , 13 , 223–225. doi:10.1136/qhc.13.3.223. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Eakin J. M., Mykhalovskiy E. (2003). Reframing the evaluation of qualitative health research: Reflections on a review of appraisal guidelines in the health sciences . Journal of Evaluation in Clinical Practice , 9 , 187–194. doi:10.1046/j.1365‐2753.2003.00392.x. [ PubMed ] [ Google Scholar ]
  • Elo S., Kääriäinen M., Kanste O., Pölkki T., Utriainen K., Kyngäs H. (2014). Qualitative content analysis: A focus on trustworthiness . SAGE Open , 4 ( 1 ), 1–10. doi:10.1177/2158244014522633. [ Google Scholar ]
  • Glaser B., Strauss A. (1967). The discovery of grounded theory: Strategies for qualitative inquiry . Nursing Research , 17 , 364 doi:10.1097/00006199‐196807000‐00014. [ Google Scholar ]
  • Gough B., Deatrick J. A. (2015). Qualitative health psychology research: Diversity, power, and impact . Health Psychology , 34 , 289–292. doi:10.1037/hea0000206. [ PubMed ] [ Google Scholar ]
  • Guba E. G. (1981). Criteria for assessing the trustworthiness of naturalistic inquiries . Educational Communication and Technology , 29 , 75–91. doi:10.1007/BF02766777. [ Google Scholar ]
  • Haukeland Y. B., Fjermestad K. W., Mossige S., Vatne T. M. (2015). Emotional experiences among siblings of children with rare disorders . Journal of Pediatric Psychology , 40 , 12–20. doi:10.1093/jpepsy/jsv022. [ PubMed ] [ Google Scholar ]
  • Hess J. S., Straub D. M. (2011). Brief report: Preliminary findings from a pilot health care transition education intervention for adolescents and young adults with special health care needs . Journal of Pediatric Psychology , 36 , 172–178. doi:10.1093/jpepsy/jsq091. [ PubMed ] [ Google Scholar ]
  • Hingle M., Beltran A., O’Connor T., Thompson D., Baranowski J., Baranowski T. (2012). A model of goal directed vegetable parenting practices . Appetite , 58 , 444–449. doi:10.1016/j.appet.2011.12.011. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Hughes S. O., Power T. G., Papaioannou M. A., Cross M. B., Nicklas T. A., Hall S. K., Shewchuk R. M. (2011). Emotional climate, feeding practices, and feeding styles: An observational analysis of the dinner meal in Head Start families . The International Journal of Behavioral Nutrition and Physical Activity , 8 , 60 doi:10.1186/1479‐5868‐8‐60. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Institute of Medicine Committee on Quality of Health Care in America. (2001). Crossing the quality chasm: A new health system for the 21st century . Washington, DC: National Academies Press. [ Google Scholar ]
  • Izaguirre M. R., Keefer L. (2014). Development of a self-efficacy scale for adolescents and young adults with inflammatory bowel disease . Journal of Pediatric Gastroenterology and Nutrition , 59 , 29–32. doi:10.1097/mpg.0000000000000357. [ PubMed ] [ Google Scholar ]
  • John W. S., Johnson P. (2000). The pros and cons of data analysis software for qualitative research . Journal of Nursing Scholarship , 32 , 393–397. [ PubMed ] [ Google Scholar ]
  • Johnson J. C. (1990). Selecting ethnographic informants . Thousand Oaks, CA: Sage Publications. [ Google Scholar ]
  • Kars M. C., Grypdonck M. H., de Bock L. C., van Delden J. J. (2015). The parents’ ability to attend to the “voice of their child” with incurable cancer during the palliative phase . Health Psychology , 34 , 446–452. doi:10.1037/hea0000166. [ PubMed ] [ Google Scholar ]
  • Kelly K., Ganong L. (2011). Moving to place: Childhood cancer treatment decision making in single-parent and repartnered family structures . Qualitative Health Research , 21 , 349–364. doi:10.1177/1049732310385823. [ PubMed ] [ Google Scholar ]
  • Kelly M. (2010). The role of theory in qualitative health research . Family Practice , 27 , 285–290. doi:10.1093/fampra/cmp077. [ PubMed ] [ Google Scholar ]
  • Krefting L. (1991). Rigor in qualitative research: The assessment of trustworthiness . The American Journal of Occupational Therapy , 45 , 214–222. doi:10.5014/ajot.45.3.214. [ PubMed ] [ Google Scholar ]
  • Lincoln Y. S., Guba E. G. (1985). Naturalistic inquiry . Newbury Park, CA: Sage Publications. [ Google Scholar ]
  • Lincoln Y. S., Lynham S. A., Guba E. G. (2011). Paradigmatic controversies, contradictions, and emerging confluences, revisited . In Denzin N. K., Lincoln Y. S. (Eds.), The Sage handbook of qualitative research (4th ed., pp. 97–128). Thousand Oaks, CA: Sage. [ Google Scholar ]
  • Lofland J., Lofland L. H. (2006). Analyzing social settings: A guide to qualitative observation and analysis . Belmont, CA: Wadsworth Publishing Company. [ Google Scholar ]
  • Lyons A. C., Goodwin I., McCreanor T., Griffin C. (2015). Social networking and young adults’ drinking practices: Innovative qualitative methods for health behavior research . Health Psychology , 34 , 293–302. doi:10.1037/hea0000168. [ PubMed ] [ Google Scholar ]
  • MacMillan K., Koenig T. (2004). The wow factor: Preconceptions and expectations for data analysis software in qualitative research . Social Science Computer Review , 22 , 179–186. doi:10.1177/0894439303262625. [ Google Scholar ]
  • Mason M. (2010). Sample size and saturation in PhD studies using qualitative interviews . Forum: Qualitative Social Research . Retrieved from http://nbn-resolving.de/urn:nbn:de:0114-fqs100387 .
  • Mays N., Pope C. (2000). Qualitative research in health care: Assessing quality in qualitative research . British Medical Journal , 320 , 50 doi:10.1136/bmj.320.7226.50. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • McDonald C. C., Sommers M. S. (2015). Teen drivers’ perceptions of inattention and cell phone use while driving . Traffic Injury Prevention , 16 ( Suppl 2 ), S52–S58. doi:10.1080/15389588.2015.1062886. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Miles M. B., Huberman A. M., Saldaña J. (2013). Qualitative data analysis: A methods sourcebook (3rd ed.). Thousand Oaks, CA: Sage Publications. [ Google Scholar ]
  • Minges K. E., Owen N., Salmon J., Chao A., Dunstan D. W., Whittemore R. (2015). Reducing youth screen time: Qualitative metasynthesis of findings on barriers and facilitators . Health Psychology , 34 , 381–397. doi:10.1037/hea0000172. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Morgan D. L. (1993). Qualitative content analysis: A guide to paths not taken . Qualitative Health Research , 3 , 112–121. doi:10.1177/104973239300300107. [ PubMed ] [ Google Scholar ]
  • Morgan D. L. (1998). Planning Focus Groups: Focus Group Kit #2 . Thousand Oaks, CA: Sage Publications. [ Google Scholar ]
  • Morrow S. (2005). Quality and trustworthiness in qualitative research in counseling psychology . Journal of Counseling Psychology , 52 , 250–260. doi:10.1037/0022‐0167.52.2.250. [ Google Scholar ]
  • Morse J. M. (2015a). Critical analysis of strategies for determining rigor in qualitative inquiry . Qualitative Health Research , 25 , 1212–1222. doi:10.1177/1049732315588501. [ PubMed ] [ Google Scholar ]
  • Morse J. M. (2015b). Data were saturated . Qualitative Health Research , 25 , 587–588. doi:10.1177/1049732315576699. [ PubMed ] [ Google Scholar ]
  • Palermo T. M. (2013). New guidelines for publishing review articles in JPP: Systematic reviews and topical reviews . Journal of Pediatric Psychology , 38 , 5–9. doi:10.1093/jpepsy/jss124. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Palermo T. M. (2014). Evidence-based interventions in pediatric psychology: Progress over the decades . Journal of Pediatric Psychology , 39 , 753–762. doi:10.1093/jpepsy/jsu048. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Palma E., Deatrick J., Hobbie W., Ogle S., Kobayashi K., Maldonado L. (2015). Maternal caregiving demands for adolescent and young adult survivors of pediatric brain tumors . Oncology Nursing Forum , 42 , 222–229. doi:10.1188/15.ONF.. [ PubMed ] [ Google Scholar ]
  • Patton M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice (4th ed.). Thousand Oaks, CA: Sage Publications. [ Google Scholar ]
  • Pierce J. S., Wysocki T. (2015). Topical Review: Advancing research on the transition to adult care for type 1 diabetes . Journal of Pediatric Psychology , 40 , 1041–1047. doi:10.1093/jpepsy/jsv064. [ PubMed ] [ Google Scholar ]
  • Pierce J. S., Wysocki T., Aroian K. (2016). Multiple stakeholder perspectives on health care transition outcomes in Type 1 Diabetes . Unpublished data. [ Google Scholar ]
  • Polanyi M. (1958). Personal knowledge . New York, NY: Harper & Row. [ Google Scholar ]
  • Power T. G., Hughes S. O., Goodell L. S., Johnson S. L., Duran J. A., Williams K., Beck A. D., Frankel L. A. (2015). Feeding practices of low-income mothers: How do they compare to current recommendations? The International Journal of Behavioral Nutrition and Physical Activity , 12 , 34 doi:10.1186/s12966‐015‐0179‐3. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Richards L., Morse J. M. (2013). Readme first for a user’s guide to qualitative methods (3rd ed.). Thousand Oaks, CA: Sage Publications. [ Google Scholar ]
  • Ritchie J., Lewis J. (Eds.). (2003). Qualitative research practice: A guide for social science students and researchers . Thousand Oaks, CA: Sage Publications. [ Google Scholar ]
  • Saftner M. A., Martyn K. K., Momper S. L., Loveland-Cherry C. J., Low L. K. (2015). Urban American Indian adolescent girls framing sexual risk behavior . Journal of Transcultural Nursing , 26 , 365–375. doi:10.1177/1043659614524789. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Saldaña J. (2012). The coding manual for qualitative researchers . Thousand Oaks, CA: Sage Publications. [ Google Scholar ]
  • Sandelowski M. (1993a). Rigor or rigor mortis: The problem of rigor in qualitative research revisited . Advances in Nursing Science , 16 , 1–8. [ PubMed ] [ Google Scholar ]
  • Sandelowski M. (1993b). Theory unmasked: The uses and guises of theory in qualitative research . Research in Nursing & Health , 16 , 213–218. doi:10.1002/nur.4770160308. [ PubMed ] [ Google Scholar ]
  • Sandelowski M. (1994). The use of quotes in qualitative research . Research in Nursing and Health , 17 , 479–482. [ PubMed ] [ Google Scholar ]
  • Sandelowski M. (1995). Sample size in qualitative research . Research in Nursing and Health , 18 , 179–183. [ PubMed ] [ Google Scholar ]
  • Sandelowski M. (1998). Writing a good read: Strategies for re-presenting qualitative data . Research in Nursing and Health , 21 , 375–382. doi:10.1016/s1361‐3111(98)80052‐6. [ PubMed ] [ Google Scholar ]
  • Sandelowski M. (2001). Real qualitative researchers do not count: The use of numbers in qualitative research . Research in Nursing and Health , 24 , 230–240. [ PubMed ] [ Google Scholar ]
  • Sandelowski M. (2010). What’s in a name? Qualitative description revisited . Research in Nursing and Health , 33 , 77–84. doi:10.1002/nur.20362. [ PubMed ] [ Google Scholar ]
  • Sandelowski M., Barroso J. (2002). Finding the findings in qualitative studies . Journal of Nursing Scholarship , 34 , 213–219. [ PubMed ] [ Google Scholar ]
  • Schwartz L. A., Tuchman L. K., Hobbie W. L., Ginsberg J. P. (2011). A social-ecological model of readiness for transition to adult-oriented care for adolescents and young adults with chronic health conditions . Child: Care, Health, and Development , 37 , 883–895. doi:10.1111/j.1365‐2214.2011.01282.x. [ PubMed ] [ Google Scholar ]
  • Silver C., Lewins A. (2014). Using software in qualitative research: A step-by-step guide (2nd ed.). London: Sage Publications. [ Google Scholar ]
  • Steele R. G., Wu Y. P., Jensen C. D., Pankey S., Davis A. M., Aylward B. S. (2011). School nurses’ perceived barriers to discussing weight with children and their families: A qualitative approach . Journal of School Health , 81 , 128–137. doi:10.1111/j.1746‐1561.2010.00571.x. [ PubMed ] [ Google Scholar ]
  • Thompson D. (2014). Talk to me, please!: The importance of qualitative research to games for health . Games for Health: Research, Development, and Clinical Applications , 3 , 117–118. doi:10.1089/g4h.2014.0023. [ PubMed ] [ Google Scholar ]
  • Thompson D., Baranowski T., Buday R., Baranowski J., Juliano M., Frazior M., Wilsdon J., Jago R. (2007). In pursuit of change: Youth response to intensive goal setting embedded in a serious video game . Journal of Diabetes Science and Technology , 1 , 907–917. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Thompson D., Bhatt R., Watson K. (2013). Physical activity problem-solving inventory for adolescents: Development and initial validation . Pediatric Exercise Science , 25 , 448–467. [ PubMed ] [ Google Scholar ]
  • Tobin G. A., Begley C. M. (2004). Methodological rigour within a qualitative framework . Journal of Advanced Nursing , 48 , 388–396. doi:10.1111/j.1365‐2648.2004.03207.x. [ PubMed ] [ Google Scholar ]
  • Tong A., Sainsbury P., Craig J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups . International Journal for Quality in Health Care , 19 , 349–357. doi:10.1093/intqhc/mzm042. [ PubMed ] [ Google Scholar ]
  • Tuckett A. G. (2004). Qualitative research sampling: The very real complexities . Nurse Researcher , 12 , 47–61. doi:10.7748/nr2004.07.12.1.47.c5930. [ PubMed ] [ Google Scholar ]
  • Valenzuela J. M., Buchanan C. L., Radcliffe J., Ambrose C., Hawkins L. A., Tanney M., Rudy B. J. (2011). Transition to adult services among behaviorally infected adolescents with HIV—a qualitative study . Journal of Pediatric Psychology , 36 , 134–140. doi:10.1093/jpepsy/jsp051. [ PubMed ] [ Google Scholar ]
  • West C. H., Bell J. M., Woodgate R. L., Moules N. J. (2015). Waiting to return to normal: An exploration of family systems intervention in childhood cancer . Journal of Family Nursing , 21 , 261–294. doi:10.1177/1074840715576795. [ PubMed ] [ Google Scholar ]
  • Whittemore R., Chase S. K., Mandle C. L. (2001). Validity in qualitative research . Qualitative Health Research , 11 , 522–537. doi:10.1177/104973201129119299. [ PubMed ] [ Google Scholar ]
  • Yin R. K. (2015). Qualitative research from start to finish (2nd ed.). New York, NY: Guilford Press. [ Google Scholar ]

7 Qualitative Data Examples and Why They Work

Qualitative data presents information using descriptive language, images, and videos instead of numbers.

To help make sense of this type of data—as opposed to quantitative data, which is all about numbers—we’ve compiled a list for you. It features some of the best examples of qualitative data around. 

So what makes these examples so great? 

They use qualitative data to tell a story. 

#1. The FBI Vault

The Freedom of Information Act (FOIA) Library, aka The Vault , is a fascinating first stop on our journey to getting to know top-notch qualitative data. 

Because of FOIA requests, the FBI has been required by law to release information about all sorts of cases. On The Vault, you’ll find everything from interview transcripts with serial killers and crime lords to safety plans for Princess Diana and Queen Elizabeth’s visits to the United States. 

Source: FBI Records: The Vault — Al Capone

I could get lost in The Vault for hours, just poking around in different cases. Even after spending just half an hour reading different case files, I came away with all sorts of knowledge I didn’t have before. 

Like the fact that Steven Paul Jobs—as in the Steve Jobs—was once considered a candidate for an appointed position on the U.S. President’s Export Council. And that the FBI did a thorough background investigation of Jobs in 1991 as part of the process.

The Vault’s case file on Jobs reveals intriguing information. Like that some of Jobs’ former employees alleged he could “distort the truth” and let ambition get in the way of relationships with employees and peers. 

Source: FBI Records: The Vault — Steven Paul Jobs

This data tells a story. It pulls you in. And it leaves you with additional questions to explore.

That’s some rich qualitative data right there.

And The Vault is full of it—mostly in the form of letters, interview transcripts, investigator observations, newspaper articles, and case summaries. 

#2. The Comments Section (AKA Reddit, Quora, and Other Social Media)

There’s a reason we find the comments section—or forums like Reddit, which are basically all one big comments section—so fascinating. 

They’re full of qualitative data. 

In a 2015 study published in Information, Communication & Society, German researchers Nina Springer, Ines Engelmann, and Christian Pfaffinger set out to find out why the comments section has such a magnetic pull.

The researchers began with a baseline understanding that “user comments allow ‘annotative reporting’ by embedding users’ viewpoints within an article’s context, providing readers with additional information to form opinions, which can potentially enhance deliberative processes.”

In other words, user comments on an article or a forum post are attractive because they offer extra information to help readers form opinions, as a group, with other commenters. 

To dig deeper, the study surveyed “650 commenters, lurkers, and non-users” in Germany. 

The results are surprising and show how different the comments-section experience is for contributors, lurkers, and non-participants. 

Contributors, according to the study, appear to mostly engage for the sake of “social-interactive motives to participate in journalism, and to discuss with other users.”

Lurkers, on the other hand, are there both for “cognitive and entertainment motives.” The lower the quality of the comments section discussion, the lower the lurker’s satisfaction. 

And non-participants? They’re just annoyed that the forums and comments sections exist. 

Here’s the thing: if you’re reading forums and comments sections as a qualitative researcher, you’re participating as a lurker. So the study’s results about lurkers primarily getting satisfaction from quality discussions make a lot of sense. 

You don’t just want fluff comments. You want rich data from those posts. 

And if you take your time to read through a bunch of comments, you can find it. 

If you want to learn about the ugly and good parts of marriage, head over to r/Marriage , where people routinely post primary sources, like this post of a spouse’s shopping list.

Source: Reddit r/Marriage

What gets me is the combination of shrimp and Beyond Meat. But the original poster, or OP, is lamenting the horizontal nature of the shopping list. 

The comments section is rich with additional qualitative data:

  • “Nevermind the horizontal… who writes an S like that?!”
  • “You guys are going to be zig zagging all over that grocery store.”
  • “This is the content i come to r/marriage for”
  • “This girl is gonna boil your rabbit.”
  • “My question is, Who in god’s name writes their S like that? The horizontal list didn’t bother me half as much as your wife’s disturbing and weird handwriting. I need a professional handwriting analyst to find out when her next murder spree will be.”

If you were studying marital satire or shopping styles in marriage/partnership, this post would be a perfect place to find qualitative data for your research. 

So would any Facebook groups, Quora posts, Instagram Reels, and TikTok videos on grocery shopping, marriage, partnership, and marital humor. 

#3. Market Research Survey Responses 

If you’re conducting market research, there are all sorts of places you can go to gain insights on your products. PickFu is one of them. It’s a market research tool where you can test different versions of your products and ads and get objective, written feedback on them. 

Tools like SurveyMonkey, Jotform, Qualtrics, and Typeform can all give you survey responses like this, too. 

But here’s an example of what I’m talking about. In the image below, an Amazon seller is asking 30 female Amazon Prime subscribers which package would inspire them to click through. 

Source: PickFu

The respondents chose the second option, but that’s just quantitative data. It tells us that of the 30 respondents, 20 people voted for Option B, versus Option A’s 10 votes. If there were no survey responses, the Amazon seller wouldn’t know why the majority of respondents picked the second option. 

The qualitative data, on the other hand, reveals the answer: the eyes on the packaging design, the clear information, and the product name on Option B are more intriguing for most respondents. 

Here’s a written recap of some of the comments arguing in favor of Option B: 

  • “B is my choice. It looks interesting. Something I might want to use in the kitchen. A looks like a pesticide I’d sprinkle on the windows to keep ants out of the house. The container looks complicated and I don’t like it. It looks like a cleaner, like Ajax, or yeah, a pesticide to sprinkle around to kill bugs.”
  • “The writing on B can be seen more clearly, especially the text “sleepy chocolate” which made me intrigued. I wanted to know more about why those words were there because I normally associate chocolate with staying awake. Therefore B more successfully got me to click to find out more.”
  • “The image with the sleepy eyes gets my attention and as I read the information off both images, B tells me enough to know it would be my choice.”
  • “Those eyes definitely get my attention! And calling it ‘sleepy hot chocolate’ is perfect! I’d definitely click through to learn more about it!”

You can run surveys using tools like this—and you can either comb through the data yourself or use tools like ATLAS.ti and NVivo to help you analyze your qualitative data.

#4. Pew Research Center Survey Analysis

The Pew Research Center is a fascinating trove of quantitative data, but it also offers qualitative data in the form of survey analysis. If you’re studying how internet use among teenagers changed as smartphones became ubiquitous, for instance, you’ll find quantitative data on the Pew Research Center. But you’ll also find an analysis of the quantitative data, which reaches deeper into the numbers and responses to bring you the researchers’ observations or conclusions. This is qualitative data. 

For example, this Pew Research Center study on teen internet use shows that most U.S. teenagers use the internet every day, with some using it almost constantly—and they’re not on Facebook. You’re more likely to find teen girls on TikTok, Snapchat, and Instagram and teen boys on Reddit and Twitch.

Source: Pew Research Center

Instead of using the numbers gleaned from this piece—or perhaps in addition to that—you could use the observations and analysis to support or inform your choices. 

The methodology section of these surveys is also a great place to find qualitative research because it shows you how the survey was conducted.

The moral of the story here is that you can, and should, look for the qualitative data that results from quantitative research—whether it’s on Pew Research Center or somewhere else. 

It’s there, and it can help guide your research.

#5. Government Policies and Information

This might seem like a boring place to find qualitative data, but any government website is rich with descriptive, reliable, authoritative information. 

Take the Alaska State Legislature’s website, akleg.gov, for instance. On it, you can find the full text of various bills and laws, plus the history of how they were enacted (or not). 

If you’re researching anything related to state, local, or federal government policy, government websites are rich with qualitative data in the form of documents, audio files, videos, and notes. 

#6. Scholarly and Scientific Research

Google Scholar is one of my favorite websites for finding peer-reviewed, scholarly qualitative data. There’s also an entire section devoted to case law. All you have to do is put in a keyword or keyphrase and you’ll get hundreds of authoritative sources to choose from. 

You can find research on all sorts of topics, but especially medical, educational, and political studies. Some are paywalled, but you can also look at the Directory of Open Access Journals (DOAJ) for non-paywalled, academic, qualitative data.

Source: DOAJ

Here’s what the data will typically look like on Google Scholar after you run a search—which, by the way, you can customize according to date, which means you’ll get the freshest data if you want it. 

Source: Google Scholar

You’ll notice that most of this research has quantitative data too, which means it’s mixed-method. But it’s full of qualitative analysis and observations as well. 

Also, since the results you get from Google Scholar can be anything from books to articles to journals, some sources have more qualitative data than others. 

When you need secondary research that’s also qualitative, Google Scholar is an ideal place to find it. 

#7. Photos, Videos, and Audio Files

You know how law enforcement takes crime scene photos at the scene of a crime? That’s because the photos offer evidence in the form of qualitative data. The photos capture a scene in a way that words can’t.

Audio files use something other than written language to describe information or sounds. And video files can combine sound with images to provide detailed information about an event. 

In the image below, you’ll see a series of non-graphic images taken at the crime scene of the murder of Marilyn Sheppard in 1954. The police used them as evidence during Sam Sheppard’s trial—he was accused of committing the murder—and Cleveland State University now uses the images to help law students study case law. 

Source: CSU Ohio

That’s another feature of excellent qualitative data—it can be applied in more than one way, used for more than one situation or research objective. 

In short, qualitative data offers nuance, flexibility, and knowledge you can’t get from quantitative data.  That said, both are valuable and have their place in research. Our guide to qualitative vs quantitative data can give you more insight into how the dance between the two works.

Last Updated on September 14, 2021

Qualitative data analysis: how market researchers can crack the code

In this article:

  • What is qualitative data?
  • What are the ingredients of a good qualitative data analysis?
  • How to conduct an enlightening qualitative data analysis
  • The pros and cons of qualitative data analysis
  • Get great results from qualitative data analysis

When numbers fall short and you need the full story, qualitative data analysis comes to the rescue. Instead of following assumptions based on numerical data, qualitative data analysis methods let you dig deeper. Qualitative data analysis examines non-numerical data – words, images, and observations – to uncover themes, patterns, and meanings.

And in this article, we’ll tell you exactly how to do it yourself, in-house. 

Qualitative data analysis uncovers the stories and feelings behind numbers. Qualitative methods gain information from conversations, interviews, and observations, capturing what people think and why they act a certain way. Unlike hard numbers, qualitative data helps us see the color and texture of people’s opinions, experiences, and emotions. 

Examples of the textual data that often makes up qualitative datasets include a user’s detailed feedback on a mobile app’s usability, a shopper’s narrative about choosing eco-friendly products, or observational notes on customer behavior in a retail setting.

This type of qualitative data collection helps us understand real feelings and thoughts, and goes beyond numbers and assumptions.

There’s a big difference between knowing that 50% of customers prefer your new product and understanding the nuanced reasons behind that preference.

It’s easy to get blinded by shiny numbers. In this case, a preference signals that you’re doing something great. But not knowing what that something is means you can’t replicate it, or double down on it to crank up that 50% even more.

So what you’ll need to do is dig into the ‘why’ behind the ‘what’. And we mean really dig. A strong qualitative data analysis process aims at not putting words in your customers’ mouths but letting them speak for themselves.

Another example is when a company finds out through a quick quantitative data survey that customers rate their service 4 out of 5. Which isn’t bad. But how can they improve it – or even work to maintain it? Guesswork is lethal here, yet it’s what so many companies resort to.

Which leads to obvious follow-up actions that are usually not customer-centric. Let’s say that this company assumes people are mostly happy because of their quick response times. So, they implement chatbots to take care of the first part of conversations, to speed things up even more. What could be wrong with that? 

But what if through in-depth interviews, they could have discovered that the personal touch from the staff right from the get-go is what customers really value? 

In consumer research, these nuances are gold. They allow your team to make finely tuned adjustments that resonate deeply with your audience. It’s what helps you move beyond the one-size-fits-all approach suggested by quantitative data. 

So if you want to start making experiences and products that feel personal and relevant to each customer, here are some ways to approach qualitative data research.

Content analysis: unveiling customer sentiments

What it is: Content analysis involves examining texts, reviews, and comments to identify frequently occurring words and sentiments, providing a quantitative measure of qualitative feedback.

Good to know:

  • Focus on reviews, comments, and social media posts.
  • Look for repeating words and sentiments to identify trends.
  • Helps prioritize actions based on frequently mentioned topics by customers.

Chances are, you already have a lot of content that can be analyzed for qualitative data research. In that case, content analysis is your go-to approach to getting started. Content analysis means zooming in on recurring words, phrases, and sentiments scattered across reviews and comments.

Dig into reviews, comments, and emails and start flagging words and phrases that keep coming back. These can help you identify areas for improvement, but also show you what really is working.

This way, content analysis offers a quantitative measure of qualitative feedback, enabling you to prioritize actions based on what’s most mentioned by your customers, when they’re not prompted or asked anything specifically.

By systematically categorizing and quantifying this feedback, you’ll be able to make informed decisions on product features, marketing messages, and even future design innovations.
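To make this concrete, here’s a rough sketch of that first pass in Python. The sample reviews, the stop-word list, and the choice to count single words and two-word phrases are all illustrative assumptions, not the output of any particular tool:

```python
# Minimal sketch: counting recurring words and phrases in customer reviews.
from collections import Counter
import re

reviews = [
    "Love the battery life, but the app keeps crashing",
    "Battery life is great. Customer service was slow to respond",
    "App crashing constantly. Otherwise the battery life is solid",
]

STOP_WORDS = {"the", "is", "was", "to", "but", "and", "a", "of", "keeps", "otherwise"}

def tokens(text):
    """Lowercase a review and split it into word tokens, dropping stop words."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP_WORDS]

# Count single words and two-word phrases (bigrams) across all reviews.
word_counts = Counter()
bigram_counts = Counter()
for review in reviews:
    words = tokens(review)
    word_counts.update(words)
    bigram_counts.update(zip(words, words[1:]))

print(word_counts.most_common(5))    # battery, life, app, crashing surface quickly
print(bigram_counts.most_common(3))  # ('battery', 'life') appears in every review
```

The same idea scales from three reviews to thousands; the frequency table simply becomes your shortlist of topics worth a closer, human read.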

Narrative analysis: connecting through stories

What it is: Narrative analysis delves into customers’ stories to understand their experiences, decisions, and emotions throughout their journey with your brand.

  • Analyze customer stories from initial contact to purchase.
  • Focus on customers’ thoughts and feelings at each stage.
  • Useful for identifying communication and support opportunities.

A lot of times brands are mostly interested in the beginning and end of a customer journey: how do I get in front of customers, and how do I get in their shopping basket?

But the story of what happens between those two moments is just as, if not more important. And with narrative analysis, you can help connect the dots.

You won’t just be looking at the touchpoints there were, but also what customers were thinking and feeling at each stage. By interpreting qualitative data, you can create a full story from start to finish on how customers think and feel and make decisions in your market.

And that is so much more than just a nice story. Narrative analysis shows you where you can swoop in, where you should change your communications or where you should offer more support — for a happy ever after.

Discourse analysis: shaping perceptions through conversation

What it is: Discourse analysis examines language and communication on platforms like social media to understand how they influence public perception and consumer behavior.

  • Explore broader conversations around topics relevant to your brand.
  • Understand cultural, social, and environmental contexts.
  • Align your messaging with audience values and lead discussions.

Discourse analysis looks at the broader conversation around topics relevant to your brand. This qualitative data analysis method looks at how language and communication on platforms like social media shape public perception and influence consumer behavior.

Discourse analysis is not just about what’s being said about your brand and products; it’s about understanding the cultural, social, and environmental currents that drive these conversations.

For example, when customers discuss “sustainability,” they’re not just talking about your specific packaging; they’re engaging in a larger dialogue about corporate responsibility, environmental impact, and ethical consumption.

Discourse analysis helps you grasp the nuances of these discussions, revealing how your brand can authentically contribute to and lead within these conversations.

This strategic insight allows you to align your messaging with your audience’s values, build credibility, and position your brand as a leader in meaningful sustainability efforts.

By engaging with and influencing the discourse, you can not only adapt to current consumer expectations but also take it a step further and shape future trends and behaviors in alignment with your brand’s values and goals.

Thematic analysis: finding overlapping themes in chaos

What it is: Thematic analysis seeks to find common themes within qualitative data, moving beyond individual opinions to uncover broader patterns.

  • Organize feedback into distinct themes.
  • Requires systematic data collection and coding.
  • Offers clear, actionable insights for different business areas.

Plenty of brands are already sitting on qualitative data from thousands of customer interactions, which might seem like a jumble of individual opinions and experiences.

You might look at them and think ‘ha, humans really all want or value different things’. But there will be overlap, and that is where the real value lies.

Thematic analysis aims at finding common themes in this qualitative data. You move beyond surface-level chaos by categorizing all pieces of feedback into distinct themes.

These themes could range from specific product features, such as “battery life” in electronics, to broader experiential factors, like “customer service excellence” or “ease of use.” By identifying these recurring patterns, you gain a clearer, more organized understanding of your customers’ priorities and pain points.

One of the benefits of thematic analysis is that it helps you organize a wide range of feedback into clear, actionable insights for each team in your business. You may uncover themes about the product, about communication, or other parts of your business that customers get exposed to. In other words: every business could benefit from some thematic analysis.
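If you want a feel for what that rollup looks like in practice, here’s a minimal Python sketch. The codes, the codebook, and the feedback snippets are all made up for illustration; in a real project they would come from your own coding work or from a qualitative analysis tool:

```python
# Minimal sketch: rolling coded feedback up into broader themes.
from collections import Counter

# Each piece of feedback has already been tagged with one or more codes.
coded_feedback = [
    {"text": "Battery dies by lunchtime", "codes": ["battery_life"]},
    {"text": "Support replied within minutes, super friendly", "codes": ["response_time", "staff_tone"]},
    {"text": "Setup wizard was confusing", "codes": ["onboarding"]},
    {"text": "Agent remembered my previous issue", "codes": ["staff_tone"]},
]

# Codebook: which broader theme each code belongs to.
THEMES = {
    "battery_life": "Product quality",
    "onboarding": "Ease of use",
    "response_time": "Customer service excellence",
    "staff_tone": "Customer service excellence",
}

theme_counts = Counter(
    THEMES[code] for item in coded_feedback for code in item["codes"]
)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} mention(s)")
```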

Grounded theory: building strategies from real feedback

What it is: Grounded theory uses early feedback from users to develop theories and strategies that meet their needs, focusing on continuous improvement.

  • Start with feedback from early users or testers.
  • Engage deeply with feedback to guide product development.
  • Ideal for new services or products, ensuring they align with customer expectations.

For those launching a new service, grounded theory takes feedback from early users and starts building from there. It uses real, raw customer thoughts to shape a strategy that better meets their needs.

This approach isn’t just about collecting data; it’s about letting qualitative data direct your next moves, ensuring your innovations are not just shots in the dark but informed, strategic decisions aimed at fulfilling genuine customer needs.

When you adopt grounded theory, you commit to a process of continuous improvement and adaptation. As feedback starts rolling in from those first users or beta testers, you’re given a unique opportunity to see your product through the eyes of those it’s meant to serve.

This early-stage feedback is gold—unfiltered, direct, and incredibly insightful. It tells you what’s resonating with your audience, what’s missing the mark, and, crucially, how to adjust your offering for better alignment with customer expectations.

Bear in mind that when done right, grounded theory goes beyond merely reacting to feedback. It’s about proactively seeking it out and engaging with it. This means not just reading comments or reviews, but diving deeper through follow-up questions, interviews, or focus groups to really understand the why behind the feedback. 

Diving into qualitative data analysis can feel like a big task for many brands. There’s often worry about how much time it’ll take. Or how much money. And then there’s the question of whether all that detail might lead you off track instead of to clear answers.

After all, businesses move fast these days, and spending a lot of time on a research project doesn’t always fit the schedule.

But those worries don’t have to stop you. With the right plan and the best tools, you can dodge those issues. Start by creating a roadmap, so you know what the next few days, weeks or months will look like. See? It’s less daunting already.

Below, we’ll break the whole process down into simple steps. We’re going to walk through how to tackle qualitative data analysis without getting bogged down.

1. Transcribing interviews and collecting qualitative survey data

When it comes to qualitative research, if something’s said, it’s crucial. And that means you gotta write it down. Or at least have a tool to do it for you.

“I don’t wanna miss a thing” is your theme song for this step.

Every chuckle, pause, or sigh can give you insights into what your customers really think and feel. Now, I know what you’re thinking: “Transcribing interviews sounds like a lot of work. Let alone conducting all of them!” 

But here’s the good news—using Attest makes this step a pleasant breeze on a hot summer night. With Attest, you can send out surveys that dive deep into all the qualitative questions you’ve been itching to ask. Our platform is designed to capture rich, detailed responses in a way that is easy to search and analyze. 

This means you don’t have to worry about spending hours transcribing interviews. The responses are already there in writing, ready for you to analyze. This doesn’t just save time; it ensures accuracy. You’re getting the unfiltered voice of your customer, directly and conveniently. No more playing detective with hours of audio recordings.

2. Organize data and identify common patterns

Next, sift through your transcribed interviews, survey responses, and notes. Your goal here is to spot patterns or themes that crop up repeatedly.

This could be similar sentiments about a product feature or shared experiences with your service. Organizing data helps you identify themes that move from scattered bits of feedback to clear, common threads that tell a bigger story.

3. Using tools to make the process easier

There are plenty of software tools out there designed to help with qualitative data analysis. These tools can help you code your qualitative data, which means tagging parts of the text with keywords or themes, making it easier to organize and analyze textual data. They can save you a heap of time and help you stay accurate and consistent in your analysis.
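As a toy illustration of what that coding step does under the hood, here’s a short Python sketch that tags responses using a keyword codebook. The codebook and responses are assumptions made for the example; a real researcher (or a dedicated tool) would refine these codes by hand afterwards:

```python
# Minimal sketch: first-pass automatic coding of open-ended responses.
CODEBOOK = {
    "pricing": ["price", "expensive", "cheap", "cost"],
    "usability": ["easy", "confusing", "intuitive", "hard to use"],
    "support": ["support", "helpdesk", "agent", "response time"],
}

def code_response(response: str) -> list:
    """Return every code whose keywords appear in the response text."""
    text = response.lower()
    return [code for code, keywords in CODEBOOK.items()
            if any(k in text for k in keywords)] or ["uncoded"]

responses = [
    "Great value, not expensive at all",
    "The dashboard is confusing and the agent took ages to reply",
    "Does what it says on the tin",
]

for r in responses:
    print(code_response(r), "<-", r)
# ['pricing'] <- Great value, not expensive at all
# ['usability', 'support'] <- The dashboard is confusing and the agent took ages to reply
# ['uncoded'] <- Does what it says on the tin
```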

That’s where Attest’s innovative Video Responses come into play, offering a seamless and impactful way to gather and analyze qualitative data directly from your target audience – all in the same platform as your quantitative data.

Here’s how we transform qualitative research:

  • Easy to use : Attest’s platform lets you quickly add video questions to surveys, making it straightforward to collect in-depth feedback.
  • Fast insights : With automated transcriptions, you can swiftly analyze video responses, identifying key themes without the wait.
  • Reliable data : Attest ensures feedback comes from a diverse, representative audience, giving you confidence in the insights you gather.
  • Rich context : Video responses capture the full spectrum of customer emotions and nuances, providing a deeper understanding than text alone.
  • Seamless integration : Mix qualitative and quantitative data effortlessly, for a comprehensive view of your customer base.

“As consumer behaviors and preferences continue to evolve at lightning speed, it’s products like Video Responses that will help brands win more based on decisions made with a deeper understanding of their customers.” – Jeremy King, CEO and Founder of Attest

4. Highlight context alongside data where relevant

Understanding the context in which feedback is provided is crucial in qualitative analysis. It’s not just about what your customers are saying; it’s also about why they’re saying it at that particular moment. This deeper layer of insight can significantly impact how you interpret and act on the data you collect.

Why context matters:

  • Timing : Feedback given right after a new product launch can contain initial impressions that might evolve over time. Similarly, responses collected during a major sale or promotion might be influenced by the excitement or urgency of the moment.
  • External factors : Consider the broader environment. For example, feedback during a major social event, a public holiday, or even a global crisis can be colored by the emotions and experiences of that time. This can shift priorities or change the way people interact with your brand.
  • Customer journey stage : The stage of the customer journey at which feedback is given can also provide important context. Early-stage feedback might focus on first impressions and expectations, while later-stage feedback could offer deeper insights into user experience and satisfaction.

How to account for context in your qualitative analysis:

  • Document the circumstances : When collecting data, make a note of the timing and any relevant external factors.
  • Consider the source : Different platforms can also provide context. For instance, feedback from a public social media post might differ from what’s shared in a private survey due to the public nature of the medium.
  • Use context to guide action : Let the context inform how you prioritize and respond to feedback. Initial excitement might warrant a quick thank-you message, while deeper, contextual insights might lead to product or service improvements.

5. Seek participant validation

Once you’ve got some preliminary findings, it’s a good idea to circle back to your participants. This could mean confirming your interpretations with them or diving deeper into certain areas.

This will help you be sure your analysis aligns with your respondents’ intended meanings and experiences. Plus, it shows respect for their contributions and can uncover even richer insights.

6. Compile a final report with a mix of data and visualization techniques

Finally, bring your analysis to life in a report that mixes clear, concise writing with visual elements like charts, graphs, and quotes.

Visualization helps make complex insights more accessible, engaging, and persuasive. Your report should not only present what you’ve found but also tell the story of how these research findings can influence decisions and strategies.
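For instance, if your thematic analysis produced counts of how often each theme came up, a simple chart is often enough for the report. Here’s a minimal sketch using Python and matplotlib; the theme names and counts are invented for illustration:

```python
# Minimal sketch: turning theme counts into a simple chart for the final report.
import matplotlib.pyplot as plt

themes = ["Customer service excellence", "Ease of use", "Battery life", "Pricing"]
mentions = [42, 31, 27, 12]

fig, ax = plt.subplots(figsize=(7, 3))
ax.barh(themes, mentions)
ax.set_xlabel("Number of mentions in coded feedback")
ax.set_title("Most frequent themes in customer interviews")
ax.invert_yaxis()  # most-mentioned theme on top
fig.tight_layout()
fig.savefig("theme_frequency.png")  # drop the chart into the report
```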

7. Put insights into action

The real value of qualitative data analysis lies in its application. Use the insights to inform decisions, refine strategies, and better meet your customers’ needs. This is where your analytical journey makes a tangible impact on your business.

“Previously when we’ve had to do qualitative research, it’s taken months and months. Attest gets the information that we need quickly. By the very next day we’re able to implement some of the changes and then go back for round two.” – Simon Gray, Head of Marketing, Zzoomm

Qualitative data analysis looks at the human side of data. It offers insights that numbers alone can’t provide. But like all research methods, even qualitative data analysis methods have their strengths and weaknesses, especially when it comes to shaping a marketing plan that hits the mark.

Advantages of qualitative data analysis

Bringing qualitative data into your strategy delivers advantages that can significantly transform how your business connects with its audience and adapts to the market. Without further ado, let’s look at the benefits it brings.

Qualitative data gives you truly rich insights

Want to go beyond meeting the explicit needs of your customers, and also address their unspoken desires and create experiences that truly matter to them? Qualitative analysis offers an unparalleled depth of understanding by capturing the subtleties and complexities of customer behavior and sentiment.

By engaging directly with your audience through interviews, focus groups, or social media interactions, you gain nuanced perspectives that quantitative data alone cannot provide. These rich insights enable you to craft marketing strategies and product innovations that resonate on a deeper level with your audience. 

Qualitative data is a lot more flexible than numbers

Numbers can be quite limiting. The benefit of qualitative analysis is that you’re not confined to a predetermined set of questions or outcomes. 

Instead, you have the freedom to explore new directions, probe interesting findings further, and let the data guide your research process. This flexibility means your research process can evolve in real-time, responding to unexpected insights or shifting market dynamics. 

Qualitative data is great for strategic decision-making

The insights gained from qualitative analysis can significantly inform strategic decision-making. By understanding the nuances of customer feedback, you can make informed and detailed choices about where to allocate resources, which product features to prioritize, and how to position your business in the market.

You can go beyond generic moves in the right direction and make sure you hit the nail on the head on the first try, instead of slowly creeping towards it.

Qualitative research data fuels innovation and differentiation

Businesses are always looking for ways to innovate, but where to look? It’s often less obvious and loud than you think. And innovation doesn’t always have to be massively disruptive or a big pivot. Sometimes small changes made by listening to your customers’ unmet needs and emerging desires will tell you everything you need to know for your next product launch.

Innovation informed by customer input is often much more to-the-point than innovation that comes purely from inside the business, where people tend to focus heavily on the product and the possibilities around it. So try a different approach every once in a while. Listen to the people who use your product, not just the ones who create it.

Qualitative research data will fuel a customer-centric culture

Qualitative data puts your customer’s voice front and center. It highlights their stories, opinions, and feelings, making your marketing strategy more empathetic and customer-focused. This will allow you to build stronger connections with your audience.

Not by any marketing gimmicks, creating online communities or carefully curated UGC campaigns, but by speaking directly to customers’ experiences and emotions. Using qualitative data across your organization brings transformative effects, deeply embedding a culture of attentiveness, adaptability, and unwavering focus on the customer at every level of your business.

This approach does more than just inform product development or marketing strategies—it reshapes the very foundation of how your business operates and interacts with the people it was created for. 

Disadvantages of qualitative data analysis

We’re not going to pretend that qualitative data analysis is something you can do on autopilot. But while qualitative data analysis brings its set of challenges, understanding these can help you navigate through them more effectively.

Moreover, with the right tools and strategies, the benefits you gain far outweigh any of the potential drawbacks we’ve listed below. Here’s a closer look at these challenges and how to turn them into opportunities:

Qualitative data analysis can be time-consuming

Yes, qualitative analysis often* demands time and resources. The depth it requires—from collecting detailed narratives to transcribing and interpreting vast amounts of text—can seem daunting. However, this investment in time is what uncovers the nuanced insights that quantitative methods might miss.

*… but not always. With Attest’s Video Responses, you get reliable qual insights fast, alongside your quantitative data!

Qualitative data analysis is pretty subjective

Of course, the interpretive nature of qualitative data analysis does introduce the risk of subjectivity and bias. But ignoring all opinions and thoughts around your product or brand is arguably worse. If anything, this challenge underscores the importance of a structured, systematic approach to analysis.

By implementing standardized procedures for coding and analyzing data, and employing tools that facilitate consistency across the process, you can mitigate the risks of subjective bias.

And if you involve a diverse team in the analysis process and make sure you pick a representative set of respondents, qualitative research can enable a deeper, more empathetic understanding of all your customers’ experiences and perspectives.

Qualitative data analysis methods come with scaling issues

Qualitative data collection can indeed be tricky to scale and generalize across a broader market. But who said you can only do qualitative research with in-person interviews? With the right survey tool, like Attest, you can ask qualitative questions at scale, to an audience that is large and diverse.

Our participant audience consists of 125 million people spread across 59 countries, and once you send out a survey, results can come back in mere minutes or hours. So if scalability is holding you back, online surveys with video responses are the answer.

Unlock the full potential of qualitative data analysis with Attest. Gain actionable insights, bridge the gap between raw data and emotional intelligence, and make informed decisions. Discover how Attest can support your journey to deeper consumer understanding at Attest for insights professionals and learn about our commitment to data quality .


  • Open access
  • Published: 06 August 2024

Adaptation and validation of the evidence-based practice profile (EBP²) questionnaire in a Norwegian primary healthcare setting

  • Nils Gunnar Landsverk 1 ,
  • Nina Rydland Olsen 2 ,
  • Kristine Berg Titlestad 4 ,
  • Are Hugo Pripp 3 &
  • Therese Brovold 1  

BMC Medical Education, volume 24, Article number: 841 (2024)

Abstract

Background

Access to valid and reliable instruments is essential in the field of implementation science, where the measurement of factors associated with healthcare professionals’ uptake of EBP is central. The Norwegian version of the Evidence-based practice profile questionnaire (EBP²-N) measures EBP constructs, such as EBP knowledge, confidence, attitudes, and behavior. Despite its potential utility, the EBP²-N requires further validation before being used in a cross-sectional survey targeting different healthcare professionals in Norwegian primary healthcare. This study assessed the content validity, construct validity, and internal consistency of the EBP²-N among Norwegian primary healthcare professionals.

Methods

To evaluate the content validity of the EBP²-N, we conducted qualitative individual interviews with eight healthcare professionals in primary healthcare from different disciplines. Qualitative data was analyzed using the “text summary” model, followed by panel group discussions, minor linguistic changes, and a pilot test of the revised version. To evaluate construct validity (structural validity) and internal consistency, we used data from a web-based cross-sectional survey among nurses, assistant nurses, physical therapists, occupational therapists, medical doctors, and other professionals (n = 313). Structural validity was tested using a confirmatory factor analysis (CFA) on the original five-factor structure, and Cronbach’s alpha was calculated to assess internal consistency.

Results

The qualitative interviews with primary healthcare professionals indicated that the content of the EBP²-N was perceived to reflect the constructs intended to be measured by the instrument. However, interviews revealed concerns regarding the formulation of some items, leading to minor linguistic revisions. In addition, several participants expressed that some of the most specific research terms in the terminology domain felt less relevant to them in clinical practice. CFA results indicated only partial alignment with the original five-factor model, with the following model fit indices: CFI = 0.749, RMSEA = 0.074, and SRMR = 0.075. Cronbach’s alphas ranged between 0.82 and 0.95 for all domains except for the Sympathy domain (0.69), indicating good internal consistency in four out of five domains.

Conclusions

The EBP²-N is a suitable instrument for measuring Norwegian primary healthcare professionals’ EBP knowledge, attitudes, confidence, and behavior. Although the EBP²-N seems to be an adequate instrument in its current form, we recommend that future research focuses on further assessing the factor structure, the relevance of the items, and the number of items needed.

Registration

Retrospectively registered (prior to data analysis) in OSF Preregistration. Registration DOI: https://doi.org/10.17605/OSF.IO/428RP .

Evidence-based practice (EBP) integrates the best available research evidence with clinical expertise, patient characteristics, and preferences [ 1 ]. The process of EBP is often described as following the five steps: ask, search, appraise, integrate, and evaluate [ 1 , 2 ]. Practicing the steps of EBP requires that healthcare professionals hold a set of core competencies [ 3 , 4 ]. Lack of competencies such as EBP knowledge and skills, as well as negative attitudes towards EBP and low self-efficacy, may hinder the implementation of EBP in clinical practice [ 5 , 6 , 7 , 8 , 9 , 10 ]. Measuring EBP competencies may assist organizations in defining performance expectations and directing professional practice toward evidence-based clinical decision-making [ 11 ].

Using well-designed and appropriate measurement instruments in healthcare research is fundamental for gathering precise and pertinent data [ 12 , p. 1]. Access to valid and reliable instruments is also essential in the field of implementation science, where conducting consistent measurements of factors associated with healthcare professionals’ uptake of EBP is central [ 13 ]. Instruments measuring the uptake of EBP should be comprehensive and reflect the multidimensionality of EBP; they should be valid, reliable, and suitable for the population and setting in which it is to be used [ 14 ]. Many instruments measuring different EBP constructs are available today [ 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 ]. However, the quality of these instruments varies, and rigorous validation studies that aim to build upon and further develop existing EBP instruments are necessary [ 13 , 16 ].

The authors of this study conducted a systematic review to summarize the measurement properties of existing instruments measuring healthcare professionals’ EBP attitudes, self-efficacy, and behavior [ 16 ]. This review identified 34 instruments, five of which were translated into Norwegian [ 23 , 24 , 25 , 26 , 27 ]. Of these five instruments, only the Evidence-based practice profile questionnaire (EBP²) was developed to measure various EBP constructs, such as EBP knowledge, confidence, attitudes, and behavior [ 28 ]. In addition, the EBP² was developed to be trans-professional [ 28 ]. Although not exclusively demonstrating high-quality evidence for all measurement properties, the review authors concluded that the EBP² was among the instruments that could be recommended for further use and adaptation across different healthcare disciplines [ 16 ].

The EBP² was initially developed by McEvoy et al. in 2010 and validated for Australian academics, practitioners, and students from different professions (physiotherapy, podiatry, occupational therapy, medical radiation, nursing, human movement) [ 28 ]. The instrument was later translated into Chinese and Polish and further tested among healthcare professionals in these countries [ 29 , 30 , 31 , 32 ]. The instrument was also translated and cross-culturally adapted into Norwegian [ 27 ]. The authors assessed content validity, face validity, internal consistency, test-retest reliability, measurement error, discriminative validity, and structural validity among bachelor students from nursing and social education and health and social workers from a local hospital [ 27 ]. Although the authors established the content validity of the Norwegian version of the EBP² (EBP²-N), they recommended further linguistic improvements. Additionally, while they found the EBP²-N valid and reliable for three subscales, the original five-factor model could not be confirmed using confirmatory factor analysis. Therefore, they recommended further research on the instrument’s measurement properties [ 27 ].

We recognized the need for further assessment of the measurement properties of the EBP²-N before using this instrument in a planned cross-sectional survey targeting physical therapists, occupational therapists, nurses, assistant nurses, and medical doctors working with older people in Norwegian primary healthcare [ 33 ]. As our target population differed from the population studied by Titlestad et al. [ 27 ], the EBP²-N should be validated again, assessing content validity, construct validity and internal consistency [ 12 , p. 152]. The assessment of content validity evaluates whether the content of an instrument is relevant, comprehensive, and understandable for a specific population [ 34 ]. Construct validity, including structural validity and cross-cultural validity, can provide evidence on whether an instrument measures what it intends to measure [ 12 , p. 169]. Furthermore, the degree of interrelatedness among the items (internal consistency) should be assessed when evaluating how items of a scale are combined [ 35 ]. Our objectives were to comprehensively assess the content validity, structural validity, and internal consistency of the EBP²-N among Norwegian primary healthcare professionals. We hypothesized that the EBP²-N was a valid and reliable instrument suitable for use in Norwegian primary healthcare settings.

Study design

This study was conducted in two phases: Phase 1 comprised a qualitative assessment of the content validity of the EBP²-N, followed by minor linguistic adaptations and a pilot test of the adapted version. Phase 2 comprised an assessment of the structural validity and internal consistency of the EBP²-N based on the results from a web-based cross-sectional survey.

The design and execution of this study adhered to the COSMIN Study Design checklist for patient-reported outcome measurement instruments, as well as the methodology for assessing the content validity of self-reported outcome measures [ 34 , 36 , 37 ]. Furthermore, this paper was guided by the COSMIN Reporting guidelines for studies on measurement properties of patient-reported outcome measures [ 38 ].

Participants and setting

Participants eligible for inclusion in both phases of this study were health personnel working with older people in primary healthcare in Norway, such as physical therapists, occupational therapists, nurses, assistant nurses, and medical doctors. Proficiency in reading and understanding Norwegian was a prerequisite for inclusion. This study is part of a project called FALLPREVENT, a research project that aims to bridge the gap between research and practice in fall prevention in Norway [ 39 ].

Instrument administration

The EBP²-N consists of 58 self-reported items that are divided into five different domains: (1) Relevance (items 1–14), which refers to the value, emphasis, and importance respondents place on EBP; (2) Sympathy (items 15–21), which refers to the perceived compatibility of EBP with professional work; (3) Terminology (items 22–38), which refers to the understanding of common research terms; (4) Practice (items 39–47), which refers to the use of EBP in clinical practice; and (5) Confidence (items 48–58), which relates to respondents’ perception of their EBP skills [ 28 ]. All the items are rated on a five-point Likert scale (1 to 5) (see questionnaire in Additional file 1). Each domain is summarized, with higher scores indicating a higher degree of the construct measured in the domain in question. The items in the Sympathy domain are negatively phrased and need to be reversed before being summarized. The possible range of summarized scores (min–max) per domain is as follows: Relevance (14–70), Sympathy (7–35), Terminology (17–85), Practice (9–45), and Confidence (11–55).
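As an illustration of the scoring procedure described above, the following Python sketch computes the five domain sum scores, reversing the negatively phrased Sympathy items. The column naming convention (item1 to item58 in a pandas DataFrame) is an assumption made for this example and is not part of the questionnaire itself.

```python
# Minimal sketch of EBP²-N domain scoring (illustrative column names assumed).
import pandas as pd

DOMAINS = {
    "Relevance":   range(1, 15),   # items 1-14
    "Sympathy":    range(15, 22),  # items 15-21, negatively phrased
    "Terminology": range(22, 39),  # items 22-38
    "Practice":    range(39, 48),  # items 39-47
    "Confidence":  range(48, 59),  # items 48-58
}

def domain_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Sum each domain; reverse the 1-5 Sympathy items before summing."""
    scores = {}
    for domain, items in DOMAINS.items():
        cols = [f"item{i}" for i in items]
        values = df[cols]
        if domain == "Sympathy":
            values = 6 - values  # reverse a 1-5 Likert response
        scores[domain] = values.sum(axis=1)
    return pd.DataFrame(scores)
```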

Phase 1: content validity assessment

Recruitment and participant characteristics.

Snowball sampling was used to recruit participants in Eastern Norway, and possible eligible participants were contacted via managers in healthcare settings. The number of participants needed for the qualitative content validity interviews was based on the COSMIN methodology recommendations and was set to at least seven participants [ 34 , 37 ]. We recruited and included eight participants. All participants worked with older people in primary healthcare, and included two physical therapists, two occupational therapists, two assistant nurses, one nurse, and one medical doctor. The median age (min-max) was 35 (28–55). Two participants held upper secondary education, four held a bachelor’s degree, and two held a master’s degree. Six participants reported that they had some EBP training from their education or had attended EBP courses, and two had no EBP training.

Qualitative interviews

Before the interviews, a panel of four members (NGL, TB, NRO, and KBT) developed a semi-structured interview guide. Two panel members were EBP experts with extensive experience in EBP research and measurement (NRO and KBT). KBT obtained consent from the developer of the original EBP² questionnaire and translated the questionnaire into Norwegian in 2013 [ 27 ].

To evaluate the content validity of the EBP²-N for use among different healthcare professionals working in primary healthcare in Norway, we conducted individual interviews with eight healthcare professionals from different disciplines. Topics in the interview guide were guided by the standards of the COSMIN study design checklist and the COSMIN criteria for good content validity, which include questions related to the following three aspects [ 34 , 37 ]: whether the items of the instrument were perceived as relevant (relevance), whether all key concepts were included (comprehensiveness), and whether the instructions, items, and response options were understandable (comprehensibility) [ 34 ]. The interview guide is presented in Additional File 2. Interview preparations and training included a review of the interview guide and a pilot interview with a physical therapist not included in the study.

Eight interviews were conducted by the first author (NGL) in May and June 2022. All interviews were conducted in the participants’ workplaces. The interviews followed a “think-aloud” method [ 12 , p. 58, 40 , p. 5]. Hence, in the first part of the interview, the participants were asked to complete the questionnaire on paper while simultaneously saying aloud what they were thinking while responding to the questionnaire. Participants also had to state their choice of answer aloud and make a pen mark on the items or responses that either were difficult to understand or did not feel relevant to them. In the second part of the interviews, participants were asked to elaborate on why items were marked as difficult to understand or irrelevant, focusing on relevance and comprehensibility. In addition, the participants were asked to give their overall impression of the instrument and state if they thought any essential items (comprehensiveness) were missing. Only the second part of the interviews was audio-recorded.

Analysis and panel group meetings

After conducting the individual interviews, the first author immediately transcribed the recorded audio data. The subsequent step involved gathering and summarizing participants’ comments into one document that comprised the questionnaire instructions, items, and response options. Using the “text summary” model [ 41 , p.61], we summarized the primary “themes” and “problems” identified by participants during the interviews. These were then aligned with the specific item or section of the questionnaire to which the comments were related. For example, comments on the items’ comprehensibility were identified as one “theme”, and the corresponding “problem” was that the item was perceived as too academically formulated or too complex to understand. Comments on an item’s relevance were another “theme” identified, and an example of a corresponding “problem” was that the EBP activity presented in the item was not recognized as usual practice for the participant. The document contained these specific comments and summarized the participants’ overall impression of the instrument. Additionally, it included more general comments addressing the instrument’s relevance, comprehensibility, and comprehensiveness.

Next, multiple rounds of panel group discussions took place, and the final document with a summary of participants’ comments served as the foundation for these discussions. The content validity of the items, instructions, and response options underwent thorough examinations by the panel members. Panel members discussed aspects, such as relevance, comprehensiveness, and comprehensibility, drawing upon insights from interview participants’ comments and the panel members’ extensive knowledge about EBP.

Finally, the revised questionnaire was pilot tested on 40 master’s students (physical therapists) to evaluate the time used to respond, and the students were invited to make comments in free text adjacent to each domain in the questionnaire. The pilot participants answered a web-based version of the questionnaire.

Phase 2: Assessment of structural validity and internal consistency

Recruitment and data collection for the cross-sectional survey.

Snowball sampling was used to recruit participants. The invitation letter, with information about the study and consent form, was distributed via e-mail to healthcare managers in over 37 cities and municipalities representing the eastern, western, central, and northern parts of Norway. The managers forwarded the invitation to eligible employees and encouraged them to respond to the questionnaire. The respondents that consented to participation automatically received a link to the online survey. Our approach to recruitment made it impossible to keep track of the exact number of potential participants who received invitations to participate. As such, we were unable to determine a response rate.

Statistical methods

Statistical analyses were performed using STATA [ 42 ]. We tested the structural validity and internal consistency of the 58 domain items of the EBP²-N, using the same factor structure as in the initial evaluation [ 28 ] and the study that translated the questionnaire into Norwegian [ 27 ]. Structural validity was assessed using confirmatory factor analysis with maximum likelihood estimation to test if the data fit the predetermined original five-factor structure. Model fit was assessed by evaluating the comparative fit index (CFI), root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). Guidelines suggest that a good-fitting model should have a CFI of around 0.95 or higher, RMSEA of around 0.06 or lower, and SRMR of around 0.08 or lower [ 43 ]. Cronbach’s alpha was calculated for each of the five domains to evaluate whether the items within the domains were interrelated. It has been proposed that Cronbach’s alpha between 0.70 and 0.95 can be considered good [ 44 ].
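For readers who want to reproduce the internal consistency check, the sketch below computes Cronbach’s alpha directly from its definition using numpy. The analyses in this study were performed in STATA, so this is only an illustration of the formula on made-up data, not the authors’ code.

```python
# Cronbach's alpha from item-level scores: alpha = k/(k-1) * (1 - sum(var_i)/var_total)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for one domain."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Tiny, made-up example: 5 respondents answering a 4-item domain on a 1-5 scale.
demo = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(demo), 2))  # 0.91; values between 0.70 and 0.95 read as good
```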

The required sample size for the factor analysis was set based on the COSMIN criteria for at least an “adequate” sample size, that is, at least five times the number of items and > 100 [ 45 , 46 ]. Accordingly, the required sample size in our case was > 290 respondents. Regarding missing data, respondents with over 25% missing responses across all domain items were excluded from further analysis, and respondents with over 20% missing items within one domain were excluded from the analysis of that domain. Little’s MCAR test was conducted to test whether data were missing completely at random. Finally, for respondents with 20% or less missing data within a domain, the missing values were substituted with the respondent’s mean of the other items within the same domain.
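The missing-data rules above translate directly into a few data-handling steps. The Python sketch below is an illustration only: the domain-to-item mapping and column names are assumed rather than taken from the EBP2-N, and the thresholds are the 25% overall and 20% per-domain rules described in the text.

```python
# Sketch of the missing-data handling described above (assumed column layout):
# 1) drop respondents missing >25% of all domain items,
# 2) exclude a respondent from a domain if >20% of that domain's items are missing,
# 3) otherwise impute gaps with the respondent's mean of the other items in the domain.
import numpy as np
import pandas as pd

# Hypothetical mapping from domain name to item columns (placeholder item counts)
DOMAINS = {
    "relevance":   ["rel_1", "rel_2", "rel_3", "rel_4"],
    "terminology": ["term_1", "term_2", "term_3", "term_4"],
    # ... remaining domains
}

def apply_missing_data_rules(df: pd.DataFrame, domains: dict) -> pd.DataFrame:
    all_items = [col for cols in domains.values() for col in cols]
    # Rule 1: exclude respondents with more than 25% missing across all domain items
    overall_missing = df[all_items].isna().mean(axis=1)
    df = df.loc[overall_missing <= 0.25].copy()
    for _, cols in domains.items():
        domain_missing = df[cols].isna().mean(axis=1)
        # Rule 2: set the whole domain to missing when over 20% of its items are missing
        df.loc[domain_missing > 0.20, cols] = np.nan
        # Rule 3: person-mean imputation within the domain for the remaining respondents
        keep = domain_missing <= 0.20
        row_means = df[cols].mean(axis=1, skipna=True)
        for col in cols:
            to_fill = keep & df[col].isna()
            df.loc[to_fill, col] = row_means[to_fill]
    return df

# Usage: cleaned = apply_missing_data_rules(survey_df, DOMAINS)
```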

Ethical approval and consent to participate

The Norwegian Agency for Shared Services in Education and Research (SIKT) approved the study in March 2022 (ref: 747319). We obtained written informed consent from the participants interviewed and the cross-sectional survey participants.

The findings for Phase 1 and Phase 2 will be presented separately. Phase 1 will encompass the results of the qualitative content validity assessment, adaptations, and pilot testing of the EBP 2 -N. Phase 2 will encompass the results of assessing the structural validity and internal consistency of the EBP 2 -N.

Phase 1: Results of the content validity assessment

Comprehensiveness: whether key concepts are missing.

Only a few comments were made on comprehensiveness. Notably, one participant expressed the need for additional items addressing clinical experience and user perspectives.

Relevance: whether the items are perceived as relevant

Overall, the participants commented that they perceived the instrument as relevant to their context. However, several participants pointed out some items that felt less relevant. The terminology domain emerged as a specific area of concern, as most participants expressed that this subscale contained items that felt irrelevant to clinical practice. Comments such as “I do not feel it’s necessary to know all these terms to work evidence-based,” and “The more overarching terms like RCT, systematic review, clinical relevance, and meta-analysis I find relevant, but not the more specific statistical terms,” captured the participants’ perspectives on the relevance of the terminology domain.

Other comments related to the terminology domain revealed that these items could cause feelings of demotivation or inadequacy: “One can become demotivated or feel stupid because of these questions” and “Many will likely choose not to answer the rest of the form, as they would feel embarrassed not knowing”. Other comments on relevance were related to items in other subscales, for example, critical appraisal items (i.e., items 20, 42, and 55), which were considered less relevant by some participants. One participant commented: “If one follows a guideline as recommended, there is no need for critical assessment”.

Comprehensibility: whether instructions, items, and response options are understandable

All eight participants stated that they understood what the term EBP meant. The predominant theme in the participants’ comments was the comprehensibility of the EBP 2 -N. Most of the comments on comprehensibility revolved around the formulation of items. Participants noted comprehensibility challenges in 35 out of 58 items, due to difficulty in understanding, readability issues, the length of items, lack of clarity, or overly academic language. For instance, item 5 in the Relevance domain, “I intend to develop knowledge about EBP”, received comments expressing uncertainty about whether “EBP” referred to the five steps of EBP or to evidence-based clinical interventions/practices (e.g., practices following recommendations in evidence-based guidelines). Items that were perceived as overly academic included phrases such as “intend to apply”, “intend to develop”, or “convert your information needs”. For these phrases, participants suggested simpler formulations in layperson’s Norwegian. Some participants deemed the instrument “too advanced,” “on a too high level,” or “too abstract”, while others expressed that they understood most of the instrument’s content, indicating a divergence among participants.

Examples of items considered challenging to read, too complex, or overly lengthy were items 6 and 12 in the relevance domain, 16 and 20 in the sympathy domain, and 58 in the confidence domain. The typical comments from participants revealed a preference for shorter, less complex items with a clear and singular focus. In addition, some comments referred to the formulation of response options. For instance, two response options in the confidence domain, “Reasonably confident” and “Quite confident”, were perceived as too similar in Norwegian. In the practice subscale, a participant pointed out that the term “monthly or less” lacked precision, as it could cover any frequency from once to twelve times a year.

Panel group meetings and instrument revision

The results of the interviews were discussed during several rounds of panel group meetings. After thoroughly examining the comments, 33 items underwent revisions during the panel meetings. These revisions primarily involved minor linguistic adjustments that preserved the original meaning of the items. For example, the Norwegian version of item 8 was considered complex and overly academically formulated and was therefore revised. The phrase “I intend to apply” was replaced by “I want to use”, as the panel group considered this phrase easier to understand in Norwegian. Another example involved the term “framework,” which some participants found vague or difficult to understand (i.e., in item 3, “my profession uses EBP as a framework”). The term “framework” was replaced with “way of thinking and working”, considered more concrete and understandable in Norwegian. The phrase “way of thinking and working” was also added to item 5 to clarify that “EBP” referred to the five steps of EBP, not to interventions in line with evidence-based recommendations. The items that participants found challenging to read, too complex, or overly lengthy (i.e., items 6, 12, 16, 20, and 58) were harder to revise, as it was difficult to shorten them without losing their original meaning. However, replacing overly academic words with simpler formulations made these items less complex and more readable.

In terms of relevance of the items, no items were removed, and the terminology domain was retained despite comments regarding its relevance. Changing this domain would have impeded the opportunity to compare results from future studies using this questionnaire with previous studies using the same questionnaire. Regarding comprehensiveness, the panel group reached a consensus that the domains included all essential items concerning the constructs that the original instrument states to measure. Further, examples of minor linguistic changes and additional details on item revisions are reported in Additional File 3 .

The median time to complete the questionnaire was nine minutes, and the students made no further comments on the questionnaire.

Participants’ characteristics and mean domain scores

A total of 313 responded to the survey. The respondents’ mean age (SD) was 42.7 years (11.4). The sample included 119 nurses, 74 assistant nurses, 64 physical therapists, 38 occupational therapists, three medical doctors, and 15 other professionals, mainly social educators. In total, 63.9% (n = 200) of the participants held a bachelor’s degree, 11.8% (n = 37) held a master’s degree, and 0.3% (n = 1) held a Ph.D. Moreover, 10.5% (n = 33) of the participants had completed upper secondary education, and 13.1% (n = 41) had tertiary vocational education. One hundred and eighty-five participants (59.1%) reported no formal EBP training, while among the 128 participants who had undergone formal EBP training, 31.5% had completed over 20 h of EBP training. The mean scores (SD) for the different domains were as follows: Relevance 80.2 (7.3), Sympathy 21.2 (3.6), Terminology 44.5 (15.3), Practice 22.2 (5.8), and Confidence 31.2 (9.2).

Missing data

Out of 314 respondents, one was excluded due to over 25% missing domain items, and three were excluded from the analysis of specific domains due to more than 20% missing data in those domains. Twenty-six respondents had under 20% missing data on one domain, and these missing values were substituted with the respondent’s mean of the other items within the same domain. In total, 313 responses were included in the final analysis. Each domain item had at most 1.3% missing values. The percentage of missing data per domain was low and relatively similar across the five domains ( Relevance  = 0.05%, Sympathy  = 0.2%, Terminology  = 0.4%, Practice  = 0.6%, Confidence  = 0.6%). Little’s MCAR test showed p-values higher than 0.05 for all domains, indicating that data were missing completely at random.

Structural validity results

A five-factor model was estimated based on the original five-factor structure (Fig. 1), using the maximum likelihood method. A standardized solution was estimated, constraining the variance of the latent variables to 1, and correlations among latent variables were allowed. The CFA yielded the following model fit indices: CFI = 0.749, RMSEA = 0.074, and SRMR = 0.075. The CFI and RMSEA did not meet the a priori criteria for a good-fitting model (CFI of around 0.95 or higher, RMSEA of around 0.06 or lower), whereas the SRMR met the criterion of around 0.08 or lower. All standardized factor loadings were 0.32 or higher, and only five items loaded under 0.5. The ranges of standardized factor loadings in the different domains were: Relevance = 0.47–0.79, Terminology = 0.51–0.80, Practice = 0.35–0.70, Confidence = 0.43–0.86, and Sympathy = 0.32–0.65 (Fig. 1).
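For readers who want to reproduce a comparable analysis outside STATA, the sketch below specifies a five-factor CFA in lavaan-style syntax using the Python package semopy and fits it to synthetic placeholder data. The item names, the data, and the three-items-per-factor layout are illustrative assumptions rather than the EBP2-N itself, and the fit statistics available from semopy (for example, whether SRMR is reported) may differ from the STATA output.

```python
# Illustrative five-factor CFA sketch with semopy (not the authors' STATA analysis).
# Item names and data are synthetic placeholders; the real EBP2-N has 58 domain items.
import numpy as np
import pandas as pd
import semopy

# Generate placeholder data with a rough factor structure so the model converges
rng = np.random.default_rng(42)
n = 313
factor_names = ["rel", "term", "prac", "conf", "symp"]
latent = rng.normal(size=(n, len(factor_names)))
survey_df = pd.DataFrame({
    f"{name}_{i}": 0.7 * latent[:, j] + rng.normal(scale=0.7, size=n)
    for j, name in enumerate(factor_names) for i in (1, 2, 3)
})

# Measurement model in lavaan-style syntax (three placeholder items per factor)
model_desc = """
Relevance   =~ rel_1 + rel_2 + rel_3
Terminology =~ term_1 + term_2 + term_3
Practice    =~ prac_1 + prac_2 + prac_3
Confidence  =~ conf_1 + conf_2 + conf_3
Sympathy    =~ symp_1 + symp_2 + symp_3
"""

model = semopy.Model(model_desc)
model.fit(survey_df)                              # maximum likelihood estimation
stats = semopy.calc_stats(model)                  # fit indices, including CFI and RMSEA
print(stats[["CFI", "RMSEA"]])
# Standardized loadings (as in Fig. 1); exact inspect() arguments may vary by version
print(model.inspect(std_est=True).head())
```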

Figure 1. Confirmatory factor analysis, standardized solution of the EBP2-N (n = 313). Note: large circles = latent variables, rectangles = measured items, small circles = residual variance.

Internal consistency results

As reported in Table 1, Cronbach’s alphas ranged between 0.82 and 0.95 for all domains except the Sympathy domain, where Cronbach’s alpha was 0.69. These results indicate good internal consistency for four domains, with the Sympathy domain falling just short of the cut-off for good internal consistency (0.70).

Discussion

In this study, we aimed to assess the measurement properties of the EBP 2 -N questionnaire. The study population of interest was healthcare professionals working with older people in Norwegian primary healthcare, including physical therapists, occupational therapists, nurses, assistant nurses, and medical doctors. The study was conducted in two phases: content validity was assessed in Phase 1, and construct validity and internal consistency were assessed in Phase 2.

The findings from Phase 1 and the qualitative interviews with primary healthcare professionals indicated that the content of the EBP 2 -N was perceived to reflect the constructs intended to be measured by the instrument [ 28 ]. However, the interviews also revealed different perceptions regarding the relevance and comprehensibility of certain items. Participants expressed concerns about the formulation of some items, and we decided to make minor linguistic adjustments, aligning with previous recommendations to refine item wording through interviews [ 27 ]. Lack of content validity can have adverse consequences [ 34 ]. Irrelevant or incomprehensible items may make respondents tired of answering, leading to potentially biased answers [ 47 , 48 , p. 139]. Our analysis of missing data suggested that potentially irrelevant or incomprehensible items did not lead to respondent fatigue, as the overall percentage of missing items was low (at most 1.3%) and the percentage of missing data did not vary across the domains. Irrelevant items may also impact other measurement properties, such as structural validity and internal consistency [ 34 ]. We believe that the minor linguistic revisions we made to some items made the questionnaire easier to understand. This assumption was supported by the pilot test with 40 master’s students, in which no further comments regarding comprehensibility were made.

The overall relevance of the instrument was perceived positively. However, several participants expressed concerns about the terminology domain, as some of the most specific research terms felt irrelevant to them in clinical practice. Still, the panel group decided to keep all items in the terminology domain to allow comparison of results among future studies using the same instrument and subscales. In addition, this decision was based on the fact that knowledge about research terminology, such as “types of data,” “measures of effect,” and “statistical significance,” is an essential competency for performing step three of the EBP process (critical appraisal) [ 3 ]. Leaving out parts of the terminology domain could, therefore, possibly make our assessment of the EBP constructs less comprehensive and complete [ 14 ]. However, since the relevance of some items in the terminology domain was questioned, we cannot fully confirm the content validity of this domain, and we recommend interpreting it with caution.

The confirmatory factor analysis (CFA) in Phase 2 of this study revealed that the five-factor model only partially reflected the dimensionality of the constructs measured by the instrument. The SRMR was the only model fit index that fully met the a priori criteria for a good-fitting model, yielding a value of 0.075. In contrast, the CFI of 0.749 and the RMSEA of 0.074 fell short of the criteria for a good-fitting model (CFI ≥ 0.95, RMSEA ≤ 0.06). However, our model fit indices were closer to the criteria for a good-fitting model than those of Titlestad et al. (2017) [ 27 ], who reported a CFI of 0.69, an RMSEA of 0.089, and an SRMR of 0.095. This tendency toward better fit in our study may be related to the larger sample size, in agreement with established recommendations of a minimum of 100–200 participants and at least 5–10 times the number of items to ensure the precision of the model and overall model fit [ 46 , p. 380].

Although our sample size met COSMIN’s criteria for an “adequate” sample size [ 45 ], the partially adequate fit indices suggest that the original five-factor model might not be the best-fitting model. A recent study on the Chinese adaptation of the EBP 2 demonstrated that item reduction and a four-factor structure improved model fit (RMSEA = 0.052, CFI = 0.932) [ 30 ]. That study removed eighteen items based on a content validity evaluation (four from relevance , seven from terminology , and seven from sympathy ) [ 30 ]. In another study, in which the EBP 2 was adapted for use among Chinese nurses, thirteen items were removed (two from sympathy , eight from terminology , one from practice , and two from confidence ), and an eight-factor structure was identified [ 29 ]. However, that study did not demonstrate a noticeably better model fit than ours; the fit indices of their 45-item eight-factor structure were quite similar to those found in our study (RMSEA = 0.065, SRMR = 0.077, CFI = 0.884) [ 29 ]. The results of the two above-mentioned studies suggest that a model with fewer items and a different factor structure could potentially apply to our population as well. Although the five-factor model only partially reflects the constructs measured by the EBP 2 -N in our population, it contributes valuable insights into the instrument’s performance in a specific healthcare setting.

Cronbach’s alpha results in this study indicate good internal consistency for four domains, all with values of 0.82 or higher. However, the alpha of 0.69 in the sympathy domain did not reach the pre-specified cut-off for good internal consistency (0.70) [ 44 ]. A tendency toward relatively lower Cronbach’s alpha values for the sympathy domain, compared with the other four domains, has also been identified in previous similar studies [ 27 , 28 , 31 , 32 ]. Titlestad et al. (2017) reported a Cronbach’s alpha of 0.66 for the sympathy domain and above 0.90 for the other domains [ 27 ]. McEvoy et al. (2010), Panczyk et al. (2017), and Belowska et al. (2020) reported Cronbach’s alphas of 0.76–0.80 for the sympathy domain and 0.85–0.97 for the other domains [ 28 , 31 , 32 ]. In these three cases, the Cronbach’s alphas of the sympathy domain were all above 0.70, but the same tendency of this domain demonstrating lower alphas than the other four domains was evident. The relatively lower alpha values in the sympathy domain may be related to the negative phrasing of items [ 49 ], the low number of items in this domain compared with the others ( n  = 7) [ 12 , p. 84, 47 , p. 86], and possible heterogeneity in the construct measured [ 47 , p. 232]. The internal consistency results of our study indicate that the items in the sympathy domain are less interrelated than those in the other domains. However, a Cronbach’s alpha value of 0.69 indicates that the items do not entirely lack interrelatedness.

Limitations

Methodological limitations that could potentially introduce bias into the results should be acknowledged. Although the eight participants involved in the qualitative content validity interviews in Phase 1 covered all healthcare disciplines and education levels intended to be included in the survey in Phase 2, it remains uncertain whether these eight participants captured all potential variations in the population of interest. It is possible that those who agreed to participate in qualitative interviews about an EBP instrument held more positive attitudes toward EBP than practitioners in general. Another possible limitation pertains to the qualitative interviews and the fact that the interviewer (NGL) had limited experience facilitating “think-aloud” interviews. To reduce the potential risk of interviewer-related bias, the panel group, with extensive experience in EBP research, took part in the interview preparation, and a pilot interview was conducted beforehand as training.

Furthermore, using a non-random sampling method and the unknown response rate in Phase 2 may have led to biased estimates of measurement properties and affected the representativeness of the sample included. Additionally, the characteristics of non-responders remain unknown, making it challenging to assess whether they differ from the responders and if the final sample adequately represents the variability in the construct of interest. Due to potential selection bias and non-response bias, there may be uncertainty regarding the accuracy of the measurement property assessment and whether the study sample fully represents the entire population of interest [ 50 , p. 205].

Conclusions

The EBP 2 -N is suitable for measuring Norwegian primary healthcare professionals’ EBP knowledge, attitudes, confidence, and behavior. Researchers can use the EBP 2 -N to increase their understanding of factors affecting healthcare professionals’ implementation of EBP and to guide the development of tailored strategies for implementing EBP.

This study revealed positive perceptions of the content validity of the EBP 2 -N, though with nuanced concerns about the relevance and comprehensibility of certain items and uncertainty regarding the five-factor structure of the EBP 2 -N. The minor linguistic revisions we made to some items made the questionnaire more understandable. However, when EBP 2 -N is used in primary healthcare, caution should be exercised when interpreting the results of the terminology domain, as the relevance of some items has been questioned.

Future research should focus on further assessing the factor structure of the EBP 2 -N, evaluating the relevance of the items, and exploring the possibility of reducing the number of items, especially when applied in a new setting or population. Such evaluations could further enhance our understanding of the instrument’s measurement properties and potentially lead to improvements in the measurement properties of the EBP 2 -N.

Data availability

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

EBP: Evidence-based practice
EBP2: The Evidence-based practice profile questionnaire
EBP2-N: The Norwegian version of the Evidence-based practice profile questionnaire
COSMIN: Consensus-based Standards for the Selection of Health Measurement Instruments
CFA: Confirmatory factor analysis
CFI: Comparative fit index
RMSEA: Root mean square error of approximation
SRMR: Standardized root mean square residual
SIKT: The Norwegian Agency for Shared Services in Education and Research

Dawes M, Summerskill W, Glasziou P, Cartabellotta A, Martin J, Hopayian K, et al. Sicily statement on evidence-based practice. BMC Med Educ. 2005;5(1):1.


Straus SE, Glasziou P, Richardson WS, Haynes RB, Pattani R, Veroniki AA. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Elsevier; 2019.


Albarqouni L, Hoffmann T, Straus S, Olsen NR, Young T, Ilic D, et al. Core competencies in evidence-based practice for Health professionals: Consensus Statement based on a systematic review and Delphi Survey. JAMA Netw Open. 2018;1(2):e180281.

Straus S, Glasziou P, Richardson W, Haynes R. Evidence-based medicine: how to practice and teach EBM. 5th ed. Elsevier Health Sciences; 2019.

Paci M, Faedda G, Ugolini A, Pellicciari L. Barriers to evidence-based practice implementation in physiotherapy: a systematic review and meta-analysis. Int J Qual Health Care. 2021;33(2).

Sadeghi-Bazargani H, Tabrizi JS, Azami-Aghdash S. Barriers to evidence-based medicine: a systematic review. J Eval Clin Pract. 2014;20(6):793–802.

da Silva TM, Costa Lda C, Garcia AN, Costa LO. What do physical therapists think about evidence-based practice? A systematic review. Man Ther. 2015;20(3):388–401.

Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004;180(S6):S57–60.

Saunders H, Gallagher-Ford L, Kvist T, Vehviläinen-Julkunen K. Practicing Healthcare professionals’ evidence-based practice competencies: an overview of systematic reviews. Worldviews Evid Based Nurs. 2019;16(3):176–85.

Salbach NM, Jaglal SB, Korner-Bitensky N, Rappolt S, Davis D. Practitioner and organizational barriers to evidence-based practice of physical therapists for people with stroke. Phys Ther. 2007;87(10):1284–303.

Saunders H, Vehvilainen-Julkunen K. Key considerations for selecting instruments when evaluating healthcare professionals’ evidence-based practice competencies: a discussion paper. J Adv Nurs. 2018;74(10):2301–11.

de Vet HCW, Terwee CB, Mokkink LB, Knol DL. Measurement in Medicine: a practical guide. Cambridge: Cambridge University Press; 2011.


Tilson JK, Kaplan SL, Harris JL, Hutchinson A, Ilic D, Niederman R, et al. Sicily statement on classification and development of evidence-based practice learning assessment tools. BMC Med Educ. 2011;11:78.

Roberge-Dao J, Maggio LA, Zaccagnini M, Rochette A, Shikako K, Boruff J et al. Challenges and future directions in the measurement of evidence-based practice: qualitative analysis of umbrella review findings. J Eval Clin Pract. 2022.

Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296(9):1116–27.

Landsverk NG, Olsen NR, Brovold T. Instruments measuring evidence-based practice behavior, attitudes, and self-efficacy among healthcare professionals: a systematic review of measurement properties. Implement Science: IS. 2023;18(1):42.

Hoegen PA, de Bot CMA, Echteld MA, Vermeulen H. Measuring self-efficacy and outcome expectancy in evidence-based practice: a systematic review on psychometric properties. Int J Nurs Stud Adv. 2021;3:100024.

Oude Rengerink K, Zwolsman SE, Ubbink DT, Mol BW, van Dijk N, Vermeulen H. Tools to assess evidence-based practice behaviour among healthcare professionals. Evid Based Med. 2013;18(4):129–38.

Leung K, Trevena L, Waters D. Systematic review of instruments for measuring nurses’ knowledge, skills and attitudes for evidence-based practice. J Adv Nurs. 2014;70(10):2181–95.

Buchanan H, Siegfried N, Jelsma J. Survey instruments for Knowledge, skills, attitudes and Behaviour related to evidence-based practice in Occupational Therapy: a systematic review. Occup Ther Int. 2016;23(2):59–90.

Fernández-Domínguez JC, Sesé-Abad A, Morales-Asencio JM, Oliva-Pascual-Vaca A, Salinas-Bueno I, de Pedro-Gómez JE. Validity and reliability of instruments aimed at measuring evidence-based practice in physical therapy: a systematic review of the literature. J Eval Clin Pract. 2014;20(6):767–78.

Belita E, Squires JE, Yost J, Ganann R, Burnett T, Dobbins M. Measures of evidence-informed decision-making competence attributes: a psychometric systematic review. BMC Nurs. 2020;19:44.

Egeland KM, Ruud T, Ogden T, Lindstrom JC, Heiervang KS. Psychometric properties of the Norwegian version of the evidence-based practice attitude scale (EBPAS): to measure implementation readiness. Health Res Policy Syst. 2016;14(1):47.

Rye M, Torres EM, Friborg O, Skre I, Aarons GA. The evidence-based practice attitude Scale-36 (EBPAS-36): a brief and pragmatic measure of attitudes to evidence-based practice validated in US and Norwegian samples. Implement Science: IS. 2017;12(1):44.

Grønvik CKU, Ødegård A, Bjørkly S. Factor Analytical Examination of the evidence-based practice beliefs scale: indications of a two-factor structure. scirp.org; 2016.

Moore JL, Friis S, Graham ID, Gundersen ET, Nordvik JE. Reported use of evidence in clinical practice: a survey of rehabilitation practices in Norway. BMC Health Serv Res. 2018;18(1):379.

Titlestad KB, Snibsoer AK, Stromme H, Nortvedt MW, Graverholt B, Espehaug B. Translation, cross-cultural adaption and measurement properties of the evidence-based practice profile. BMC Res Notes. 2017;10(1):44.

McEvoy MP, Williams MT, Olds TS. Development and psychometric testing of a trans-professional evidence-based practice profile questionnaire. Med Teach. 2010;32(9):e373–80.

Hu MY, Wu YN, McEvoy MP, Wang YF, Cong WL, Liu LP, et al. Development and validation of the Chinese version of the evidence-based practice profile questionnaire (EBP2Q). BMC Med Educ. 2020;20(1):280.

Jia Y, Zhuang X, Zhang Y, Meng G, Qin S, Shi WX, et al. Adaptation and validation of the evidence-based Practice Profile Questionnaire (EBP(2)Q) for clinical postgraduates in a Chinese context. BMC Med Educ. 2023;23(1):588.

Panczyk M, Belowska J, Zarzeka A, Samolinski L, Zmuda-Trzebiatowska H, Gotlib J. Validation study of the Polish version of the evidence-based Practice Profile Questionnaire. BMC Med Educ. 2017;17(1):38.

Belowska J, Panczyk M, Zarzeka A, Iwanow L, Cieślak I, Gotlib J. Promoting evidence-based practice - perceived knowledge, behaviours and attitudes of Polish nurses: a cross-sectional validation study. Int J Occup Saf Ergon. 2020;26(2):397–405.

Knowledge, Attitudes, Confidence, and Behavior Related to Evidence-based Practice Among Healthcare Professionals Working in Primary Healthcare: Protocol of a Cross-sectional Survey [Internet]. OSF; 2023. Available from: https://doi.org/10.17605/OSF.IO/428RP

Terwee CB, Prinsen CAC, Chiarotto A, Westerman MJ, Patrick DL, Alonso J, et al. COSMIN methodology for evaluating the content validity of patient-reported outcome measures: a Delphi study. Qual Life Res. 2018;27(5):1159–70.

Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1147–57.

Mokkink LB, de Vet HCW, Prinsen CAC, Patrick DL, Alonso J, Bouter LM, et al. COSMIN Risk of Bias checklist for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1171–9.

Mokkink LB, Prinsen CA, Patrick D, Alonso J, Bouter LM, Vet HCD, et al. COSMIN Study Design checklist for patient-reported outcome measurement instruments [PDF]. 2019. https://www.cosmin.nl/tools/checklists-assessing-methodological-study-qualities/ ; https://www.cosmin.nl/wp-content/uploads/COSMIN-study-designing-checklist_final.pdf

Gagnier JJ, Lai J, Mokkink LB, Terwee CB. COSMIN reporting guideline for studies on measurement properties of patient-reported outcome measures. Qual Life Res. 2021;30(8):2197–218.

Bjerk M, Flottorp SA, Pripp AH, Øien H, Hansen TM, Foy R, et al. Tailored implementation of national recommendations on fall prevention among older adults in municipalities in Norway (FALLPREVENT trial): a study protocol for a cluster-randomised trial. Implement Science: IS. 2024;19(1):5.

Presser S, Couper MP, Lessler JT, Martin E, Martin J, Rothgeb JM, et al. Methods for testing and evaluating survey questions. In: Methods for Testing and Evaluating Survey Questionnaires; 2004. pp. 1–22.

Willis GB. Analysis of the Cognitive Interview in Questionnaire Design. Cary: Oxford University Press; 2015.

StataCorp. Stata Statistical Software. 18 ed. College Station, TX: StataCorp; 2023.

Hu L-t, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model. 1999;6(1):1–55.

Terwee CB, Bot SD, de Boer MR, van der Windt DA, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60(1):34–42.

Mokkink LB, Prinsen CA, Patrick DL, Alonso J, Bouter LM, de Vet HC et al. COSMIN methodology for systematic reviews of Patient-Reported Outcome Measures (PROMs) – user manual. 2018. https://www.cosmin.nl/tools/guideline-conducting-systematic-review-outcome-measures/

Brown TA. Confirmatory Factor Analysis for Applied Research. New York: Guilford; 2015.

Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. New York, New York: Oxford University Press; 2015.

de Leeuw ED, Hox JJ, Dillman DA. International Handbook of Survey Methodology. New York, NY: Taylor & Francis Group/Lawrence Erlbaum Associates; 2008.

Solís Salazar M. The dilemma of combining positive and negative items in scales. Psicothema. 2015;27(2):192–200.

Bowling A. Research methods in health: investigating health and health services. 4th ed. Maidenhead: Open University Press, McGraw-Hill; 2014.


Acknowledgements

The authors would like to thank all the participants of this study, and partners in the FALLPREVENT research project.

Funding

Open access funding provided by OsloMet - Oslo Metropolitan University. Internal funding was provided by OsloMet. The funding bodies had no role in the design, data collection, data analysis, interpretation of the results, or the decision to submit for publication.

Author information

Authors and affiliations.

Department of Rehabilitation Science and Health Technology, Faculty of Health Science, Oslo Metropolitan University, Oslo, Norway

Nils Gunnar Landsverk & Therese Brovold

Department of Health and Functioning, Faculty of Health and Social Sciences, Western Norway University of Applied Sciences, Bergen, Norway

Nina Rydland Olsen

Faculty of Health Sciences, OsloMet - Oslo Metropolitan University, Oslo, Norway

Are Hugo Pripp

Department of Welfare and Participation, Faculty of Health and Social Sciences, Western Norway University of Applied Sciences, Bergen, Norway

Kristine Berg Titlestad


Contributions

NGL, TB, and NRO initiated the study and contributed to the design and planning. NGL managed the data collection (qualitative interviews and the web-based survey) and conducted the data analyses. NGL, TB, NRO, and KBT formed the panel group that developed the interview guide, discussed the results of the interviews in several meetings, and made minor linguistic revisions to the items. AHP assisted in planning the cross-sectional survey, performing statistical analyses, and interpreting the results of the statistical analyses. NGL wrote the manuscript draft, and TB, NRO, and KBT reviewed and revised the text in several rounds. All authors contributed to, reviewed, and approved the final manuscript.

Corresponding author

Correspondence to Nils Gunnar Landsverk .

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Norwegian Agency for Shared Services in Education and Research (SIKT) (ref: 747319), and written informed consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1:

The EBP2-N questionnaire

Supplementary Material 2:

The interview guide

Supplementary Material 3:

Details on item revisions

Supplementary Material 4:

Reporting guideline

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Landsverk, N.G., Olsen, N.R., Titlestad, K.B. et al. Adaptation and validation of the evidence-based practice profile (EBP 2 ) questionnaire in a Norwegian primary healthcare setting. BMC Med Educ 24 , 841 (2024). https://doi.org/10.1186/s12909-024-05842-z


Received : 09 April 2024

Accepted : 30 July 2024

Published : 06 August 2024

DOI : https://doi.org/10.1186/s12909-024-05842-z


Keywords

  • Healthcare professional
  • Primary healthcare
  • Content validity
  • Construct validity
  • Structural validity
  • Internal consistency
  • Self-efficacy




Published on 8.8.2024 in Vol 8 (2024)

Exploring User Experiences of the Mom2B mHealth Research App During the Perinatal Period: Qualitative Study

Authors of this article:


Original Paper

  • Ayesha-Mae Bilal 1, 2, MSc;
  • Konstantina Pagoni 1, MSc;
  • Stavros I Iliadis 3, MD, PhD;
  • Fotios C Papadopoulos 1, MD, PhD;
  • Alkistis Skalkidou 3, MD, PhD;
  • Caisa Öster 1, PhD

1 Department of Medical Sciences, Psychiatry, Uppsala University, Uppsala, Sweden

2 Centre for Women's Mental Health During the Reproductive Lifespan (WOMHER), Uppsala University, Uppsala, Sweden

3 Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden

Corresponding Author:

Ayesha-Mae Bilal, MSc

Department of Medical Sciences

Uppsala University

Academic Hospital

Entrance 10, Floor 4

Uppsala, 751 85

Phone: 46 737240915

Email: [email protected]

Background: Perinatal depression affects a significant number of women during pregnancy and after birth, and early identification is imperative for timely interventions and improved prognosis. Mobile apps offer the potential to overcome barriers to health care provision and facilitate clinical research. However, little is known about users’ perceptions and acceptability of these apps, particularly digital phenotyping and ecological momentary assessment apps, a relatively novel category of apps and approach to data collection. Understanding user’s concerns and the challenges they experience using the app will facilitate adoption and continued engagement.

Objective: This qualitative study explores the experiences and attitudes of users of the Mom2B mobile health (mHealth) research app (Uppsala University) during the perinatal period. In particular, we aimed to determine the acceptability of the app and any concerns about providing data through a mobile app.

Methods: Semistructured focus group interviews were conducted digitally in Swedish with 13 groups and a total of 41 participants. Participants had been active users of the Mom2B app for at least 6 weeks and included pregnant and postpartum women, both with and without depression symptomatology apparent in their last screening test. Interviews were recorded, transcribed verbatim, translated to English, and evaluated using inductive thematic analysis.

Results: Four themes were elicited: acceptability of sharing data, motivators and incentives, barriers to task completion, and user experience. Participants also gave suggestions for the improvement of features and user experience.

Conclusions: The study findings suggest that app-based digital phenotyping is a feasible and acceptable method of conducting research and health care delivery among perinatal women. The Mom2B app was perceived as an efficient and practical tool that facilitates engagement in research as well as allows users to monitor their well-being and receive general and personalized information related to the perinatal period. However, this study also highlights the importance of trustworthiness, accessibility, and prompt technical issue resolution in the development of future research apps in cooperation with end users. The study contributes to the growing body of literature on the usability and acceptability of mobile apps for research and ecological momentary assessment and underscores the need for continued research in this area.

Introduction

Perinatal depression (PND) impacts anywhere from 12% to 20% of women during pregnancy and after birth [ 1 ]. In Sweden, universal screening for PND takes place during a postpartum visit to the children’s health center and is done using the Edinburgh Postnatal Depression Scale (EPDS) [ 2 ]. Although efforts are being made to improve screening in the perinatal period, there are many barriers at both the patient and system level that prevent timely detection and intervention [ 3 - 5 ]. As such, early identification remains a challenge, with one Swedish study reporting that anywhere between 30% and 45% of women do not get screened, with some groups being at greater risk than others of being missed [ 6 ]. Early identification of individuals at risk of depression in the perinatal period is imperative for the implementation of timely and cost-effective interventions and improved prognosis [ 7 ].

Technological advancements in mobile health (mHealth) apps offer the opportunity to overcome barriers in health care provision. In 2019, over 90% of the population in Sweden owned a smartphone [ 8 ]. Their ubiquity and ability to unobtrusively amass large amounts of data in real time regarding the user’s functions and behaviors in their everyday life make them feasible tools to monitor mental health symptoms and identify users at risk of poor well-being.

Data collected can include both passive data from smartphone sensors, logs, and metadata as well as ecological momentary assessments [ 9 ], which are in situ, real-time data collection methods, such as app-based self-report scales. These data can be leveraged to develop social, behavioral, and cognitive phenotypes of individuals, which can subsequently be used to infer the user’s psychological state and other health indicators in a process termed “digital phenotyping” [ 10 ]. Smartphone-based digital phenotyping maintains the objectivity and the temporal and contextual integrity of diagnostically relevant information, as it overcomes the reliance on retrospective self-reporting from patients [ 9 ]. This enables the collection of rich, multivariable, and large-scale data sets that can be combined with advanced machine learning techniques to personalize health care, improve diagnostic validity, and predict disease and treatment outcomes [ 11 ].

Smartphone apps are being increasingly used as tools for digital phenotyping in psychiatry to support diagnosis and screening, as they enable the collection of data from both smartphone sensors and logs as well as subjective self-reports, cognitive tests, and other participation-based tasks. Recent studies have used the data collected from such digital phenotyping apps to apply machine learning methods to predict symptoms of mental illness, such as depression and anxiety [ 12 - 14 ], bipolar disorder [ 15 , 16 ], psychosis [ 17 ], and schizophrenia [ 18 , 19 ], and have focused on various vulnerable groups, such as veterans [ 20 ], students [ 21 ], as well as women in the perinatal period [ 22 - 25 ].

However, there are significant practical, social, and ethical challenges that impact users’ acceptance of and continued engagement with the app and raise concerns regarding privacy and data security [ 26 , 27 ]. Understanding the users’ diverse needs and priorities will enable us to alleviate these challenges and provide appropriate incentives. Exploring the issue of low engagement is especially relevant in populations that are experiencing depression, as its symptoms can diminish the influence of incentives [ 28 ]. These issues can lead to missing data, which can create biases in and reduce the accuracy of prediction models that can result from these data [ 17 ]. Although a few studies have investigated the feasibility and user experience of digital phenotyping apps [ 20 , 24 , 29 ], continued research is needed to explore user perspectives in more diverse populations and with more complex apps. Doing so will enable us to incorporate this knowledge in the initial stages of the app and study design process to enhance the acceptability and feasibility of such apps.

Mom2B (Uppsala University) is a smartphone app–based research study that aims to collect digital phenotyping data to ultimately develop and evaluate prediction models for PND [ 30 ]. The Mom2B app was developed as a means to collect digital phenotyping data from research participants. Data collected include active data, in the form of in-app self-report surveys and voice recording tasks, and passive data, that is, data collected from smartphone sensors and logs regarding the user’s mobility and sleep patterns, internet and smartphone use, and social media activity. Privacy protection measures are put in place to ensure that GPS data only concern relative movement, not actual location, and social media data only include the frequency of activity, not actual content. Depression symptoms are assessed as the outcome measure using the EPDS at various time points throughout the perinatal period. The array of data collected from this app is subsequently used to develop and evaluate prediction models for PND [ 31 ]. The models are not used within the app itself but can be evaluated in clinical settings in future studies.

Participants can enable or disable any type of data from being collected as part of their consent preferences at any point in the study. Apart from the surveys and voice recording tasks, the main interactive features of the app are the weekly information reports and the statistical graphs. More information about these features, as well as an overview of the main pages, content, and features of the Mom2B app, is presented in Figure 1 .


This study aimed to explore the experiences and attitudes of Mom2B app users during the perinatal period. We particularly sought to investigate the acceptability of the app and participants’ concerns about providing data in this way.

Study Design

To explore users’ experiences with and attitudes toward using the Mom2B app, a qualitative focus group study was conducted. Focus group interviews were chosen to gather a larger amount of information with limited time resources [ 32 ]. We used inductive thematic analysis as described by Braun and Clarke [ 33 ] to analyze the data. The 32-item COREQ (Consolidated Criteria for Reporting Qualitative Studies) checklist [ 34 ] was followed as a guide to ensure the quality of reporting.

Ethical Considerations

The study was approved by the Swedish Ethical Review Authority (2020/06645), and all participants provided informed consent. Participants were provided with participant information, where they were informed about their right to withdraw from the study at any time. The option to withdraw anytime was emphasized again at the start of the interviews. All interviews were kept confidential, with transcripts pseudoanonymized to remove any identifiable details. Additionally, to safeguard against potential identification, interview transcripts were not uploaded to public data repositories, and the data remained within the research team. Participants did not receive any compensation for their involvement.

Participants

Users of the Mom2B app were recruited from the existing Mom2B cohort between December 2021 and May 2022. Participants had to have been active users of the app for at least 6 weeks in the perinatal period they were recruited to represent and must have completed at least 1 of their last 3 EPDS surveys. Women who had not updated their delivery date post partum, had withdrawn from the study, or had not consented to being contacted for participation in substudies (like this one) were excluded. To ensure a representative participant group and capture perspectives on the full scope of the app, including features only available to women exhibiting depression symptoms or to women in the pregnancy or postpartum period, we elected to use purposive random sampling. We stratified the cohort into 4 categories based on participants’ perinatal status (pregnant or postpartum at the time of recruitment) and whether or not they reported recently experiencing depression symptoms (women were considered depressed if their latest EPDS score on the app was 12 or above and as not depressed if the score was 10 or below).
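As a concrete illustration of the stratification rule described above, the short Python sketch below applies the stated EPDS cut-offs (12 or above as depressed, 10 or below as not depressed, with a score of 11 left unassigned) and crosses them with perinatal status before drawing invitees from each stratum. The field names, the example data, and the per-stratum quota are hypothetical; this is not the study’s actual recruitment code.

```python
# Sketch of the four-way stratification described above (placeholder field names).
from typing import Optional
import pandas as pd

def assign_stratum(row) -> Optional[str]:
    if row["latest_epds"] >= 12:
        mood = "depressed"
    elif row["latest_epds"] <= 10:
        mood = "not_depressed"
    else:
        return None                                   # a score of 11 falls in neither stratum
    return f'{row["perinatal_status"]}_{mood}'        # e.g. "pregnant_depressed"

# Hypothetical mini-cohort for illustration
cohort = pd.DataFrame({
    "latest_epds": [3, 13, 11, 8, 15, 6],
    "perinatal_status": ["pregnant", "postpartum", "pregnant",
                         "postpartum", "pregnant", "postpartum"],
})
cohort["stratum"] = cohort.apply(assign_stratum, axis=1)

# Purposive random sampling: draw up to 2 invitees per stratum (quota is illustrative)
invited = (
    cohort.dropna(subset=["stratum"])
    .groupby("stratum", group_keys=False)
    .apply(lambda g: g.sample(n=min(2, len(g)), random_state=0))
)
print(invited)
```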

We aimed for focus groups of 5 to 6 participants, with an equal distribution of women from all 4 stratification categories in each focus group. In total, 65 women consented to participate in the study; however, 24 dropped out before the interview, most often because of reasons related to their newborn infant. We continued to recruit participants to new focus groups until we agreed that information saturation had been reached [ 35 ].

A total of 41 participants were interviewed in the form of 13 focus groups, ranging in size from 2 to 5, and 1 was interviewed individually due to other expected participants in that group dropping out. Participants’ duration of app use ranged from 16 to 130 weeks. Additional participant characteristics are detailed in Tables 1 and 2 .

Table 1. Participant characteristics (N=41); values are n (%).

Age group (years)
  • 18-29: 3 (7)
  • 30-34: 20 (49)
  • 35-45: 18 (44)
Country
  • Sweden: 40 (98)
  • Other Nordic country: 1 (2)
Education
  • Postsecondary education: 35 (85)
  • Secondary or lower: 5 (12)
  • Unknown: 1 (2)
Employment
  • Full-time: 24 (59)
  • Part-time: 3 (7)
  • Student: 2 (5)
  • Unknown: 12 (29)
Parity
  • Primigravida: 18 (44)
  • Multigravida: 21 (51)
  • Unknown: 2 (5)

Table 2. Stratification categories; values are n (%) with EPDS scores (mean (SD); range). Rows are grouped by perinatal status at recruitment crossed with depression symptom status.

  • Depressive symptoms present: 7 (17); EPDS 13.8 (2.1); 12-17
  • No depressive symptoms: 6 (15); EPDS 3 (2.3); 1-7
  • Depressive symptoms present: 7 (17); EPDS 14.1 (2); 12-18
  • No depressive symptoms: 21 (51); EPDS 3.8 (2.4); 0-8

EPDS: Edinburgh Postnatal Depression Scale.

Participants were recruited via email invitation and signed consent forms digitally. They were then able to select the focus group that best suited their availability and were sent 2 reminder emails before the interview with a brief of the interview topics to allow participants to have some time to reflect and gather their thoughts before the interview. The first author (AMB, female) served as a moderator, recorded the session, and took notes, while the second author (KP, female) conducted the interviews in Swedish over video conference.

Data Collection

An interview guide ( Multimedia Appendix 1 ) with semistructured questions was developed by the research team and used to prompt topics of discussion in the focus groups. The first 2 focus group interviews were considered pretests to allow the research team to make any necessary revisions. Based on these interviews, minor changes were made to the recruitment procedures and the wording of some questions in the interview guide to improve clarity. The pretests were judged to contribute relevant data and were included in the analysis. The participants were first given background information on the Mom2B app–based research study and an overview of the purpose of this interview study. Finally, participants were debriefed about how their answers would be used and where to reach out with questions or concerns. Interviews lasted 20-50 minutes and were recorded, and the audio recordings were submitted for transcription.

Data Analysis

We carefully considered the research question and its focus on capturing the voices and perspectives of participants, as well as the relative novelty of user experience research with digital phenotyping apps, and deemed inductive thematic analysis to be the most appropriate approach. We analyzed the English-translated transcripts following Braun and Clarke’s model for reflexive thematic analysis, using a codebook-style approach [ 33 , 36 ]. The analysis was performed in NVivo (version 13; Lumivero) by 3 of the authors.

The first author (AMB) thoroughly read and reread all transcripts to familiarize with the data and then systematically analyzed the data to generate initial codes using an open, semantic approach, as we were interested in exploring users’ stated opinions and impressions. The first and last author (AMB and CÖ) frequently discussed their different perspectives and revised codes in an iterative process as transcripts continued to be coded, as well as after coding was completed.

Preliminary themes were then generated based on meaningful patterns emerging within the coded data. Themes were initially largely categorical to allow for more intuitive sorting, and many codes were sorted into more than 1 theme at this point. The second author (KP) independently evaluated the codes to assess their validity and identify themes, and then all 3 authors (AMB, KP, and CÖ) came together to discuss how the codes could be refined and to further develop the themes so that they were sufficiently distinct, made sense in the context of the data set, and truly reflected the crux of the focus group discussions. Finally, themes and subthemes were defined in an iterative process.

Results

In total, 4 themes were identified and are described below (in no particular order) with subthemes and supporting quotations. Figure 2 gives an overview of the identified themes and subthemes.


Acceptability of Sharing Data

Given the large amount of data (with much of it being sensitive health and personal information) being collected, users’ perceptions on sharing that data are an important consideration.

Invasiveness

Participants had mixed opinions about all the different types of data they had to consent to. Some reported initially feeling uncomfortable about the idea of the app accessing their social media or GPS data or giving access to their medical records. Others felt that the data collected was reasonable, considering that they were part of a research study. However, participants were reassured by their control over what data they chose to share as well as the understanding that social media and GPS data tracked activity and mobility, as opposed to content and location.

It would have been a deal-breaker...[but] it doesn’t keep track of what I write in social media, but just that I interact, how much I, for example, liked something... [Participant 41, focus group 14]

Perceived Trustworthiness and Credibility

The app being affiliated with Uppsala University and the data being collected for research purposes and being handled by researchers were important mitigating factors for users’ willingness to share sensitive and personal data. Participants trusted how their data are stored and used as well as the information they get from the app.

I wouldn’t have agreed to [the consent forms] if it wasn’t a research study, or that it wasn’t from a university or the healthcare system or something. I wouldn’t have agreed to this if it was the private sector. That made me also trust that [my data] was handled correctly... [Participant 41, focus group 14]

Transparency and Information

Closely related to participant’s perceptions of invasiveness and trust was their expressed desire to know more about why that data were collected and what they had been used for. It was more so a matter of curiosity than concern; however, it may still impact their motivation to submit data.

These audio recordings, for example, in what way can it be used in research? I’m a little curious about how you can use it. [Participant 5, focus group 1]
I would have liked to see even more information about it, like, how to use the results and how it can benefit others. [Participant 27, focus group 9]

In some cases, the information participants wanted was, in fact, available in the participant information and consent forms; however, it appeared that users’ perceptions of trust in the study had led them to skim through these forms and miss relevant information.

I didn’t read everything in detail, but kind of felt that in a study conducted by a serious group, I trust that the information will be used in a way that is safe for me...I probably skimmed most of it. [Participant 2, focus group 1]

Concerns About Validity of Data Provided

Participants expressed concern about whether the data they provided, particularly on mood-related surveys, accurately reflected the truth. Many participants felt that poor scores on mood surveys were more reflective of the effects of social isolation during the COVID-19 pandemic, as well as preexisting mental health conditions, rather than being pregnant or having given birth.

For me, my mood was more based on the fact that I was kind of trapped, because I wasn’t allowed to go to work, and it was a bit misleading because the pregnancy itself wasn’t a problem, but it was more the circumstance... [Participant 4, focus group 1]
I have a background of fatigue syndrome [a Sweden-specific diagnosis equivalent to burnout] and a neuropsychiatric disorder, ADHD combined, which makes my mood automatically fluctuate and maybe is worsened in certain situations in life, just like childbirth...the research is not adapted to people with for example anxiety problems or a neuropsychiatric diagnosis, and then it becomes misleading in the research because it shows that I’m suffering from for example depression, although I’m not. [Participant 40, focus group 13]

When informed that the research team accounts for extraneous factors that impact their mood, users reiterated their desire to know more about how the data are used. Issues with accurate tracking of physical activity patterns were another concern for users. In general, participants’ uncertainties about the quality of their data led to hesitations about continuing participation.

...you get a little worried [that you don’t actually contribute] if you think that “I’m collecting a lot of data here, but [I] don’t feel that it might be right.” [Participant 27, focus group 9]

Motivators and Incentives

Motivators and incentives refer to factors that drive users to join the study and continue participating.

Contributing to Research

Participants unanimously described their initial motivation to download the app as the desire to contribute to research, particularly on women’s health. Participants felt confident in the value and credibility of the findings that would result from their participation, which also made them feel good about themselves.

The reason why I downloaded the app was precisely to answer these research questions and to be part of the research study itself, so for me it was just to sort of answer the questions. [Participant 1, focus group 1]
[This app] is not just trying to sell us products and buy more, but this really has value on a higher level, which hopefully can help others. [Participant 33, focus group 11]
I have a sister who got postpartum depression, and so I thought it was kind of good to be part of a study like that and to keep an eye on yourself as well. [Participant 9, focus group 2]

Advantages of App-Based Research

Since the app is continuously present on the phone and sends notifications when new surveys arrive, participants reflected on how that afforded them flexibility and convenience, especially for new mothers or participants with other children. They found it less effortful than submitting data by other means of collection, such as email, paper, or in-person surveys.

It feels more accessible than getting a link in an email that you have to open. But the surveys are there when you open the app, and then there is a reminder that “you have a survey to answer in the app.” [Participant 29, focus group 10]
It’s nice to be able to answer when you can and do it from home, and not have to set aside so much time each time, but you can sort of start and then pause if you don’t get it done and then it stays. [Participant 36, focus group 12]

Furthermore, participants reported feeling more comfortable and answering more honestly on PND questionnaires when completing them in the app rather than in person with the midwife.

[My midwife] is not really judgmental, but...I would find it difficult to answer anything other than very positively to a survey that you fill in while someone is staring at you. [Participant 6, focus group 2]

Personal Health Benefits

One participant described the experience as an “information exchange”: users received both general and personalized information and support for their well-being in exchange for providing data. Participants valued the statistical overview of their mood and activity patterns based on their data over time, as it enabled them to reflect on their well-being and how to improve it. It also incentivized them to respond to surveys more seriously, knowing that doing so helped them as much as it helped the research team.

It has been valuable to me both during the pregnancy and after because I’ve had tendencies towards postpartum depression, and I was also a bit vulnerable before the birth...it has been interesting to be able to see [your statistics] and use it in your self-analysis... [Participant 12, focus group 3]
It was very clear to me how [my mood] was connected with sleep and so on, then it felt easy, getting it so black and white, it made it easier to sort of plan or prioritize...and keep an eye on my mood a bit. [Participant 14, focus group 4]
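
Purely as an illustration of the kind of summary behind such statistics views, the short Python sketch below computes an average mood score per week from dated survey responses. The dates, field layout, and use of a 1-5 rating scale are assumptions made for this example, not the Mom2B app's actual data model or implementation.

```python
# Illustrative only: a weekly average of mood ratings, the kind of summary a
# statistics view might present. Dates, field layout, and the 1-5 scale are
# assumptions for this example, not the app's actual data model.
from collections import defaultdict
from datetime import date
from statistics import mean

responses = [  # (date answered, mood rating on an assumed 1-5 scale)
    (date(2023, 5, 1), 4),
    (date(2023, 5, 3), 3),
    (date(2023, 5, 9), 2),
    (date(2023, 5, 11), 3),
]

weekly: dict[int, list[int]] = defaultdict(list)
for answered_on, rating in responses:
    weekly[answered_on.isocalendar().week].append(rating)

# Average mood per ISO week, e.g. {18: 3.5, 19: 2.5}
overview = {week: round(mean(ratings), 1) for week, ratings in sorted(weekly.items())}
print(overview)
```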

Furthermore, answering questions about mood prompted users to reflect on their mental well-being and check in with themselves regularly. It was particularly constructive for new mothers and participants with other children to be reminded to self-reflect. In fact, some participants disliked that these surveys became less frequent after birth and would have liked to continue answering them regularly. Moreover, seeing surveys concerning mental health helped normalize and reduce the stigma surrounding poor well-being. Participants felt that “because [the researchers] ask this, there must be others too” (participant 33, focus group 11), and just having the app made them feel less alone through their perinatal journey.

Most of all, users valued the notification they received when their scores on the EPDS were high, as it forced them to acknowledge and take their symptoms seriously and to consider seeking help.

I don’t think I understood myself that I felt as bad as I really did...so for me it has been the absolute best thing about this app that you get detected, so there was still a purpose to follow how you feel... [Participant 21, focus group 7]
I was diagnosed with depression during pregnancy...even starting to seek treatment for it at all, it was a combination of the app signaling it, and then you were given the opportunity to talk to someone on the app...the person I spoke to on the Mom2B app said “get in touch with your midwife because you’re not feeling well,” so that led me to seek care... [Participant 39, focus group 13]

However, some participants felt that the 5-point scale for the well-being surveys was unable to capture the nuances in their moods, and as such, they felt they incorrectly received notifications to seek help.

In addition, participants found the weekly information reports fun to read and educational, describing them as “reliable” and “factual.” However, there was general dissatisfaction with their brevity and a desire for them to be more detailed and informative. As such, users did not consider Mom2B their primary source of general perinatal information but would have preferred to, so as to “have everything in one place,” especially since they trusted it more than a commercial app. The reports also allowed participants to keep track of which perinatal week they were in, which was otherwise confusing for some. Some participants would have liked more practical tips and advice from the reports, as well as content relevant to those with multiple pregnancies.

It’s not as comprehensive as a lot of other apps are, so if I want to know something about the baby’s development, or what’s happening in that week of pregnancy, I’ll go to some other app. [Participant 31, focus group 10]

Barriers to Task Completion

The completeness of data collected is a vital characteristic of its quality. Mom2B participants had to complete surveys and voice recordings regularly, and our results highlight 4 main reasons that hindered them from doing so.

Unwillingness to Share Data

Women described abandoning surveys because they either did not want to answer or did not know or remember the information requested. Most of these responses related to recording weight, and it was clear that women found the task of weighing themselves distressing and wished to avoid it.

I don’t know my weight, and don’t want to know my weight, it’s not good for me to know my weight, and then I can’t answer them...it would have been nice to just write “I don’t know” instead of not being able to answer that survey... [Participant 14, focus group 4]

Practical Barriers

Participants found it difficult to complete voice recording tasks, as they struggled to find the time or a quiet environment. This was especially the case for women with a newborn infant or with other young children at home and was exacerbated by the stay-at-home policies imposed due to the COVID-19 pandemic.

Unclear Instructions

Participants experienced confusion with completing certain tasks that they felt lacked clear instructions. One participant described uncertainty in how fast to speak or what tone to use when recording voice. Another described how it “wasn’t entirely clear when to use periods or commas when entering weight” (participant 26, focus group 9).

Task Load and Monotony

With frequently recurring surveys in particular, such as the weekly well-being checks, some participants reported that the monotony of the questions inhibited them from reflecting on how they really felt.

I really tried to stop and think “how am I really feeling? How has the last week been?” it was a great way to pause, but I’m afraid that somewhere subconsciously I still answered a little habitually...a week goes by so fast, so it feels as if you have just answered them. [Participant 15, focus group 4]

Participants agreed that the number of surveys to complete in any given week could feel overwhelming and daunting. One participant reported feeling “constantly behind.” On the other hand, participants also found the individual surveys short and easy to answer and appreciated that the surveys were categorized by priority, so they knew which were most important to answer.

User Experience

Users discussed usability, customizability, and accessibility as impactful determinants of their user experience.

Usability

Usability refers to how easily and frictionlessly the user interacts with the app and uses its features. For the most part, participants described the app as user-friendly and “easy to understand.” Although some felt the interface was a little unsophisticated and boring, others felt that its simplicity made the app feel secure and serious.

I don’t feel that the app is really bad, but when time, money and resources are available, you can improve it. [Participant 21, focus group 7]

Participants often felt that the app provided inadequate guidance and information for performing tasks and resolving common issues. Uncertainty about how long a task would take and when it would expire often led them to hesitate to start it or to miss it altogether.

...it’s good to have some kind of time indication...so that you can think “okay, I have two days, I might not be able to do it right now, but I’ll still try to do it within two days.” [Participant 1, focus group 1]

Insufficient information may have also affected the discoverability of features and content in the app, as several users were unaware they could adjust their labor date, view statistics based on their data, and continue the study after birth or if they had a second child.

There are several features in the app that I only noticed now that I looked through it a little more closely to be part of this interview, like statistics...in other apps it’s a bit more smooth-flowing, it’s hard to miss functions. [Participant 6, focus group 2]

Participants would have liked more and clearer information on resolving technical issues and frequently experienced problems, to avoid the inconvenience of waiting for email responses from the support team. One participant described switching to a new phone and being unable to log back into the app for 2 months while waiting for a response from technical support:

There are quite a lot of people who change their phones often, so it’s a very unnecessary omission [of information] when it’s so common. [Participant 20, focus group 6]

Technical issues were a major source of friction, and it was apparent that they affected participants’ motivations to continue, especially when the issue was related to providing data. Social media and movement data were not recorded correctly for most participants, which lowered the incentivizing impact of the statistics graphs. Users often had difficulties logging back into their accounts, for example, if they switched to a new phone, and found that their previous activity had not been saved on the app.

I was in contact with you [the support team], and you said that “just ignore that [tasks] you have already answered, because that data is sent in, just continue to answer the new ones that come in.” But then there was so much now, and it wasn’t really possible to tell them apart. [Participant 36, focus group 12]

Other issues that participants described as frustrating were difficulties logging in, the app draining the battery or crashing, and glitches with surveys. Some participants experienced incorrectly triggered notifications to answer surveys; however, most participants were content with the frequency of notifications and considered them necessary reminders.

Customizability

Customizability refers to enabling users to personalize the app according to their needs and priorities. Participants with multiple pregnancies or health conditions wished for information from the app that felt more relevant to them. Participants also wanted the option to connect the app to smartwatches or pedometers to track movement more accurately. The task of recording weight was particularly divisive. Most found it undesirable and wanted opt-out response options such as “Prefer not to answer” so that they could dismiss such surveys rather than simply abandoning them. Others wanted to track weight more often and proposed being able to record it manually.

In order for it not to be triggering and if you yourself want more statistics, you could make it so that you could add them more often yourself. [Participant 12, focus group 3]
I felt like “God, I can’t even send [the survey] away...and there isn’t an alternative”... [Participant 32, focus group 10]

Accessibility

Accessibility refers to how comfortably users with different needs and abilities are able to use the app and its features. One participant described the font as being too small to read comfortably. Two others commented on the complexity of the text for people with reading disabilities and suggested having the option to choose simplified Swedish.

You [should be able to] choose whether you want simplified text or not, because there are a lot of people who have hidden dyslexia, and may not understand all the concepts. [Participant 40, focus group 13]

One participant also noted how maternity clothes often lack pockets to carry one’s phone in, which makes it problematic to share movement data:

even if I had [been physically active], or I have a job where I stand and walk a lot at times, it kind of didn’t show up in the app at all because the phone was on a bench. [Participant 28, focus group 9]

Discussion

Principal Findings

App-based digital phenotyping is a rapidly growing method in health care research, yet few studies have evaluated user experience or the barriers to and facilitators of user engagement [ 37 ]. This study explored pregnant and postpartum women's views and experiences with the Mom2B app, including how they perceived various features and the factors that affected their continued use of the app. Overall, participants deemed app-based digital phenotyping an acceptable and feasible method of sharing data for research, especially longitudinal research, as it afforded them convenience and flexibility while also allowing them to benefit personally from the data they share by monitoring their well-being. Our findings highlight a duality in how the Mom2B app is perceived by users: as both a tool for research and an mHealth app. While data collection for research is the primary function of the app and plays a bigger role in the initial acquisition of users, the health features are what motivate long-term retention and continued engagement, which are essential for minimizing the risk of missing data.

It is important to consider the cultural context of this study, which focused on the Swedish population. Sweden, like other Nordic countries, has one of the highest rates of smartphone penetration in the world across all age groups as well as high rates of digital health care practices among the general population [ 8 ]. This makes smartphone-based digital phenotyping exceptionally efficient in this population, given how commonplace the technology is and how routinely it is used in health care activities. Trust and engagement in research, as well as openness to technological developments, also make the Swedish population particularly receptive to implementing such technologies [ 38 ], although the barriers and challenges to usability and continued engagement are not very different from those in other populations.

Transparency, as a characteristic of digital phenotyping research, was valued by all participants. Participants evaluated this research positively for transparency but expressed their desire to better understand why each type of data is needed and how exactly it is being used in the research, as it related directly to their willingness to consent to sharing different types of personal data. Participants also appreciated the control they have over deciding which type of data they want to share or not, which is consistent with studies showing that users prefer dynamic and flexible consent models that give them more control [ 39 ].

These findings also aligned with previous research [ 40 , 41 ], showing that participants felt more willing to share data with and use an app that was developed by university-affiliated researchers, as it led to better expectations of protection of their data in comparison with commercially developed apps. Moreover, despite concerns regarding the sensitive and personal nature of data requested from participants, studies have shown that they are generally motivated to consent to sharing data for the purpose of research and improving health care provision [ 24 , 39 ]. In fact, the majority of the participants in this study were motivated primarily by the desire to support the research effort and possibly help other women. While these findings agree with previous studies done in various countries, it is also important to note here the exceptionally high public trust and commitment to research in Sweden. A 2022 report by the Swedish nonprofit organization, Vetenskap & Allmänhet (Public & Science), shows that 89% of women in Sweden have high confidence in researchers and universities and believe it is important to be involved in research [ 38 ]. However, for continued engagement with the app, more direct and personal incentives are important for users [ 20 ]. Three features were particularly incentivizing for our participants to continue sharing data and engaging with the app: the statistics and the high EPDS score alert, which enabled users to self-monitor their well-being throughout the perinatal period, as well as the weekly reports, which participants found enjoyable, interesting, and educational.
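
To make the alert mechanism concrete, the following minimal Python sketch shows how a simple threshold rule over EPDS responses could trigger an in-app prompt. The 10-item, 0-3-per-item scoring reflects the standard EPDS, but the cutoff value and function names are illustrative assumptions; the study's actual screening logic and threshold are not described here.

```python
# Minimal sketch of a threshold-based alert over EPDS responses.
# ASSUMPTION: the alert cutoff below is illustrative only and is not the value
# used in the Mom2B app.

EPDS_ALERT_CUTOFF = 12  # assumed cutoff for illustration


def epds_total(item_scores: list[int]) -> int:
    """Sum the 10 EPDS item scores (each item scored 0-3)."""
    if len(item_scores) != 10 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("EPDS expects 10 items, each scored 0-3")
    return sum(item_scores)


def should_alert(item_scores: list[int]) -> bool:
    """Return True if the total meets or exceeds the assumed alert cutoff."""
    return epds_total(item_scores) >= EPDS_ALERT_CUTOFF


# Example: a response set totalling 14 would trigger the in-app prompt.
print(should_alert([2, 1, 2, 1, 2, 1, 2, 1, 1, 1]))  # True
```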

Fundamentally, it appears that women are motivated by a sense of social responsibility, concern for their health, and curiosity and interest. Intrinsically motivated behaviors have been described in the literature as generating persistence and long-term stability in behavior [ 20 , 42 ], which is especially valuable in longitudinal studies like Mom2B. Furthermore, self-monitoring mechanisms in mHealth apps have been shown to motivate long-term use of such tools because of the value of understanding one's own psychological well-being [ 20 , 24 , 43 ]. These findings emphasize the importance of designing features that provide clear personal benefits to the user to increase the perceived usefulness of the app. Given the perceived duality of this app, it is important to keep in mind that although its primary function is to conduct research and acquire data from users, it is the personal benefits users get from the app that largely motivate them to share data and engage with it over time. Engagement with the app is needed for the continuous collection of passive data, as long periods of inactivity can compromise or stop passive data collection altogether [ 44 ].

Participants in this study offered several suggestions on how the Mom2B app could be improved, as the general preference was to have a single app from a trustworthy source that met all expectations in terms of features, instead of having to use other commercially developed apps that participants considered less reliable. The weekly information reports should be at least on par with commercial apps in terms of the detail and length of information and be customizable for women experiencing multiple pregnancies. Customizability of the app is an important area of improvement, as giving users a sense of control over the app directly impacts the perceived ease of use, efficiency, and user satisfaction [ 29 , 41 , 45 ]. One feature users wanted more control over was weight tracking, which received mixed reactions. Enabling users to additionally input weight manually would increase interaction and engagement among those who wish to track it more often. On the other hand, for those who find it undesirable to track weight, enabling them to skip weight questions would minimize frustration and the perceived task load caused by unwanted lingering surveys. In general, task load and survey repetition should be carefully calibrated in mobile research apps, as too many surveys accumulating after brief periods of inactivity were overwhelming for participants and deterred participation. Giving users alternative response options that allow them to skip certain sensitive surveys and remove them from their task list can reduce the perceived task load and improve the user experience.

Notifications were an important feature for participants and were evaluated as sufficient and facilitative. Participants were notified only when new surveys or weekly reports became available in the app, which was quite frequent. As such, the Mom2B app does not send additional reminder notifications, as these may risk being considered bothersome. Finding the right balance for notification frequency can be complicated, which is an important reason to offer customizability and enable users to adjust notification preferences within the app [ 29 , 46 ]. Another issue is that of technical problems and system errors, which can decrease the perceived ease of use and the motivation to continue using the app. It is important for users that system errors are appropriately explained and that a solution is available without much effort, or that support for technical issues is easy to access and resolves the issue quickly [ 29 ].

According to our findings, most participants found it difficult to make voice recordings after birth, when the infant's needs and frequent crying can be a barrier to recording. Implementing accessible designs is especially necessary for user groups such as women in the perinatal period, given the various barriers and limitations they may experience. The frequent lack of pockets in maternity clothing was another limitation experienced by participants, preventing them from accurately recording and tracking their mobility. Designing for inclusivity can be facilitated by testing designs with users or including them in the design process. Our findings emphasize the importance of user testing of the app at an early stage, as this would also refine the overall usability and improve the acceptability of the app and the study.

Strengths and Limitations

This study included a large number of women interviewed about their perspectives as users of the Mom2B app. We made an effort to recruit a participant group that was representative of women who had used the app in both the pregnancy and the postpartum period, as well as of both those who had and had not experienced symptoms of depression while using the app. This was done to ensure that we captured user perspectives and experiences reflecting the full scope of app features, some of which may not have been available, for example, to those who did not display depression symptoms or did not participate during pregnancy, and to ensure that women displaying depressive symptoms were sufficiently represented. Not surprisingly, women with experiences of depression were a relatively small group due to a higher rate of participation cancellation; however, purposive sampling may have prevented this group from being lost entirely to attrition.

Furthermore, although focus group interviews are traditionally conducted in person, findings from recent studies substantiate that output, engagement, and participant satisfaction are not affected by engaging remotely [ 47 , 48 ]. In fact, remote interviews were especially suited for our population, as they allowed us to recruit from a more diverse pool of participants in Sweden and build a more representative sample. Face-to-face participation would have been challenging for our participants, as most were either in the late stages of pregnancy or newly delivered mothers, and the inconvenience of unnecessarily traveling even short distances would have likely led to far more dropouts.

Ultimately, the number of participants in most focus groups was still smaller than what is generally considered ideal (5-8 participants) [ 32 ]. However, this proved advantageous for this particular group, as it afforded each participant more time to share and discuss their experiences given their limited availability. Nevertheless, it is possible that the smaller focus groups did not achieve the same quality of discussion as larger groups. One focus group interview turned into an individual interview because the other participants dropped out, which resulted in a lack of group discussion. We decided to include this interview anyway, as the participant shared unique insights on their experience that we considered important.

Moreover, since an open invitation to participate was sent to Mom2B participants, we considered the possibility that those who volunteered were predominantly more technologically savvy and more frequent users of the app; however, based on our conversations with interview participants, this did not seem to be the case. Interviews were conducted by a female research assistant, which was considered important for allowing the participating women to feel at ease and for reducing participant bias.

One important limitation to consider is the lack of usability testing in this study. Having participants actively use and explore the Mom2B app during the interview as well as giving them tasks to perform such as answering a survey, checking their monthly activity, or reading the recent weekly report may have enhanced the detail and specificity of the feedback they provided as well as triggered memories of past experiences. Future studies are encouraged to use usability testing in conjunction with focus group interviewing when exploring user experiences of such apps.

Conclusions

This study adds to the limited literature examining user experiences and attitudes toward digital phenotyping apps in the area of mental health research, particularly in the perinatal period. Participants shared their insights on barriers and facilitators of app use and study participation as well as suggestions for the improvement of features and user experience. These results serve as a foundation for app developers and health care researchers in creating apps for research and contribute to our understanding of the opportunities and challenges in designing and implementing apps to support longitudinal research using digital phenotyping.

Acknowledgments

The authors thank all the participants for their time and contribution. The study was supported by grants from Uppsala Region, the Swedish Association of Local Authorities and Regions, the Swedish Research Council (grant 2020-01965), as well as the Swedish Brain Foundation, the Swedish Medical Association (Söderström-Königska, grant SLS-940670), and Uppsala University WOMHER School. The authors would also like to acknowledge the Swedish Network for National Clinical Studies for their support with the dissemination of study information as well as the National Academic Infrastructure for Supercomputing in Sweden and the Swedish National Infrastructure for Computing for providing resources that enabled the data handling for the Mom2B and associated studies.

Data Availability

The data sets generated and analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

FCP and AS conceived of the study, and FCP and CÖ designed the interview guide. KP interviewed participants with the help of AMB. AMB analyzed and interpreted data, with the assistance of CÖ and KP. AMB drafted the final manuscript, and all authors participated in critical revisions of the manuscript. All authors have read and approved the final manuscript.

Conflicts of Interest

None declared.

Interview guide (English translation).

References

  • Gavin NI, Gaynes BN, Lohr KN, Meltzer-Brody S, Gartlehner G, Swinson T. Perinatal depression: a systematic review of prevalence and incidence. Obstet Gynecol. 2005;106(5 Pt 1):1071-1083. [ CrossRef ] [ Medline ]
  • Cox JL, Holden JM, Sagovsky R. Detection of postnatal depression. Development of the 10-item Edinburgh Postnatal Depression Scale. Br J Psychiatry. 1987;150:782-786. [ CrossRef ] [ Medline ]
  • Massoudi P, Wickberg B, Hwang P. Screening for postnatal depression in Swedish child health care. Acta Paediatr. 2007;96(6):897-901. [ CrossRef ] [ Medline ]
  • Yonkers KA, Smith MV, Gotman N, Belanger K. Typical somatic symptoms of pregnancy and their impact on a diagnosis of major depressive disorder. Gen Hosp Psychiatry. 2009;31(4):327-333. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Byatt N, Biebel K, Lundquist RS, Moore Simas TA, Debordes-Jackson G, Allison J, et al. Patient, provider, and system-level barriers and facilitators to addressing perinatal depression. J Reprod Infant Psychol. 2012;30(5):436-449. [ FREE Full text ] [ CrossRef ]
  • Bränn E, Fransson E, Wikman A, Kollia N, Nguyen D, Lilliecreutz C, et al. Who do we miss when screening for postpartum depression? a population-based study in a Swedish region. J Affect Disord. 2021;287:165-173. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • O'Connor E, Senger CA, Henninger ML, Coppola E, Gaynes BN. Interventions to prevent perinatal depression: evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2019;321(6):588-601. [ CrossRef ] [ Medline ]
  • Deloitte global mobile consumer survey 2019: the Nordic cut. Deloitte. 2019. URL: https:/​/www2.​deloitte.com/​content/​dam/​Deloitte/​se/​Documents/​technology-media-telecommunications/​Global-Mobile-Consumer-Survey-2019-Nordic-Cut.​pdf [accessed 2024-07-12]
  • Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1-32. [ CrossRef ] [ Medline ]
  • Torous J, Kiang MV, Lorme J, Onnela J. New tools for new research in psychiatry: a scalable and customizable platform to empower data driven smartphone research. JMIR Ment Health. 2016;3(2):e16. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Callahan A, Shah NH. Machine learning in healthcare. In: Key Advances in Clinical Informatics. London. Academic Press; 2017:279-291.
  • Moshe I, Terhorst Y, Opoku Asare K, Sander LB, Ferreira D, Baumeister H, et al. Predicting symptoms of depression and anxiety using smartphone and wearable data. Front Psychiatry. 2021;12:625247. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wahle F, Kowatsch T, Fleisch E, Rufer M, Weidt S. Mobile sensing and support for people with depression: a pilot trial in the wild. JMIR Mhealth Uhealth. 2016;4(3):e111. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Pratap A, Atkins DC, Renn BN, Tanana MJ, Mooney SD, Anguera JA, et al. The accuracy of passive phone sensors in predicting daily mood. Depress Anxiety. 2019;36(1):72-81. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Busk J, Faurholt-Jepsen M, Frost M, Bardram JE, Vedel Kessing L, Winther O. Forecasting mood in bipolar disorder from smartphone self-assessments: hierarchical Bayesian approach. JMIR Mhealth Uhealth. 2020;8(4):e15028. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Grünerbl A, Muaremi A, Osmani V, Bahle G, Ohler S, Tröster G, et al. Smartphone-based recognition of states and state changes in bipolar disorder patients. IEEE J Biomed Health Inform. 2015;19(1):140-148. [ CrossRef ] [ Medline ]
  • Benoit J, Onyeaka H, Keshavan M, Torous J. Systematic review of digital phenotyping and machine learning in psychosis spectrum illnesses. Harv Rev Psychiatry. 2020;28(5):296-304. [ CrossRef ] [ Medline ]
  • Wang R, Aung M, Abdullah S, Brian R, Campbell A, Choudhury T. CrossCheck: toward passive sensing and detection of mental health changes in people with schizophrenia. 2016. Presented at: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 12-16, 2016:886-897; Heidelberg, Germany. [ CrossRef ]
  • Barnett I, Torous J, Staples P, Sandoval L, Keshavan M, Onnela J. Relapse prediction in schizophrenia through digital phenotyping: a pilot study. Neuropsychopharmacology. 2018;43(8):1660-1666. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Betthauser LM, Stearns-Yoder KA, McGarity S, Smith V, Place S, Brenner LA. Mobile app for mental health monitoring and clinical outreach in veterans: mixed methods feasibility and acceptability study. J Med Internet Res. 2020;22(8):e15506. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Asselbergs J, Ruwaard J, Ejdys M, Schrader N, Sijbrandij M, Riper H. Mobile phone-based unobtrusive ecological momentary assessment of day-to-day mood: an explorative study. J Med Internet Res. 2016;18(3):e72. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Marcano Belisario JS, Doherty K, O'Donoghue J, Ramchandani P, Majeed A, Doherty G, et al. A bespoke mobile application for the longitudinal assessment of depression and mood during pregnancy: protocol of a feasibility study. BMJ Open. 2017;7(5):e014469. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Faherty LJ, Hantsoo L, Appleby D, Sammel MD, Bennett IM, Wiebe DJ. Movement patterns in women at risk for perinatal depression: use of a mood-monitoring mobile application in pregnancy. J Am Med Inform Assoc. 2017;24(4):746-753. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Doherty K, Barry M, Marcano-Belisario J, Arnaud B, Morrison C, Car J, et al. A mobile app for the self-report of psychological well-being during pregnancy (BrightSelf): qualitative design study. JMIR Ment Health. 2018;5(4):e10007. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jiménez-Serrano S, Tortajada S, García-Gómez JM. A mobile health application to predict postpartum depression based on machine learning. Telemed J E Health. 2015;21(7):567-574. [ CrossRef ] [ Medline ]
  • Tomičić A, Malešević A, Čartolovni A. Ethical, legal and social issues of digital phenotyping as a future solution for present-day challenges: a scoping review. Sci Eng Ethics. 2021;28(1):1. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Simblett S, Greer B, Matcham F, Curtis H, Polhemus A, Ferrão J, et al. Barriers to and facilitators of engagement with remote measurement technology for managing health: systematic review and content analysis of findings. J Med Internet Res. 2018;20(7):e10480. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sherdell L, Waugh CE, Gotlib IH. Anticipatory pleasure predicts motivation for reward in major depression. J Abnorm Psychol. 2012;121(1):51-60. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Simblett S, Matcham F, Siddi S, Bulgari V, Barattieri di San Pietro C, Hortas López J, et al. Barriers to and facilitators of engagement with mHealth technology for remote measurement and management of depression: qualitative analysis. JMIR Mhealth Uhealth. 2019;7(1):e11325. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bilal AM, Fransson E, Bränn E, Eriksson A, Zhong M, Gidén K, et al. Predicting perinatal health outcomes using smartphone-based digital phenotyping and machine learning in a prospective Swedish cohort (Mom2B): study protocol. BMJ Open. 2022;12(4):e059033. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zhong M, van Zoest V, Bilal A, Papadopoulos F, Castellano G. Unimodal vs. multimodal prediction of antenatal depression from smartphone-based survey data in a longitudinal study. 2022. Presented at: Proceedings of the 2022 International Conference on Multimodal Interaction; November 7-11, 2022:455-467; Bengaluru, India. [ CrossRef ]
  • Krueger RA. Focus Groups: A Practical Guide for Applied Research. Thousand Oaks, CA. Sage Publications; 2014.
  • Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77-101. [ CrossRef ]
  • Tong A, Sainsbury P, Craig J. Consolidated Criteria for Reporting Qualitative Research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349-357. [ CrossRef ] [ Medline ]
  • Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res. 2016;26(13):1753-1760. [ CrossRef ] [ Medline ]
  • Braun V, Clarke V. One size fits all? what counts as quality practice in (reflexive) thematic analysis? Qual Res Psychol. 2020;18(3):328-352. [ CrossRef ]
  • Torous J, Wisniewski H, Bird B, Carpenter E, David G, Elejalde E, et al. Creating a digital health smartphone app and digital phenotyping platform for mental health and diverse healthcare needs: an interdisciplinary and collaborative approach. J Technol Behav Sci. 2019;4(2):73-85. [ FREE Full text ] [ CrossRef ]
  • Rådmark L, Bohlin G, Falk E. The VA barometer 2023/24. Vetenskap & Allmänhet. 2023. URL: https://vetenskapallmanhet.se/2024/02/va-barometer-2023-2024-in-english/ [accessed 2024-07-09]
  • Kassam I, Ilkina D, Kemp J, Roble H, Carter-Langford A, Shen N. Patient perspectives and preferences for consent in the digital health context: state-of-the-art literature review. J Med Internet Res. 2023;25:e42507. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Martinez-Martin N, Insel TR, Dagum P, Greely HT, Cho MK. Data mining for health: staking out the ethical territory of digital phenotyping. NPJ Digit Med. 2018;1:68. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lau Y, Wong SH, Cheng LJ, Lau ST. Exploring experiences and needs of perinatal women in digital healthcare: a meta-ethnography of qualitative evidence. Int J Med Inform. 2023;169:104929. [ CrossRef ] [ Medline ]
  • Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol. 2000;55(1):68-78. [ CrossRef ] [ Medline ]
  • Lee K, Kwon H, Lee B, Lee G, Lee JH, Park YR, et al. Effect of self-monitoring on long-term patient engagement with mobile health applications. PLoS One. 2018;13(7):e0201166. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Currey D, Torous J. Increasing the value of digital phenotyping through reducing missingness: a retrospective review and analysis of prior studies. BMJ Ment Health. 2023;26(1):e300718. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hui SLT, See SL. Enhancing user experience through customisation of UI design. Procedia Manuf. 2015;3:1932-1937. [ CrossRef ]
  • Villalobos-Zúñiga G, Cherubini M. Apps that motivate: a taxonomy of app features based on self-determination theory. Int J Hum-Comput Stud. 2020;140:102449. [ CrossRef ]
  • Forrestal SG, D’Angelo AV, Vogel LK. Considerations for and lessons learned from online, synchronous focus groups. Surv Pract. 2015;8(3):1-8. [ CrossRef ]
  • Kite J, Phongsavan P. Insights for conducting real-time focus groups online using a web conferencing service. F1000Res. 2017;6:122. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

COREQ: Consolidated Criteria for Reporting Qualitative Studies
EPDS: Edinburgh Postnatal Depression Scale
mHealth: mobile health
PND: perinatal depression

Edited by A Mavragani; submitted 11.10.23; peer-reviewed by C Barnum, J Brooke; comments to author 08.12.23; revised version received 27.02.24; accepted 26.05.24; published 08.08.24.

©Ayesha-Mae Bilal, Konstantina Pagoni, Stavros I Iliadis, Fotios C Papadopoulos, Alkistis Skalkidou, Caisa Öster. Originally published in JMIR Formative Research (https://formative.jmir.org), 08.08.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.

Published on 7.8.2024 in Vol 26 (2024)

Exploring the Experiences of Community-Dwelling Older Adults on Using Wearable Monitoring Devices With Regular Support From Community Health Workers, Nurses, and Social Workers: Qualitative Descriptive Study

Authors of this article:

Original Paper

  • Arkers Kwan Ching Wong, PhD (1)
  • Jonathan Bayuo, PhD (1)
  • Jing Jing Su, PhD (1)
  • Karen Kit Sum Chow, MSc (2)
  • Siu Man Wong, MSc (2)
  • Bonnie Po Wong, BSc (2)
  • Athena Yin Lam Lee, BSc (3)
  • Frances Kam Yuet Wong, PhD (1)

1 School of Nursing, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)

2 Hong Kong Lutheran Social Service, Kowloon, China (Hong Kong)

3 Department of Health, The Government of the Hong Kong Special Administrative Region, Hong Kong Island, China (Hong Kong)

Corresponding Author:

Arkers Kwan Ching Wong, PhD

School of Nursing

The Hong Kong Polytechnic University

China (Hong Kong)

Phone: 852 34003805

Email: [email protected]

Background: The use of wearable monitoring devices (WMDs), such as smartwatches, is advancing support and care for community-dwelling older adults across the globe. Despite existing evidence of the importance of WMDs in preventing problems and promoting health, significant concerns remain about the decline in use after a period of time, which warrant an understanding of how older adults experience the devices.

Objective: This study aims to explore and describe the experiences of community-dwelling older adults after receiving our interventional program, which included the use of a smartwatch with support from community health workers, nurses, and social workers. Specifically, we explored the challenges that they experienced while using the device, the perceived benefits, and strategies to promote their sustained use of the device.

Methods: We used a qualitative descriptive approach in this study. Older adults who had taken part in an interventional study involving the use of smartwatches and who were receiving regular health and social support were invited to participate in focus group discussions at the end of the trial. Purposive sampling was used to recruit potential participants. Older adults who agreed to participate were assigned to focus groups based on their community. The focus group discussions were facilitated and moderated by 2 members of the research team. All discussions were recorded and transcribed verbatim. We used the constant comparison analytical approach to analyze the focus group data.

Results: A total of 22 participants assigned to 6 focus groups participated in the study. The experiences of community-dwelling older adults emerged as (1) challenges associated with the use of WMDs, (2) the perceived benefits of using the WMDs, and (3) strategies to promote the use of WMDs. In addition, the findings also demonstrate a hierarchical pattern of health-seeking behaviors by older adults: seeking assistance first from older adult volunteers, then from social workers, and finally from nurses.

Conclusions: Ongoing use of the WMDs is potentially possible, but it is important to ensure the availability of technical support, maintain active professional follow-ups by nurses and social workers, and include older adult volunteers to support other older adults in such programs.

Introduction

Technological advancements have facilitated the self-management of chronic diseases among community-dwelling older adults. Wearable monitoring devices (WMDs), such as smartwatches, are among the common technological tools that assist older adults with health monitoring, physical and cognitive training, medication reminders, and fall prevention [ 1 , 2 ]. The literature shows that WMDs are effective at reducing the risk of developing cardiovascular diseases [ 3 ], increasing the physical activity levels [ 4 ], and improving the quality of life [ 5 ] of older adults. However, despite the benefits and high adoption rate of these wearable devices, there is a lack of studies demonstrating the adherence rate of older adults in maintaining consistent use of WMDs [ 6 - 8 ]. A survey with a sample of >4000 Canadian adults revealed that 33% of the participants did not use WMDs to monitor their health on a regular basis [ 9 ]. Similarly, another survey conducted in Australia reported an abandonment rate of 29% for WMDs, without specifying the population [ 10 ]. Physical disability, a lack of knowledge about the functions of wearable devices, and technological anxiety were summarized as notable reasons for poor adherence to these devices among older adults [ 11 - 13 ].

Self-determination theory has highlighted that long-term behavioral change is determined by one’s intrinsic motivation, which is defined as one’s action driven by the enjoyment and interest in the activity itself and is affected by 3 factors: competence, autonomy, and relatedness [ 14 ]. When an individual has a sense of competence and autonomy in adopting a new behavior and has someone who is socially and psychologically connected (relatedness) to support the behavior, they are more likely to adhere to, and maintain, the behavior over the long term. Recent studies have focused on providing training sessions to help older adults familiarize themselves with the functions of WMDs and enhance their competence and autonomy. However, the results showed no difference in adherence between the participants who received the training sessions and those who did not [ 15 , 16 ]. Older adults have expressed in a qualitative study that 1 preintervention training session is not sufficient to enhance their knowledge of WMDs or resolve their technological anxiety [ 17 ]. It was suggested that nursing or peer support, with the simultaneous provision of social support, might be necessary throughout the health program to increase the intrinsic motivation of older adults to adopt WMDs [ 15 ]. However, there have been limited studies on the offering of nursing or peer support for older adults in the use of WMDs. In the study by Farivar et al [ 11 ], nursing feedback was provided to older adults when their real-time step counts, which were displayed on the WMDs, were unsatisfactory. The program was found to be feasible and acceptable to older adults, but it encountered challenges such as infrequent updates of the WMDs and low engagement and retention rates. Another study, which designed a similar program that provided nursing support to older adults when abnormal vital signs were detected in the WMDs, demonstrated a high dropout rate of 21% and short-term adherence to the WMDs [ 18 ]. Recent studies emphasize the importance of implementing a clear nursing service model, such as a case management model, that encompasses problem identification, goal setting, and regular follow-up. This model aims to enhance the intrinsic motivation of older adults to consistently use new technologies, such as mobile health apps and WMDs [ 7 , 19 ], rather than relying solely on providing training sessions to them or intervening only when abnormalities in vital signs are detected through WMDs by the older adults.

Because of the perception that they might be causing trouble to others, older adults tended not to actively seek help from health care professionals and peers, even when they faced technical problems or did not comprehend the medical jargon displayed on the device [ 11 , 13 ]. They were also concerned about their health data being transferred from WMDs to health care professionals without receiving regular feedback [ 20 ]. In view of this, in this study a nurse case manager (NCM) worked with the older adults during the 3-month intervention period to identify factors that could facilitate or hinder their use of a smartwatch, recommended the smartwatch features linked to each participant's health and social problems, provided suggestions on the duration and frequency of smartwatch use, and gave instructions on how to incorporate these features into their daily routines. Older adults had the autonomy to adjust or modify their own schedules to ensure that they could use the features of the smartwatch efficiently and effectively. The NCM also encouraged the family members or primary caregivers of the older adults to participate and provide feedback and support. This paper describes the perceptions and experiences of community-dwelling older adults after receiving our interventional program. More specifically, we explored the challenges that they experienced during their use of the WMD, the benefits of using the WMD, and suggestions on how to promote their sustained use of the WMD. The results may provide useful insights for developing a program that can promote the continued adoption of WMDs and, in turn, improve the long-term benefits of the WMDs on health self-management among older adults.

Methods

Study Design

A qualitative descriptive design was adopted for this study [ 21 ]. This approach is not associated with any philosophical or theoretical orientation but draws on naturalistic inquiry to understand and describe how people experience a phenomenon [ 21 ]. The qualitative descriptive study is the method of choice when straight descriptions of phenomena are desired, which made it appropriate for this study. This study is reported according to the SRQR (Standards for Reporting Qualitative Research) checklist [ 22 ].

Setting and Participants

This study was conducted between June 2022 and March 2023 in collaboration with 5 community centers run by a local nongovernmental organization in Hong Kong. Using a purposive sampling approach, members of these community centers who were interested in this program were screened and recruited into this study if they (1) were aged ≥60 years [ 23 ], (2) owned a smartphone, (3) were able to communicate in Cantonese or Mandarin, and (4) had internet access. They were excluded if they (1) had been diagnosed with cognitive impairment, (2) were bedbound, (3) already owned a smartwatch, or (4) were involved in other studies using WMDs.

Recruitment Process

Staff working at the collaborating community centers invited their members to join the program using Facebook Live. Those who were interested provided their name to the staff, and trained research assistants screened them via telephone. Eligible members were invited to meet the research assistants at the community centers to receive an explanation of the details of the study and to give their written consent to take part in it. All participants received a health monitoring package that included a smartwatch with an alarm setting, a prepaid SIM card, a blood pressure monitor, and a pulse oximeter.

Interventional Program

Before the program, a 1-hour web-based training session and a practical test were delivered to all participants to explain the basic operation of the WMD. Participants were given the number of a telephone line, staffed during office hours by community center staff, that they could call if they faced any technical problems during use.

The participants were provided with a package that included a WMD (ProVista Care smartwatch), a prepaid SIM card, a blood pressure monitor, and a pulse oximeter. ProVista Care was selected as the WMD for this study due to its validated performance, affordability, and comparable functionality to other similar devices. These functions encompass fall detection; location and activity tracking; blood pressure, pulse, and oxygen saturation monitoring; medication and appointment reminders; and calls to preset numbers and SOS calls. This selection enhances the applicability of the study’s findings to real-world implementation. Data collected from ProVista Care can be synchronized and transferred to the server via the ProVista Care app installed on participants’ personal smartphones. The WMD was designed to be worn on the wrist, securely fastened with an elastic band. Participants were instructed to wear the WMD as frequently and for as long as possible throughout the study duration.

Ten trained community health workers (CHWs), NCMs, and social workers were the interventionists in this 3-month program. The participants in the intervention group received a home visit by a CHW and the NCM in the first month and biweekly telephone calls by the CHW from the 3rd to the 12th weeks. In the first home visit, using the Omaha System, the NCM explored the features of the smartwatch that each participant might find beneficial. The Omaha System is a comprehensive assessment-intervention-evaluation instrument for community-based practice [ 24 ]. There were 21 health and social problems listed in the Omaha System that were relevant to the features of the smartwatch used in this study; for example, the feature of fall detection in the smartwatch might help participants with musculoskeletal problems or lower limb weakness. The NCM empowered the participants to set goals and action plans in the first meeting, while the CHWs followed up, recalling the goals and action plans with the participants and, in subsequent telephone calls, motivating the participants to regularly use the smartwatch. The NCM also monitored the vital signs of the participants that were automatically uploaded by the smartwatches at the backend. When abnormal vital signs were detected by the smartwatch, the NCM would call the participants via telephone and provide appropriate interventions, such as a referral to a social worker, based on the validated protocols.
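
As a rough illustration of this kind of rule-based monitoring, the Python sketch below flags vital-sign readings that fall outside preset limits for NCM follow-up. The specific thresholds, field names, and function names are assumptions chosen for the example; the study relied on its own validated protocols, which are not reproduced here.

```python
# Hedged sketch of a rule-based check that could flag abnormal readings for
# NCM follow-up. The thresholds and fields below are illustrative assumptions;
# the study used its own validated protocols, which are not reproduced here.
from dataclasses import dataclass


@dataclass
class VitalSigns:
    systolic_bp: int   # mm Hg
    diastolic_bp: int  # mm Hg
    pulse: int         # beats per minute
    spo2: int          # oxygen saturation, %


def needs_ncm_follow_up(v: VitalSigns) -> bool:
    """Return True if any reading falls outside the assumed illustrative limits."""
    return (
        v.systolic_bp >= 160 or v.systolic_bp < 90
        or v.diastolic_bp >= 100
        or v.pulse > 120 or v.pulse < 50
        or v.spo2 < 94
    )


# Example: a low oxygen saturation reading would prompt a telephone follow-up.
print(needs_ncm_follow_up(VitalSigns(128, 82, 76, 92)))  # True
```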

Data Collection

A total of 6 in-depth focus group discussions with 22 participants were conducted at the end of this program. In-depth focus group interviews are conducted to evaluate participants’ experiences after an intervention through group interactions [ 25 ]. For studies using focus group discussions, it has been suggested that groups ranging from 2 to 40 may be adequate to attain data saturation depending on the phenomenon under investigation [ 25 ]. Thus, 6 groups were considered adequate for this study to attain data saturation. All discussions were conducted with a guide developed by the research team. The focus group interviews were conducted in Cantonese and each session lasted for 25 to 65 minutes. All interviews were audio recorded with the consent of the participants. Interview transcripts were written up by members of the research team. To ensure the consistency of coding and interpretation of data, an audit trail was conducted, and all discrepancies were resolved through discussion and consensus.

Data Analysis

All data collected from the focus group discussions were analyzed inductively using the approach to constant comparison analysis formulated by Maykut and Morehouse [ 26 ], who proposed a 4-step approach to the constant comparison of focus group data: inductive categorization, refinement of categories, exploring relationships across the categories, and integration of data [ 26 ]. In inductive categorization, AKCW and JB read and reread the interview transcripts in both English (JB) and Cantonese (AKCW) to identify recurring concepts independently. Next, overlapping concepts across the groups were categorized and combined by the 2 independent coders (AKCW and JB) to formulate provisional codes. In the second stage, that is, refining the categories of codes, the provisional list of codes was concurrently examined alongside a review of the interview transcripts. The process of categorization was undertaken through discussion with the wider research group to attain consensus. Subsequently, similar codes were grouped to formulate categories for each group. The emerging categories were then concurrently compared across the groups, with recurring categories further refined and grouped. For the third stage, we further refined the categories by grouping them under a distinct umbrella. Categories with common elements were grouped to make broader groups or emerging themes. With these themes, we worked through each group and the associated categories to attain a complete understanding and create patterns of meaning from the data.
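
The constant comparison itself was an interpretive, manual process, but its bookkeeping side can be illustrated with a short Python sketch: pooling provisional codes from each group, counting how often they recur, and checking how widely a researcher-assigned category is represented across groups. All codes and category names below are invented for illustration and are not the study's actual coding frame.

```python
# Purely illustrative bookkeeping sketch of the cross-group comparison step;
# the codes and categories are invented, and the real analysis was interpretive.
from collections import Counter

provisional_codes = {
    "group_1": ["needs technical support", "forgets functions", "values reminders"],
    "group_2": ["needs technical support", "values reminders", "peer help useful"],
    "group_3": ["forgets functions", "peer help useful"],
}

# How often each provisional code recurs across groups (constant comparison).
recurrence = Counter(code for codes in provisional_codes.values() for code in codes)
print(recurrence.most_common())

# Researcher-assigned grouping of recurring codes into broader categories.
categories = {
    "Individual-related challenges": ["forgets functions"],
    "Strategies to promote use": ["needs technical support", "values reminders", "peer help useful"],
}

# Check how widely each category is represented across the groups.
for category, codes in categories.items():
    covered = {g for g, cs in provisional_codes.items() if any(c in cs for c in codes)}
    print(f"{category}: present in {len(covered)} of {len(provisional_codes)} groups")
```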

Methodological Rigor and Trustworthiness

The trustworthiness of this study was evaluated according to four criteria: (1) credibility, (2) dependability, (3) confirmability, and (4) transferability [ 27 ]. To enhance credibility and dependability, the summarized results were sent to those participants who had agreed to check them for further clarification and to give feedback on the researchers’ interpretation. Audit trails and peer debriefings were also conducted during the analysis of data to ensure the consistency of the interpreted data to achieve confirmability. A thick description was ensured in reporting the study to enhance transferability. To attain analytical rigor, we ensured that analyses were undertaken in both Cantonese and English and compared to ensure consistency. The authors responsible for this section were fluent in Cantonese and English. The iterative approach to analysis also ensured consistent coding, with an audit trail on the decisions on coding and categorization. In addition, the constant comparison approach ensured that our focus was not only on individual-level analyses but also on analyses within and across the groups.

Ethical Considerations

This study was conducted in accordance with the principles of the Declaration of Helsinki and approved by the ethics committee of the Hong Kong Polytechnic University (HSEARS20220429001). All eligible participants had the right to refuse participation and to withdraw from the interview at any time. Written informed consent was collected from all participants. To protect the participants’ privacy, all data collected from this program were kept confidential, anonymized, and accessible only to members of the research team.

Sociodemographic Data

A total of 22 community-dwelling older adults were involved in the 6 focus group discussions. Of these, 5 (23%) were male and 17 (77%) were female, with ages ranging from 62 to 78 years. Only 1 (5%) participant had experience in using a smartwatch before inclusion in the study. A total of 17 (77%) participants had a primary level of education, and 5 (23%) had a secondary level of education or higher. The clinical characteristics of the participants have been reported in a previous study [19].
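
As a quick arithmetic check of the rounding in the figures above (a simple sketch in Python, not analysis code from the study), the reported percentages can be recomputed from the counts and the sample size of 22:

    # Recompute the reported percentages from the counts (n = 22), rounded to whole numbers.
    n = 22
    counts = {
        "male": 5,
        "female": 17,
        "prior smartwatch experience": 1,
        "primary education": 17,
        "secondary education or higher": 5,
    }
    for label, count in counts.items():
        print(f"{label}: {count}/{n} = {round(100 * count / n)}%")
    # Output: male 23%, female 77%, prior smartwatch experience 5%,
    # primary education 77%, secondary education or higher 23% -- matching the reported values.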

Emerging Themes and Categories

Three themes and 7 categories emerged from the focus group data, as shown in Textbox 1 .

Challenges associated with the use of the wearable monitoring device (WMD)

  • Individual-related challenges
  • System-related challenges

Perceived benefits of using the WMD

  • Self-monitoring and health promotion
  • Convenience

Strategies to promote use of the WMD

  • Availability of technical support
  • Ongoing follow-up professional support
  • Peer and family support

Emerging Theme 1: Challenges Associated With the Use of the WMD

This theme describes challenges and concerns that affected the participants’ use of the smartwatch. The emerging categories were (1) individual-related challenges and (2) system-related or technical challenges.

Individual-Related Challenges

Participants across all groups emphasized that they were slow in learning to use the WMD and required a great deal of face-to-face instruction to be fully oriented to the device and its functionalities before being able to use it effectively. This issue particularly resonated with those who were using such a device for the first time. Some of the older adults either could not understand the instructions or needed more time to assimilate them. It took weeks to months for the older adults to become familiar with the device:

This is my first time wearing a smartwatch. When you wear the watch for the first time, you will definitely not know how to use all the functions. So, I wanted to ask everyone if they have experienced this situation before. [Participant 1]
Well...at first, it was difficult to use. But after using it for a while, it was basically okay, and we can use it on our own.... Hmm yes, actually, if someone teaches you face-to-face, you can learn it clearly first. [Participant 15]
How much time? I think three to four months to learn to use it well. [Participant 20]
For me, at first, it took a long time to use it. Sometimes, I just could not get it to work. But after a while, it became much better. For example, measuring blood oxygen and blood pressure readings became much easier for me with time. [Participant 4]
The first time I tried, I did not know how to turn on the device or turn it off. It was difficult at first. [Participant 7]

The participants also highlighted the issue of forgetfulness, which affected how well they used the device. They noted that with their increasing age, forgetfulness was a common occurrence. Some of the participants mentioned forgetting how to operate the device and the functions available during the initial period of use, although over time they were able to become better at using the device consistently:

I’m so dumb sometimes that I forget what I am doing. I sometimes cannot even figure out how to wrap a scarf around my head, not to mention how to use the watch. [Participant 13]
I am already in my late years. If you even ask me what I ate yesterday, I cannot remember. [Participant 2]

System-Related or Technical Challenges

System-related or technical challenges were encountered across all groups. The size of the WMD was considered an issue. Participants described the WMD as big, which made it difficult to wear regularly. Occasionally, the size of the watch was considered a source of ridicule. Despite the potential for being ridiculed, some of the older adults noted that they were more concerned about the functionality and capacity of the watch than its size. In addition, the smooth, glass surface of the watch’s touchscreen became slippery and unresponsive when used by the older adults in cold weather, creating usability issues:

The watch can measure blood pressure and blood oxygen levels. Your watch looks much better and looks great. Our watches are big, like big turtles, and sometimes people make fun of it. [Participant 9]
You can see that your watch is smaller than ours. Our watch is bigger, and it obstructs a lot. But even if it’s still bulky and unattractive, I think we can still wear it because it will help us. [Participant 20]
It feels really troublesome to use during cold weather. There is no problem in hot weather. [Participant 7]

The power capacity of the WMD presented a significant challenge for the participants. They wanted to use the WMD, but the need for frequent charging made doing so rather inconvenient. In some cases, when they wanted to wear it before going out, they found that the battery was low; coupled with the previously mentioned issue of forgetfulness, this meant that they had missed the opportunity to charge it beforehand. The participants also reported that the need to charge the WMD frequently prevented them from wearing it for longer periods throughout the day. On some occasions, this issue deterred some of the older adults from using the WMD altogether:

Hmmm, if you know everything, the main problem with the watch is the frequent charging. That battery needs to be charged frequently—like every day. If you don’t charge it, it will just run down fast. Yes, it is so fast and when there is no electricity, things will become difficult. The need to charge is too frequent for us. [Participant 5]
Oh, so you realize that the battery is down when you wear it and then you must put it back to charge for a while. Yes, that’s right. It is very troublesome to do this every day before going out. The battery runs out quickly all the time. [Participant 9]

Another technical issue was that some participants felt the WMD had several functions they did not know how to use. Other older adults still struggled to navigate even the few functions they had been taught to use and occasionally experienced digital fatigue after constant use:

There may be some functions we cannot use. The watch seems to have many functions, but we do not know them all and also don’t know how to use them all. [Participant 16]
But, I realized there are so many functions on that watch that we cannot use them all. Also, some functions that were possible to use before, people found it annoying to continue to use them. That is why we do not use them frequently anymore. [Participant 9]

All participants were enthusiastic about the ability of the device to count their steps as they walked about. However, the older adults mentioned that the device gave them incorrect step counts. In 1 group, the participants mentioned that the step count function also did not display correctly. Occasionally, they used their mobile phones to obtain correct step counts. In addition to this challenge, some of the participants reported occasional challenges with uploading or transmitting data on their vital parameters:

The pedometer was malfunctioning and gave incorrect figures. When you count how many steps you take yourself and then check the watch, it doesn’t match at all. The watch and the phone both have incorrect counts all the time. [Participant 8]
It shows only a few steps, even though I walked quite a lot. Yes, our watches cannot measure many steps. My phone shows 10,000 steps, but my watch shows 2000 or 3000 steps. To be honest, the watch is not accurate when it comes to the step count. The step count displayed on my phone is not the same as the one on my watch. Yes, that is how it is. There are some differences, yes. [Participant 1]
Actually the step count is important, but it is not accurate at all. I often check it myself. Usually, I check how many steps I have taken, especially since I sit in an office for most of the day. But it is not correct when I check the watch and the phone. [Participant 16]
I tell him about my blood pressure on that day. I tell him about my blood pressure and how many steps I took that day. Sometimes, the watch cannot display the values correctly. [Participant 9]

Some of the participants also found the device to be extremely sensitive, which occasionally caused discomfort because the alarm went off immediately when it sensed a slight movement:

But the watch is too sensitive. Sometimes when I move my hands or feet, it shakes and triggers the alarm. And then it keeps telling me how long it has been and what to do. [Participant 5]

Emerging Theme 2: Perceived Benefits of Using the WMD

Despite the notable challenges, participants highlighted the benefits of using the WMD. These were (1) self-monitoring and health promotion and (2) convenience.

Self-Monitoring and Health Promotion

Participants across all groups stated that the WMD offered them an opportunity to self-monitor vital parameters, such as blood pressure and oxygen saturation. The older adults found this feature particularly helpful because it enabled them to record and track their parameters, share them with health care professionals, and ascertain whether they were maintaining good health. Indeed, the use of the device boosted the confidence of older adults across all groups in their ability to actively participate in self-management, particularly because the NCM actively followed up to help them attain their health goals:

Um, measure blood pressure and blood oxygen levels at the same time. Well, we know now. We know our blood pressure and blood oxygen levels. It helps us to maintain our health by making us aware of the condition of our body and whether it is normal or not.... At night, I have a blood pressure machine and I can measure my blood pressure every night. [Participant 2]
Also, it gives a different perspective on managing your health with more information available to you. For instance, I know how much I walked today. [Participant 8]
Yes, definitely. Using the watch gave me a lot of confidence. I wear it at home and when I go out. The nurse also reminded me to walk a certain amount of time every day, and even though I forget, I still try my best to walk more. The most important thing is to try and walk more. [Participant 4]
And at home, I don’t know how high or low the blood sugar is. If I know, I can control it by myself at home. If it is high, I will eat less. It is good to be clear about the blood sugar. For the nurses, it would be helpful if they could find my place and remind me of something regularly. [Participant 14]
The best function would be to be able to monitor your health and detect any potential illnesses. [Participant 19]

The participants expressed a desire for more regular follow-ups by the nurses and an option to monitor their blood glucose levels in addition to blood pressure and blood oxygen levels:

I just think it would have been helpful if the device can also help you to monitor your blood sugar levels just like it helps to monitor blood pressure and blood oxygen levels. [Participant 5]
Oh, she sometimes follows up on us with home visits and phone interviews. Yes, but what about the rest of the time? If the nurse does not contact you, you won’t actively look for her, right? Besides, the nurse does not come to the center every day. The nurse is also busy with her work, so where would she have the time? So, in some instances, if you are not feeling better, you go and see a doctor. [Participant 13]

Although the step count feature of the WMD was described as inaccurate, the participants felt that it was still helpful to know how many steps they had taken because that motivated them to go out more often rather than stay at home. Being able to compare their step counts with others gave them a sense of accomplishment, especially if they found them to be higher than those of their colleagues:

But I don’t really care so much about how many steps I take in a day. However, it can still calculate something for you. For example, if the doctor tells you how many steps you need to take in a day, the watch can help you to keep track of it. Maybe we don’t really need it because our phones can also count the steps. [Participant 2]
I take so many steps every day. Many people can vouch for me. I am the best here; I take so many steps. After finishing my chores at home, I come down and do some healthy dancing, and walk around the center. According to them, I am the best. [Participant 14]
Because you can show off to others, like the person you are exercising with, and say, look I have burned this much fat, right? [Participant 9]

Participants also mentioned that the device helped them to not only record their vital parameters but also to view these records regularly. Regarding the promotion of health, the participants noted that the device helped them to participate in regular exercises and to build the confidence they need to meet health-related goals:

So, wearing a watch can make you want to do more exercise, right? Because when you wear a watch, you want to see how many steps you have taken, which makes you want to move more. [Participant 6]

Convenience

For participants across the groups, the WMD offered a sense of convenience in being able to monitor their vital parameters, record the values, track them, and share them with the nurse if required. The notion of convenience was also related to the ease with which the older adults navigated the device to inform their self-management strategies. In addition, the older adults considered it very convenient that they did not have to be in a hospital to assess these basic vital parameters; they could monitor them from the comfort of their home and even when moving about in the community:

With the watch, there is a guide, and I am afraid to be lazy about moving around and not walking around. But when I think about the watch, I have the confidence to do it. In the past, I just sat at home all day, talking on the mobile phone about how many thousands of steps I have to walk, and now I just go out and do it myself. [Participant 10]

In addition, being able to reach out to and interact with a nurse, or having a nurse follow up when an older adult’s parameters fell outside the normal range, was considered convenient. This may reflect the participants’ sense that they were not merely using the device but also being professionally supported by a nurse:

Yes, if the nurse thinks the blood oxygen is low, she will remind you to do it before and again. Then if something happens to you, you will know to see a doctor. [Participant 5]
At least, the blood pressure can be seen by the nurse. And the blood oxygen levels can be seen with a press of the finger. However, the step count is not accurate. [Participant 15]

The social aspect of the watch, such as being able to take pictures and share these with families and friends, was considered helpful and made life more convenient for the older adults. In other words, it added a bit of fun to using the device:

I discovered a new function or new feature. It is completely possible to use the watch to take a photo and share. Yes, so it is so much more convenient. [Participant 3]
It is best if there is nothing wrong with it. The best thing to do is to take a photo of that watch and the stick together after we finish the test, and it will be the most accurate. It is comfortable and makes life more convenient, I think. [Participant 5]

Another source of convenience was that the wearable device afforded older adults and their families a unique opportunity to track their whereabouts. The older adults found this feature particularly helpful because they considered themselves occasionally forgetful, and it helped them retrace their steps to their original location or helped others know where they were:

The best feature of the device is the tracking. Some people have a poor memory, or they may not be able to find their way home. In that case, their family members can locate them using the tracking feature. [Participant 21]
Mr Choi once tracked us. I got lost and could not find my way home. I got scared and started sweating. Mr Choi tracked my watch and found me at the Che Kung Temple. [Participant 2]
This is where technology has advanced. The most useful thing is when someone is lost. If he wears the watch, you can find him and track where he has been. Then you can find him using the tracking function. [Participant 14]
Sometimes when I go somewhere far, I don’t know where I am, and I cannot see clearly due to my glaucoma. One time, I had to go to the other side for the lunar new year, but I took the wrong bus and did not know where I was. Luckily, I was able to use the watch to track my way. [Participant 6]

Emerging Theme 3: Strategies to Promote Use of the WMD

This theme captures strategies, observed in the data, for sustaining continued use of the device. The categories were (1) availability of technical support, (2) ongoing follow-up professional support, and (3) peer and family support.

Availability of Technical Support

The range of technical issues emerging from the use of the device warrants the ongoing availability of technical support. This need was mentioned by participants in all groups and was felt particularly when the device developed a fault or broke down or when participants needed more assistance in navigating the functions:

The watch broke down and we did not know how to fix it. Someone at the center said he knew how to turn it back on. We tried for a while, but it still did not work. So, I said forget it. I did not wear it. I only wore it for ten days before, and just for measuring blood pressure at home. [Participant 14]

Although some of the participants sought assistance from the social workers, most older adults hesitated to disturb the personnel and therefore avoided seeking assistance altogether, regardless of the technical challenges that they were facing:

So, it is changed. Actually, you also changed and regarded it as a planned situation, and I did not dare to worry the nurse or the supervisor. So, if there is a problem with the watch, I must handle it on my own. [Participant 10]

In addition, the participants mentioned that they needed more technical support to access other functions on the watch because they found it difficult to perform this task:

And I don’t understand why so many functions need to be locked, except the panic button. I wondered if there was help for us to unlock these functions on the watch. [Participant 7]
We tried to figure it out but could not do it and we needed lots of help. In the end, it suddenly made a sound, and we could not figure out how long it had been, it just happened. [Participant 16]
There are too many things to handle. If you suddenly introduce ten functions for us to use, how can we remember them? You are not teaching a class, you won’t be able to remember them either. [Participant 11]

Ongoing Follow-Up Professional Support

Although the device was helpful in various ways, the older adults still preferred to have nurses follow up with them actively. For participants across all groups, this form of support was generally limited, and they wished that they had interacted more with the nurses, both to interpret the values they obtained and to seek more health-related information. Nurse support appeared to center on following up with older adults who had abnormal readings; thus, those whose readings stayed within normal ranges had limited contact with the nurses. The participants also felt that the limited support they received from the nurses might have affected how well they met their health-related goals:

They [the nurses] do a good job when they call or visit you. With the watch, you set a goal with the nurse, which motivates you to do more. But they are not always there. It is helpful if they can find my place so that they can remind me of something I don’t know. [Participant 19]

Aside from ongoing professional support from nurses to keep the participants motivated in meeting their health-related goals, support from social workers was equally important in promoting continued use of the devices. Social workers played critical roles in promoting the use of the device by offering troubleshooting support, helping the participants navigate the device, and offering encouragement. In fact, the older adults who participated in the study seemed to trust the social workers more than the nurses and were always willing to seek assistance from them. The older adults appeared to have built a strong relationship with the social workers, which made it easier for them to seek assistance when required:

They do help us a lot and encourage us. Whenever there is a problem, we always look for him to help us out. He is the most reliable. He is very responsible, and he is always willing to help us. [Participant 5]
I did not even know how to turn off my phone. He said to turn off my phone, do it this way. He really taught us a lot of things. [Participant 10]

Peer and Family Support

Peer support from the CHWs also emerged as a critical factor in sustaining the ongoing use of the WMD. These older adult volunteers, or older adult ambassadors, often encouraged the participants to continue using the device, record their values, and work toward meeting their health-related goals. Participants across the groups highlighted that it was occasionally difficult to gain access to a nurse; thus, the older adult volunteers became the first port of call for assistance before the participants reached out to the social workers:

It is not so easy to find or see a nurse on some days. The volunteers have done this before, so we can reach out to them. There are days when you will forget to write the values, and they will remind you to do so. [Participant 16]

Aside from peer support, family support was also observed to be helpful in encouraging the older adults to use the WMD as required. Conversely, older adults who lived alone with limited or no family support found it difficult to monitor and continually use the device to promote their health:

They said that I fall frequently and have fallen several times before. I must be careful now that I am getting older. If anything happens to me, it would be troublesome because I live alone. [FG2]

Principal Findings

Emerging technologies such as wearable devices are advancing care and support for older adults in communities across the globe. Despite the extensive literature highlighting the importance of wearable devices, significant concerns remain about the decline in their use over time. The world’s population is aging rapidly, yet only limited work has been done to unearth the experiences of older adults regarding the use of wearable devices. This critical gap informed this study, which was part of a larger trial program. The findings bring to the fore the challenges experienced by older adults in using wearable devices, identified here as individual- and system-related challenges. The findings further highlight the perceived benefits of the devices, particularly in the areas of self-monitoring, health promotion, and convenience. In addition, the study identified a hierarchical pattern in the health-seeking behaviors of older adults when using the devices. Taken together, the findings show that ongoing use of the devices is possible, although there is a critical need to ensure the availability of technical support and ongoing active professional follow-up by the health care team (notably nurses and social workers) and to include older adult volunteers to support other older adults in such programs.

Previous studies have uncovered various technical issues associated with using wearable devices. In a recent study, the authors identified interoperability, battery issues, the bulky nature of the device, a lack of personalization, and a lack of support as key issues affecting use [28]. Our study noted similar technical issues affecting the use of the devices. By contrast, however, we found that regardless of these issues, the older adults were willing to continue using the device because they believed that doing so was to their benefit. We also observed that individual-related issues can affect the use of wearable devices among older adults. For most of the older adults included in this study, this was the first time they had used a wearable device, and they needed more time to become acquainted with it. Although issues such as forgetfulness may be considered part of the aging process, these findings suggest that, aside from intensive training on how to use the device, there is still a need for ongoing technical support to sustain its use. In addition, instruction manuals need to be more user-friendly and easily comprehensible for older adults. Comprehensibility is essential; we observed in this study that the user manuals were unclear, which may have affected how well the participants made use of the WMD.

The inclusion of professional and peer support in this study is particularly noteworthy. An existing study showed that it might be necessary to provide nursing or peer support throughout a health program to increase the intrinsic motivation of older adults to adapt to WMDs while providing social support at the same time [17]. In our study, which combined professional and peer support, we observed that older adults did not want to disturb the nurses. Rather, they felt more comfortable consulting the older adult volunteers first, before reaching out to the social workers and, last of all, to the nurses if necessary. This hierarchical pattern of health-seeking behaviors may indicate that older adults viewed the volunteers as peers sharing similar experiences and conditions, which made it easier to relate to them than to the professionals. Nurses were perceived by the older adults as busy professionals; thus, the participants would rather seek support from social workers, although they wished they had more interaction with the nurses. Taken together, the findings suggest that nurses may need to take an active role in reaching out to older adults and being available when needed, regardless of whether the older adults are using the wearable device. The concept of peer support also needs to be promoted further by engaging other older adults as volunteers to support those who are transitioning to wearable devices. A recent study has shown that peer-to-peer support for community-dwelling older adults has the potential not only to promote adherence to therapeutic regimens but also to improve quality of life, which warrants further exploration [29].

Furthermore, we observed that the ongoing availability of technical support and family support is also essential to promoting the use of wearable devices. It is possible that technical support may be available but unknown to the older adults or that they may not want to disturb others. Thus, older adults need to be encouraged to seek help if needed and should know where to obtain this help. Family support, meanwhile, remains a major cornerstone of support for older adults [30]. The absence of this critical form of support may lead to loneliness, which can exacerbate health issues and interfere with therapeutic regimens, including the use of wearable devices [31]. Although expanding on the notion of family support is beyond the scope of this study, we recommend that older adults with limited or no family support be identified and that appropriate strategies be devised to assist them.

Moreover, we identified both individual- and system-related issues that can adversely affect the use of WMDs. Individual-related factors such as slow learning and forgetfulness were highlighted by the participants as affecting how they initially navigated the WMD. Aging is not a disease, but it can be associated with forgetfulness, which can affect activities of daily living [32]. Forgetfulness coupled with slow learning emphasizes the need for continuous, gradual education to enable older adults to use WMDs effectively [33]. System-related challenges, such as the size of the WMD and its limited battery capacity, are concerns that need to be addressed in subsequent design studies. Concerns about the WMD generating incorrect readings also emerged as a system-related challenge. Previous studies have reported that a common problem with wearable devices is automatic loss of synchronization, which makes it difficult or impossible to update data or results in incorrect reports [34,35]. Although loss of synchronization was not examined in this study, it may have contributed to the incorrect readings observed by the older adults.

Strengths and Limitations

The strength of this study lies in the use of a rigorous approach to collecting and analyzing data with a focus on individual, within-group, and across-group variations to attain a thick description of what it means to experience the use of a wearable device. This strength notwithstanding, some limitations are noteworthy. First, the experiences of the participants are related to the use of a particular wearable device with distinct features. Thus, the findings may not necessarily be transferable to all wearable devices, although they may offer a useful resource on how older adults are likely to experience using the devices. Second, the study was undertaken in a region with distinct sociocultural features. The findings should therefore be interpreted taking these unique features into consideration. In addition, the nature of the program was such that the older adults were required to have some technological abilities. Thus, the findings may not be transferable to older adults who find it difficult to use technological applications.

Conclusions

Emerging technologies, such as wearable devices, for supporting community-dwelling older adults warrant more work on how users experience these devices. The findings from this study bring to the fore the barriers to and benefits of wearable devices and offer insight into strategies that can improve their use. Given the issues that may emerge, it may be helpful to ensure the availability of ongoing technical support, professional follow-up support, peer support, and family support. Overall, a personalized approach is needed to promote the use of wearable devices among older adults.

Acknowledgments

The authors would like to thank Hong Kong Lutheran Social Service for providing the smartwatches and participating in and contributing to this study. The study was funded by the Departmental General Research Fund, The Hong Kong Polytechnic University (G-UAQ2).

Data Availability

The data sets generated and analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

AKCW and FKYW conceptualized the interventional program. AKCW, JB, JJS, FKYW, KKSC, BPW, SMW, and AYLL provided intellectual input on the study design, methodology, and evaluation. AKCW and JB drafted the manuscript. AKCW analyzed the data. All authors contributed to, reviewed, and approved the manuscript.

Conflicts of Interest

None declared.

References

  1. Lu L, Zhang J, Xie Y, Gao F, Xu S, Wu X, et al. Wearable health devices in health care: narrative systematic review. JMIR Mhealth Uhealth. Nov 09, 2020;8(11):e18907.
  2. Teixeira E, Fonseca H, Diniz-Sousa F, Veras L, Boppre G, Oliveira J, et al. Wearable devices for physical activity and healthcare monitoring in elderly people: a critical review. Geriatrics (Basel). Apr 07, 2021;6(2):38.
  3. Roberts LM, Jaeger BC, Baptista LC, Harper SA, Gardner AK, Jackson EA, et al. Wearable technology to reduce sedentary behavior and CVD risk in older adults: a pilot randomized clinical trial. Clin Interv Aging. 2019;14:1817-1828.
  4. Brickwood KJ, Ahuja KD, Watson G, O'Brien JA, Williams AD. Effects of activity tracker use with health professional support or telephone counseling on maintenance of physical activity and health outcomes in older adults: randomized controlled trial. JMIR Mhealth Uhealth. Jan 05, 2021;9(1):e18686.
  5. Jang IY, Kim HR, Lee E, Jung HW, Park H, Cheon SH, et al. Impact of a wearable device-based walking programs in rural older adults on physical activity and health outcomes: cohort study. JMIR Mhealth Uhealth. Nov 21, 2018;6(11):e11335.
  6. Chang RC, Lu HP, Yang P, Luarn P. Reciprocal reinforcement between wearable activity trackers and social network services in influencing physical activity behaviors. JMIR Mhealth Uhealth. Jul 05, 2016;4(3):e84.
  7. Wong AK, Bayuo J, Wong FK, Chow KK, Wong SM, Lau AC. The synergistic effect of nurse proactive phone calls with an mHealth app program on sustaining app usage: 3-arm randomized controlled trial. J Med Internet Res. May 01, 2023;25:e43678.
  8. Shin G, Jarrahi MH, Fei Y, Karami A, Gafinowitz N, Byun A, et al. Wearable activity trackers, accuracy, adoption, acceptance and health impact: a systematic literature review. J Biomed Inform. May 2019;93:103153.
  9. Paré G, Leaver C, Bourget C. Diffusion of the digital health self-tracking movement in Canada: results of a national survey. J Med Internet Res. May 02, 2018;20(5):e177.
  10. Gartner survey shows wearable devices need to be more useful. Gartner. Dec 7, 2016. URL: https://www.gartner.com/en/newsroom/press-releases/2016-12-07-gartner-survey-shows-wearable-devices-need-to-be-more-useful [accessed 2024-07-10]
  11. Farivar S, Abouzahra M, Ghasemaghaei M. Wearable device adoption among older adults: a mixed-methods study. Int J Inf Manage. Dec 2020;55:102209.
  12. Kononova A, Li L, Kamp K, Bowen M, Rikard RV, Cotten S, et al. The use of wearable activity trackers among older adults: focus group study of tracker perceptions, motivators, and barriers in the maintenance stage of behavior change. JMIR Mhealth Uhealth. Apr 05, 2019;7(4):e9832.
  13. Moore K, O'Shea E, Kenny L, Barton J, Tedesco S, Sica M, et al. Older adults' experiences with using wearable devices: qualitative systematic review and meta-synthesis. JMIR Mhealth Uhealth. Jun 03, 2021;9(6):e23832.
  14. Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol. Jan 2000;55(1):68-78.
  15. Kamei T, Kanamori T, Yamamoto Y, Edirippulige S. The use of wearable devices in chronic disease management to enhance adherence and improve telehealth outcomes: a systematic review and meta-analysis. J Telemed Telecare. Jun 2022;28(5):342-359.
  16. Makai P, Perry M, Robben SH, Schers HJ, Heinen MM, Olde Rikkert MG, et al. Evaluation of an eHealth intervention in chronic care for frail older people: why adherence is the first target. J Med Internet Res. Jun 23, 2014;16(6):e156.
  17. Fjellså HM, Husebø AM, Storm M. eHealth in care coordination for older adults living at home: scoping review. J Med Internet Res. Oct 18, 2022;24(10):e39584.
  18. Boyne JJ, Vrijhoef HJ, Spreeuwenberg M, De Weerd G, Kragten J, Gorgels AP. Effects of tailored telemonitoring on heart failure patients' knowledge, self-care, self-efficacy and adherence: a randomized controlled trial. Eur J Cardiovasc Nurs. Jun 2014;13(3):243-252.
  19. Wong AK, Wong FK, Chow KK, Wong SM, Bayuo J, Ho AK. Effect of a mobile health application with nurse support on quality of life among community-dwelling older adults in Hong Kong: a randomized clinical trial. JAMA Netw Open. Nov 01, 2022;5(11):e2241137.
  20. Mercer K, Giangregorio L, Schneider E, Chilana P, Li M, Grindrod K. Acceptance of commercially available wearable activity trackers among adults aged over 50 and with chronic illness: a mixed-methods evaluation. JMIR Mhealth Uhealth. Jan 27, 2016;4(1):e7.
  21. Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. Aug 2000;23(4):334-340.
  22. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. Sep 2014;89(9):1245-1251.
  23. Ageing and health. World Health Organization. Oct 1, 2022. URL: https://www.who.int/news-room/fact-sheets/detail/ageing-and-health [accessed 2023-12-18]
  24. Martin KS. The Omaha System: A Key to Practice, Documentation, and Information Management. Philadelphia, PA. Elsevier Saunders; 2005.
  25. Morgan DL. Focus Groups as Qualitative Research. Thousand Oaks, CA. SAGE Publications, Inc; 1988.
  26. Maykut P, Morehouse R. Beginning Qualitative Research: A Philosophical and Practical Guide. Milton Park, UK. Taylor & Francis; 1994.
  27. Lincoln YS, Guba EG. But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Direct Progr Eval. Nov 04, 2004;1986(30):73-84.
  28. Olmedo-Aguirre JO, Reyes-Campos J, Alor-Hernández G, Machorro-Cano I, Rodríguez-Mazahua L, Sánchez-Cervantes JL. Remote healthcare for elderly people using wearables: a review. Biosensors (Basel). Jan 27, 2022;12(2):73.
  29. Thombs BD, Carboni-Jiménez A. Peer-to-peer support for older adults-what do we know and where do we go? JAMA Netw Open. Jun 01, 2021;4(6):e2113941.
  30. Wong AK, Ng NP, Hui VC, Montayre J. Effect of a telecare-based intervention on stress levels in informal caregivers of older adults: protocol for a randomized controlled trial. Front Psychiatry. May 25, 2023;14:1167479.
  31. Doblas JL, Palomares-Linares I, Martínez MS. Chapter 8: Loneliness in older adults: a comparative study of four southern European countries. In: Entrena-Durán F, Soriano-Miras RM, Duque-Calvache R, editors. Social Problems in Southern Europe: A Comparative Assessment. Cheltenham, UK. Edward Elgar Publishing; Aug 7, 2020:87-102.
  32. Ballard J. Forgetfulness and older adults: concept analysis. J Adv Nurs. Jun 29, 2010;66(6):1409-1419.
  33. Imbriano G, Beaudreau SA. Subjective cognitive difficulties may communicate more than forgetfulness. Int Psychogeriatr. Mar 24, 2023:1-8.
  34. Kaewkannate K, Kim S. A comparison of wearable fitness devices. BMC Public Health. May 24, 2016;16(1):433.
  35. Cho S, Ensari I, Weng C, Kahn MG, Natarajan K. Factors affecting the quality of person-generated wearable device data and associated challenges: rapid systematic review. JMIR Mhealth Uhealth. Mar 19, 2021;9(3):e20738.

Abbreviations

CHW: community health worker
NCM: nurse case manager
SRQR: Standards for Reporting Qualitative Research
WMD: wearable monitoring device

Edited by T de Azevedo Cardoso; submitted 27.05.23; peer-reviewed by M Keivani, I Madujibeya, A AL-Asadi; comments to author 06.12.23; revised version received 14.01.24; accepted 24.05.24; published 07.08.24.

©Arkers Kwan Ching Wong, Jonathan Bayuo, Jing Jing Su, Karen Kit Sum Chow, Siu Man Wong, Bonnie Po Wong, Athena Yin Lam Lee, Frances Kam Yuet Wong. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 07.08.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
