Published by Grace Graffin on August 11th, 2021. Revised on June 11, 2024.
Each part of the dissertation is unique, and some general and specific rules must be followed. The dissertation's findings section presents the key results of your research without interpreting their meaning.
Theoretically, this is an exciting section of a dissertation because it involves writing about what you have observed and found. However, it can become tricky if there is so much information that it risks confusing the readers.
The goal is to include only the essential and relevant findings in this section. The results must be presented in an orderly sequence to provide clarity to the readers.
This section of the dissertation should be easy for the readers to follow, so you should avoid going into a lengthy debate over the interpretation of the results.
It is vitally important to focus only on clear and precise observations. The findings chapter of the dissertation is theoretically the easiest to write.
It includes the statistical analysis and a brief write-up about whether or not the results emerging from the analysis are significant. This segment should be written in the past tense, as you are describing what has already been done.
This article will provide detailed information about how to write the findings section of a dissertation.
As soon as you have gathered and analysed your data, you can start to write up the findings chapter of your dissertation paper. Remember that it is your chance to report the most notable findings of your research work and relate them to the research hypothesis or research questions set out in the introduction chapter of the dissertation.
You will be required to separately report your study’s findings before moving on to the discussion chapter if your dissertation is based on the collection of primary data or experimental work.
However, you may not be required to have an independent findings chapter if your dissertation is purely descriptive and focuses on the analysis of case studies or interpretation of texts.
The best way to present your quantitative findings is to structure them around the research hypothesis or questions you intend to address as part of your dissertation project.
Report the relevant findings for each research question or hypothesis, focusing on how you analyzed them.
Analysis of your findings will help you determine how they relate to the different research questions and whether they support the hypothesis you formulated.
While you must highlight meaningful relationships, variances, and tendencies, it is important not to guess their interpretations and implications because this is something to save for the discussion and conclusion chapters.
Any findings not directly relevant to your research questions or explanations concerning the data collection process should be added to the dissertation paper’s appendix section.
If your dissertation is based on quantitative research, it is important to include charts, graphs, tables, and other visual elements to help your readers understand the emerging trends and relationships in your findings.
Refer to all charts, illustrations, and tables in your writing, but avoid repeating the information they contain; repetition gives the impression that you are short on ideas. Use the text to elaborate on and summarise specific parts of your results, and use illustrations and tables to present multifaceted data.
It is recommended to give descriptive labels and captions to all illustrations used so the readers can figure out what each refers to.
Here is an example of how to report quantitative results in your dissertation findings chapter:
Two hundred seventeen participants completed both the pretest and posttest, and a paired-samples t-test was used for the analysis. The quantitative data analysis reveals a statistically significant difference between the mean scores of the pretest and posttest scales from the Teachers Discovering Computers course. The pretest mean was 29.00 with a standard deviation of 7.65, while the posttest mean was 26.50 with a standard deviation of 9.74 (Table 1). These results yield a significance level of .000, indicating a strong treatment effect (Table 3). The correlation between the scores was .448, indicating only a weak relationship between the pretest and posttest scores (Table 2). This leads the researcher to conclude that the impact of the course on the educators' perception and integration of technology into the curriculum is dramatic.
Table 1. Paired samples statistics

| | Mean | N | Std. Deviation | Std. Error Mean |
|---|---|---|---|---|
| PRESCORE | 29.00 | 217 | 7.65 | .519 |
| PSTSCORE | 26.50 | 217 | 9.74 | .661 |
Table 2. Paired samples correlations

| | N | Correlation | Sig. |
|---|---|---|---|
| PRESCORE & PSTSCORE | 217 | .448 | .000 |
Table 3. Paired samples test (paired differences)

| | Mean | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | Sig. (2-tailed) |
|---|---|---|---|---|---|---|---|---|
| Pair 1: PRESCORE-PSTSCORE | 2.50 | 9.31 | .632 | 1.26 | 3.75 | 3.967 | 216 | .000 |
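As a quick illustration of where numbers like these come from, here is a minimal Python sketch using SciPy. The two arrays are randomly generated placeholders standing in for the 217 pretest and posttest scores, so the output will not reproduce the values above exactly.

```python
# Minimal sketch of a paired-samples t-test analysis with SciPy.
# 'pre' and 'post' are simulated placeholders, not the study's real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(loc=29.0, scale=7.65, size=217)   # placeholder pretest scores
post = rng.normal(loc=26.5, scale=9.74, size=217)  # placeholder posttest scores

print(f"Pretest:  M = {pre.mean():.2f}, SD = {pre.std(ddof=1):.2f}")
print(f"Posttest: M = {post.mean():.2f}, SD = {post.std(ddof=1):.2f}")

r, _ = stats.pearsonr(pre, post)   # pretest-posttest correlation
t, p = stats.ttest_rel(pre, post)  # paired-samples t-test
print(f"r = {r:.3f}; t({len(pre) - 1}) = {t:.3f}, p = {p:.4f}")
```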
A notable issue with reporting qualitative findings is that not all results directly relate to your research questions or hypothesis.
The best way to present the results of qualitative research is to frame your findings around the most critical areas or themes you obtained after you examined the data.
In-depth data analysis will help you observe what the data shows for each theme. Any developments, relationships, patterns, and independent responses directly relevant to your research question or hypothesis should be mentioned to the readers.
Additional information not directly relevant to your research can be included in the appendix.
Here is an example of how to report qualitative results in your dissertation findings chapter:
The last question of the interview focused on the need for improvement in Thai ready-to-eat (RTE) products and the industry at large, emphasizing the need for enhancement in the current products being offered in the market. When asked whether Thai ready-to-eat meals needed to be improved, and how to improve them if so, the males replied mainly that the current products need improvement in terms of the use of healthier raw materials and preservatives or additives. There was agreement amongst all males concerning the need to improve the ready-to-eat meals industry and to use healthier items to prepare such meals. The females were also of the opinion that the products needed to be improved by using healthier raw materials, such as vegetable oil, unsaturated fats, and whole-wheat products, in the production of RTE products to overcome the risks associated with trans fats, which lead to obesity and hypertension. The frozen RTE meals and packaged snacks included many preservatives and chemical-based flavouring enhancers that harmed human health and needed to be reduced. The industry is said to be aware of this fact and should try to produce RTE products that benefit the community in terms of healthy consumption.
The dissertation findings chapter should provide the context for understanding the results. The research problem should be repeated, and the research goals should be stated briefly.
This approach helps draw the reader's attention to the research problem. The first step in writing the findings is identifying which results will be presented in this section.
The results relevant to the questions must be presented, considering whether they support the hypothesis. You do not need to include every result in the findings section. The next step is to ensure that the data is accurate and appropriately organized.
You will need a basic understanding of how to write the findings of a dissertation, as this will help you arrange the data in an orderly sequence.
Start each paragraph with the most important results and conclude the section with the least significant ones.
A short concluding paragraph can summarise the findings so that readers retain them as they transition to the next chapter. This is especially important if the findings are unexpected or unfamiliar, or have significant implications for the study.
When crafting your findings, it is important to know how you will organize the work. The findings are the story you tell in response to your research questions, so the story must be organized in a way that makes sense to you and the reader. The findings must be compelling, responsive to the research questions, and clearly linked to them.
Always report the size and direction of any changes, including percentage changes where relevant, and include details such as p values or confidence intervals and limits.
The findings section should mention only the relevant parts of the primary evidence; still, it is good practice to include all the primary evidence in an appendix that can be referred to later.
The results should always be written neutrally, without speculation or implication, and the statement of the results must not contain any evaluation or interpretation.
Negative results should also be reported in the findings section, because doing so adds credibility and demonstrates neutrality.
The length of the dissertation findings chapter is an important question that must be addressed. It should be noted that the length of the section is directly related to the total word count of your dissertation paper.
The writer should use their discretion in deciding the length of the findings section or refer to the dissertation handbook or structure guidelines.
It should be neither too long nor too short, but concise and comprehensive enough to highlight the main findings for the reader.
Ethically, you should report your findings honestly, including any counter-evidence, and discard anything that lacks sufficient supporting evidence. The findings should respond to the research problem and answer the research questions.
The chapter should use appropriate words and phrases to present the results to the readers. Logical sentences should be used, while paragraphs should be linked to produce cohesive work.
You must ensure all the significant results have been added in the section. Recheck after completing the section to ensure no mistakes have been made.
You should settle on the structure of the findings section early, because it provides the basis for your research work and ensures that the discussion section can be written clearly and proficiently.
One way to arrange the results is to provide a brief synopsis and then explain the essential findings. However, there should be no speculation or explanation of the results, as this will be done in the discussion section.
Another way to arrange the section is to present and explain each result in turn, concluding the section with an overall synopsis. This is the preferred method for longer dissertations and can be helpful when multiple results are equally significant. A brief conclusion should link all the results and transition to the discussion section.
Numerous data analysis dissertation examples are available on the Internet, which will help you improve your understanding of writing the dissertation’s findings.
One of the pitfalls to avoid when writing the dissertation findings is reporting background information or explaining the findings; background belongs in the introduction section, and interpretation belongs in the discussion.
You can always revise the introduction chapter based on the data you have collected if that seems an appropriate thing to do.
Raw data or intermediate calculations should not be added in the findings section. Always ask your professor if raw data needs to be included.
If the data is to be included, then use an appendix or a set of appendices referred to in the text of the findings chapter.
Do not use vague or non-specific phrases in the findings section. It is important to be factual and concise for the reader’s benefit.
The findings section presents the crucial data collected during the research process. It should be presented concisely and clearly to the reader. There should be no interpretation, speculation, or analysis of the data.
The significant results should be categorized systematically, with text used alongside charts, figures, and tables. Furthermore, it is essential to avoid vague and non-specific words in this section.
It is essential to label the tables and visual material properly. You should also check and proofread the section to avoid mistakes.
The dissertation findings chapter is a critical part of your overall dissertation paper.
How do I report quantitative findings?
The best way to present your quantitative findings is to structure them around the research hypothesis or research questions you intended to address as part of your dissertation project. Report the relevant findings for each of the research questions or hypotheses, focusing on how you analyzed them.
How do I report qualitative findings?

The best way to present the results of qualitative research is to frame your findings around the most important areas or themes that you obtained after examining the data.
An in-depth analysis of the data will help you observe what the data is showing for each theme. Any developments, relationships, patterns, and independent responses that are directly relevant to your research question or hypothesis should be clearly mentioned for the readers.
Can I use interpretive or subjective phrases in the findings chapter?

No. It is highly advisable to avoid using interpretive and subjective phrases in the findings chapter. These terms are more suitable for the discussion chapter , where you will be expected to provide your interpretation of the results in detail.
Can I present results from other research studies in my findings?

No. You must not present results from other research studies in your findings chapter.
By Charlesworth Author Services
While it is more common for Science, Technology, Engineering and Mathematics (STEM) researchers to write separate, distinct chapters for their data/results and analysis/discussion, the same sections can feel less clearly defined for a researcher in Social Sciences, Arts and Humanities (SSAH). This article will look specifically at some useful approaches to writing the analysis and discussion chapters in qualitative/SSAH research.
Note: Most of the differences in approaches to research, writing, analysis and discussion come down, ultimately, to differences in epistemology – how we approach, create and work with knowledge in our respective fields. However, this is a vast topic that deserves a separate discussion.
The ‘results’ of qualitative research can sometimes be harder to pinpoint than in quantitative research. You’re not dealing with definitive numbers and results in the same way as, say, a scientist conducting experiments that produce measurable data. Instead, most qualitative researchers explore prominent, interesting themes and patterns emerging from their data – that could comprise interviews, textual material or participant observation, for example.
You may find that your data presents a huge number of themes, issues and topics, all of which you might find equally significant and interesting. In fact, you might find yourself overwhelmed by the many directions that your research could take, depending on which themes you choose to study in further depth. You may even discover issues and patterns that you had not expected, which may necessitate changing or expanding the research focus you initially started with.
It is crucial at this point not to panic. Instead, try to enjoy the many possibilities that your data is offering you. It can be useful to remind yourself at each stage of exactly what you are trying to find out through this research.
What exactly do you want to know? What knowledge do you want to generate and share within your field?
Then, spend some time reflecting upon each of the themes that seem most interesting and significant, and consider whether they are immediately relevant to your main, overarching research objectives and goals.
Suggestion: Don’t worry too much about structure and flow at the early stages of writing your discussion. It would be a more valuable use of your time to fully explore the themes and issues arising from your data first, while also reading widely alongside your writing (more on this below). As you work more intimately with the data and develop your ideas, the overarching narrative and connections between those ideas will begin to emerge. Trust that you’ll be able to draw those links and craft the structure organically as you write.
A key characteristic of qualitative research is that the researchers allow their data to ‘speak’ and guide their research and their writing. Instead of insisting too strongly upon the prominence of specific themes and issues and imposing their opinions and beliefs upon the data, a good qualitative researcher ‘listens’ to what the data has to tell them.
Again, you might find yourself having to address unexpected issues or your data may reveal things that seem completely contradictory to the ideas and theories you have worked with so far. Although this might seem worrying, discovering these unexpected new elements can actually make your research much richer and more interesting.
Suggestion: Allow yourself to follow those leads and ask new questions as you work through your data. These new directions could help you to answer your research questions in more depth and with greater complexity; or they could even open up other avenues for further study, either in this or future research.
As you analyse and discuss the prominent themes, arguments and findings arising from your data, it is very helpful to maintain a regular and consistent reading practice alongside your writing. Return to the literature that you’ve already been reading so far or begin to check out new texts, studies and theories that might be more appropriate for working with any new ideas and themes arising from your data.
Reading and incorporating relevant literature into your writing as you work through your analysis and discussion will help you to consistently contextualise your research within the larger body of knowledge. It will be easier to stay focused on what you are trying to say through your research if you can simultaneously show what has already been said on the subject and how your research and data supports, challenges or extends those debates. By drawing from existing literature, you are setting up a dialogue between your research and prior work, and highlighting what this research has to add to the conversation.
Suggestion: Although it might sometimes feel tedious to have to blend others’ writing in with yours, this is ultimately the best way to showcase the specialness of your own data, findings and research. Remember that it is more difficult to highlight the significance and relevance of your original work without first showing how that work fits into or responds to existing studies.
The discussion chapters form the heart of your thesis, and this is where your unique contribution comes to the forefront. This is where your data takes centre stage and where you get to showcase your original arguments, perspectives and knowledge. Doing this effectively requires you to explore the original themes and issues arising from and within the data, while simultaneously contextualising these findings within the larger, existing body of knowledge in your specialist field. By striking this balance, you demonstrate the two most important qualities of excellent qualitative research: keen awareness of your field and a firm understanding of your place in it.
Presenting Your Qualitative Analysis Findings: Tables to Include in Chapter 4
The earliest stages of developing a doctoral dissertation—most specifically the topic development and literature review stages—require that you immerse yourself in a ton of existing research related to your potential topic. If you have begun writing your dissertation proposal, you have undoubtedly reviewed countless results and findings sections of studies in order to help gain an understanding of what is currently known about your topic.
In this process, we’re guessing that you observed a distinct pattern: Results sections are full of tables. Indeed, the results chapter for your own dissertation will need to be similarly packed with tables. So, if you’re preparing to write up the results of your statistical analysis or qualitative analysis, it will probably help to review your APA editing manual to brush up on your table formatting skills. But, aside from formatting, how should you develop the tables in your results chapter?
In quantitative studies, tables are a handy way of presenting the variety of statistical analysis results in a form that readers can easily process. You’ve probably noticed that quantitative studies present descriptive results like mean, mode, range, and standard deviation, as well as the inferential results that indicate whether significant relationships or differences were found through the statistical analysis. These are pretty standard tables that you probably learned about in your pre-dissertation statistics courses.
But what if you are conducting qualitative analysis? What tables are appropriate for this type of study? This is a question we hear often from our dissertation assistance clients, and with good reason. University guidelines for results chapters often contain vague instructions that guide you to include “appropriate tables” without specifying what exactly those are. To help clarify this point, we asked our qualitative analysis experts to share their recommendations for tables to include in your Chapter 4.
Demographics Tables
As with studies using quantitative methods , presenting an overview of your sample demographics is useful in studies that use qualitative research methods. The standard demographics table in a quantitative study provides aggregate information for what are often large samples. In other words, such tables present totals and percentages for demographic categories within the sample that are relevant to the study (e.g., age, gender, job title).
If conducting qualitative research for your dissertation, however, you will use a smaller sample and obtain richer data from each participant than in quantitative studies. To enhance thick description—a dimension of trustworthiness—it will help to present sample demographics in a table that includes information on each participant. Remember that ethical standards of research require that all participant information be deidentified, so use participant identification numbers or pseudonyms for each participant, and do not present any personal information that would allow others to identify the participant (Blignault & Ritchie, 2009). Table 1 provides participant demographics for a hypothetical qualitative research study exploring the perspectives of persons who were formerly homeless regarding their experiences of transitioning into stable housing and obtaining employment.
Participant Demographics
| Participant ID | Gender | Age | Current Living Situation |
|---|---|---|---|
| P1 | Female | 34 | Alone |
| P2 | Male | 27 | With Family |
| P3 | Male | 44 | Alone |
| P4 | Female | 46 | With Roommates |
| P5 | Female | 25 | With Family |
| P6 | Male | 30 | With Roommates |
| P7 | Male | 38 | With Roommates |
| P8 | Male | 51 | Alone |
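If your participant information lives in a spreadsheet or data frame, a table like this is straightforward to assemble programmatically. The following is a small sketch using pandas with hypothetical rows mirroring the columns above.

```python
# Sketch: building a per-participant demographics table with pandas.
# The rows are hypothetical examples mirroring Table 1 above.
import pandas as pd

participants = pd.DataFrame({
    "Participant ID": ["P1", "P2", "P3", "P4"],
    "Gender": ["Female", "Male", "Male", "Female"],
    "Age": [34, 27, 44, 46],
    "Current Living Situation": ["Alone", "With Family", "Alone", "With Roommates"],
})

print(participants.to_string(index=False))  # per-participant table
# Aggregate percentages, if a committee also wants a quantitative-style summary:
print(participants["Gender"].value_counts(normalize=True).mul(100).round(1))
```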
Tables to Illustrate Initial Codes
Most of our dissertation consulting clients who are conducting qualitative research choose a form of thematic analysis . Qualitative analysis to identify themes in the data typically involves a progression from (a) identifying surface-level codes to (b) developing themes by combining codes based on shared similarities. As this process is inherently subjective, it is important that readers be able to evaluate the correspondence between the data and your findings (Anfara et al., 2002). This supports confirmability, another dimension of trustworthiness .
A great way to illustrate the trustworthiness of your qualitative analysis is to create a table that displays quotes from the data that exemplify each of your initial codes. Providing a sample quote for each of your codes can help the reader to assess whether your coding was faithful to the meanings in the data, and it can also help to create clarity about each code’s meaning and bring the voices of your participants into your work (Blignault & Ritchie, 2009).
Table 2 is an example of how you might present information regarding initial codes. Depending on your preference or your dissertation committee’s preference, you might also present percentages of the sample that expressed each code. Another common piece of information to include is which actual participants expressed each code. Note that if your qualitative analysis yields a high volume of codes, it may be appropriate to present the table as an appendix.
Initial Codes
| Initial code | No. of participants contributing (N = 8) | No. of transcript excerpts assigned | Sample quote |
|---|---|---|---|
| Daily routine of going to work enhanced sense of identity | 7 | 12 | “It’s just that good feeling of getting up every day like everyone else and going to work, of having that pattern that’s responsible. It makes you feel good about yourself again.” (P3) |
| Experienced discrimination due to previous homelessness | 2 | 3 | “At my last job, I told a couple other people on my shift I used to be homeless, and then, just like that, I get put into a worse job with less pay. The boss made some excuse why they did that, but they didn’t want me handling the money is why. They put me in a lower level job two days after I talk to people about being homeless in my past. That’s no coincidence if you ask me.” (P6) |
| Friends offered shared housing | 3 | 3 | “My friend from way back had a spare room after her kid moved out. She let me stay there until I got back on my feet.” (P4) |
| Mental health services essential in getting into housing | 5 | 7 | “Getting my addiction treated was key. That was a must. My family wasn’t gonna let me stay around their place without it. So that was a big help for getting back into a place.” (P2) |
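The participant and excerpt counts in a table like this can be tallied directly from your coded excerpts. Here is an illustrative Python sketch; the coded list is a hypothetical stand-in for an export from your qualitative analysis software.

```python
# Sketch: tallying per-code participant and excerpt counts.
# Each tuple is (code, participant_id, excerpt) -- hypothetical examples.
from collections import defaultdict

coded = [
    ("Friends offered shared housing", "P4", "My friend had a spare room."),
    ("Friends offered shared housing", "P7", "A friend let me stay a while."),
    ("Experienced discrimination", "P6", "I got put into a worse job."),
]

excerpt_counts = defaultdict(int)
participant_ids = defaultdict(set)
for code, pid, _quote in coded:
    excerpt_counts[code] += 1
    participant_ids[code].add(pid)

for code, n_excerpts in excerpt_counts.items():
    print(f"{code}: {len(participant_ids[code])} participants, {n_excerpts} excerpts")
```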
Tables to Present the Groups of Codes That Form Each Theme
As noted previously, most of our dissertation assistance clients use a thematic analysis approach, which involves multiple phases of qualitative analysis that eventually result in themes that answer the dissertation’s research questions. After initial coding is completed, the analysis process involves (a) examining what different codes have in common and then (b) grouping similar codes together in ways that are meaningful given your research questions. In other words, the common threads that you identify across multiple codes become the theme that holds them all together—and that theme answers one of your research questions.
As with initial coding, grouping codes together into themes involves your own subjective interpretations, even when aided by qualitative analysis software such as NVivo or MAXQDA. In fact, our dissertation assistance clients are often surprised to learn that qualitative analysis software does not complete the analysis in the same ways that statistical analysis software such as SPSS does. While statistical analysis software completes the computations for you, qualitative analysis software does not have such analysis capabilities. Software such as NVivo provides a set of organizational tools that make the qualitative analysis far more convenient, but the analysis itself is still a very human process (Burnard et al., 2008).
Because of the subjective nature of qualitative analysis, it is important to show the underlying logic behind your thematic analysis in tables—such tables help readers to assess the trustworthiness of your analysis. Table 3 provides an example of how to present the codes that were grouped together to create themes, and you can modify the specifics of the table based on your preferences or your dissertation committee’s requirements. For example, this type of table might be presented to illustrate the codes associated with themes that answer each research question.
Grouping of Initial Codes to Form Themes
| Theme and initial codes grouped to form theme | No. of participants contributing (N = 8) | No. of transcript excerpts assigned |
|---|---|---|
| Theme: Assistance from friends, family, or strangers was instrumental in getting back into stable housing | 6 | 10 |
| Family member assisted them to get into housing | | |
| Friends offered shared housing | | |
| Stranger offered shared housing | | |
| Theme: Obtaining professional support was essential for overcoming the cascading effects of poverty and homelessness | 7 | 19 |
| Financial benefits made obtaining housing possible | | |
| Mental health services essential in getting into housing | | |
| Social services helped navigate housing process | | |
| Theme: Stigma and concerns about discrimination caused them to feel uncomfortable socializing with coworkers | 6 | 9 |
| Experienced discrimination due to previous homelessness | | |
| Feared negative judgment if others learned of their pasts | | |
| Theme: Routine productivity and sense of making a contribution helped to restore self-concept and positive social identity | 8 | 21 |
| Daily routine of going to work enhanced sense of identity | | |
| Feels good to contribute to society/organization | | |
| Seeing products of their efforts was rewarding | | |
Tables to Illustrate the Themes That Answer Each Research Question
Creating alignment throughout your dissertation is an important objective, and to maintain alignment in your results chapter, the themes you present must clearly answer your research questions. Conducting qualitative analysis is an in-depth process of immersion in the data, and many of our dissertation consulting clients have shared that it’s easy to lose your direction during the process. So, it is important to stay focused on your research questions during the qualitative analysis and also to show the reader exactly which themes—and subthemes, as applicable—answered each of the research questions.
Below, Table 4 provides an example of how to display the thematic findings of your study in table form. Depending on your dissertation committee’s preference or your own, you might present all research questions and all themes and subthemes in a single table. Or, you might provide separate tables to introduce the themes for each research question as you progress through your presentation of the findings in the chapter.
Emergent Themes and Research Questions
| Research question | Themes that address question |
|---|---|
| RQ1. How do adults who have previously experienced homelessness describe their transitions to stable housing? | Theme 1: Assistance from friends, family, or strangers was instrumental in getting back into stable housing. Theme 2: Obtaining professional support was essential for overcoming the cascading effects of poverty and homelessness. |
| RQ2. How do adults who have previously experienced homelessness describe returning to paid employment? | Theme 3: Self-perceived stigma caused them to feel uncomfortable socializing with coworkers. Theme 4: Routine productivity and sense of making a contribution helped to restore self-concept and positive social identity. |
Bonus Tip! Figures to Spice Up Your Results
Although dissertation committees most often wish to see tables such as the above in qualitative results chapters, some also like to see figures that illustrate the data. Qualitative software packages such as NVivo offer many options for visualizing your data, such as mind maps, concept maps, charts, and cluster diagrams. A common choice for this type of figure among our dissertation assistance clients is a tree diagram, which shows the connections between specified words and the words or phrases that participants shared most often in the same context. Another common choice of figure is the word cloud, as depicted in Figure 1. The word cloud simply reflects frequencies of words in the data, which may provide an indication of the importance of related concepts for the participants.
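As one illustration, a word cloud like Figure 1 can be produced in a few lines of Python, assuming the third-party wordcloud and matplotlib packages are installed; the transcript filename here is hypothetical.

```python
# Sketch: generating a word-cloud figure from combined transcripts.
# Assumes 'pip install wordcloud matplotlib' and a plain-text transcript file.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

with open("transcripts.txt") as f:  # hypothetical combined-transcript file
    text = f.read()

cloud = WordCloud(width=800, height=400, background_color="white").generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("figure1_wordcloud.png", dpi=300, bbox_inches="tight")
```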
As you move forward with your qualitative analysis and the development of your results chapter, we hope that this brief overview of useful tables and figures helps you to decide on an ideal presentation to showcase the trustworthiness of your findings. Completing a rigorous qualitative analysis for your dissertation requires many hours of careful interpretation of your data, and your end product should be a rich and detailed results presentation that you can be proud of. Reach out if we can help in any way, as our dissertation coaches would be thrilled to assist as you move through this exciting stage of your dissertation journey!
Anfara, V. A., Jr., Brown, K. M., & Mangione, T. L. (2002). Qualitative analysis on stage: Making the research process more public. Educational Researcher, 31(7), 28–38. https://doi.org/10.3102/0013189X031007028
Blignault, I., & Ritchie, J. (2009). Revealing the wood and the trees: Reporting qualitative research. Health Promotion Journal of Australia, 20(2), 140–145. https://doi.org/10.1071/HE09140
Burnard, P., Gill, P., Stewart, K., Treasure, E., & Chadwick, B. (2008). Analysing and presenting qualitative data. British Dental Journal, 204(8), 429–432. https://doi.org/10.1038/sj.bdj.2008.292
Published on June 19, 2020 by Pritha Bhandari. Revised on June 22, 2023.
Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.
Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.
Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.
Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.
Common approaches include grounded theory, ethnography , action research , phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.
| Approach | What does it involve? |
|---|---|
| Grounded theory | Researchers collect rich data on a topic of interest and develop theories. |
| Ethnography | Researchers immerse themselves in groups or organizations to understand their cultures. |
| Action research | Researchers and participants collaboratively link theory to practice to drive social change. |
| Phenomenological research | Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences. |
| Narrative research | Researchers examine how stories are told to understand how participants perceive and make sense of their experiences. |
Note that qualitative research is at risk for certain research biases including the Hawthorne effect , observer bias , recall bias , and social desirability bias . While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.
Each of the research approaches involves using one or more data collection methods. Some of the most common qualitative methods are observations (recording what you see, hear, or encounter in detailed fieldnotes), interviews, focus groups, surveys with open-ended questions, and secondary research (collecting existing texts, images, or audio/video recordings).
Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.
For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.
Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.
Most types of qualitative data analysis share the same five steps:

1. Prepare and organize your data (e.g., transcribe interviews or type up fieldnotes).
2. Review and explore your data for patterns and recurring ideas.
3. Develop a data coding system to label those ideas.
4. Assign the codes throughout the data.
5. Identify recurring themes by linking related codes together.
There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.
| Approach | When to use | Example |
|---|---|---|
| Content analysis | To describe and categorize common words, phrases, and ideas in qualitative data. | A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps. |
| Thematic analysis | To identify and interpret patterns and themes in qualitative data. | A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity. |
| Textual analysis | To examine the content, structure, and design of texts. | A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade. |
| Discourse analysis | To study communication and how language is used to achieve effects in specific contexts. | A political scientist could use discourse analysis to study how politicians generate trust in election campaigns. |
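To make the first row concrete, here is a minimal content-analysis pass in Python: it counts the most frequent words across some hypothetical app descriptions, with a deliberately tiny stopword list.

```python
# Sketch: a bare-bones content-analysis word count (hypothetical data).
import re
from collections import Counter

descriptions = [
    "A calming app to help you ease anxiety and build healthy habits",
    "Guided meditations to ease stress and improve your sleep",
]
stopwords = {"a", "to", "and", "you", "your"}

words = [
    word
    for text in descriptions
    for word in re.findall(r"[a-z]+", text.lower())
    if word not in stopwords
]
print(Counter(words).most_common(5))  # e.g., [('ease', 2), ('calming', 1), ...]
```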
Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:
- Flexibility: The data collection and analysis process can be adapted as new ideas or patterns emerge; they are not rigidly decided beforehand.
- Natural settings: Data collection occurs in real-world contexts or in naturalistic ways.
- Meaningful insights: Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.
- Generation of new ideas: Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.
Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:
- Unreliability: The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.
- Subjectivity: Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated. The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.
- Limited generalizability: Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population.
- Labor-intensive analysis: Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.
There are five common approaches to qualitative research: grounded theory, ethnography, action research, phenomenological research, and narrative research.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are various approaches to qualitative data analysis, but they all share five steps in common: preparing and organizing the data; reviewing and exploring it; developing a coding system; assigning codes throughout the data; and identifying recurring themes.
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
After collecting and analyzing your research data, it’s time to write the results section. This article explains how to write and organize the thesis results section, the differences in reporting qualitative and quantitative data, the differences in the thesis results section across different fields, and the best practices for tables and figures.
The thesis results section factually and concisely describes what was observed and measured during the study but does not interpret the findings. It presents the findings in a logical order.
The opening paragraph of the thesis results section should briefly restate the thesis question. Then, present the results objectively as text, figures, or tables.
Quantitative research presents the results from experiments and statistical tests , usually in the form of tables and figures (graphs, diagrams, and images), with any pertinent findings emphasized in the text. The results are structured around the thesis question. Demographic data are usually presented first in this section.
For each statistical test used, the following information must be mentioned: the name of the test, the variables it was applied to, the value of the test statistic, the degrees of freedom, the p value, and the effect size where applicable.
Qualitative research presents results around key themes or topics identified from your data analysis and explains how these themes evolved. The data are usually presented as text because it is hard to present the findings as figures.
For each theme presented, describe what the theme covers, how it emerged from the data, and representative participant quotes that support it. Relevant characteristics of your study subjects should also be reported.
The presentation of results varies considerably across disciplines. For example, a thesis documenting how a particular population interprets a specific event and a thesis investigating customer service may both have collected data using interviews and analyzed it using similar methods. Still, the presentation of the results will vastly differ because they are answering different thesis questions. A science thesis may have used experiments to generate data, and these would be presented differently again, probably involving statistics. Nevertheless, results should be presented logically across all disciplines and reflect the thesis question and any hypotheses that were tested.
In the Sciences domain (quantitative and experimental research), the results and discussion sections are considered separate entities, and the results from experiments and statistical tests are presented. In the HSS domain (qualitative research), the results and discussion sections may be combined.
There are two approaches to presenting results in the HSS field: presenting the results and the discussion as separate sections, or combining them so that each result is discussed as soon as it is presented.
The use of figures and tables is highly encouraged because they provide a standalone overview of the research findings that are much easier to understand than wading through dry text mentioning one result after another. The text in the results section should not repeat the information presented in figures and tables. Instead, it should focus on the pertinent findings or elaborate on specific points.
Some popular software programs that can be used for the analysis and presentation of statistical data include Statistical Package for the Social Sciences (SPSS), R software, MATLAB, Microsoft Excel, Statistical Analysis Software (SAS), GraphPad Prism, and Minitab.
The easiest way to construct tables is to use the Table function in Microsoft Word . Microsoft Excel can also be used; however, Word is the easier option.
Quantitative results example: Figure 3 presents the characteristics of unemployed subjects and their rate of criminal convictions. A statistically significant association was observed between unemployed people <20 years old, the male sex, and no household income.
Qualitative results example: Table 5 shows the themes identified during the face-to-face interviews about the application that we developed to anonymously report corruption in the workplace. There was positive feedback on the app layout and ease of use. Concerns that emerged from the interviews included breaches of confidentiality and the inability to report incidents because of unstable cellphone network coverage.
| Theme | Selected quotes |
|---|---|
| Ease of use of the app | “The app was easy to use, and I did not have to contact the helpdesk” |
| | “I wish all apps were so user-friendly!” |
| App layout | “The screen was not cluttered. The text was easy to read” |
| | “The icons on the screen were easy to understand” |
| Confidentiality | “I am scared that the app developers will disclose my name to my employer” |
| Unstable network coverage | “I was unable to report an incident that occurred at one of our building sites because there was no cellphone reception” |
| | “I wanted to report the incident immediately, but I had to wait until I was home, where the cellphone network signal was strong” |
Table 5. Themes and selected quotes from the evaluation of our app designed to anonymously report workplace corruption.
Results are presented in three sections of your thesis: the results, discussion, and conclusion.
Have you completed all data collection procedures and analyzed all results?
Have you included all results relevant to your thesis question, even if they do not support your hypothesis?
Have you reported the results objectively, with no interpretation or speculation?
For quantitative research, have you included both descriptive and inferential statistical results and stated whether they support or contradict your hypothesis?
Have you used tables and figures to present all results?
In your thesis body, have you presented only the pertinent results and elaborated on specific aspects that were presented in the tables and figures?
Are all tables and figures correctly labeled and cited in numerical order in the text?
This is a missive from the trenches of research. I’m trying to write up half of the results section of a qualitative paper from the outline I’ve drafted. In qualitative research, the writing is not just reporting results but part of the research itself .
I’m sharing this example because I’ve been doing qualitative research for 15 years at this point, and I still need to find ways to manage the different mind traps of writing. This is one of two qualitative papers I’m writing up this year, and one of many I’ve written thus far in my career. With time and experience, I’m getting faster at identifying the mind trap and having strategies to get out of it. Maybe someday I’ll even avoid them altogether. But if you are newer to qualitative research, I want you to know you are not alone, and give you ideas for how you can manage your own writing process.
I need to confirm prior iterations of analysis and write them up in a way that’s not just a list. I’m also trying not to overwrite by 1000 words or more. I’m aiming for the proverbial “crappy first draft” that I will improve over time and with the help of my (many, and interprofessional) coauthors.
In my attempt not to overwrite the length, I give myself some boundaries based on word count. Of the 3500 words for the ultimate draft for a clinical journal, I’ll probably use 300 for intro, 500-800 for methods, 1-2k for results, and 800-1200 for discussion & conclusion. If I aim for 2k for results right now (knowing I could edit things down), it would mean 1k or less for the first section of the results reporting challenges. We identified 4 types (themes? subthemes?) of challenges, so that’s 250 words per “flavor”. Within each type of challenge, like disease related challenges, there’s usually 5-6 sub-elements, so basically each element gets a sentence each. Some elements can get quotes but not all.
I started by skimming the coded data to confirm the take-aways we had outlined, but then I kept finding different awesome quotes, my brain tried to re-adjudicate the analysis, and I wrote 500 words where I needed 100.
So I stopped and checked in with a coauthor and peer qualitative expert. She validated this stage of the process, and agreed with the following plan:
Close the data and quotes.
Write a bare bones generic description of the section.
Go back to Atlas, skim each code to “check” my analytic summary, add specificity.
Add 1-3 high-value (surprising, pithy, unusual) quotes to paragraphs.
Choose 1-2 longer quotes on different themes for the table.
As a result, I finished my task of writing the generic description – 800 words so far - in the same time that it took me to overwrite the first half of the first challenge type.
Sometimes in the process of doing this you realize you don’t have the story straight yet. This also happened to me recently. Though ideally I’d do this before trying to write the results, I realized I needed to go back, review the data, and do some memoing to figure out the story.
I’m working with data coded in Atlas.ti, and for each code, I’m reviewing the data and summarizing each quote with a bullet point in a memo. Then I re-organize the bullets by type (however my brain wants to group them), write headers, and re-write those headers until they are phrases that can be compiled into sentences.
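If you like to prototype this kind of re-organization outside your analysis software, the bullet-grouping step looks something like the Python sketch below; the groups and summaries are hypothetical stand-ins for memo content.

```python
# Sketch: re-grouping per-quote bullet summaries under headers.
# Each tuple is (group, summary) -- hypothetical memo content.
bullets = [
    ("disease-related", "fatigue limited daily activities"),
    ("treatment-related", "side effects disrupted work schedules"),
    ("disease-related", "symptoms varied unpredictably day to day"),
]

grouped: dict[str, list[str]] = {}
for group, summary in bullets:
    grouped.setdefault(group, []).append(summary)

for header, items in grouped.items():
    print(header.replace("-", " ").title())  # draft header, to be re-written
    for item in items:
        print(f"  - {item}")
```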
What other ideas do you use to get unstuck?
Research Methodology – Types, Examples and Writing Guide
Definition:
Research methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data to answer research questions or solve research problems. It also encompasses the philosophical and theoretical frameworks that guide the research process.
Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:
I. Introduction
II. Research Design
III. Data Collection Methods
IV. Data Analysis Methods
V. Ethical Considerations
VI. Limitations
VII. Conclusion
Types of Research Methodology are as follows:
Quantitative Research

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.
An Example of Research Methodology could be the following:
Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults
Introduction:
The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.
Research Design:
The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.
Participants:
Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18–65 years who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.
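As an illustration of the allocation step, here is a minimal sketch in Python. It is not part of the study protocol above: the participant IDs are hypothetical, and a real trial would typically use a pre-registered randomization procedure (e.g., permuted blocks) managed independently of the study team.

```python
# Minimal sketch of 1:1 random allocation (hypothetical participant IDs;
# a real trial would use permuted-block randomization or an independent
# randomization service rather than an ad hoc script).
import random

random.seed(42)  # fixed seed so the allocation is reproducible and auditable
participants = [f"P{i:03d}" for i in range(1, 101)]  # 100 enrolled adults

random.shuffle(participants)
experimental = participants[:50]  # receives 12 weekly CBT sessions
control = participants[50:]       # no intervention during the study period

print(len(experimental), len(control))  # 50 50
```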
Intervention:
The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.
Data Collection:
Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.
Data Analysis:
Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
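To make the quantitative analysis plan concrete, the following is a minimal sketch of how it might be run in Python. The file name and column names (id, group, time, bdi) are assumptions for illustration only, and a real analysis would first check model assumptions and decide how to handle missing follow-up data.

```python
# Sketch of the quantitative analysis plan: descriptives, an
# independent-samples t-test at post-test, and a mixed model for the
# group-by-time effect. File and column names are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("trial_scores.csv")  # columns: id, group, time, bdi

# Descriptive statistics per group and time point
print(df.groupby(["group", "time"])["bdi"].agg(["mean", "std", "count"]))

# Independent-samples t-test on post-intervention BDI-II scores
post = df[df["time"] == "post"]
t, p = stats.ttest_ind(post.loc[post["group"] == "cbt", "bdi"],
                       post.loc[post["group"] == "control", "bdi"])
print(f"post-test t = {t:.2f}, p = {p:.4f}")

# Mixed model with a random intercept per participant captures the
# repeated measures (baseline, post-test, 3-month follow-up)
model = smf.mixedlm("bdi ~ group * time", df, groups=df["id"]).fit()
print(model.summary())
```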
Ethical Considerations:
This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.
Data Management:
All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.
Limitations:
One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.
Conclusion:
This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable insights into the mechanisms underlying the relationship between CBT and depression. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.
Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It is an essential section of any research paper or thesis because it helps readers judge the validity and reliability of your findings.
Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.
The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.
The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.
Research methodology is often confused with research methods. The table below summarizes the key differences between the two.
Research Methodology | Research Methods
---|---
Research methodology refers to the philosophical and theoretical frameworks that guide the research process. | Research methods refer to the techniques and procedures used to collect and analyze data.
It is concerned with the underlying principles and assumptions of research. | They are concerned with the practical aspects of research.
It provides a rationale for why certain research methods are used. | They determine the specific steps that will be taken to conduct research.
It is broader in scope and involves understanding the overall approach to research. | They are narrower in scope and focus on the specific techniques and tools used in research.
It is concerned with identifying research questions, defining the research problem, and formulating hypotheses. | They are concerned with collecting data, analyzing data, and interpreting results.
It is concerned with the validity and reliability of research. | They are concerned with the accuracy and precision of data.
It is concerned with the ethical considerations of research. | They are concerned with the practical considerations of research.
Resources for writing about data and results in APA style
With many examples (including "Poor", "Better", and "Best" versions), this book shows how to understand your data as well as take the perspective of the reader. Learn to explain everything from a single number to the results of multiple logistic regressions in plain words (though not every type of relationship or test is covered). See also the author's supplementary materials, including podcast and video presentations of the slides.
Always be careful when using these templates for writing up results. They are specific to the way the data were coded and to the specific research question. You may need to include more, less, or different information in your field or for particular journals. The best model is from your advisor, a colleague, or another paper in your field.
Organized by test, with additional FAQs, explanations, and commentary, this book is a well-organized compilation with complete examples of reporting each test, including a description of the findings and the test results in APA style. It covers descriptive statistics, reliability, and standard tests up to ANOVA and multiple regression, plus a chapter on tables.
If none of the above covers your situation, check the Psychology Resource Archive by the University of Nebraska, which has a huge collection of short PDFs on specific analyses, many (but not all) with example write-ups, along with instructions and output from SPSS.
Yelena P. Wu
1 Division of Public Health, Department of Family and Preventive Medicine, University of Utah,
2 Cancer Control and Population Sciences, Huntsman Cancer Institute,
3 Department of Pediatrics-Nutrition, USDA/ARS Children’s Nutrition Research Center, Baylor College of Medicine,
4 College of Nursing, University of Central Florida,
5 Department of Psychiatry and Human Behavior, Brown University, and
6 School of Nursing, University of Pennsylvania
Objective: To provide an overview of qualitative methods, particularly for reviewers and authors who may be less familiar with qualitative research. Methods: A question and answer format is used to address considerations for writing and evaluating qualitative research. Results and Conclusions: When producing qualitative research, individuals are encouraged to address the qualitative research considerations raised and to explicitly identify the systematic strategies used to ensure rigor in study design and methods, analysis, and presentation of findings. Increasing capacity for review and publication of qualitative research within pediatric psychology will advance the field's ability to gain a better understanding of the specific needs of pediatric populations, tailor interventions more effectively, and promote optimal health.
The Journal of Pediatric Psychology (JPP) has a long history of emphasizing high-quality, methodologically rigorous research in social and behavioral aspects of children’s health ( Palermo, 2013 , 2014 ). Traditionally, research published in JPP has focused on quantitative methodologies. Qualitative approaches are of interest to pediatric psychologists given the important role of qualitative research in developing new theories ( Kelly & Ganong, 2011 ), illustrating important clinical themes ( Kars, Grypdonck, de Bock, & van Delden, 2015 ), developing new instruments ( Thompson, Bhatt, & Watson, 2013 ), understanding patients’ and families’ perspectives and needs ( Bevans, Gardner, Pajer, Riley, & Forrest, 2013 ; Lyons, Goodwin, McCreanor, & Griffin, 2015 ), and documenting new or rarely examined issues ( Haukeland, Fjermestad, Mossige, & Vatne, 2015 ; Valenzuela et al., 2011 ). Further, these methods are integral to intervention development ( Minges et al., 2015 ; Thompson et al., 2007 ) and understanding intervention outcomes ( de Visser et al., 2015 ; Hess & Straub, 2011 ). For example, when designing an intervention, qualitative research can identify patient and family preferences for and perspectives on desirable intervention characteristics and perceived needs ( Cassidy et al., 2013 ; Hess & Straub, 2011 ; Thompson, 2014 ), which may lead to a more targeted, effective intervention.
Both qualitative and quantitative approaches are concerned with issues such as generalizability of study findings (e.g., to whom the study findings can be applied) and rigor. However, qualitative and quantitative methods approach these issues differently. The purpose of qualitative research is to contribute knowledge or understanding by describing phenomena within certain groups or populations of interest. As such, the purpose of qualitative research is not to provide generalizable findings. Instead, qualitative research has a discovery focus and often uses an iterative approach. Thus, qualitative work is often foundational to future qualitative, quantitative, or mixed-methods studies.
At the time of this writing, three of six current calls for papers for special issues of JPP specifically note that manuscripts incorporating qualitative approaches would be welcomed. Despite apparent openness to broadening JPP’s emphasis beyond its traditional quantitative approach, few published articles have used qualitative methods. For example, of 232 research articles published in JPP from 2012 to 2014 (excluding commentaries and reviews), only five used qualitative methods (2% of articles).
The goal of the current article is to present considerations for writing and evaluating qualitative research within the context of pediatric psychology to provide a framework for writing and reviewing manuscripts reporting qualitative findings. The current article may be especially useful to reviewers and authors who are less familiar with qualitative methods. The tenets presented here are grounded in the well-established literature on reporting and evaluating qualitative research, including guidelines and checklists ( Eakin & Mykhalovskiy, 2003 ; Elo et al., 2014 ; Mays & Pope, 2000 ; Tong, Sainsbury, & Craig, 2007 ). For example, the Consolidated Criteria for Reporting Qualitative Research checklist describes essential elements for reporting qualitative findings ( Tong et al., 2007 ). Although the considerations presented in the current manuscript have broad applicability to many fields, examples were purposively selected for the field of pediatric psychology.
Our goal is that this article will stimulate publication of more qualitative research in pediatric psychology and allied fields. More specifically, the goal is to encourage high-quality qualitative research by addressing key issues involved in conducting qualitative studies, and the process of conducting, reporting, and evaluating qualitative findings. Readers interested in more in-depth information on designing and implementing qualitative studies, relevant theoretical frameworks and approaches, and analytic approaches are referred to the well-developed literature in this area ( Clark, 2003 ; Corbin & Strauss, 2008 ; Creswell, 1994 ; Eakin & Mykhalovskiy, 2003 ; Elo et al., 2014 ; Mays & Pope, 2000 ; Miles, Huberman, & Saldaña, 2013 ; Ritchie & Lewis, 2003 ; Saldaña, 2012 ; Sandelowski, 1995 , 2010 ; Tong et al., 2007 ; Yin, 2015 ). Researchers new to qualitative research are also encouraged to obtain specialized training in qualitative methods and/or to collaborate with a qualitative expert in an effort to ensure rigor (i.e., validity).
We begin the article with a definition of qualitative research and an overview of the concept of rigor. While we recognize that qualitative methods comprise multiple and distinct approaches with unique purposes, we present an overview of considerations for writing and evaluating qualitative research that cut across qualitative methods. Specifically, we present basic principles in three broad areas: (1) study design and methods, (2) analytic considerations, and (3) presentation of findings (see Table 1 for a summary of the principles addressed in each area). Each area is addressed using a “question and answer” format. We present a brief explanation of each question, options for how one could address the issue raised, and a suggested recommendation. We recognize, however, that there are no absolute “right” or “wrong” answers and that the most “right” answer for each situation depends on the specific study and its purpose. In fact, our strongest recommendation is that authors of qualitative research manuscripts be explicit about their rationale for design, analytic choices, and strategies so that readers and reviewers can evaluate the rationale and rigor of the study methods.
Summary of Overarching Principles to Address in Qualitative Research Manuscripts
1. Research question identification
a. Describe a clear and feasible research question that focuses on discovery or exploration
b. Hypotheses: Avoid providing hypotheses
2. Rigor and transparency
a. Rigor: Describe how rigor (e.g., credibility, dependability, confirmability, transferability) was documented throughout the research process
b. Transparency: Clearly articulate study procedures and data analysis strategies
3. Study design and methods
a. Theory: Describe how theory informed the study, including research question, design, analysis, and/or interpretation
i. Use methodological congruence as a guiding principle
ii. If divergence from theory occurs, explain and justify how and why theory was modified
b. Sampling and sample size: Following the concept of transferability, clearly describe sample selection methods and sample descriptive characteristics, and provide evidence of data saturation and depth of categories
c. Describe any changes to data collection methods made over the course of the study (e.g., modifications to interview guide)
4. Data analysis
a. Implement, document, and describe a systematic analytic process (e.g., use of code book, development of codes—a priori codes, emergent codes, how codes were collapsed, methods used for coding, memos, coding process)
b. Coding reliability: Provide information on who comprised the coding team (if multiple coders were used), and coding training and process, with emphasis on systematic methods, including strategies for resolving differences between coders
c. Method of organizing data (e.g., computer software, manually): Describe how data were organized. If qualitative computer software was used, provide the name and version number of the software used.
5. Presentation of findings
a. Results and discussion: Provide summaries and interpretations of the data (e.g., themes, conceptual models) and select illustrative quotes. Present the findings in the context of the relevant literature.
b. Quantification of results: Consider whether quantification of findings is appropriate. If quantification is used, provide justification for its use.
Qualitative methods are used across many areas of health research, including health psychology ( Gough & Deatrick, 2015 ), to study the meaning of people’s lives in their real-world roles, represent their views and perspectives, identify important contextual conditions, discover new or additional insights about existing social and behavioral concepts, and acknowledge the contribution of multiple perspectives ( Yin, 2015 ). Qualitative research is a family of approaches rather than a single approach. There are multiple and distinct qualitative methodologies or stances (e.g., constructivism, post-positivism, critical theory), each with different underlying ontological and epistemological assumptions ( Lincoln, Lynham, & Guba, 2011 ). However, certain features are common to most qualitative approaches and distinguish qualitative research from quantitative research ( Creswell, 1994 ).
Key to all qualitative methodologies is that multiple perspectives about a phenomenon of interest are essential, and that those perspectives are best inductively derived or discovered from people with personal experience regarding that phenomenon. These perspectives or definitions may differ from “conventional wisdom.” Thus, meanings need to be discovered from the population under study to ensure optimal understanding. For instance, in a recent qualitative study about texting while driving, adolescents said that they did not approve of texting while driving. The investigators, however, discovered that the respondents did not consider themselves to be driving while the vehicle was stopped at a red light. In other words, the respondents did approve of texting while stopped at a red light. In addition, the adolescents said that they highly valued being constantly connected via texting. Thus, what is meant by “driving” and the value of “being connected” need to be considered when approaching the issue of texting while driving with adolescents (McDonald & Sommers, 2015).
Qualitative methods are also distinct from a mixed-method approach (i.e., integration of qualitative and quantitative approaches; Creswell, 2013b ). A mixed-methods study may include a first phase of quantitative data collection that provides results that inform a second phase of the study that includes qualitative data collection, or vice versa. A mixed-methods study may also include concurrent quantitative and qualitative data collection. The timing, priority, and stage of integration of the two approaches (quantitative and qualitative) are complex and vary depending on the research question; they also dictate how to attend to differing qualitative and quantitative principles ( Creswell et al., 2011 ). Understanding the basic tenets of qualitative research is preliminary to integrating qualitative research with another approach that has different tenets. A full discussion of the integration of qualitative and quantitative research approaches is beyond the scope of this article. Readers interested in the topic are referred to one of the many excellent resources on the topic ( Creswell, 2013b ).
Qualitative research questions are typically open-ended and are framed in the spirit of discovery and exploration and to address existing knowledge gaps. The current manuscript provides exemplar pediatric qualitative studies that illustrate key issues that arise when reporting and evaluating qualitative studies. Example research questions that are contained in the studies cited in the current manuscript are presented in Table 2 .
Example Qualitative Research Questions From the Pediatric Literature
- “How do parents who no longer live together make treatment decisions for their children with cancer?”
- “(a) How parents gained insight into their child’s perspective [when the child had incurable cancer]; (b) to elucidate the parental diversity in acknowledging the ‘voice of the child’; and (c) to gain insight into the factors that underlie the diversity in the parents’ ability to take into account their child’s perspective.”
- Instrument development: “The [PROMIS Pediatric Stress] instruments were developed successively with guidance from developmental, cultural, and linguistic experts and based on input from an international group of youth…This article describes the qualitative development of the PROMIS Pediatric Stress Response item banks.”
- “The study objective was to explore the emotional experiences of siblings as expressed by participants during group sessions, and to identify relevant themes for interventions targeted at siblings [of children with rare disorders].”
- “We describe here the development and components of a pilot school-based health care transition education program implemented in 2005 in a large urban county in central Florida. We then present [qualitative] data on program acceptability (report of relevance and satisfaction) and feasibility (ease of implementation, integration, and expansion).”
- “What are the various components of a successful health care transition for adolescents and young adults with Type 1 Diabetes?”
There are several overarching principles with unique application in qualitative research, including definitions of scientific rigor and the importance of transparency. Quantitative research generally uses the terms reliability and validity to describe the rigor of research, while in qualitative research, rigor refers to the goal of seeking to understand the tacit knowledge of participants’ conception of reality ( Polanyi, 1958 ). For example, Haukeland and colleagues (2015) used qualitative analysis to identify themes describing the emotional experiences of a unique and understudied population—pediatric siblings of children with rare medical conditions such as Turner syndrome and Duchenne muscular dystrophy. Within this context, the authors’ rendering of the diverse and contradictory emotions experienced by siblings of children with these rare conditions represents “rigor” within a qualitative framework.
While debate exists regarding the terminology describing and strategies for strengthening scientific rigor in qualitative studies ( Guba, 1981 ; Morse, 2015a , 2015b ; Sandelowski, 1993a ; Whittemore, Chase, & Mandle, 2001 ), little debate exists regarding the importance of explaining strategies used to strengthen rigor. Such strategies should be appropriate for the specific study; therefore, it is wise to clearly describe what is relevant for each study. For example, in terms of strengthening credibility or the plausibility of data analysis and interpretation, prolonged engagement with participants is appropriate when conducting an observational study (e.g., observations of parent–child mealtime interactions; Hughes et al., 2011 ; Power et al., 2015 ). For an interview-only study, however, it would be more practical to strengthen credibility through other strategies (e.g., keeping detailed field notes about the interviews included in the analysis).
Dependability is the stability of a data analysis protocol. For instance, stepwise development of a coding system from an “a priori” list of codes based on the underlying conceptual framework or existing literature (e.g., creating initial codes for potential barriers to medication adherence based on prior studies) may be essential for analysis of data from semi-structured interviews using multiple coders. But this may not be the ideal strategy if the purpose is to inductively derive all possible coding categories directly from data in an area where little is known. For some research questions, the strategy may be to strengthen confirmability or to verify a specific phenomenon of interest using different sources of data before generating conclusions. This process, which is commonly referred to in the research literature as triangulation, may also include collecting different types of data (e.g., interview data, observational data), using multiple coders to incorporate different ways of interpreting the data, or using multiple theories (Krefting, 1991; Ritchie & Lewis, 2003). Alternatively, another investigator may use triangulation to provide complementary data (Krefting, 1991) to garner additional information to deepen understanding. Because the purpose of qualitative research is to discover multiple perspectives about a phenomenon, it is not necessarily appropriate to attain concordance across studies or investigators when independently analyzing data. Some qualitative experts also believe that it is inappropriate to use triangulation to confirm findings, but this debate has not been resolved within the field (Ritchie & Lewis, 2003; Tobin & Begley, 2004). More agreement exists, however, regarding the value of triangulation to complement, deepen, or expand understanding of a particular topic or issue (Ritchie & Lewis, 2003). Finally, instead of basing a study on a sample that allows for generalizing statistical results to other populations, investigators in qualitative research studies are focused on designing a study and conveying the results so that the reader understands the transferability of the results. Strategies for transferability may include explanations of how the sample was selected and descriptive characteristics of study participants, which provides a context for the results and enables readers to decide if other samples share critical attributes. A study is deemed transferable if relevant contextual features are common to both the study sample and the larger population.
Strategies to enhance rigor should be used systematically across each phase of a study. That is, rigor needs to be identified, managed, and documented throughout the research process: during the preparation phase (data collection and sampling), organization phase (analysis and interpretation), and reporting phase (manuscript or final report; Elo et al., 2014 ). From this perspective, the strategies help strengthen the trustworthiness of the overall study (i.e., to what extent the study findings are worth heeding; Eakin & Mykhalovskiy, 2003 ; Lincoln & Guba, 1985 ).
A good example of managing and documenting rigor and trustworthiness can be found in a study of family treatment decisions for children with cancer (Kelly & Ganong, 2011). The researchers describe how they promoted the rigor of the study and strengthened its credibility by triangulating data sources (e.g., obtaining data from children’s custodial parents, stepparents, etc.), debriefing (e.g., holding detailed conversations with colleagues about the data and interpretations of the data), member checking (i.e., presenting preliminary findings to participants to obtain their feedback and interpretation), and reviewing study procedure decisions and analytic procedures with a second party.
Transparency is another key concept in written reports of qualitative research. In other words, enough detail should be provided for the reader to understand what was done and why ( Ritchie & Lewis, 2003 ). Examples of information that should be included are a clear rationale for selecting a particular population or people with certain characteristics, the research question being investigated, and a meaningful explanation of why this research question was selected (i.e., the gap in knowledge or understanding that is being investigated; Ritchie & Lewis, 2003 ). Clearly describing recruitment, enrollment, data collection, and data analysis or extraction methods are equally important ( Dixon-Woods, Shaw, Agarwal, & Smith, 2004 ). Coherency among methods and transparency about research decisions adds to the robustness of qualitative research ( Tobin & Begley, 2004 ) and provides a context for understanding the findings and their implications.
Is qualitative research hypothesis driven?
In contrast to quantitative research, qualitative research is not typically hypothesis driven ( Creswell, 1994 ; Ritchie & Lewis, 2003 ). A risk associated with using hypotheses in qualitative research is that the findings could be biased by the hypotheses. Alternatively, qualitative research is exploratory and typically guided by a research question or conceptual framework rather than hypotheses ( Creswell, 1994 ; Ritchie & Lewis, 2003 ). As previously stated, the goal of qualitative research is to increase understanding in areas where little is known by developing deeper insight into complex situations or processes. According to Richards and Morse (2013) , “If you know what you are likely to find, … you should not be working qualitatively” (p. 28). Thus, we do not recommend that a hypothesis be stated in manuscripts presenting qualitative data.
Consistent with the exploratory nature of qualitative research, one particular qualitative method, grounded theory, is used specifically for discovering substantive theory (i.e., working theories of action or processes developed for a specific area of concern; Bryant & Charmaz, 2010 ; Glaser & Strauss, 1967 ). This method uses a series of structured steps to break down qualitative data into codes, organize the codes into conceptual categories, and link the categories into a theory that explains the phenomenon under study. For example, Kelly and Ganong (2011) used grounded theory methods to produce a substantive theory about how single and re-partnered parents (e.g., households with a step-parent) made treatment decisions for children with childhood cancer. The theory of decision making developed in this study included “moving to place,” which described the ways in which parents from different family structures (e.g., single and re-partnered parents) were involved in the child’s treatment decision-making. The resulting theory also delineated the causal conditions, context, and intervening factors that contributed to the strategies used for moving to place.
Theories may be used in other types of qualitative research as well, serving as the impetus or organizing framework for the study ( Sandelowski, 1993b ). For example, Izaguirre and Keefer (2014) used Social Cognitive Theory ( Bandura, 1986 ) to investigate self-efficacy among adolescents with inflammatory bowel disease. The impetus for selecting the theory was to inform the development of a self-efficacy measure for adolescent self-management. In another study on health care transition in youth with Type 1 Diabetes ( Pierce, Wysocki, & Aroian, 2016 ), the investigators adapted a social-ecological model—the Socio-ecological Model of Adolescent and Young Adult Transition Readiness (SMART) model ( Schwartz, Tuchman, Hobbie, & Ginsberg, 2011 )—to their study population ( Pierce & Wysocki, 2015 ). Pierce et al. (2016) are currently using the adapted SMART model to focus their data collection and structure the preliminary analysis of their data about diabetes health care transition.
Regardless of whether theory is induced from data or selected in advance to guide the study, consistent with the principle of transparency , its role should be clearly identified and justified in the research publication ( Bradbury-Jones, Taylor, & Herber, 2014 ; Kelly, 2010 ). Methodological congruence is an important guiding principle in this regard ( Richards & Morse, 2013 ). If a theory frames the study at the outset, it should guide and direct all phases. The resulting publication(s) should relate the phenomenon of interest and the research question(s) to the theory and specify how the theory guided data collection and analysis. The publication(s) should also discuss how the theory fits with the finished product. For instance, authors should describe how the theory provided a framework for the presentation of the findings and discuss the findings in context with the relevant theoretical literature.
A study examining parents’ motivations to promote vegetable consumption in their children ( Hingle et al., 2012 ) provides an example of methodological congruence. The investigators adapted the Model of Goal Directed Behavior ( Bagozzi & Pieters, 1998 ) for parenting practices relevant to vegetable consumption (Model of Goal Directed Vegetable Parenting Practices; MGDVPP). Consistent with the adapted theoretical model and in keeping with the congruence principle, interviews were guided by the theoretical constructs contained within the MGDVPP, including parents’ attitudes, subjective norms, and perceived behavioral control related to promoting vegetable consumption in children ( Hingle et al., 2012 ). The study discovered that the adapted model successfully identified parents’ motivations to encourage their children to eat more vegetables.
The use of the theory should be consistent with the basic goal of qualitative research, which is discovery. Alternatively stated, theories should be used as broad orienting frameworks for exploring topical areas without imposing preconceived ideas and biases. The theory should be consistent with the study findings and not be used to force-fit the researcher’s interpretation of the data ( Sandelowski, 1993b ). Divergence from the theory when it does not fit the study findings is illustrated in a qualitative study of hypertension prevention beliefs in Hispanics ( Aroian, Peters, Rudner, & Waser, 2012 ). This study used the Theory of Planned Behavior as a guiding theoretical framework but found that coding separately for normative and control beliefs was not the best organizing schema for presenting the study findings. When divergence from the original theory occurs, the research report should explain and justify how and why the theory was modified ( Bradbury-Jones et al., 2014 ).
Qualitative sampling methods should be “purposeful” ( Coyne, 1997 ; Patton, 2015 ; Tuckett, 2004 ). Purposeful sampling is based on the study purpose and investigator judgments about which people and settings will provide the richest information for the research questions. The logic underlying this type of sampling differs from the logic underlying quantitative sampling ( Patton, 2015 ). Quantitative research strives for empirical generalization. In qualitative studies, generalizability beyond the study sample is typically not the intent; rather, the focus is on deriving depth and context-embedded meaning for the relevant study population.
Purposeful sampling is a broad term. Theoretical sampling is one particular type of purposeful sampling unique to grounded theory methods ( Coyne, 1997 ). In theoretical sampling, study participants are chosen according to theoretical categories that emerge from ongoing data collection and analyses ( Bryant & Charmaz, 2010 ). Data collection and analysis are conducted concurrently to allow generating and testing hypotheses that emerge from analyzing incoming data. The following example from the previously mentioned qualitative interview study about transition from pediatric to adult care in adolescents with type 1 diabetes ( Pierce et al., 2016 ) illustrates the process of theoretical sampling: An adolescent study participant stated that he was “turned off” by the “childish” posters in his pediatrician’s office. He elaborated that he welcomed transitioning to adult care because his diabetes was discovered when he was 18, an age when he reportedly felt more “mature” than most pediatric patients. These data were coded as “developmental misfit” and prompted a tentative hypothesis about developmental stage at entry for pediatric diabetes care and readiness for health care transition. Examining this hypothesis prompted seeking study participants who varied according to age or developmental stage at time of diagnosis to examine the theoretical relevance of an emerging theme about developmental fit.
Not all purposeful sampling, however, is “theoretical.” For example, ethnographic studies typically seek to understand a group’s cultural beliefs and practices ( Creswell, 2013a ). Consistent with this purpose, researchers conducting an ethnographic study might purposefully select study participants according to specific characteristics that reflect the social roles and positions in a given group or society (e.g., socioeconomic status, education; Johnson, 1990 ).
Random sampling is generally not used in qualitative research. Random selection requires a sufficiently large sample to maximize the potential for chance and, as will be discussed below, sample size is intentionally small in qualitative studies. However, random sampling may be used to verify or clarify findings ( Patton, 2015 ). Validating study findings with a randomly selected subsample can be used to address the possibility that a researcher is inadvertently giving greater attention to cases that reinforce his or her preconceived ideas.
Regardless of the sampling method used, qualitative researchers should clearly describe the sampling strategy and justify how it fits the study when reporting study findings (transparency). A common error is to refer to theoretical sampling when the cases were not chosen according to emerging theoretical concepts. Another common error is to apply sampling principles from quantitative research (e.g., cluster sampling) to convince skeptical reviewers about the rigor or validity of qualitative research. Rigor is best achieved by being purposeful, making sound decisions, and articulating the rationale for those decisions. As mentioned earlier in the discussion of transferability , qualitative researchers are encouraged to describe their methods of sample selection and descriptive characteristics about their sample so that readers and reviewers can judge how the current sample may differ from others. Understanding the characteristics of each qualitative study sample is essential for the iterative nature of qualitative research whereby qualitative findings inform the development of future qualitative, quantitative, or mixed-methods studies. Reviewers should evaluate sampling decisions based on how they fit the study purpose and how they influence the quality of the end product.
No definitive rules exist about sample size in qualitative research. However, sample sizes are typically smaller than those in quantitative studies ( Patton, 2015 ). Small samples often generate a large volume of data and information-rich cases, ultimately leading to insight regarding the phenomenon under study ( Patton, 2015 ; Ritchie & Lewis, 2003 ). Sample sizes of 20–30 cases are typical, but a qualitative sample can be even smaller under some circumstances ( Mason, 2010 ).
Sample size adequacy is evaluated based on the quality of the study findings, specifically the full development of categories and inter-relationships or the adequacy of information about the phenomenon under study ( Corbin & Strauss, 2008 ; Ritchie & Lewis, 2003 ). Small sample sizes are of concern if they do not result in these outcomes. Data saturation (i.e., the point at which no new information, categories, or themes emerge) is often used to judge informational adequacy ( Morgan, 1998 ; Ritchie & Lewis, 2003 ). Although enough participants should be included to obtain saturation ( Morgan, 1998 ), informational adequacy pertains to more than sample size. It is also a function of the quality of the data, which is influenced by study participant characteristics (e.g., cognitive ability, knowledge, representativeness) and the researcher’s data-gathering skills and analytical ability to generate meaningful findings ( Morse, 2015b ; Patton, 2015 ).
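As a purely illustrative aid, and not a substitute for the judgment-based assessment described above, saturation can be monitored informally by tracking how many previously unseen codes each successive interview contributes. The codes below are hypothetical.

```python
# Hypothetical sketch: count new codes introduced by each successive
# interview; a sustained run of zeros is one informal signal that
# saturation may have been reached.
interview_codes = [
    {"stigma", "cost", "family support"},   # interview 1
    {"cost", "side effects"},               # interview 2
    {"stigma", "scheduling conflicts"},     # interview 3
    {"cost", "stigma", "family support"},   # interview 4: nothing new
]

seen = set()
for i, codes in enumerate(interview_codes, start=1):
    new = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new)} new code(s): {sorted(new)}")
```

Note that a count like this says nothing about the depth or quality of the categories, which the discussion above treats as equally important to informational adequacy.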
Sample size is also influenced by type of qualitative research, the study purpose, the sample, the depth and complexity of the topic investigated, and the method of data collection. In general, the more heterogeneous the sample, the larger the sample size, particularly if the goal is to investigate similarities and differences by specific characteristics ( Ritchie & Lewis, 2003 ). For instance, in a study to conduct an initial exploration of factors underlying parents’ motivations to use good parenting practices, theoretical saturation (i.e., the point at which no new information, categories, or themes emerge) was obtained with a small sample ( n = 15), most likely because the study was limited to parents of young children ( Hingle et al., 2012 ). If the goal of the study had been, for example, to identify racial/ethnic, gender, or age differences in food parenting practices, a larger sample would likely be needed to obtain saturation or informational adequacy.
Studies that seek to understand maximum variation in a phenomenon might also need a larger sample than one that is seeking to understand extreme or atypical cases. For example, a qualitative study of diet and physical activity in young Australian men conducted focus groups to identify perceived motivators and barriers to healthy eating and physical activity and to examine the influence of body weight on their perceptions. Examining the influence of body weight status required 10 focus groups to allow for group assignment based on body mass index (Ashton et al., 2015). More specifically, 61 men were assigned to healthy-weight focus groups (3 groups), overweight/obese focus groups (3 groups), or mixed-weight focus groups (4 groups). Had the researchers not been interested in whether facilitators and barriers differed by weight status, it is likely that theoretical saturation could have been obtained with fewer groups. Depth of inquiry also influences sample size (Sandelowski, 1995). For instance, an in-depth analysis of an intervention for children with cancer and their families included 16 family members from three families. Study data comprised 52 hours of videotaped intervention sessions and 10 interviews (West, Bell, Woodgate, & Moules, 2015). Depth was obtained through multiple data points and types of data, which justified sampling only a few families.
Authors of publications describing qualitative findings should show evidence that the data were “saturated” by a sample with sufficient variation to permit detailing shared and divergent perspectives, meanings, or experiences about the topic of inquiry. Decisions related to the sample (e.g., targeted recruitment) should be detailed in publications so that peer reviewers have the context for evaluating the sample and determining how the sample influenced the study findings ( Patton, 2015 ).
When conducting qualitative research, voluminous amounts of data are gathered and must be prepared (i.e., transcribed) and managed. During the analytic process, data are systematically transformed through identifying, defining, interpreting, and describing findings that are meant to comprehensively describe the phenomenon or the abstract qualities that they have in common. The process should be systematic ( dependability ) and well-documented in the analysis section of a qualitative manuscript. For example, Kelly and Ganong (2011) , in their study of medical treatment decisions made by families of children with cancer, described their analytic procedure by outlining their approach to coding and use of memoing (e.g., keeping careful notes about emerging ideas about the data throughout the analytic process), comparative analysis (e.g., comparing data against one another and looking for similarities and differences), and diagram drawing (e.g., pictorially representing the data structure, including relationships between codes).
Because the intent of qualitative research is to account for multiple perspectives, the goal of qualitative analysis is to comprehensively incorporate those perspectives into discernible findings. Researchers accustomed to doing quantitative studies may expect authors to quantify interrater reliability (e.g., kappa statistic) but this is not typical in qualitative research. Rather, the emphasis in qualitative research is on (1) training those gathering data to be rigorous and produce high-quality data and on (2) using systematic processes to document key decisions (e.g., code book), clear direction, and open communication among team members during data analysis. The goal is to make the most of the collective insight of the investigative team to triangulate or complement each other’s efforts to process and interpret the data. Instead of evaluating if two independent raters came to the same numeric rating, reviewers of qualitative manuscripts should judge to what extent the overall process of coding, data management, and data interpretation were systematic and rigorous. Authors of qualitative reports should articulate their coding procedures for others to evaluate. Together, these strategies promote trustworthiness of the study findings.
An example of how these processes are described in the report of a qualitative study is as follows:
The first two authors independently applied the categories to a sample of two interviews and compared their application of the categories to identify lack of clarity and overlap in categories. The investigators created a code book that contained a definition of categories, guidelines for their application, and excerpts of data exemplifying the categories. The first two authors independently coded the data and compared how they applied the categories to the data and resolved any differences during biweekly meetings. ATLAS.ti, version 6.2, was used to document and accommodate ongoing changes and additions to the coding structure ( Palma et al., 2015 , p. 224).
Multiple computer software packages for qualitative data analysis are currently available ( Silver & Lewins, 2014 ; Yin, 2015 ). These packages allow the researcher to import qualitative data (e.g., interview transcripts) into the software program and organize data segments (e.g., delineate which interview excerpts are relevant to particular themes). Qualitative analysis software can be useful for organizing and sorting through data, including during the analysis phase. Some software programs also offer sophisticated coding and visualization capabilities that facilitate and enhance interpretation and understanding. For example, if data segments are coded by specific characteristics (e.g., gender, race/ethnicity), the data can be sorted and analyzed by these characteristics, which may contribute to an understanding of whether and/or how a particular phenomenon may vary by these characteristics.
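The organize-and-sort role these packages play can also be sketched outside any dedicated package. The minimal example below groups hypothetical coded interview segments by code and by a participant characteristic, which is essentially what the sorting described above amounts to mechanically.

```python
# Hypothetical sketch of what qualitative software does mechanically:
# coded interview segments organized by code and participant characteristic.
import pandas as pd

segments = pd.DataFrame([
    {"participant": "P01", "gender": "F", "code": "barrier-cost",
     "quote": "We just couldn't afford the copays."},
    {"participant": "P02", "gender": "M", "code": "barrier-stigma",
     "quote": "I didn't want anyone at school to know."},
    {"participant": "P03", "gender": "F", "code": "barrier-cost",
     "quote": "Getting to the clinic was a real problem."},
])

# How often each code appears, split by a characteristic of interest
print(segments.groupby(["code", "gender"]).size())

# Retrieve every quote attached to one code when writing up a theme
for _, row in segments[segments["code"] == "barrier-cost"].iterrows():
    print(f'{row["participant"]}: "{row["quote"]}"')
```

The interpretive work, as the following paragraph stresses, still rests entirely with the research team.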
The strength of computer software packages for qualitative data analysis is their potential to contribute to methodological rigor by organizing the data for systematic analyses ( John & Johnson, 2000 ; MacMillan & Koenig, 2004 ). However, the programs do not replace the researchers’ analyses. The researcher or research team is ultimately responsible for analyzing the data, identifying the themes and patterns, and placing the findings within the context of the literature. In other words, qualitative data analysis software programs contribute to, but do not ensure scientific rigor or “objectivity” in, the analytic process. In fact, using a software program for analysis is not essential if the researcher demonstrates the use of alternative tools and procedures for rigor.
Should there be overlap between presentation of themes in the results and discussion sections?
Qualitative papers sometimes combine results and discussion into one section to provide a cohesive presentation of the findings along with meaningful linkages to the existing literature ( Burnard, 2004 ; Burnard, Gill, Stewart, Treasure, & Chadwick, 2008 ). Although doing so is an acceptable method for reporting qualitative findings, some journals prefer the two sections to be distinct.
When the journal style is to distinguish the two sections, the results section should describe the findings, that is, the themes, while the discussion section should pull the themes together to make larger-level conclusions and place the findings within the context of the existing literature. For instance, the findings section of a study of how rural African-American adolescents, parents, and community leaders perceived obesity and topics for a proposed obesity prevention program contained a description of themes about adolescent eating patterns, body shape, and feedback on the proposed weight gain prevention program according to each subset of participants (i.e., adolescents, parents, community leaders). The discussion section then put these themes within the context of findings from prior qualitative and intervention studies in related populations (Cassidy et al., 2013). In the discussion, when making linkages to the existing literature, it is important to avoid the temptation to extrapolate beyond the findings or to over-interpret them (Burnard, 2004). Linkages between the findings and the existing literature should be supported by ample evidence to avoid spurious or misleading connections (Burnard, 2004).
The results section of a qualitative research report is likely to contain more material than customary in quantitative research reports. Findings in a qualitative research paper typically include researcher interpretations of the data as well as data exemplars and the logic that led to researcher interpretations ( Sandelowski & Barroso, 2002 ). Interpretation pertains to the researcher breaking down and recombining the data and creating new meanings (e.g., abstract categories, themes, conceptual models). Select quotes from interviews or other types of data (e.g., participant observation, focus groups) are presented to illustrate or support researcher interpretations. Researchers trained in the quantitative tradition, where interpretation is restricted to the discussion section, may find this surprising; however, in qualitative methods, researcher interpretations represent an important component of the study results. The presentation of the findings, including researcher interpretations (e.g., themes) and data (e.g., quotes) supporting those interpretations, adds to the trustworthiness of the study ( Elo et al., 2014 ).
The Results section should contain a balance between data illustrations (i.e., quotes) and researcher interpretations ( Lofland & Lofland, 2006 ; Sandelowski, 1998 ). Because interpretation arises out of the data, description and interpretation should be combined. Description should be sufficient to support researcher interpretations, and quotes should be used judiciously ( Morrow, 2005 ; Sandelowski, 1994 ). Not every theme needs to be supported by multiple quotes. Rather, quotes should be carefully selected to provide “voice” to the participants and to help the reader understand the phenomenon from the participant’s perspective within the context of the researcher’s interpretation ( Morrow, 2005 ; Ritchie & Lewis, 2003 ). For example, researchers who developed a grounded theory of sexual risk behavior of urban American Indian adolescent girls identified desire for better opportunities as a key deterrent to neighborhood norms for early sexual activity. They illustrated this theme with the following quote: “I don’t want to live in the ‘hood and all that…My sisters are stuck there because they had babies. That isn’t going to happen to me” ( Saftner, Martyn, Momper, Loveland-Cherry, & Low, 2015 , p. 372).
There is no precise formula for the proportion of description to interpretation. Both descriptive and analytic excess should be avoided ( Lofland & Lofland, 2006 ). The former pertains to presentation of unedited field notes or interview transcripts rather than selecting and connecting data to analytic concepts that explain or summarize the data. The latter pertains to focusing on the mechanics of analysis and interpretation without substantiating researcher interpretations with quotes. Reviewer requests for methodological rigor can result in researchers writing qualitative research papers that suffer from analytic excess ( Sandelowski & Barroso, 2002 ). Page limitations of most journals provide a safeguard against descriptive excess, but page limitations should not circumvent researchers from providing the basis for their interpretations.
Additional potential problems with qualitative results sections include under-elaboration, where themes are too few and not clearly defined. The opposite problem, over-elaboration, pertains to too many analytic distinctions that could be collapsed under a higher level of abstraction. Quotes can also be under- or over-interpreted. Care should be taken to ensure the quote(s) selected clearly support the theme to which they are attached. And finally, findings from a qualitative study should be interesting and make clear contributions to the literature ( Lofland & Lofland, 2006 ; Morse, 2015b ).
There is controversy over whether to quantify qualitative findings, such as providing counts for the frequency with which particular themes are endorsed by study participants ( Morgan, 1993 ; Sandelowski, 2001 ). Qualitative papers usually report themes and patterns that emerge from the data without quantification ( Dey, 1993 ). However, it is possible to quantify qualitative findings, such as in qualitative content analysis. Qualitative content analysis is a method through which a researcher identifies the frequency with which a phenomenon, such as specific words, phrases, or concepts, is mentioned ( Elo et al., 2014 ; Morgan, 1993 ). Although this method may appeal to quantitative reviewers, it is important to note that this method only fits specific study purposes, such as studies that investigate the language used by a particular group when communicating about a specific topic. In addition, results may be quantified to provide information on whether themes appeared to be common or atypical. Authors should avoid using imprecise language, such as “some participants” or “many participants.” A good example of quantification of results to illustrate more or less typical themes comes from a manuscript describing a qualitative study of school nurses’ perceived barriers to addressing obesity with students and their families. The authors described that all but one nurse reported not having the resources they needed to discuss weight with students and families whereas one-quarter of nurses reported not feeling competent to discuss weight issues ( Steele et al., 2011 ). If quantification of findings is used, authors should provide justification that explains how quantification is consistent with the aims or goals of the study ( Sandelowski, 2001 ).
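When quantification is justified, the mechanics are simple. Here is a minimal, hypothetical sketch that turns per-participant theme endorsements into the kind of precise counts recommended above in place of vague phrases like "some participants".

```python
# Hypothetical sketch: per-participant theme endorsements turned into
# precise counts, replacing vague quantifiers like "some participants".
from collections import Counter

theme_endorsements = {
    "N01": {"lacks resources"},
    "N02": {"lacks resources", "feels incompetent"},
    "N03": {"lacks resources"},
    "N04": {"lacks resources", "feels incompetent"},
}

counts = Counter(t for themes in theme_endorsements.values() for t in themes)
n = len(theme_endorsements)
for theme, k in counts.most_common():
    print(f"{theme}: endorsed by {k} of {n} participants")
```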
This article highlighted key theoretical and logistical considerations that arise in designing, conducting, and reporting qualitative research studies (see Table 1 for a summary). This type of research is vital for obtaining patient, family, community, and other stakeholder perspectives about their needs and interests, and will become increasingly critical as our models of health care delivery evolve. For example, qualitative research could contribute to the study of health care providers and systems with the goal of optimizing our health care delivery models. Given the increasing diversity of the populations we serve, qualitative research will also be critical in providing guidance in how to tailor health interventions to key characteristics and increase the likelihood of acceptable, effective treatment approaches. For example, applying qualitative research methods could enhance our understanding of refugee experiences in our health care system, clarify treatment preferences for emerging adults in the midst of health care transitions, examine satisfaction with health care delivery, and evaluate the applicability of our theoretical models of health behavior changes across racial and ethnic groups. Incorporating patient perspectives into treatment is essential to meeting this nation’s priority on patient-centered health care ( Institute of Medicine Committee on Quality of Health Care in America, 2001 ). Authors of qualitative studies who address the methodological choices addressed in this review will make important contributions to the field of pediatric psychology. Qualitative findings will lead to a more informed field that addresses the needs of a wide range of patient populations and produces effective and acceptable population-specific interventions to promote health.
The authors thank Bridget Grahmann for her assistance with manuscript preparation.
This work was supported by the National Cancer Institute of the National Institutes of Health (K07CA196985 to Y.W.). It is also a publication of the United States Department of Agriculture/Agricultural Research Service (USDA/ARS) Children’s Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, Texas, funded in part with federal funds from the USDA/ARS under Cooperative Agreement No. 58‐6250‐0‐008 (to D.T.). The contents of this publication do not necessarily reflect the views or policies of the USDA, nor does mention of trade names, commercial products, or organizations imply endorsement from the U.S. government. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Conflicts of interest: None declared.
Qualitative data presents information using descriptive language, images, and videos instead of numbers.
To help make sense of this type of data—as opposed to quantitative data, which is all about numbers—we’ve compiled a list for you. It features some of the best examples of qualitative data around.
So what makes these examples so great?
They use qualitative data to tell a story.
The Freedom of Information Act (FOIA) Library, aka The Vault, is a fascinating first stop on our journey to getting to know top-notch qualitative data.
Because of FOIA requests, the FBI has been required by law to release information about all sorts of cases. On The Vault, you’ll find everything from interview transcripts with serial killers and crime lords to safety plans for Princess Diana and Queen Elizabeth’s visits to the United States.
Source: FBI Records: The Vault — Al Capone
I could get lost in The Vault for hours, just poking around in different cases. Even after spending just half an hour reading different case files, I came away with all sorts of knowledge I didn’t have before.
Like the fact that Steven Paul Jobs—as in the Steve Jobs—was once considered a candidate for an appointed position on the U.S. President’s Export Council. And that the FBI did a thorough background investigation of Jobs in 1991 as part of the process.
The Vault’s case file on Jobs reveals intriguing information. Like that some of Jobs’ former employees alleged he could “distort the truth” and let ambition get in the way of relationships with employees and peers.
Source: FBI Records: The Vault — Steven Paul Jobs
This data tells a story. It pulls you in. And it leaves you with additional questions to explore.
That’s some rich qualitative data right there.
And The Vault is full of it—mostly in the form of letters, interview transcripts, investigator observations, newspaper articles, and case summaries.
There’s a reason we find the comments section—or forums like Reddit, which are basically all one big comments section—so fascinating.
They’re full of qualitative data.
In a 2015 study published in Information, Communication & Society, German researchers Nina Springer, Ines Engelmann, and Christian Pfaffinger set out to find out why the comments section has such a magnetic pull.
The researchers began with a baseline understanding that “user comments allow ‘annotative reporting’ by embedding users’ viewpoints within an article’s context, providing readers with additional information to form opinions, which can potentially enhance deliberative processes.”
In other words, user comments on an article or a forum post are attractive because they offer extra information to help readers form opinions, as a group, with other commenters.
To dig deeper, the study surveyed “650 commenters, lurkers, and non-users” in Germany.
The results are surprising and show how different the comments-section experience is for contributors, lurkers, and non-participants.
Contributors, according to the study, appear to mostly engage for the sake of “social-interactive motives to participate in journalism, and to discuss with other users.”
Lurkers, on the other hand, are there both for “cognitive and entertainment motives.” The lower the quality of the comments section discussion, the lower the lurker’s satisfaction.
And non-participants? They’re just annoyed that the forums and comments sections exist.
Here’s the thing: if you’re reading forums and comments sections as a qualitative researcher, you’re participating as a lurker. So the study’s results about lurkers primarily getting satisfaction from quality discussions make a lot of sense.
You don’t just want fluff comments. You want rich data from those posts.
And if you take your time to read through a bunch of comments, you can find it.
If you want to learn about the ugly and good parts of marriage, head over to r/Marriage, where people routinely post primary sources, like this post of a spouse’s shopping list.
Source: Reddit r/Marriage
What gets me is the combination of shrimp and Beyond Meat. But the original poster, or OP, is lamenting the horizontal nature of the shopping list.
The comments section is rich with additional qualitative data:
If you were studying marital satire or shopping styles in marriage/partnership, this post would be a perfect place to find qualitative data for your research.
So would any Facebook groups, Quora posts, Instagram Reels, and TikTok videos on grocery shopping, marriage, partnership, and marital humor.
If you’re conducting market research, there are all sorts of places you can go to gain insights on your products. PickFu is one of them. It’s a market research tool where you can test different versions of your products and ads and get objective, written feedback on them.
Tools like SurveyMonkey, Jotform, Qualtrics, and Typeform can all give you survey responses like this, too.
But here’s an example of what I’m talking about. In the image below, an Amazon seller is asking 30 female Amazon Prime subscribers which package would inspire them to click through.
Source: PickFu
The respondents chose the second option, but that’s just quantitative data. It tells us that of the 30 respondents, 20 people voted for Option B, versus Option A’s 10 votes. If there were no survey responses, the Amazon seller wouldn’t know why the majority of respondents picked the second option.
The qualitative data, on the other hand, reveals the answer: the eyes on the packaging design, the clear information, and the product name on Option B are more intriguing for most respondents.
Here’s a written recap of some of the comments arguing in favor of Option B:
You can run surveys using tools like this—and you can either comb through the data yourself or use tools like ATLAS.ti and NVivo to help you analyze your qualitative data.
The Pew Research Center is a fascinating trove of quantitative data, but it also offers qualitative data in the form of survey analysis. If you’re studying how internet use among teenagers changed as smartphones became ubiquitous, for instance, you’ll find quantitative data on the Pew Research Center. But you’ll also find an analysis of the quantitative data, which reaches deeper into the numbers and responses to bring you the researchers’ observations or conclusions. This is qualitative data.
For example, this Pew Research Center study on teen internet use shows that most U.S. teenagers use the internet every day, with some using it almost constantly—and they’re not on Facebook. You’re more likely to find teen girls on TikTok, Snapchat, and Instagram and teen boys on Reddit and Twitch.
Source: Pew Research Center
Instead of using the numbers gleaned from this piece—or perhaps in addition to that—you could use the observations and analysis to support or inform your choices.
The methodology section of these surveys is also a great place to find qualitative research because it shows you how the survey was conducted.
The moral of the story here is that you can, and should, look for the qualitative data that results from quantitative research—whether it’s on Pew Research Center or somewhere else.
It’s there, and it can help guide your research.
This might seem like a boring place to find qualitative data, but any government website is rich with descriptive, reliable, authoritative information.
Take the Alaska State Legislature’s website, akleg.gov, for instance. On it, you can find the full text of various bills and laws, plus the history of how they were enacted (or not).
If you’re researching anything related to state, local, or federal government policy, government websites are rich with qualitative data in the form of documents, audio files, videos, and notes.
Google Scholar is one of my favorite websites for finding peer-reviewed, scholarly qualitative data. There’s also an entire section devoted to case law. All you have to do is put in a keyword or keyphrase and you’ll get hundreds of authoritative sources to choose from.
You can find research on all sorts of topics, but especially medical, educational, and political studies. Some are paywalled, but you can also look at the Directory of Open Access Journals (DOAJ) for non-paywalled, academic, qualitative data.
Source: DOAJ
Here’s what the data will typically look like on Google Scholar after you run a search—which, by the way, you can filter by date, so you’ll get the freshest data if you want it.
Source: Google Scholar
You’ll notice that most of this research has quantitative data too, which means it’s mixed-method. But it’s full of qualitative analysis and observations as well.
Also, since the results you get from Google Scholar can be anything from books to articles to journals, some sources have more qualitative data than others.
When you need secondary research that’s also qualitative, Google Scholar is an ideal place to find it.
You know how law enforcement takes crime scene photos at the scene of a crime? That’s because the photos offer evidence in the form of qualitative data. The images describe a scene in a way that words can’t.
Audio files capture information through sound rather than written language. And video files can combine sound with images to provide detailed information about an event.
In the image below, you’ll see a series of non-graphic images taken at the crime scene of the murder of Marilyn Sheppard in 1954. The police used them as evidence during Sam Sheppard’s trial—he was accused of committing the murder—and Cleveland State University now uses the images to help law students study case law.
Source: CSU Ohio
That’s another feature of excellent qualitative data—it can be applied in more than one way, used for more than one situation or research objective.
In short, qualitative data offers nuance, flexibility, and knowledge you can’t get from quantitative data. That said, both are valuable and have their place in research. Our guide to qualitative vs quantitative data can give you more insight into how the dance between the two works.
Last Updated on September 14, 2021
In this article: what qualitative data is, the ingredients of a good qualitative data analysis, how to conduct an enlightening qualitative data analysis, the pros and cons of qualitative data analysis, and how to get great results from qualitative data analysis.
When numbers fall short and you need the full story, qualitative data analysis comes to the rescue. Instead of following assumptions based on numerical data, qualitative data analysis methods let you dig deeper. Qualitative data analysis examines non-numerical data (words, images, and observations) to uncover themes, patterns, and meanings.
And in this article, we’ll tell you exactly how to do it yourself, in-house.
Qualitative data analysis uncovers the stories and feelings behind numbers. Qualitative methods gain information from conversations, interviews, and observations, capturing what people think and why they act a certain way. Unlike hard numbers, qualitative data helps us see the color and texture of people’s opinions, experiences, and emotions.
Examples of the textual data that often makes up qualitative data pieces are a user’s detailed feedback on a mobile app’s usability, a shopper’s narrative about choosing eco-friendly products, or observational notes on customer behavior in a retail setting.
This type of qualitative data collection helps us understand real feelings and thoughts, and goes beyond numbers and assumptions.
There’s a big difference between knowing that 50% of customers prefer your new product and understanding the nuanced reasons behind that preference.
It’s easy to get blinded by shiny numbers. In this case, a preference signals that you’re doing something great. But if you don’t know what that something is, you can’t replicate it or double down on it to crank up that 50% even more.
So what you’ll need to do is dig into the ‘why’ behind the ‘what’. And we mean really dig. A strong qualitative data analysis process aims not to put words in your customers’ mouths but to let them speak for themselves.
Another example is when a company finds out through a quick quantitative data survey that customers rate their service 4 out of 5. Which isn’t bad. But how can they improve it – or even work to maintain it? Guesswork is lethal here, yet it’s what so many companies resort to.
Which leads to obvious follow-up actions that are usually not customer-centric. Let’s say that this company assumes people are mostly happy because of their quick response times. So, they implement chatbots to take care of the first part of conversations, to speed things up even more. What could be wrong with that?
But what if through in-depth interviews, they could have discovered that the personal touch from the staff right from the get-go is what customers really value?
In consumer research, these nuances are gold. They allow your team to make finely tuned adjustments that resonate deeply with your audience. It’s what helps you move beyond the one-size-fits-all approach suggested by quantitative data.
So if you want to start making experiences and products that feel personal and relevant to each customer, here are some ways to approach qualitative data research.
What it is: Content analysis involves examining texts, reviews, and comments to identify frequently occurring words and sentiments, providing a quantitative measure of qualitative feedback.
Chances are, you already have a lot of content that can be analyzed for qualitative data research. In that case, content analysis is your go-to approach to getting started. Content analysis means zooming in on recurring words, phrases, and sentiments scattered across reviews and comments.
Dig into reviews, comments, and emails and start flagging words and phrases that keep coming back. These can help you identify areas for improvement, but also show you what really is working.
This way, content analysis offers a quantitative measure of qualitative feedback, enabling you to prioritize actions based on what’s most mentioned by your customers, when they’re not prompted or asked anything specifically.
By systematically categorizing and quantifying this feedback, you’ll be able to make informed decisions on product features, marketing messages, and even future design innovations.
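To make this concrete, here's a minimal sketch of the counting step in Python. Everything in it (the reviews, the stopword list, the tokenizer) is made up for illustration; a real content analysis would use a proper codebook and a much richer stopword list.

```python
import re
from collections import Counter

# Hypothetical customer reviews; in practice, pull these from your review
# platform, support inbox, or survey tool.
reviews = [
    "Delivery was fast but the packaging was damaged.",
    "Great support team, very fast response.",
    "Packaging felt cheap, but support fixed my issue quickly.",
]

STOPWORDS = {"the", "was", "but", "and", "my", "very", "a", "felt"}

tokens = []
for text in reviews:
    # Lowercase, keep alphabetic words only, drop stopwords.
    tokens += [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

# The most frequent remaining terms hint at what customers mention unprompted.
for word, count in Counter(tokens).most_common(5):
    print(word, count)
```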
What it is: Narrative analysis delves into customers’ stories to understand their experiences, decisions, and emotions throughout their journey with your brand.
A lot of times brands are mostly interested in the beginning and end of a customer journey: how do I get in front of customers, and how do I get in their shopping basket?
But the story of what happens between those two moments is just as, if not more, important. And with narrative analysis, you can connect the dots.
You won’t just be looking at the touchpoints themselves, but also at what customers were thinking and feeling at each stage. By interpreting qualitative data, you can create a full story from start to finish of how customers think, feel, and make decisions in your market.
And that is so much more than just a nice story. Narrative analysis shows you where you can swoop in, where you should change your communications or where you should offer more support — for a happy ever after.
What it is: Discourse analysis examines language and communication on platforms like social media to understand how they influence public perception and consumer behavior.
Discourse analysis looks at the broader conversation around topics relevant to your brand. This qualitative data analysis method looks at how language and communication on platforms like social media shape public perception and influence consumer behavior.
Discourse analysis is not just about what’s being said about your brand and products; it’s about understanding the cultural, social, and environmental currents that drive these conversations.
For example, when customers discuss “sustainability,” they’re not just talking about your specific packaging; they’re engaging in a larger dialogue about corporate responsibility, environmental impact, and ethical consumption.
Discourse analysis helps you grasp the nuances of these discussions, revealing how your brand can authentically contribute to and lead within these conversations.
This strategic insight allows you to align your messaging with your audience’s values, build credibility, and position your brand as a leader in meaningful sustainability efforts.
By engaging with and influencing the discourse, you can not only adapt to current consumer expectations but take it a step further and shape future trends and behaviors in alignment with your brand’s values and goals.
What it is: Thematic analysis seeks to find common themes within qualitative data, moving beyond individual opinions to uncover broader patterns.
Plenty of brands are already sitting on qualitative data from thousands of customer interactions, which might seem like a jumble of individual opinions and experiences.
You might look at them and think ‘ha, humans really all want or value different things’. But there will be overlap, and that is where the real value lies.
Thematic analysis aims at finding common themes in this qualitative data. You move beyond surface-level chaos by categorizing all pieces of feedback into distinct themes.
These themes could range from specific product features, such as “battery life” in electronics, to broader experiential factors, like “customer service excellence” or “ease of use.” By identifying these recurring patterns, you gain a clearer, more organized understanding of your customers’ priorities and pain points.
One of the benefits of thematic analysis is that it helps you organize a wide range of feedback into clear, actionable insights for each team in your business. You may uncover themes about the product, about communication, or other parts of your business that customers get exposed to. In other words: every business could benefit from some thematic analysis.
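Here's a rough sketch of what that categorization step can look like in Python. The keyword-to-theme dictionary is hypothetical; in a real project, you'd build and refine your codebook iteratively from the data itself rather than fixing it up front.

```python
from collections import defaultdict

# Hypothetical codebook mapping keywords to themes.
THEMES = {
    "battery": "Battery life",
    "charge": "Battery life",
    "support": "Customer service excellence",
    "helpful": "Customer service excellence",
    "setup": "Ease of use",
    "intuitive": "Ease of use",
}

feedback = [
    "Battery barely lasts a day and takes forever to charge.",
    "Support was so helpful when my order went missing.",
    "Setup took five minutes and everything felt intuitive.",
]

by_theme = defaultdict(list)
for comment in feedback:
    lowered = comment.lower()
    matched = {theme for keyword, theme in THEMES.items() if keyword in lowered}
    for theme in matched or {"Uncategorized"}:
        by_theme[theme].append(comment)

# Counts per theme turn scattered comments into prioritized insights.
for theme, comments in sorted(by_theme.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme}: {len(comments)} comment(s)")
```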
What it is: Grounded theory uses early feedback from users to develop theories and strategies that meet their needs, focusing on continuous improvement.
For those launching a new service, grounded theory takes feedback from early users and starts building from there. It uses real, raw customer thoughts to shape a strategy that better meets their needs.
This approach isn’t just about collecting data; it’s about letting qualitative data direct your next moves, ensuring your innovations are not just shots in the dark but informed, strategic decisions aimed at fulfilling genuine customer needs.
When you adopt grounded theory, you commit to a process of continuous improvement and adaptation. As feedback starts rolling in from those first users or beta testers, you’re given a unique opportunity to see your product through the eyes of those it’s meant to serve.
This early-stage feedback is gold—unfiltered, direct, and incredibly insightful. It tells you what’s resonating with your audience, what’s missing the mark, and, crucially, how to adjust your offering for better alignment with customer expectations.
Bear in mind that when done right, grounded theory goes beyond merely reacting to feedback. It’s about proactively seeking it out and engaging with it. This means not just reading comments or reviews, but diving deeper through follow-up questions, interviews, or focus groups to really understand the why behind the feedback.
Diving into qualitative data analysis can feel like a big task for many brands. There’s often worry about how much time it’ll take. Or how much money. And then there’s the question of whether all that detail might lead you off track instead of to clear answers.
After all, businesses move fast these days, and spending a lot of time on a research project doesn’t always fit the schedule.
But those worries don’t have to stop you. With the right plan and the best tools, you can dodge those issues. Start by creating a roadmap, so you know what the next few days, weeks or months will look like. See? It’s less daunting already.
Below, we’ll break the whole process down into simple steps. We’re going to walk through how to tackle qualitative data analysis without getting bogged down.
When it comes to qualitative research, if something’s said, it’s crucial. And that means you gotta write it down. Or at least have a tool to do it for you.
‘I don’t wanna miss a thing’ is your theme song for this step.
Every chuckle, pause, or sigh can give you insights into what your customers really think and feel. Now, I know what you’re thinking: “Transcribing interviews sounds like a lot of work. Let alone conducting all of them!”
But here’s the good news—using Attest makes this step a pleasant breeze on a hot summer night. With Attest, you can send out surveys that dive deep into all the qualitative questions you’ve been itching to ask. Our platform is designed to capture rich, detailed responses in a way that is easy to search and analyze.
This means you don’t have to worry about spending hours transcribing interviews. The responses are already there in writing, ready for you to analyze. This doesn’t just save time; it ensures accuracy. You’re getting the unfiltered voice of your customer, directly and conveniently. No more playing detective with hours of audio recordings.
Next, sift through your transcribed interviews, survey responses, and notes. Your goal here is to spot patterns or themes that crop up repeatedly.
This could be similar sentiments about a product feature or shared experiences with your service. Organizing data helps you identify themes that move from scattered bits of feedback to clear, common threads that tell a bigger story.
There are plenty of software tools out there designed to help with qualitative data analysis. These tools can help you code your qualitative data, which means tagging parts of the text with keywords or themes, making it easier to organize and analyze textual data. They can save you a heap of time and help you stay accurate and consistent in your analysis.
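If you're curious what "coding" amounts to under the hood of those tools, here's a stripped-down sketch (the quotes, sources, and code labels are all hypothetical). Dedicated QDA software essentially maintains this kind of code-to-quote index for you, at scale and with a much friendlier interface.

```python
from dataclasses import dataclass

@dataclass
class CodedSegment:
    source: str  # interview or survey response ID
    quote: str   # the text span being tagged
    code: str    # the keyword or theme applied to it

segments = [
    CodedSegment("interview_03", "I never know where to find my invoices", "navigation"),
    CodedSegment("survey_112", "the invoice page is buried three menus deep", "navigation"),
    CodedSegment("interview_07", "your chat agent sorted it out in minutes", "support"),
]

# Build an index from each code to its supporting quotes.
index: dict[str, list[str]] = {}
for seg in segments:
    index.setdefault(seg.code, []).append(f'"{seg.quote}" ({seg.source})')

for code, quotes in index.items():
    print(code, "->", quotes)
```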
That’s where Attest’s innovative Video Responses come into play, offering a seamless and impactful way to gather and analyze qualitative data directly from your target audience – all in the same platform as your quantitative data.
“As consumer behaviors and preferences continue to evolve at lightning speed, it’s products like Video Responses that will help brands win more based on decisions made with a deeper understanding of their customers.” – Jeremy King, CEO and Founder of Attest
Understanding the context in which feedback is provided is crucial in qualitative analysis. It’s not just about what your customers are saying; it’s also about why they’re saying it at that particular moment. This deeper layer of insight can significantly impact how you interpret and act on the data you collect.
Once you’ve got some preliminary findings, it’s a good idea to circle back to your participants. This could mean confirming your interpretations with them or diving deeper into certain areas.
This will help you be sure your analysis aligns with your respondents’ intended meanings and experiences. Plus, it shows respect for their contributions and can uncover even richer insights.
Finally, bring your analysis to life in a report that mixes clear, concise writing with visual elements like charts, graphs, and quotes.
Visualization helps make complex insights more accessible, engaging, and persuasive. Your report should not only present what you’ve found but also tell the story of how these research findings can influence decisions and strategies.
The real value of qualitative data analysis lies in its application. Use the insights to inform decisions, refine strategies, and better meet your customers’ needs. This is where your analytical journey makes a tangible impact on your business.
“Previously when we’ve had to do qualitative research, it’s taken months and months. Attest gets the information that we need quickly. By the very next day we’re able to implement some of the changes and then go back for round two.” – Simon Gray, Head of Marketing, Zzoomm
Qualitative data analysis looks at the human side of data. It offers insights that numbers alone can’t provide. But like all research methods, even qualitative data analysis methods have their strengths and weaknesses, especially when it comes to shaping a marketing plan that hits the mark.
Bringing qualitative data into your strategy delivers transformative advantages that can significantly change how your business connects with your audience and adapts to the market. Without further ado, let’s look at the benefits it brings.
Want to go beyond meeting the explicit needs of your customers to address their unspoken desires and create experiences that truly matter to them? Qualitative analysis offers an unparalleled depth of understanding by capturing the subtleties and complexities of customer behavior and sentiment.
By engaging directly with your audience through interviews, focus groups, or social media interactions, you gain nuanced perspectives that quantitative data alone cannot provide. These rich insights enable you to craft marketing strategies and product innovations that resonate on a deeper level with your audience.
Numbers can be quite limiting. The benefit of qualitative analysis is that you’re not confined to a predetermined set of questions or outcomes.
Instead, you have the freedom to explore new directions, probe interesting findings further, and let the data guide your research process. This flexibility means your research process can evolve in real-time, responding to unexpected insights or shifting market dynamics.
The insights gained from qualitative analysis can significantly inform strategic decision-making. By understanding the nuances of customer feedback, you can make informed and detailed choices about where to allocate resources, which product features to prioritize, and how to position your business in the market.
You can go beyond generic moves in the right direction and make sure you hit the nail on the head on the first try, instead of slowly creeping towards it.
Businesses are always looking for ways to innovate, but where to look? It’s often less obvious and loud than you think. And innovation doesn’t always have to be massively disruptive or a big pivot. Sometimes small changes made by listening to your customers’ unmet needs and emerging desires will tell you everything you need to know for your next product launch.
Innovation informed by customers is often much more to-the-point than innovation that comes from inside the business, where people tend to focus heavily on the product and the possibilities around it. So try a different approach every once in a while: listen to the people who use your product, not just the ones who create it.
Qualitative data puts your customer’s voice front and center. It highlights their stories, opinions, and feelings, making your marketing strategy more empathetic and customer-focused. This will allow you to build stronger connections with your audience.
Not through marketing gimmicks, online communities, or carefully curated UGC campaigns, but by speaking directly to customers’ experiences and emotions. Using qualitative data across your organization brings transformative effects, deeply embedding a culture of attentiveness, adaptability, and unwavering focus on the customer at every level of your business.
This approach does more than just inform product development or marketing strategies—it reshapes the very foundation of how your business operates and interacts with the people it was created for.
We’re not going to pretend that qualitative data analysis is something you can do on autopilot. But while qualitative data analysis brings its set of challenges, understanding these can help you navigate through them more effectively.
Moreover, with the right tools and strategies, the benefits you gain far outweigh any of the potential drawbacks we’ve listed below. Here’s a closer look at these challenges and how to turn them into opportunities:
Yes, qualitative analysis often* demands time and resources. The depth it requires—from collecting detailed narratives to transcribing and interpreting vast amounts of text—can seem daunting. However, this investment in time is what uncovers the nuanced insights that quantitative methods might miss.
*… but not always. With Attest’s Video Responses, you get reliable qual insights fast, alongside your quantitative data!
Of course, the interpretive nature of qualitative data analysis does introduce the risk of subjectivity and bias. But ignoring all opinions and thoughts around your product or brand is arguably worse. What this challenge underscores all the more is the importance of a structured, systematic approach to analysis.
By implementing standardized procedures for coding and analyzing data, and employing tools that facilitate consistency across the process, you can mitigate the risks of subjective bias.
And if you involve a diverse team in the analysis process and make sure you pick a representative set of respondents, qualitative research can enable a deeper, more empathetic understanding of all your customers’ experiences and perspectives.
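One practical safeguard is to have two people independently code the same sample of responses and then measure how well they agree. Here's a minimal sketch using Cohen's kappa; it assumes scikit-learn is installed, and the codes and snippets are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Codes assigned to the same ten feedback snippets by two independent coders.
coder_a = ["price", "support", "support", "ux", "price", "ux", "support", "price", "ux", "support"]
coder_b = ["price", "support", "ux", "ux", "price", "ux", "support", "price", "support", "support"]

kappa = cohen_kappa_score(coder_a, coder_b)

# Kappa corrects raw agreement for chance; as a rough rule of thumb, values
# above ~0.6 are often read as substantial agreement, though conventions vary.
print(f"Cohen's kappa: {kappa:.2f}")
```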
Qualitative data collection can indeed be tricky to scale and generalize across a broader market. But who said you can only do qualitative research with in-person interviews? With the right survey tool, like Attest, you can ask qualitative questions at scale, to an audience that is large and diverse.
Our participant audience consists of 125 million people spread across 59 countries, and once you send out a survey, results can come back in mere minutes or hours. So if scalability is holding you back, online surveys with video responses are the answer.
Unlock the full potential of qualitative data analysis with Attest. Gain actionable insights, bridge the gap between raw data and emotional intelligence, and make informed decisions. Discover how Attest can support your journey to deeper consumer understanding at Attest for insights professionals and learn about our commitment to data quality.
BMC Medical Education, volume 24, Article number: 841 (2024)
Access to valid and reliable instruments is essential in the field of implementation science, where the measurement of factors associated with healthcare professionals’ uptake of evidence-based practice (EBP) is central. The Norwegian version of the Evidence-based practice profile questionnaire (EBP²-N) measures EBP constructs, such as EBP knowledge, confidence, attitudes, and behavior. Despite its potential utility, the EBP²-N requires further validation before being used in a cross-sectional survey targeting different healthcare professionals in Norwegian primary healthcare. This study assessed the content validity, construct validity, and internal consistency of the EBP²-N among Norwegian primary healthcare professionals.
To evaluate the content validity of the EBP²-N, we conducted qualitative individual interviews with eight healthcare professionals in primary healthcare from different disciplines. Qualitative data were analyzed using the “text summary” model, followed by panel group discussions, minor linguistic changes, and a pilot test of the revised version. To evaluate construct validity (structural validity) and internal consistency, we used data from a web-based cross-sectional survey among nurses, assistant nurses, physical therapists, occupational therapists, medical doctors, and other professionals ( n = 313). Structural validity was tested using a confirmatory factor analysis (CFA) on the original five-factor structure, and Cronbach’s alpha was calculated to assess internal consistency.
The qualitative interviews with primary healthcare professionals indicated that the content of the EBP²-N was perceived to reflect the constructs the instrument is intended to measure. However, the interviews revealed concerns regarding the formulation of some items, leading to minor linguistic revisions. In addition, several participants expressed that some of the most specific research terms in the terminology domain felt less relevant to them in clinical practice. The CFA results revealed only partial alignment with the original five-factor model, with the following model fit indices: CFI = 0.749, RMSEA = 0.074, and SRMR = 0.075. Cronbach’s alphas ranged between 0.82 and 0.95 for all domains except the Sympathy domain (0.69), indicating good internal consistency in four out of five domains.
The EBP²-N is a suitable instrument for measuring Norwegian primary healthcare professionals’ EBP knowledge, attitudes, confidence, and behavior. Although the EBP²-N seems to be an adequate instrument in its current form, we recommend that future research focuses on further assessing the factor structure, evaluating the relevance of the items, and determining the number of items needed.
Retrospectively registered (prior to data analysis) in OSF Preregistration. Registration DOI: https://doi.org/10.17605/OSF.IO/428RP .
Evidence-based practice (EBP) integrates the best available research evidence with clinical expertise, patient characteristics, and preferences [ 1 ]. The process of EBP is often described as following five steps: ask, search, appraise, integrate, and evaluate [ 1 , 2 ]. Practicing the steps of EBP requires that healthcare professionals hold a set of core competencies [ 3 , 4 ]. Lack of competencies such as EBP knowledge and skills, as well as negative attitudes towards EBP and low self-efficacy, may hinder the implementation of EBP in clinical practice [ 5 , 6 , 7 , 8 , 9 , 10 ]. Measuring EBP competencies may assist organizations in defining performance expectations and directing professional practice toward evidence-based clinical decision-making [ 11 ].
Using well-designed and appropriate measurement instruments in healthcare research is fundamental for gathering precise and pertinent data [ 12 , p. 1]. Access to valid and reliable instruments is also essential in the field of implementation science, where conducting consistent measurements of factors associated with healthcare professionals’ uptake of EBP is central [ 13 ]. Instruments measuring the uptake of EBP should be comprehensive and reflect the multidimensionality of EBP; they should be valid, reliable, and suitable for the population and setting in which they are to be used [ 14 ]. Many instruments measuring different EBP constructs are available today [ 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 ]. However, the quality of these instruments varies, and rigorous validation studies that aim to build upon and further develop existing EBP instruments are necessary [ 13 , 16 ].
The authors of this study conducted a systematic review to summarize the measurement properties of existing instruments measuring healthcare professionals’ EBP attitudes, self-efficacy, and behavior [ 16 ]. This review identified 34 instruments, five of which were translated into Norwegian [ 23 , 24 , 25 , 26 , 27 ]. Of these five instruments, only the Evidence-based practice profile questionnaire (EBP²) was developed to measure various EBP constructs, such as EBP knowledge, confidence, attitudes, and behavior [ 28 ]. In addition, the EBP² was developed to be trans-professional [ 28 ]. Although not exclusively demonstrating high-quality evidence for all measurement properties, the review authors concluded that the EBP² was among the instruments that could be recommended for further use and adaptation for use among different healthcare disciplines [ 16 ].
The EBP² was initially developed by McEvoy et al. in 2010 and validated for Australian academics, practitioners, and students from different professions (physiotherapy, podiatry, occupational therapy, medical radiation, nursing, human movement) [ 28 ]. The instrument was later translated into Chinese and Polish and further tested among healthcare professionals in these countries [ 29 , 30 , 31 , 32 ]. It was also translated and cross-culturally adapted into Norwegian [ 27 ]. The authors assessed content validity, face validity, internal consistency, test-retest reliability, measurement error, discriminative validity, and structural validity among bachelor students from nursing and social education and health and social workers from a local hospital [ 27 ]. Although the authors established the content validity of the Norwegian version (EBP²-N), they recommended further linguistic improvements. Additionally, while they found the EBP²-N valid and reliable for three subscales, the original five-factor model could not be confirmed using confirmatory factor analysis. Therefore, they recommended further research on the instrument’s measurement properties [ 27 ].
We recognized the need for further assessment of the measurement properties of the EBP²-N before using this instrument in a planned cross-sectional survey targeting physical therapists, occupational therapists, nurses, assistant nurses, and medical doctors working with older people in Norwegian primary healthcare [ 33 ]. As our target population differed from the population studied by Titlestad et al. [ 27 ], the EBP²-N should be validated again, assessing content validity, construct validity, and internal consistency [ 12 , p. 152]. The assessment of content validity evaluates whether the content of an instrument is relevant, comprehensive, and understandable for a specific population [ 34 ]. Construct validity, including structural validity and cross-cultural validity, can provide evidence on whether an instrument measures what it intends to measure [ 12 , p. 169]. Furthermore, the degree of interrelatedness among the items (internal consistency) should be assessed when evaluating how the items of a scale are combined [ 35 ]. Our objectives were to comprehensively assess the content validity, structural validity, and internal consistency of the EBP²-N among Norwegian primary healthcare professionals. We hypothesized that the EBP²-N was a valid and reliable instrument suitable for use in Norwegian primary healthcare settings.
This study was conducted in two phases: Phase 1 comprised a qualitative assessment of the content validity of the EBP²-N, followed by minor linguistic adaptations and a pilot test of the adapted version. Phase 2 comprised an assessment of the structural validity and internal consistency of the EBP²-N based on the results from a web-based cross-sectional survey.
The design and execution of this study adhered to the COSMIN Study Design checklist for patient-reported outcome measurement instruments, as well as the methodology for assessing the content validity of self-reported outcome measures [ 34 , 36 , 37 ]. Furthermore, this paper was guided by the COSMIN Reporting guidelines for studies on measurement properties of patient-reported outcome measures [ 38 ].
Participants eligible for inclusion in both phases of this study were health personnel working with older people in primary healthcare in Norway, such as physical therapists, occupational therapists, nurses, assistant nurses, and medical doctors. Proficiency in reading and understanding Norwegian was a prerequisite for inclusion. This study is part of a project called FALLPREVENT, a research project that aims to bridge the gap between research and practice in fall prevention in Norway [ 39 ].
The EBP²-N consists of 58 self-reported items divided into five domains: (1) Relevance (items 1–14), which refers to the value, emphasis, and importance respondents place on EBP; (2) Sympathy (items 15–21), which refers to the perceived compatibility of EBP with professional work; (3) Terminology (items 22–38), which refers to the understanding of common research terms; (4) Practice (items 39–47), which refers to the use of EBP in clinical practice; and (5) Confidence (items 48–58), which relates to respondents’ perception of their EBP skills [ 28 ]. All items are rated on a five-point Likert scale (1 to 5) (see questionnaire in Additional file 1 ). Each domain is summed, with higher scores indicating a higher degree of the construct measured by the domain in question. The items in the Sympathy domain are negatively phrased and need to be reversed before being summed. The possible ranges of summed scores (min–max) per domain are as follows: Relevance (14–70), Sympathy (7–35), Terminology (17–85), Practice (9–45), and Confidence (11–55).
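As an illustration of this scoring procedure, the sketch below (Python; this is illustrative rather than the authors' code) assumes a respondent-by-item data frame with columns named item1 to item58 and computes the five domain sums, reverse-scoring the negatively phrased Sympathy items.

```python
import pandas as pd

# Item columns per domain, assuming columns are named 'item1' .. 'item58'.
DOMAIN_ITEMS = {
    "Relevance":   [f"item{i}" for i in range(1, 15)],   # items 1-14
    "Sympathy":    [f"item{i}" for i in range(15, 22)],  # items 15-21, reverse-scored
    "Terminology": [f"item{i}" for i in range(22, 39)],  # items 22-38
    "Practice":    [f"item{i}" for i in range(39, 48)],  # items 39-47
    "Confidence":  [f"item{i}" for i in range(48, 59)],  # items 48-58
}

def domain_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Sum the 1-5 Likert items per domain, reverse-scoring Sympathy."""
    scores = {}
    for domain, cols in DOMAIN_ITEMS.items():
        block = df[cols]
        if domain == "Sympathy":
            block = 6 - block  # reverse-code: 1<->5, 2<->4, 3 unchanged
        scores[domain] = block.sum(axis=1)
    return pd.DataFrame(scores)
```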
Recruitment and participant characteristics.
Snowball sampling was used to recruit participants in Eastern Norway, and potentially eligible participants were contacted via managers in healthcare settings. The number of participants needed for the qualitative content validity interviews was based on the COSMIN methodology recommendations and was set to at least seven participants [ 34 , 37 ]. We recruited and included eight participants. All participants worked with older people in primary healthcare: two physical therapists, two occupational therapists, two assistant nurses, one nurse, and one medical doctor. The median age (min–max) was 35 (28–55). Two participants held upper secondary education, four held a bachelor’s degree, and two held a master’s degree. Six participants reported that they had some EBP training from their education or had attended EBP courses, and two had no EBP training.
Before the interviews, a panel of four members (NGL, TB, NRO, and KBT) developed a semi-structured interview guide. Two panel members were EBP experts with extensive experience in EBP research and measurement (NRO and KBT). KBT obtained consent from the developer of the original EBP² questionnaire and translated the questionnaire into Norwegian in 2013 [ 27 ].
To evaluate the content validity of the EBP²-N for use among different healthcare professionals working in primary healthcare in Norway, we conducted individual interviews with eight healthcare professionals from different disciplines. Topics in the interview guide were guided by the standards of the COSMIN study design checklist and the COSMIN criteria for good content validity, which include questions related to the following three aspects [ 34 , 37 ]: whether the items of the instrument were perceived as relevant (relevance), whether all key concepts were included (comprehensiveness), and whether the instructions, items, and response options were understandable (comprehensibility) [ 34 ]. The interview guide is presented in Additional File 2 . Interview preparations and training included a review of the interview guide and a pilot interview with a physical therapist not included in the study.
Eight interviews were conducted by the first author (NGL) in May and June 2022. All interviews were conducted in the participants’ workplaces. The interviews followed a “think-aloud” method [ 12 , p. 58; 40 , p. 5]. Hence, in the first part of the interview, the participants were asked to complete the questionnaire on paper while simultaneously saying aloud what they were thinking while responding to the questionnaire. Participants also had to state their choice of answer aloud and make a pen mark on the items or responses that either were difficult to understand or did not feel relevant to them. In the second part of the interviews, participants were asked to elaborate on why items were marked as difficult to understand or irrelevant, focusing on relevance and comprehensibility. In addition, the participants were asked to give their overall impression of the instrument and state if they thought any essential items were missing (comprehensiveness). Only the second part of the interviews was audio-recorded.
After conducting the individual interviews, the first author immediately transcribed the recorded audio data. The subsequent step involved gathering and summarizing participants’ comments into one document that comprised the questionnaire instructions, items, and response options. Using the “text summary” model [ 41 , p. 61], we summarized the primary “themes” and “problems” identified by participants during the interviews. These were then aligned with the specific item or section of the questionnaire to which the comments related. For example, comments on the items’ comprehensibility were identified as one “theme”, and the corresponding “problem” was that the item was perceived as too academically formulated or too complex to understand. Comments on an item’s relevance were another “theme” identified, and an example of a corresponding “problem” was that the EBP activity presented in the item was not recognized as usual practice for the participant. The document contained these specific comments and summarized the participants’ overall impression of the instrument. Additionally, it included more general comments addressing the instrument’s relevance, comprehensibility, and comprehensiveness.
Next, multiple rounds of panel group discussions took place, and the final document with a summary of participants’ comments served as the foundation for these discussions. The content validity of the items, instructions, and response options underwent thorough examinations by the panel members. Panel members discussed aspects, such as relevance, comprehensiveness, and comprehensibility, drawing upon insights from interview participants’ comments and the panel members’ extensive knowledge about EBP.
Finally, the revised questionnaire was pilot tested on 40 master’s students (physical therapists) to evaluate the time used to respond, and the students were invited to make comments in free text adjacent to each domain in the questionnaire. The pilot participants answered a web-based version of the questionnaire.
Recruitment and data collection for the cross-sectional survey.
Snowball sampling was used to recruit participants. The invitation letter, with information about the study and a consent form, was distributed via e-mail to healthcare managers in over 37 cities and municipalities representing the eastern, western, central, and northern parts of Norway. The managers forwarded the invitation to eligible employees and encouraged them to respond to the questionnaire. Respondents who consented to participate automatically received a link to the online survey. Our approach to recruitment made it impossible to keep track of the exact number of potential participants who received invitations to participate. As such, we were unable to determine a response rate.
Statistical analyses were performed using STATA [ 42 ]. We tested the structural validity and internal consistency of the 58 domain items of the EBP 2 -N, using the same factor structure as in the initial evaluation [ 28 ] and the study that translated the questionnaire into Norwegian [ 27 ]. Structural validity was assessed using confirmatory factor analysis with maximum likelihood estimation to test if the data fit the predetermined original five-factor structure. Model fit was assessed by evaluating the comparative fit index (CFI), root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). Guidelines suggest that a good-fitting model should have a CFI of around 0.95 or higher, RMSEA of around 0.06 or lower, and SRMR of around 0.08 or lower [ 43 ]. Cronbach’s alpha was calculated for each of the five domains to evaluate whether the items within the domains were interrelated. It has been proposed that Cronbach’s alpha between 0.70 and 0.95 can be considered good [ 44 ].
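For readers who wish to reproduce this kind of analysis, the sketch below shows what it could look like in Python. It is an illustration only: the study used STATA, the semopy package is an assumption on our part, and the CFA specification lists just three indicators per factor for brevity where a real run would list all 58 items.

```python
import pandas as pd
import semopy  # assumed SEM package; the study itself used STATA

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    item_var = items.var(ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Five-factor CFA in lavaan-style syntax, truncated to three indicators per
# factor for readability.
CFA_MODEL = """
Relevance   =~ item1 + item2 + item3
Sympathy    =~ item15 + item16 + item17
Terminology =~ item22 + item23 + item24
Practice    =~ item39 + item40 + item41
Confidence  =~ item48 + item49 + item50
"""

def fit_cfa(df: pd.DataFrame) -> pd.DataFrame:
    model = semopy.Model(CFA_MODEL)
    model.fit(df)                    # maximum-likelihood estimation
    return semopy.calc_stats(model)  # fit indices, including CFI and RMSEA
```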
The sample size required for a factor analysis was set based on COSMIN criteria for at least an “adequate” sample size, which is at least five times the number of items and > 100 [ 45 , 46 ]. Accordingly, the sample size required in our case was > 290 respondents. Regarding missing data, respondents with over 25% missing items on domain items were excluded from further analysis. Respondents with over 20% missing on one domain were excluded from the analysis of that domain. The Little’s MCAR test was conducted to test whether data were missing completely at random. Finally, for respondents with 20% or less missing data on one domain, the missing values were substituted with the respondent’s mean of other items within the same domain.
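These exclusion and substitution rules translate directly into code. Below is a sketch, reusing the DOMAIN_ITEMS mapping from the scoring example above (again illustrative rather than the authors' implementation).

```python
import pandas as pd

def apply_missing_rules(df: pd.DataFrame, domain_items: dict) -> pd.DataFrame:
    all_items = [c for cols in domain_items.values() for c in cols]
    # Exclude respondents with more than 25% missing across all domain items.
    df = df.loc[df[all_items].isna().mean(axis=1) <= 0.25].copy()
    for cols in domain_items.values():
        missing = df[cols].isna().mean(axis=1)
        # Respondents missing more than 20% of one domain are excluded from
        # that domain's analysis (marked fully missing here).
        df.loc[missing > 0.20, cols] = float("nan")
        # For 20% or less missing, substitute the respondent's own mean of
        # the remaining items within the same domain.
        row_mean = df[cols].mean(axis=1)
        for c in cols:
            df[c] = df[c].fillna(row_mean)
    return df
```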
The Norwegian Agency for Shared Services in Education and Research (SIKT) approved the study in March 2022 (ref: 747319). We obtained written informed consent from the participants interviewed and the cross-sectional survey participants.
The findings for Phase 1 and Phase 2 are presented separately. Phase 1 encompasses the results of the qualitative content validity assessment, adaptations, and pilot testing of the EBP²-N. Phase 2 encompasses the results of assessing the structural validity and internal consistency of the EBP²-N.
Comprehensiveness: whether key concepts are missing.
Only a few comments were made on comprehensiveness. Notably, one participant expressed the need for additional items addressing clinical experience and user perspectives.
Overall, the participants commented that they perceived the instrument as relevant to their context. However, several participants pointed out some items that felt less relevant. The terminology domain emerged as a specific area of concern, as most participants expressed that this subscale contained items that felt irrelevant to clinical practice. Comments such as “I do not feel it’s necessary to know all these terms to work evidence-based,” and “The more overarching terms like RCT, systematic review, clinical relevance, and meta-analysis I find relevant, but not the more specific statistical terms,” captured the participants’ perspectives on the relevance of the terminology domain.
Other comments related to the terminology domain revealed that these items could cause feelings of demotivation or inadequacy: “One can become demotivated or feel stupid because of these questions” and “Many will likely choose not to answer the rest of the form, as they would feel embarrassed not knowing”. Other comments on relevance were related to items in other subscales, for example, critical appraisal items (i.e., items 20, 42, and 55), which were considered less relevant by some participants. One participant commented: “If one follows a guideline as recommended, there is no need for critical assessment”.
All eight participants stated that they understood what the term EBP meant. The predominant theme from the participant’s comments was related to the comprehensibility of the EBP 2 -N. Most of the comments on comprehensibility revolved around the formulation of items. Participants noted challenges related to comprehensibility in 35 out of 58 items, either due to difficulty in understanding, readability issues, the length of items, lack of clarity, or overly academic language. For instance, item five in the Relevance domain, “I intend to develop knowledge about EBP”, received comments that expressed uncertainty about whether “EBP” referred to the five steps of EBP or evidence-based clinical interventions/practices (e.g., practices following recommendations in evidence-based guidelines). Items that were perceived as overly academic included phrases such as “intend to apply”, “intend to develop”, or “convert your information needs”. For these phrases, participants suggested simpler formulations in layperson’s Norwegian. Some participants deemed the instrument “too advanced,” “on a too high level,” or “too abstract”, and others expressed that they understood most of the instrument’s content, indicating a divergence among participants.
Examples of items considered challenging to read, too complex, or overly lengthy were items six and 12 in the relevance domain, 16 and 20 in the sympathy domain, and 58 in the confidence domain. The typical comments from participants revealed a preference for shorter, less complex items with a clear and singular focus. In addition, some comments referred to the formulation of response options. For instance, two response options in the confidence domain, “Reasonably confident” and “Quite confident”, were perceived as too similar in Norwegian. In the practice subscale, a participant pointed out that the term “monthly or less” lacked precision, as it could cover any frequency from once to twelve times a year, thus being perceived as imprecise.
The results of the interviews were discussed during several rounds of panel group meetings. After thoroughly examining the comments, 33 items underwent revisions during the panel meetings. These revisions primarily involved minor linguistic adjustments to preserve the original meaning of the items. For example, the Norwegian version of item 8 was considered complex and overly academically formulated and underwent revision. The phrase “I intend to apply” was replaced by “I want to use”, as the panel group considered this phrase easier to understand in Norwegian. Another example involved the term “Framework,” which some participants found vague or difficult to understand (i.e., in item 3, “my profession uses EBP as a framework”). The term “framework” was replaced with “way of thinking and working”, considered more concrete and understandable in Norwegian. The phrase “way of thinking and working” was also added to item 5 to clarify that “EBP” referred to the five steps of EBP, not interventions in line with evidence-based recommendations. Additionally, it was challenging to revise items that participants considered challenging to read, too complex, or overly lengthy (i.e., 6, 12, 16, 20, and 58), as it was difficult to shorten them without losing their original meaning. However, replacing overly academic words with simpler formulations made these examples less complex and more readable.
In terms of relevance, no items were removed, and the terminology domain was retained despite comments regarding its relevance. Changing this domain would have precluded comparing results from future studies using this questionnaire with those of previous studies using the same questionnaire. Regarding comprehensiveness, the panel group reached a consensus that the domains included all essential items concerning the constructs the original instrument is intended to measure. Examples of minor linguistic changes and additional details on item revisions are reported in Additional File 3.
The median time to answer the questionnaire was nine minutes. The students made no further comments on the questionnaire.
A total of 313 respondents were included in the analysis. The respondents’ mean age (SD) was 42.7 years (11.4). The sample included 119 nurses, 74 assistant nurses, 64 physical therapists, 38 occupational therapists, three medical doctors, and 15 other professionals, mainly social educators. In total, 63.9% (n = 200) of the participants held a bachelor’s degree, 11.8% (n = 37) held a master’s degree, and 0.3% (n = 1) held a Ph.D. Moreover, 10.5% (n = 33) of the participants had completed upper secondary education, and 13.1% (n = 41) had tertiary vocational education. One hundred and eighty-five participants (59.1%) reported no formal EBP training, while among the 128 participants who had undergone formal EBP training, 31.5% had completed over 20 h of training. The mean scores (SD) for the different domains were as follows: Relevance 80.2 (7.3), Sympathy 21.2 (3.6), Terminology 44.5 (15.3), Practice 22.2 (5.8), and Confidence 31.2 (9.2).
Out of 314 respondents, one was excluded due to over 25% missing domain items, and three had more than 20% missing data in a specific domain and were excluded from analyses of that domain. Twenty-six respondents had under 20% missing data on one domain, and these missing values were substituted with the respondent’s mean of the other items within the same domain. In total, 313 responses were included in the final analysis. No single item had more than 1.3% missing responses. The percentage of missing data per domain was low and relatively similar across the five domains (Relevance = 0.05%, Sympathy = 0.2%, Terminology = 0.4%, Practice = 0.6%, Confidence = 0.6%). Little’s MCAR test showed p-values higher than 0.05 for all domains, indicating that data were missing completely at random.
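The study cites Stata for its statistical software (see the reference list), so the following Python/pandas sketch is purely illustrative of the person-mean substitution described above: a respondent’s missing items within a domain are replaced by the mean of that respondent’s answered items in the same domain, but only when under 20% of the domain’s items are missing. The item column names are hypothetical.

```python
import pandas as pd

def impute_person_mean(df: pd.DataFrame, domain_items: list,
                       max_missing: float = 0.20) -> pd.DataFrame:
    """Within-domain person-mean substitution, as described in the text:
    fill a respondent's missing items with the mean of their answered
    items in the same domain, only if under 20% of the items are missing."""
    out = df.copy()
    items = out[domain_items]
    missing_share = items.isna().mean(axis=1)  # fraction missing per respondent
    person_mean = items.mean(axis=1)           # mean of answered items (NaNs skipped)
    eligible = (missing_share > 0) & (missing_share < max_missing)
    for col in domain_items:
        to_fill = eligible & out[col].isna()
        out.loc[to_fill, col] = person_mean[to_fill]
    return out

# Hypothetical usage for a seven-item domain:
# df = impute_person_mean(df, [f"symp{i}" for i in range(1, 8)])
```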
A five-factor model was estimated based on the original five-factor structure (Fig. 1). The model was estimated using the maximum likelihood method. A standardized solution was estimated by constraining the variance of the latent variables to 1, and correlations among the latent variables were allowed. The CFA yielded the following model fit indices: CFI = 0.749, RMSEA = 0.074, and SRMR = 0.075. The CFI and RMSEA did not meet the criteria for a good-fitting model set a priori (CFI of around 0.95 or higher, RMSEA of around 0.06 or lower), whereas the SRMR met the criterion of around 0.08 or lower. All standardized factor loadings were 0.32 or higher, and only five items loaded below 0.5. The standardized factor loadings ranged as follows across domains: Relevance = 0.47–0.79, Terminology = 0.51–0.80, Practice = 0.35–0.70, Confidence = 0.43–0.86, and Sympathy = 0.32–0.65 (Fig. 1).
Fig. 1. Confirmatory factor analysis, standardized solution of the EBP2-N (n = 313). Note: large circles = latent variables; rectangles = measured items; small circles = residual variances
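For readers who want to reproduce this type of analysis, below is a minimal sketch of a correlated five-factor CFA with maximum likelihood estimation using the Python package semopy. This is not the authors’ code (the study’s analyses were run in other software), the item names are placeholders rather than the real EBP2-N items, and the semopy calls should be checked against your installed version’s documentation.

```python
import pandas as pd
import semopy

# Placeholder measurement model: five correlated latent factors with
# three illustrative indicators each (the real EBP2-N has 58 items).
MODEL_DESC = """
Relevance   =~ rel1 + rel2 + rel3
Sympathy    =~ sym1 + sym2 + sym3
Terminology =~ ter1 + ter2 + ter3
Practice    =~ pra1 + pra2 + pra3
Confidence  =~ con1 + con2 + con3
"""

def fit_cfa(data: pd.DataFrame):
    model = semopy.Model(MODEL_DESC)        # factor covariances are free by default
    model.fit(data)                         # Wishart maximum likelihood is the default objective
    loadings = model.inspect(std_est=True)  # standardized parameter estimates
    fit = semopy.calc_stats(model)          # fit statistics, including CFI and RMSEA
    return loadings, fit
```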
As reported in Table 1, Cronbach’s alpha ranged between 0.82 and 0.95 for all domains except the Sympathy domain, where it was 0.69. These results indicate good internal consistency for four domains, with Sympathy falling just short of the cut-off for good internal consistency (> 0.70).
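As a reminder of the statistic behind these numbers, Cronbach’s alpha for a k-item domain is k/(k-1) × (1 − Σ item variances / variance of the summed score). A minimal NumPy implementation (not the study’s code) is:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a complete (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical check: the seven sympathy items stored as columns of `scores`
# alpha_sympathy = cronbach_alpha(scores)   # ~0.69 in this study's data
```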
In this study, we aimed to assess the measurement properties of the EBP2-N questionnaire. The population of interest was healthcare professionals working with older people in Norwegian primary healthcare, including physical therapists, occupational therapists, nurses, assistant nurses, and medical doctors. The study was conducted in two phases: content validity was assessed in Phase 1, and construct validity and internal consistency were assessed in Phase 2.
The findings from Phase 1 and the qualitative interviews with primary healthcare professionals indicated that the content of the EBP2-N was perceived to reflect the constructs the instrument is intended to measure [ 28 ]. However, the interviews also revealed differing perceptions of the relevance and comprehensibility of certain items. Participants expressed concerns about the formulation of some items, and we decided to make minor linguistic adjustments, in line with previous recommendations to refine item wording through interviews [ 27 ]. Lack of content validity can have adverse consequences [ 34 ]: irrelevant or incomprehensible items may fatigue respondents, leading to potentially biased answers [ 47 , 48 , p. 139]. Our analysis of missing data suggested that potentially irrelevant or incomprehensible items did not cause respondent fatigue, as the overall percentage of missing items was low (at most 1.3%) and the percentage of missing data did not vary across the domains. Irrelevant items may also affect other measurement properties, such as structural validity and internal consistency [ 34 ]. We believe that the minor linguistic revisions we made to some items made the questionnaire easier to understand, an assumption supported by the pilot test with 40 master’s students, in which no further comments regarding comprehensibility were made.
The overall relevance of the instrument was perceived positively. However, several participants expressed concerns about the terminology domain, as some of the most specific research terms felt irrelevant to them in clinical practice. Still, the panel group decided to keep all items in the terminology domain to allow comparison of results across future studies using the same instrument and subscales. In addition, this decision was based on the fact that knowledge of research terms such as “types of data,” “measures of effect,” and “statistical significance” is an essential competency for performing step three of the EBP process (critical appraisal) [ 3 ]. Leaving out parts of the terminology domain could therefore have made our assessment of the EBP constructs less comprehensive and complete [ 14 ]. However, since the relevance of some items in the terminology domain was questioned, we cannot fully confirm the content validity of this domain, and we recommend interpreting it with caution.
The confirmatory factor analysis (CFA) in Phase 2 of this study revealed that the five-factor model only partially reflected the dimensionality of the constructs measured by the instrument. The SRMR, at 0.075, was the only model fit index that fully met the a priori criteria for a good-fitting model. In contrast, the CFI at 0.749 and the RMSEA at 0.074 fell short of the criteria (CFI ≥ 0.95, RMSEA ≤ 0.06). However, our model fit indices were closer to these criteria than those of Titlestad et al. (2017) [ 27 ], who reported a CFI of 0.69, an RMSEA of 0.089, and an SRMR of 0.095. This tendency toward better fit in our study may be related to the larger sample size, in agreement with established recommendations of a minimum of 100–200 participants and at least 5–10 times the number of items to ensure the precision of the model and overall model fit [ 46 , p. 380].
Although our sample size met COSMIN’s criteria for an “adequate” sample size [ 45 ], the partially adequate fit indices suggest that the original five-factor model might not be the best-fitting model. A recent study on the Chinese adaptation of the EBP2 demonstrated that item reduction and a four-factor structure improved model fit (RMSEA = 0.052, CFI = 0.932) [ 30 ]. That study removed eighteen items based on content validity evaluation (four from relevance, seven from terminology, and seven from sympathy) [ 30 ]. In another study, in which the EBP2 was adapted for use among Chinese nurses, thirteen items were removed (two from sympathy, eight from terminology, one from practice, and two from confidence) and an eight-factor structure was identified [ 29 ]. However, that study did not demonstrate noticeably better model fit than ours; the fit indices of its 45-item, eight-factor structure (RMSEA = 0.065, SRMR = 0.077, CFI = 0.884) were quite similar to those found in our study [ 29 ]. The results of these two studies suggest that a model with fewer items and a different factor structure might also have suited our population. Although the five-factor model only partially reflects the constructs measured by the EBP2-N in our population, it contributes valuable insights into the instrument’s performance in a specific healthcare setting.
The Cronbach’s alpha results in this study indicate good internal consistency for four domains, all above 0.82. However, the alpha of 0.69 in the sympathy domain did not reach the pre-specified cut-off for good internal consistency (0.70) [ 44 ]. A tendency toward relatively lower Cronbach’s alpha values in the sympathy domain, compared with the other four domains, has also been identified in previous similar studies [ 27 , 28 , 31 , 32 ]. Titlestad et al. (2017) reported a Cronbach’s alpha of 0.66 in the sympathy domain and above 0.90 in the other domains [ 27 ]. McEvoy et al. (2010), Panczyk et al. (2017), and Belowska et al. (2020) reported Cronbach’s alphas of 0.76–0.80 for the sympathy domain and 0.85–0.97 for the other domains [ 28 , 31 , 32 ]. In these three cases, the alphas of the sympathy domain were all above 0.70, but the same tendency of this domain demonstrating lower values than the other four was evident. The relatively lower alpha values in the sympathy domain may be related to the negative phrasing of its items [ 49 ], the low number of items in this domain compared with the others (n = 7) [ 12 , p. 84, 47 , p. 86], and possible heterogeneity in the construct measured [ 47 , p. 232]. The internal consistency results of our study indicate that the items in the sympathy domain are less interrelated than those in the other domains. However, an alpha of 0.69 indicates that the items do not entirely lack interrelatedness.
Methodological limitations that could introduce bias into the results should be acknowledged. Although the eight participants in the qualitative content validity interviews in Phase 1 covered all the healthcare disciplines and education levels targeted for inclusion in the Phase 2 survey, it remains uncertain whether these eight participants represented the full variation in the population of interest. It is possible that those who agreed to participate in qualitative interviews about an EBP instrument held more positive attitudes toward EBP than practitioners in general. Another possible limitation is that the interviewer (NGL) had limited experience facilitating “think-aloud” interviews. To reduce the potential risk of interviewer-related bias, the panel group, which has extensive experience in EBP research, took part in preparing the interviews, and a pilot interview was conducted beforehand as training.
Furthermore, the non-random sampling method and the unknown response rate in Phase 2 may have led to biased estimates of measurement properties and affected the representativeness of the included sample. Additionally, the characteristics of non-responders remain unknown, making it challenging to assess whether they differ from the responders and whether the final sample adequately represents the variability in the construct of interest. Due to potential selection and non-response bias, there may be uncertainty regarding the accuracy of the measurement property assessment and whether the study sample fully represents the entire population of interest [ 50 , p. 205].
The EBP2-N is suitable for measuring Norwegian primary healthcare professionals’ EBP knowledge, attitudes, confidence, and behavior. Researchers can use the EBP2-N to increase their understanding of the factors affecting healthcare professionals’ implementation of EBP and to guide the development of tailored strategies for implementing EBP.
This study revealed positive perceptions of the content validity of the EBP2-N, though with nuanced concerns about the relevance and comprehensibility of certain items and uncertainty regarding the five-factor structure. The minor linguistic revisions we made to some items made the questionnaire more understandable. However, when the EBP2-N is used in primary healthcare, caution should be exercised when interpreting the results of the terminology domain, as the relevance of some of its items has been questioned.
Future research should focus on further assessing the factor structure of the EBP2-N, evaluating the relevance of the items, and exploring the possibility of reducing the number of items, especially when the instrument is applied in a new setting or population. Such evaluations could further enhance our understanding of the instrument’s measurement properties and potentially improve them.
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
EBP2: The Evidence-based Practice Profile
EBP2-N: The Norwegian version of the Evidence-based Practice Profile questionnaire
COSMIN: Consensus-based Standards for the Selection of Health Measurement Instruments
CFA: Confirmatory factor analysis
CFI: Comparative fit index
RMSEA: Root mean square error of approximation
SRMR: Standardized root mean square residual
Sikt: The Norwegian Agency for Shared Services in Education and Research
Dawes M, Summerskill W, Glasziou P, Cartabellotta A, Martin J, Hopayian K, et al. Sicily statement on evidence-based practice. BMC Med Educ. 2005;5(1):1.
Straus SE, Glasziou P, Richardson WS, Haynes RB, Pattani R, Veroniki AA. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Elsevier; 2019.
Albarqouni L, Hoffmann T, Straus S, Olsen NR, Young T, Ilic D, et al. Core competencies in evidence-based practice for Health professionals: Consensus Statement based on a systematic review and Delphi Survey. JAMA Netw Open. 2018;1(2):e180281.
Straus S, Glasziou P, Richardson W, Haynes R. Evidence-based medicine: how to practice and teach EBM. 5th ed. Elsevier Health Sciences; 2019.
Paci M, Faedda G, Ugolini A, Pellicciari L. Barriers to evidence-based practice implementation in physiotherapy: a systematic review and meta-analysis. Int J Qual Health Care. 2021;33(2).
Sadeghi-Bazargani H, Tabrizi JS, Azami-Aghdash S. Barriers to evidence-based medicine: a systematic review. J Eval Clin Pract. 2014;20(6):793–802.
da Silva TM, Costa Lda C, Garcia AN, Costa LO. What do physical therapists think about evidence-based practice? A systematic review. Man Ther. 2015;20(3):388–401.
Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004;180(S6):S57–60.
Saunders H, Gallagher-Ford L, Kvist T, Vehviläinen-Julkunen K. Practicing Healthcare professionals’ evidence-based practice competencies: an overview of systematic reviews. Worldviews Evid Based Nurs. 2019;16(3):176–85.
Salbach NM, Jaglal SB, Korner-Bitensky N, Rappolt S, Davis D. Practitioner and organizational barriers to evidence-based practice of physical therapists for people with stroke. Phys Ther. 2007;87(10):1284–303.
Saunders H, Vehvilainen-Julkunen K. Key considerations for selecting instruments when evaluating healthcare professionals’ evidence-based practice competencies: a discussion paper. J Adv Nurs. 2018;74(10):2301–11.
de Vet HCW, Terwee CB, Mokkink LB, Knol DL. Measurement in medicine: a practical guide. Cambridge: Cambridge University Press; 2011.
Tilson JK, Kaplan SL, Harris JL, Hutchinson A, Ilic D, Niederman R, et al. Sicily statement on classification and development of evidence-based practice learning assessment tools. BMC Med Educ. 2011;11:78.
Roberge-Dao J, Maggio LA, Zaccagnini M, Rochette A, Shikako K, Boruff J et al. Challenges and future directions in the measurement of evidence-based practice: qualitative analysis of umbrella review findings. J Eval Clin Pract. 2022.
Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296(9):1116–27.
Landsverk NG, Olsen NR, Brovold T. Instruments measuring evidence-based practice behavior, attitudes, and self-efficacy among healthcare professionals: a systematic review of measurement properties. Implement Sci. 2023;18(1):42.
Hoegen PA, de Bot CMA, Echteld MA, Vermeulen H. Measuring self-efficacy and outcome expectancy in evidence-based practice: a systematic review on psychometric properties. Int J Nurs Stud Adv. 2021;3:100024.
Oude Rengerink K, Zwolsman SE, Ubbink DT, Mol BW, van Dijk N, Vermeulen H. Tools to assess evidence-based practice behaviour among healthcare professionals. Evid Based Med. 2013;18(4):129–38.
Leung K, Trevena L, Waters D. Systematic review of instruments for measuring nurses’ knowledge, skills and attitudes for evidence-based practice. J Adv Nurs. 2014;70(10):2181–95.
Buchanan H, Siegfried N, Jelsma J. Survey instruments for Knowledge, skills, attitudes and Behaviour related to evidence-based practice in Occupational Therapy: a systematic review. Occup Ther Int. 2016;23(2):59–90.
Fernández-Domínguez JC, Sesé-Abad A, Morales-Asencio JM, Oliva-Pascual-Vaca A, Salinas-Bueno I, de Pedro-Gómez JE. Validity and reliability of instruments aimed at measuring evidence-based practice in physical therapy: a systematic review of the literature. J Eval Clin Pract. 2014;20(6):767–78.
Belita E, Squires JE, Yost J, Ganann R, Burnett T, Dobbins M. Measures of evidence-informed decision-making competence attributes: a psychometric systematic review. BMC Nurs. 2020;19:44.
Egeland KM, Ruud T, Ogden T, Lindstrom JC, Heiervang KS. Psychometric properties of the Norwegian version of the evidence-based practice attitude scale (EBPAS): to measure implementation readiness. Health Res Policy Syst. 2016;14(1):47.
Rye M, Torres EM, Friborg O, Skre I, Aarons GA. The Evidence-Based Practice Attitude Scale-36 (EBPAS-36): a brief and pragmatic measure of attitudes to evidence-based practice validated in US and Norwegian samples. Implement Sci. 2017;12(1):44.
Grønvik CKU, Ødegård A, Bjørkly S. Factor Analytical Examination of the evidence-based practice beliefs scale: indications of a two-factor structure. scirp.org; 2016.
Moore JL, Friis S, Graham ID, Gundersen ET, Nordvik JE. Reported use of evidence in clinical practice: a survey of rehabilitation practices in Norway. BMC Health Serv Res. 2018;18(1):379.
Titlestad KB, Snibsoer AK, Stromme H, Nortvedt MW, Graverholt B, Espehaug B. Translation, cross-cultural adaption and measurement properties of the evidence-based practice profile. BMC Res Notes. 2017;10(1):44.
McEvoy MP, Williams MT, Olds TS. Development and psychometric testing of a trans-professional evidence-based practice profile questionnaire. Med Teach. 2010;32(9):e373–80.
Hu MY, Wu YN, McEvoy MP, Wang YF, Cong WL, Liu LP, et al. Development and validation of the Chinese version of the evidence-based practice profile questionnaire (EBP2Q). BMC Med Educ. 2020;20(1):280.
Jia Y, Zhuang X, Zhang Y, Meng G, Qin S, Shi WX, et al. Adaptation and validation of the evidence-based Practice Profile Questionnaire (EBP(2)Q) for clinical postgraduates in a Chinese context. BMC Med Educ. 2023;23(1):588.
Panczyk M, Belowska J, Zarzeka A, Samolinski L, Zmuda-Trzebiatowska H, Gotlib J. Validation study of the Polish version of the evidence-based Practice Profile Questionnaire. BMC Med Educ. 2017;17(1):38.
Belowska J, Panczyk M, Zarzeka A, Iwanow L, Cieślak I, Gotlib J. Promoting evidence-based practice - perceived knowledge, behaviours and attitudes of Polish nurses: a cross-sectional validation study. Int J Occup Saf Ergon. 2020;26(2):397–405.
Attitudes, Knowledge, Confidence, and Behavior Related to Evidence-based Practice Among Healthcare Professionals Working in Primary Healthcare: Protocol of a Cross-sectional Survey [Internet]. OSF; 2023. Available from: https://doi.org/10.17605/OSF.IO/428RP
Terwee CB, Prinsen CAC, Chiarotto A, Westerman MJ, Patrick DL, Alonso J, et al. COSMIN methodology for evaluating the content validity of patient-reported outcome measures: a Delphi study. Qual Life Res. 2018;27(5):1159–70.
Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1147–57.
Mokkink LB, de Vet HCW, Prinsen CAC, Patrick DL, Alonso J, Bouter LM, et al. COSMIN Risk of Bias checklist for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1171–9.
Mokkink LB, Prinsen CA, Patrick D, Alonso J, Bouter LM, Vet HCD, et al. COSMIN study design checklist for patient-reported outcome measurement instruments [PDF]. 2019. Available from: https://www.cosmin.nl/wp-content/uploads/COSMIN-study-designing-checklist_final.pdf
Gagnier JJ, Lai J, Mokkink LB, Terwee CB. COSMIN reporting guideline for studies on measurement properties of patient-reported outcome measures. Qual Life Res. 2021;30(8):2197–218.
Bjerk M, Flottorp SA, Pripp AH, Øien H, Hansen TM, Foy R, et al. Tailored implementation of national recommendations on fall prevention among older adults in municipalities in Norway (FALLPREVENT trial): a study protocol for a cluster-randomised trial. Implement Sci. 2024;19(1):5.
Presser S, Couper MP, Lessler JT, Martin E, Martin J, Rothgeb JM, et al. Methods for testing and evaluating survey questions. In: Methods for Testing and Evaluating Survey Questionnaires. Hoboken, NJ: Wiley; 2004. pp. 1–22.
Willis GB. Analysis of the cognitive interview in questionnaire design. Cary: Oxford University Press; 2015.
StataCorp. Stata Statistical Software. 18 ed. College Station, TX: StataCorp; 2023.
Hu L-t, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model. 1999;6(1):1–55.
Terwee CB, Bot SD, de Boer MR, van der Windt DA, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60(1):34–42.
Mokkink LB, Prinsen CA, Patrick DL, Alonso J, Bouter LM, de Vet HC et al. COSMIN methodology for systematic reviews of Patient-Reported Outcome Measures (PROMs) – user manual. 2018. https://www.cosmin.nl/tools/guideline-conducting-systematic-review-outcome-measures/
Brown TA. Confirmatory factor analysis for applied research. New York: Guilford; 2015.
Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. New York, NY: Oxford University Press; 2015.
de Leeuw ED, Hox JJ, Dillman DA. International handbook of survey methodology. New York, NY: Taylor & Francis Group/Lawrence Erlbaum Associates; 2008.
Solís Salazar M. The dilemma of combining positive and negative items in scales. Psicothema. 2015;27(2):192–200.
Bowling A. Research methods in health: investigating health and health services. 4th ed. Maidenhead: Open University Press/McGraw-Hill; 2014.
The authors would like to thank all the participants of this study, and partners in the FALLPREVENT research project.
Open access funding provided by OsloMet - Oslo Metropolitan University. Internal funding was provided by OsloMet. The funding bodies had no role in the design, data collection, data analysis, interpretation of the results, or decision to submit for publication.
Authors and Affiliations
Department of Rehabilitation Science and Health Technology, Faculty of Health Science, Oslo Metropolitan University, Oslo, Norway
Nils Gunnar Landsverk & Therese Brovold
Department of Health and Functioning, Faculty of Health and Social Sciences, Western Norway University of Applied Sciences, Bergen, Norway
Nina Rydland Olsen
Faculty of Health Sciences, OsloMet - Oslo Metropolitan University, Oslo, Norway
Are Hugo Pripp
Department of Welfare and Participation, Faculty of Health and Social Sciences, Western Norway University of Applied Sciences, Bergen, Norway
Kristine Berg Titlestad
NGL, TB, and NRO initiated the study and contributed to the design and planning. NGL managed the data collection (qualitative interviews and the web-based survey) and conducted the data analyses. NGL, TB, NRO, and KBT formed the panel group that developed the interview guide, discussed the results of the interviews in several meetings, and made minor linguistic revisions to the items. AHP assisted in planning the cross-sectional survey, performing statistical analyses, and interpreting the results of the statistical analyses. NGL wrote the manuscript draft, and TB, NRO, and KBT reviewed and revised the text in several rounds. All authors contributed to, reviewed, and approved the final manuscript.
Correspondence to Nils Gunnar Landsverk.
Ethics approval and consent to participate; consent for publication
Not applicable.
The authors declare no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Below is the link to the electronic supplementary material.
The EBP2-N questionnaire
The interview guide
Details on item revisions
Reporting guideline
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Landsverk, N.G., Olsen, N.R., Titlestad, K.B. et al. Adaptation and validation of the evidence-based practice profile (EBP2) questionnaire in a Norwegian primary healthcare setting. BMC Med Educ 24, 841 (2024). https://doi.org/10.1186/s12909-024-05842-z
Received : 09 April 2024
Accepted : 30 July 2024
Published : 06 August 2024
DOI : https://doi.org/10.1186/s12909-024-05842-z
ISSN: 1472-6920
Published on 8.8.2024 in Vol 8 (2024)
Authors of this article:
1 Department of Medical Sciences, Psychiatry, Uppsala University, Uppsala, Sweden
2 Centre for Women's Mental Health During the Reproductive Lifespan (WOMHER), Uppsala University, Uppsala, Sweden
3 Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
Ayesha-Mae Bilal, MSc
Department of Medical Sciences
Uppsala University
Academic Hospital
Entrance 10, Floor 4
Uppsala, 751 85
Phone: 46 737240915
Email: [email protected]
Background: Perinatal depression affects a significant number of women during pregnancy and after birth, and early identification is imperative for timely interventions and improved prognosis. Mobile apps offer the potential to overcome barriers to health care provision and facilitate clinical research. However, little is known about users’ perceptions and acceptability of these apps, particularly digital phenotyping and ecological momentary assessment apps, a relatively novel category of apps and approach to data collection. Understanding users’ concerns and the challenges they experience when using the app will facilitate adoption and continued engagement.
Objective: This qualitative study explores the experiences and attitudes of users of the Mom2B mobile health (mHealth) research app (Uppsala University) during the perinatal period. In particular, we aimed to determine the acceptability of the app and any concerns about providing data through a mobile app.
Methods: Semistructured focus group interviews were conducted digitally in Swedish with 13 groups and a total of 41 participants. Participants had been active users of the Mom2B app for at least 6 weeks and included pregnant and postpartum women, both with and without depression symptomatology apparent in their last screening test. Interviews were recorded, transcribed verbatim, translated to English, and evaluated using inductive thematic analysis.
Results: Four themes were elicited: acceptability of sharing data, motivators and incentives, barriers to task completion, and user experience. Participants also gave suggestions for the improvement of features and user experience.
Conclusions: The study findings suggest that app-based digital phenotyping is a feasible and acceptable method of conducting research and health care delivery among perinatal women. The Mom2B app was perceived as an efficient and practical tool that facilitates engagement in research as well as allows users to monitor their well-being and receive general and personalized information related to the perinatal period. However, this study also highlights the importance of trustworthiness, accessibility, and prompt technical issue resolution in the development of future research apps in cooperation with end users. The study contributes to the growing body of literature on the usability and acceptability of mobile apps for research and ecological momentary assessment and underscores the need for continued research in this area.
Perinatal depression (PND) impacts anywhere from 12% to 20% of women during pregnancy and after birth [ 1 ]. In Sweden, universal screening for PND takes place during a postpartum visit to the children’s health center and is done using the Edinburgh Postnatal Depression Scale (EPDS) [ 2 ]. Although efforts are being made to improve screening in the perinatal period, there are many barriers at both the patient and system level that prevent timely detection and intervention [ 3 - 5 ]. As such, early identification remains a challenge, with one Swedish study reporting that anywhere between 30% and 45% of women do not get screened, with some groups being at greater risk than others of being missed [ 6 ]. Early identification of individuals at risk of depression in the perinatal period is imperative for the implementation of timely and cost-effective interventions and improved prognosis [ 7 ].
Technological advancements in mobile health (mHealth) apps offer the opportunity to overcome barriers in health care provision. In 2019, over 90% of the population in Sweden owned a smartphone [ 8 ]. Their ubiquity and ability to unobtrusively amass large amounts of data in real time regarding the user’s functions and behaviors in their everyday life make them feasible tools to monitor mental health symptoms and identify users at risk of poor well-being.
Data collected can include both passive data from smartphone sensors, logs, and metadata as well as ecological momentary assessments [ 9 ], which are in situ, real-time data collection methods, such as app-based self-report scales. These data can be leveraged to develop social, behavioral, and cognitive phenotypes of individuals, which can subsequently be used to infer the user’s psychological state and other health indicators in a process termed “digital phenotyping” [ 10 ]. Smartphone-based digital phenotyping maintains the objectivity and the temporal and contextual integrity of diagnostically relevant information, as it overcomes the reliance on retrospective self-reporting from patients [ 9 ]. This enables the collection of rich, multivariable, and large-scale data sets that can be combined with advanced machine learning techniques to personalize health care, improve diagnostic validity, and predict disease and treatment outcomes [ 11 ].
Smartphone apps are being increasingly used as tools for digital phenotyping in psychiatry to support diagnosis and screening, as they enable the collection of data from both smartphone sensors and logs as well as subjective self-reports, cognitive tests, and other participation-based tasks. Recent studies have used the data collected from such digital phenotyping apps to apply machine learning methods to predict symptoms of mental illness, such as depression and anxiety [ 12 - 14 ], bipolar disorder [ 15 , 16 ], psychosis [ 17 ], and schizophrenia [ 18 , 19 ], and have focused on various vulnerable groups, such as veterans [ 20 ], students [ 21 ], as well as women in the perinatal period [ 22 - 25 ].
However, there are significant practical, social, and ethical challenges that impact users’ acceptance of and continued engagement with the app and raise concerns regarding privacy and data security [ 26 , 27 ]. Understanding the users’ diverse needs and priorities will enable us to alleviate these challenges and provide appropriate incentives. Exploring the issue of low engagement is especially relevant in populations that are experiencing depression, as its symptoms can diminish the influence of incentives [ 28 ]. These issues can lead to missing data, which can create biases in and reduce the accuracy of prediction models that can result from these data [ 17 ]. Although a few studies have investigated the feasibility and user experience of digital phenotyping apps [ 20 , 24 , 29 ], continued research is needed to explore user perspectives in more diverse populations and with more complex apps. Doing so will enable us to incorporate this knowledge in the initial stages of the app and study design process to enhance the acceptability and feasibility of such apps.
Mom2B (Uppsala University) is a smartphone app–based research study that collects digital phenotyping data from research participants to ultimately develop and evaluate prediction models for PND [ 30 ]. Data collected include active data, in the form of in-app self-report surveys and voice recording tasks, and passive data, that is, data collected from smartphone sensors and logs regarding the user’s mobility and sleep patterns, internet and smartphone use, and social media activity. Privacy protection measures are in place to ensure that GPS data concern only relative movement, not actual location, and that social media data include only the frequency of activity, not actual content. Depression symptoms are assessed as the outcome measure using the EPDS at various time points throughout the perinatal period. The array of data collected from the app is subsequently used to develop and evaluate prediction models for PND [ 31 ]. The models are not used within the app itself but can be evaluated in clinical settings in future studies.
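Purely as an illustration of the active/passive split and the privacy constraints described above (the field names below are invented for this sketch and are not the Mom2B schema), the two kinds of records might be modeled like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActiveRecord:
    """Participant-initiated data: in-app self-report surveys and voice tasks."""
    user_id: str
    perinatal_week: int
    epds_score: Optional[int]     # depression screen, when a survey was due
    voice_task_id: Optional[str]  # reference to a completed recording task

@dataclass
class PassiveRecord:
    """Sensor/log data, aggregated so no raw location or content is stored."""
    user_id: str
    date: str                  # aggregation window (ISO date)
    relative_movement: float   # mobility as relative movement, never raw GPS positions
    social_media_events: int   # frequency of activity only, never message content
    screen_time_minutes: int   # smartphone use derived from device logs
```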
Participants can enable or disable the collection of any type of data as part of their consent preferences at any point in the study. Apart from the surveys and voice recording tasks, the main interactive features of the app are the weekly information reports and the statistical graphs. More information about these features, as well as an overview of the main pages, content, and features of the Mom2B app, is presented in Figure 1.
This study aimed to explore the experiences and attitudes of Mom2B app users during the perinatal period. We particularly sought to investigate the acceptability of the app and participants’ concerns about providing data in this way.
To explore users’ experiences with and attitudes toward using the Mom2B app, a qualitative focus group study was conducted. Focus group interviews were chosen to gather a larger amount of information with limited time resources [ 32 ]. We used inductive thematic analysis as described by Braun and Clarke [ 33 ] to analyze the data. The 32-item COREQ (Consolidated Criteria for Reporting Qualitative Studies) checklist [ 34 ] was followed as a guide to ensure the quality of reporting.
The study was approved by the Swedish Ethical Review Authority (2020/06645), and all participants provided informed consent. Participants were provided with participant information, where they were informed about their right to withdraw from the study at any time. The option to withdraw anytime was emphasized again at the start of the interviews. All interviews were kept confidential, with transcripts pseudoanonymized to remove any identifiable details. Additionally, to safeguard against potential identification, interview transcripts were not uploaded to public data repositories, and the data remained within the research team. Participants did not receive any compensation for their involvement.
Users of the Mom2B app were recruited from the existing Mom2B cohort between December 2021 and May 2022. Participants had to have been active users of the app for at least 6 weeks in the perinatal period they were recruited to represent and must have completed at least 1 of their last 3 EPDS surveys. Women who had not updated their delivery date post partum, had withdrawn from the study, or had not consented to being contacted for participation in substudies (like this one) were excluded. To ensure a representative participant group and capture perspectives on the full scope of the app, including features only available to women exhibiting depression symptoms or to women in the pregnancy or postpartum period, we elected to use purposive random sampling. We stratified the cohort into 4 categories based on participants’ perinatal status (pregnant or postpartum at the time of recruitment) and whether they reported recently experiencing depression symptoms (women were considered depressed if their latest EPDS score on the app was 12 or above and not depressed if the score was 10 or below).
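A minimal sketch of the stratification rule just described (the function and labels are ours, not the study’s code): EPDS ≥ 12 counts as symptomatic, ≤ 10 as asymptomatic, and a score of 11 matches neither group.

```python
from typing import Optional

def stratum(perinatal_status: str, latest_epds: int) -> Optional[str]:
    """Assign one of the four recruitment strata (pregnant/postpartum crossed
    with symptom status), or None when the score falls between the cut-offs."""
    if latest_epds >= 12:
        return f"{perinatal_status}, depressive symptoms present"
    if latest_epds <= 10:
        return f"{perinatal_status}, no depressive symptoms"
    return None  # EPDS of 11: neither 'depressed' nor 'not depressed'

# e.g. stratum("postpartum", 4) -> "postpartum, no depressive symptoms"
```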
We aimed for focus groups of 5 to 6 participants, with an equal distribution of women from all 4 stratification categories in each focus group. In total, 65 women consented to participate in the study; however, 24 dropped out before the interview, most often because of reasons related to their newborn infant. We continued to recruit participants to new focus groups until we agreed that information saturation had been reached [ 35 ].
A total of 41 participants were interviewed across 13 focus groups ranging in size from 2 to 5; 1 participant was interviewed individually because the other expected members of that group dropped out. Participants’ duration of app use ranged from 16 to 130 weeks. Additional participant characteristics are detailed in Tables 1 and 2.
Table 1. Participant characteristics (N=41).

| Characteristic | Values, n (%) |
| --- | --- |
| Age group (years) | |
| 18-29 | 3 (7) |
| 30-34 | 20 (49) |
| 35-45 | 18 (44) |
| Country | |
| Sweden | 40 (98) |
| Other Nordic country | 1 (2) |
| Education | |
| Postsecondary education | 35 (85) |
| Secondary or lower | 5 (12) |
| Unknown | 1 (2) |
| Employment | |
| Full-time | 24 (59) |
| Part-time | 3 (7) |
| Student | 2 (5) |
| Unknown | 12 (29) |
| Gravidity | |
| Primigravida | 18 (44) |
| Multigravida | 21 (51) |
| Unknown | 2 (5) |
Table 2. Stratification categories and EPDS scores.

| Stratification category | Values, n (%) | EPDS score, mean (SD) | EPDS score, range |
| --- | --- | --- | --- |
| Depressive symptoms present | 7 (17) | 13.8 (2.1) | 12-17 |
| No depressive symptoms | 6 (15) | 3 (2.3) | 1-7 |
| Depressive symptoms present | 7 (17) | 14.1 (2) | 12-18 |
| No depressive symptoms | 21 (51) | 3.8 (2.4) | 0-8 |
a EPDS: Edinburgh Postnatal Depression Scale.
Participants were recruited via email invitation and signed consent forms digitally. They could then select the focus group that best suited their availability and were sent 2 reminder emails before the interview with a brief outline of the interview topics, giving them time to reflect and gather their thoughts beforehand. The first author (AMB, female) served as moderator, recorded the sessions, and took notes, while the second author (KP, female) conducted the interviews in Swedish over video conference.
An interview guide ( Multimedia Appendix 1 ) with semistructured questions was developed by the research team and used to prompt topics of discussion in the focus groups. The first 2 focus group interviews were treated as pretests to allow the research team to make any necessary revisions. Based on these interviews, minor changes were made to the recruitment procedures and to the wording of some questions in the interview guide to improve clarity. The pretests were judged to contribute relevant data and were included in the analysis. Participants were first given background information on the Mom2B app–based research study and an overview of the purpose of this interview study. Finally, participants were debriefed about how their answers would be used and where to reach out with questions or concerns. Interviews lasted 20-50 minutes and were recorded, and the audio recordings were submitted for transcription.
We carefully considered the research question, its focus on capturing the voices and perspectives of participants, and the relative novelty of user experience research on digital phenotyping apps, and deemed inductive thematic analysis the most appropriate approach. We analyzed the English-translated transcripts following Braun and Clarke’s model for reflexive thematic analysis, using a “codebook” style of analysis [ 33 , 36 ]. The analysis was performed in NVivo (version 13; Lumivero) by 3 of the authors.
The first author (AMB) read and reread all transcripts to become familiar with the data and then systematically analyzed them to generate initial codes using an open, semantic approach, as we were interested in exploring users’ stated opinions and impressions. The first and last authors (AMB and CÖ) frequently discussed their different perspectives and revised the codes iteratively, both while transcripts were being coded and after coding was completed.
Preliminary themes were then generated from meaningful patterns emerging within the coded data. Themes were initially largely categorical to allow more intuitive sorting, and many codes were sorted into more than 1 theme at this point. The second author (KP) independently evaluated the codes to assess their validity and identify themes, and then all 3 authors (AMB, KP, and CÖ) came together to discuss how the codes could be refined and to further develop the themes so that they were sufficiently distinct, made sense in the context of the data set, and truly reflected the crux of the focus group discussions. Finally, themes and subthemes were defined in an iterative process.
In total, 4 themes were identified and are described below (in no particular order) with subthemes and supporting quotations. Figure 2 gives an overview of the identified themes and subthemes.
Given the large amount of data collected, much of it sensitive health and personal information, users’ perceptions of sharing those data are an important consideration.
Participants had mixed opinions about the different types of data they had to consent to. Some reported initially feeling uncomfortable about the app accessing their social media or GPS data or having access to their medical records. Others felt that the data collected were reasonable, considering that they were part of a research study. However, participants were reassured by their control over what data they chose to share, as well as by the understanding that social media and GPS data tracked activity and mobility rather than content and location.
It would have been a deal-breaker...[but] it doesn’t keep track of what I write in social media, but just that I interact, how much I, for example, liked something... [Participant 41, focus group 14]
The app being affiliated with Uppsala University and the data being collected for research purposes and being handled by researchers were important mitigating factors for users’ willingness to share sensitive and personal data. Participants trusted how their data are stored and used as well as the information they get from the app.
I wouldn’t have agreed to [the consent forms] if it wasn’t a research study, or that it wasn’t from a university or the healthcare system or something. I wouldn’t have agreed to this if it was the private sector. That made me also trust that [my data] was handled correctly... [Participant 41, focus group 14]
Closely related to participants’ perceptions of invasiveness and trust was their expressed desire to know more about why the data were collected and what they were used for. This was more a matter of curiosity than concern; however, it may still affect their motivation to submit data.
These audio recordings, for example, in what way can it be used in research? I’m a little curious about how you can use it. [Participant 5, focus group 1]
I would have liked to see even more information about it, like, how to use the results and how it can benefit others. [Participant 27, focus group 9]
In some cases, the information participants wanted was, in fact, available in the participant information and consent forms; however, it appeared that users’ perceptions of trust in the study had led them to skim through these forms and miss relevant information.
I didn’t read everything in detail, but kind of felt that in a study conducted by a serious group, I trust that the information will be used in a way that is safe for me...I probably skimmed most of it. [Participant 2, focus group 1]
Participants expressed concern about whether the data they provided, particularly on mood-related surveys, accurately reflected the truth. Many participants felt that poor scores on mood surveys reflected the effects of social isolation during the COVID-19 pandemic, as well as preexisting mental health conditions, rather than being pregnant or having given birth.
For me, my mood was more based on the fact that I was kind of trapped, because I wasn’t allowed to go to work, and it was a bit misleading because the pregnancy itself wasn’t a problem, but it was more the circumstance... [Participant 4, focus group 1]
I have a background of fatigue syndrome [a Sweden-specific diagnosis equivalent to burnout] and a neuropsychiatric disorder, ADHD combined, which makes my mood automatically fluctuate and maybe is worsened in certain situations in life, just like childbirth...the research is not adapted to people with for example anxiety problems or a neuropsychiatric diagnosis, and then it becomes misleading in the research because it shows that I’m suffering from for example depression, although I’m not. [Participant 40, focus group 13]
When informed that the research team accounts for extraneous factors that impact their mood, users reiterated their desire to know more about how the data are used. Issues with accurate tracking of physical activity patterns were another concern for users. In general, participants’ uncertainties about the quality of their data led to hesitations about continuing participation.
...you get a little worried [that you don’t actually contribute] if you think that “I’m collecting a lot of data here, but [I] don’t feel that it might be right.” [Participant 27, focus group 9]
Motivators and incentives refer to factors that drive users to join the study and continue participating.
Participants unanimously described their initial motivation for downloading the app as the desire to contribute to research, particularly on women’s health. Participants felt confident in the value and credibility of the findings that would result from their participation, which also made them feel good about themselves.
The reason why I downloaded the app was precisely to answer these research questions and to be part of the research study itself, so for me it was just to sort of answer the questions. [Participant 1, focus group 1]
[This app] is not just trying to sell us products and buy more, but this really has value on a higher level, which hopefully can help others. [Participant 33, focus group 11]
I have a sister who got postpartum depression, and so I thought it was kind of good to be part of a study like that and to keep an eye on yourself as well. [Participant 9, focus group 2]
Since the app is continuously present on the phone and sends notifications when new surveys arrive, participants reflected on how that afforded them flexibility and convenience, especially for new mothers or participants with other children. They found it less effortful than submitting data by other means of collection, such as email, paper, or in-person surveys.
It feels more accessible than getting a link in an email that you have to open. But the surveys are there when you open the app, and then there is a reminder that “you have a survey to answer in the app.” [Participant 29, focus group 10]
It’s nice to be able to answer when you can and do it from home, and not have to set aside so much time each time, but you can sort of start and then pause if you don’t get it done and then it stays. [Participant 36, focus group 12]
Furthermore, participants reported feeling more comfortable and answering more honestly on PND questionnaires when completed in the app rather than in person with the midwife.
[My midwife] is not really judgmental, but...I would find it difficult to answer anything other than very positively to a survey that you fill in while someone is staring at you. [Participant 6, focus group 2]
One participant described the experience as an “information exchange,” as users benefitted from both general and personalized information and support for their well-being in exchange for providing data. Participants valued getting a statistical overview of their mood and activity patterns based on their data over time, as it enabled them to reflect on their well-being and how to improve it. It incentivized participants to respond to surveys more seriously, knowing that it helps them as much as it would help the research team.
It has been valuable to me both during the pregnancy and after because I’ve had tendencies towards postpartum depression, and I was also a bit vulnerable before the birth...it has been interesting to be able to see [your statistics] and use it in your self-analysis... [Participant 12, focus group 3]
It was very clear to me how [my mood] was connected with sleep and so on, then it felt easy, getting it so black and white, it made it easier to sort of plan or prioritize...and keep an eye on my mood a bit. [Participant 14, focus group 4]
Furthermore, answering questions about mood prompted users to reflect on their mental well-being and check in with themselves regularly. It was particularly constructive for new mothers and participants with other children to be reminded to self-reflect. In fact, some participants disliked that these surveys became less frequent after birth and would have liked to continue answering them regularly. Moreover, seeing surveys concerning mental health helped normalize and reduce the stigma surrounding poor well-being. Participants felt that “because [the researchers] ask this, there must be others too” (participant 33, focus group 11), and just having the app made them feel less alone through their perinatal journey.
Most of all, users valued the notification they received when their scores on the EPDS were high, as it forced them to acknowledge and take their symptoms seriously and to consider seeking help.
I don’t think I understood myself that I felt as bad as I really did...so for me it has been the absolute best thing about this app that you get detected, so there was still a purpose to follow how you feel... [Participant 21, focus group 7]
I was diagnosed with depression during pregnancy...even starting to seek treatment for it at all, it was a combination of the app signaling it, and then you were given the opportunity to talk to someone on the app...the person I spoke to on the Mom2B app said “get in touch with your midwife because you’re not feeling well,” so that led me to seek care... [Participant 39, focus group 13]
However, some participants felt that the 5-point scale for the well-being surveys was unable to capture the nuances in their moods, and as such, they felt they incorrectly received notifications to seek help.
In addition, participants found the weekly information reports fun to read and educational, describing them as “reliable” and “factual.” However, there was general dissatisfaction with how brief they were and a desire for them to be more detailed and informative. As such, users did not consider Mom2B their primary source of general perinatal information but would have preferred to, so as to “have everything in one place,” especially since they trusted it more than a commercial app. The app also allowed participants to keep track of what perinatal week they were in, which was otherwise confusing for some. Some participants would have liked more practical tips and advice in the reports, as well as relevant content for those with multiple pregnancies.
It’s not as comprehensive as a lot of other apps are, so if I want to know something about the baby’s development, or what’s happening in that week of pregnancy, I’ll go to some other app. [Participant 31, focus group 10]
The completeness of the collected data is a vital aspect of data quality. Mom2B participants had to complete surveys and voice recordings regularly, and our results highlight 4 main reasons that hindered them from doing so.
Women described abandoning surveys due to either not wanting to answer or not knowing or remembering the information. Most of these responses related to recording weight, and it was clear that women found the task of weighing themselves distressing and wished to avoid it.
I don’t know my weight, and don’t want to know my weight, it’s not good for me to know my weight, and then I can’t answer them...it would have been nice to just write “I don’t know” instead of not being able to answer that survey... [Participant 14, focus group 4]
Participants found it difficult to complete voice recording tasks, as they struggled to find the time or a quiet environment. This was especially the case for women with a newborn infant or with other young children at home and was exacerbated by the stay-at-home policies imposed due to the COVID-19 pandemic.
Participants experienced confusion with completing certain tasks that they felt lacked clear instructions. One participant described uncertainty in how fast to speak or what tone to use when recording voice. Another described how it “wasn’t entirely clear when to use periods or commas when entering weight” (participant 26, focus group 9).
With frequently recurring surveys in particular, such as the weekly well-being checks, some participants reported the monotony of the questions inhibited them from reflecting on how they really felt.
I really tried to stop and think “how am I really feeling? How has the last week been?” it was a great way to pause, but I’m afraid that somewhere subconsciously I still answered a little habitually...a week goes by so fast, so it feels as if you have just answered them. [Participant 15, focus group 4]
Participants agreed that the number of surveys they need to complete in any given week can feel overwhelming and daunting. One participant reported feeling “constantly behind.” On the other hand, participants also found the individual surveys to be short and easy to answer and appreciated that the surveys were categorized by priority so they knew what is most important to answer.
Users discussed usability, customizability, and accessibility as impactful determinants of their user experience.
Usability refers to how easily and frictionlessly the user interacts with the app and uses its features. For the most part, participants described the app as user-friendly and “easy to understand.” Although some felt the interface was a little unsophisticated and boring, others felt that its simplicity made the app feel secure and serious.
I don’t feel that the app is really bad, but when time, money and resources are available, you can improve it. [Participant 21, focus group 7]
Participants often felt that the app provided inadequate guidance and information for performing tasks and resolving common issues. Uncertainty about the length and expiry time of tasks often led them to hesitate to start a task or to miss it altogether.
...it’s good to have some kind of time indication...so that you can think “okay, I have two days, I might not be able to do it right now, but I’ll still try to do it within two days.” [Participant 1, focus group 1]
Insufficient information may have also affected the discoverability of features and content in the app, as several users were unaware they could adjust their labor date, view statistics based on their data, and continue the study after birth or if they had a second child.
There are several features in the app that I only noticed now that I looked through it a little more closely to be part of this interview, like statistics...in other apps it’s a bit more smooth-flowing, it’s hard to miss functions. [Participant 6, focus group 2]
Participants would have liked more and clearer information to be available on resolving technical issues and frequently experienced problems, to avoid the inconvenience of waiting for email responses from the support team. One participant described switching to a new phone and being unable to log back into the app for 2 months while waiting for a response from technical support:
There are quite a lot of people who change their phones often, so it’s a very unnecessary omission [of information] when it’s so common. [Participant 20, focus group 6]
Technical issues were a major source of friction, and it was apparent that they affected participants’ motivations to continue, especially when the issue was related to providing data. Social media and movement data were not recorded correctly for most participants, which lowered the incentivizing impact of the statistics graphs. Users often had difficulties logging back into their accounts, for example, if they switched to a new phone, and found that their previous activity had not been saved on the app.
I was in contact with you [the support team], and you said that “just ignore that [tasks] you have already answered, because that data is sent in, just continue to answer the new ones that come in.” But then there was so much now, and it wasn’t really possible to tell them apart. [Participant 36, focus group 12]
Other issues participants described as frustrating were difficulties logging in, the app draining the battery or crashing, and glitches with surveys. Some participants experienced incorrectly triggered notifications to answer surveys; however, most participants were content with the frequency of notifications and considered them necessary reminders.
Customizability refers to enabling users to personalize the app according to their needs and priorities. Participants with multiple pregnancies or health conditions wished for information from the app that felt more relevant to them. Participants also wanted the option to connect the app with smartwatches or pedometers to track movement better. The task of recording weight was particularly divisive. Most found it undesirable and expressed the need for opt-out response options such as “Prefer not to answer,” so that they could dismiss such surveys rather than simply abandon them. Others wanted to track their weight more often and proposed being able to record it manually as a solution.
In order for it not to be triggering and if you yourself want more statistics, you could make it so that you could add them more often yourself. [Participant 12, focus group 3]
I felt like “God, I can’t even send [the survey] away...and there isn’t an alternative”... [Participant 32, focus group 10]
Accessibility refers to how comfortably users with different needs and abilities are able to use the app and its features. One participant described the font as being too small to read comfortably. Two others commented on the complexity of the text for people with reading disabilities and suggested having the option to choose simplified Swedish.
You [should be able to] choose whether you want simplified text or not, because there are a lot of people who have hidden dyslexia, and may not understand all the concepts. [Participant 40, focus group 13]
One participant also noted how maternity clothes often lack pockets to carry one’s phone in, which makes it problematic to share movement data:
even if I had [been physically active], or I have a job where I stand and walk a lot at times, it kind of didn’t show up in the app at all because the phone was on a bench. [Participant 28, focus group 9]
App-based digital phenotyping is a rapidly growing method in health care research, yet few studies have evaluated user experience and the barriers to and facilitators of user engagement [ 37 ]. This study explored pregnant and postpartum women’s views of and experiences with the Mom2B app, including how they perceived various features and the factors that affected their continued use of the app. Overall, participants deemed app-based digital phenotyping an acceptable and feasible method of sharing data for research, especially longitudinal research, as it afforded them convenience and flexibility while also allowing them to benefit personally from the data they shared by monitoring their well-being. Our findings highlight a duality in how users perceive the Mom2B app: as both a tool for research and an mHealth app. While data collection for research is the primary function of the app and plays a bigger role in the initial acquisition of users, the health features are what motivate long-term retention and continued engagement, which are essential for minimizing the risk of missing data.
It is important to consider the cultural context of this study, which focused on the Swedish population. Sweden, like the other Nordic countries, has one of the highest rates of smartphone penetration in the world across all age groups, as well as high rates of digital health care practices among the general population [ 8 ]. This makes smartphone-based digital phenotyping exceptionally efficient in this population, given how commonplace the technology and its use in health care activities are. Trust in and engagement with research, as well as openness to technological developments, also make the Swedish population particularly receptive to implementing such technologies [ 38 ], although the barriers and challenges to usability and continued engagement are not very different from those in other populations.
Transparency, as a characteristic of digital phenotyping research, was valued by all participants. They evaluated this study positively in that regard but expressed a desire to better understand why each type of data was needed and how exactly it would be used in the research, as this related directly to their willingness to consent to sharing different types of personal data. Participants also appreciated the control they had over deciding which types of data to share, which is consistent with studies showing that users prefer dynamic and flexible consent models that give them more control [ 39 ].
These findings also align with previous research [ 40 , 41 ] showing that participants felt more willing to share data with, and use, an app developed by university-affiliated researchers, as this raised their expectations that their data would be protected, in comparison with commercially developed apps. Moreover, despite concerns regarding the sensitive and personal nature of the data requested from participants, studies have shown that they are generally motivated to consent to sharing data for the purpose of research and improving health care provision [ 24 , 39 ]. In fact, the majority of the participants in this study were motivated primarily by the desire to support the research effort and possibly help other women. While these findings agree with previous studies conducted in various countries, it is also important to note the exceptionally high public trust in and commitment to research in Sweden. A 2022 report by the Swedish nonprofit organization Vetenskap & Allmänhet (Public & Science) shows that 89% of women in Sweden have high confidence in researchers and universities and believe it is important to be involved in research [ 38 ]. However, for continued engagement with the app, more direct and personal incentives are important for users [ 20 ]. Three features particularly incentivized our participants to continue sharing data and engaging with the app: the statistics and the high EPDS score alert, which enabled users to self-monitor their well-being throughout the perinatal period, and the weekly reports, which participants found enjoyable, interesting, and educational.
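At its core, the high EPDS score alert is a threshold rule applied to survey scores. As a minimal sketch only, not the Mom2B implementation, the following Python snippet shows how such an alert could be derived from EPDS responses; the cutoff of 13 and all function names are illustrative assumptions.

```python
# Illustrative sketch of a score-based alert; not the Mom2B implementation.
# Assumes 10 EPDS items scored 0-3 and a hypothetical alert cutoff of 13.

EPDS_ALERT_CUTOFF = 13  # assumed for illustration; the study's cutoff may differ

def epds_total(item_scores: list[int]) -> int:
    """Sum the 10 EPDS item scores (each 0-3) into a total of 0-30."""
    if len(item_scores) != 10 or any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("EPDS expects 10 items scored 0-3")
    return sum(item_scores)

def should_alert(item_scores: list[int]) -> bool:
    """Return True when the total score reaches the alert threshold."""
    return epds_total(item_scores) >= EPDS_ALERT_CUTOFF

# Example: a response pattern that would trigger the in-app alert.
print(should_alert([2, 1, 2, 1, 2, 1, 2, 1, 2, 1]))  # total 15 -> True
```

The design point is that such an alert, like the statistics graphs, turns data collected for research into immediate personal value for the participant.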
Fundamentally, it appears that women are motivated by a sense of social responsibility, concern for their health, and curiosity and interest. Intrinsically motivated behaviors have been described in the literature as generating persistence and long-term stability in behavior [ 20 , 42 ], which is especially valuable in longitudinal studies like Mom2B. Furthermore, self-monitoring mechanisms in mHealth apps have been shown to motivate long-term use of such tools because of the value of understanding one’s own psychological well-being [ 20 , 24 , 43 ]. These findings emphasize the importance of designing features that provide clear personal benefits to the user to increase the perceived usefulness of the app. Given the perceived duality of this app, it is important to keep in mind that although its primary function is to conduct research and acquire data from users, it is the personal benefits users get from the app that largely motivate them to share data and engage with it over time. Engagement with the app is needed for the continuous collection of passive data, as long periods of inactivity can compromise or stop passive data collection altogether [ 44 ].
Participants in this study offered several suggestions on how the Mom2B app could be improved, as the general preference was for a single app from a trustworthy source that met all expectations in terms of features, instead of having to use other commercially developed apps that participants considered less reliable. Weekly information reports should be at least on par with commercial apps in terms of the detail and length of information and be customizable for women experiencing multiple pregnancies. Customizability of the app is an important area of improvement, as giving users a sense of control over the app directly affects the perceived ease of use, efficiency, and user satisfaction [ 29 , 41 , 45 ]. One feature users wanted more control over was weight tracking, which received mixed reactions. Enabling users to additionally input weight manually would increase interaction and engagement among those who wish to track it more often. On the other hand, for those who find tracking weight undesirable, enabling them to skip weight questions would minimize frustration and the perceived task load caused by unwanted lingering surveys. In general, task load and survey repetition should be carefully calibrated in mobile research apps, as too many surveys accumulating after brief periods of inactivity overwhelmed participants and deterred participation. Giving users alternative response options that allow them to skip certain sensitive surveys and remove them from their task list can reduce the perceived task load and improve the user experience, as the sketch below illustrates.
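One way to realize the opt-out recommendation is to treat “Prefer not to answer” as a valid response that completes a survey rather than leaving it pending. The following Python sketch is a hypothetical illustration of that idea; the data model, field names, and comma handling are assumptions, not the app’s actual code.

```python
# Hypothetical data model for an opt-out response option;
# names and structure are illustrative, not the Mom2B implementation.
from dataclasses import dataclass
from typing import Optional

PREFER_NOT_TO_ANSWER = "prefer_not_to_answer"

@dataclass
class SurveyResponse:
    survey_id: str
    value: Optional[float]   # None when the participant opts out
    opted_out: bool = False

def submit_weight(survey_id: str, raw: str) -> SurveyResponse:
    """Accept either a weight entry or an explicit opt-out.

    Both paths complete the survey and remove it from the task list,
    instead of leaving it to linger as an abandoned, unanswered item.
    """
    if raw == PREFER_NOT_TO_ANSWER:
        return SurveyResponse(survey_id, value=None, opted_out=True)
    # Accept both decimal commas and periods, a source of confusion
    # reported by one participant.
    return SurveyResponse(survey_id, value=float(raw.replace(",", ".")))

print(submit_weight("weight-w32", "68,4"))                # normal answer
print(submit_weight("weight-w32", PREFER_NOT_TO_ANSWER))  # opt-out, still completed
```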
Notifications were an important feature for participants and were evaluated as sufficient and facilitative. Participants were notified only when new surveys or weekly reports became available in the app, which was quite frequent. As such, the Mom2B app does not send reminder notifications, as these may risk being perceived as bothersome. Finding the right balance for notification frequency can be complicated, which is an important argument for customizability, that is, enabling users to alter notification preferences within the app [ 29 , 46 ]. Another issue is that of technical problems and system errors, which can decrease the perceived ease of use and the motivation to continue using the app. It is important for users that system errors are appropriately explained and that a solution is available without much effort, or that support for technical issues is easy to access and resolves the issue quickly [ 29 ].
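Altering notification preferences can be modeled as a per-user settings record that is consulted before each push. The sketch below is a hypothetical illustration of this customizability argument; the preference fields and defaults are assumptions, not features of the Mom2B app.

```python
# Hypothetical per-user notification preferences; illustrative only.
from dataclasses import dataclass

@dataclass
class NotificationPrefs:
    new_surveys: bool = True    # notify when new surveys become available
    weekly_report: bool = True  # notify when the weekly report is ready
    reminders: bool = False     # opt-in reminders for unanswered surveys

def pending_notifications(prefs: NotificationPrefs, has_new_survey: bool,
                          report_ready: bool, overdue: bool) -> list[str]:
    """Collect only the notifications this user has agreed to receive."""
    out = []
    if prefs.new_surveys and has_new_survey:
        out.append("New surveys are available")
    if prefs.weekly_report and report_ready:
        out.append("Your weekly report is ready")
    if prefs.reminders and overdue:
        out.append("You have unanswered surveys")
    return out

print(pending_notifications(NotificationPrefs(), True, False, True))
# ['New surveys are available']  (reminders stay off unless the user opts in)
```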
According to our findings, most participants found it difficult to make voice recordings after birth, when the infant’s needs and frequent crying can be a barrier to recording. Implementing accessible designs is especially necessary for user groups such as women in the perinatal period because of the various barriers and limitations they may experience. The frequent lack of pockets in maternity clothing was another limitation participants experienced, preventing them from accurately recording and tracking their movement. Designing for inclusivity can be facilitated by testing designs with users or including them in the design process. Our findings emphasize the importance of user testing the app at an early stage, as this would also refine its overall usability and improve the acceptability of the app and the study.
This study included a large number of women interviewed about their perspectives on the Mom2B app as users. We made an effort to recruit a participant group that was representative of women who had used the app in both the pregnancy and the postpartum period, as well as of both those who had and those who had not experienced symptoms of depression while using the app. This was done to ensure that we captured user perspectives and experiences reflecting the full scope of app features, some of which may not have been available, for example, to those who did not display depression symptoms or did not participate during pregnancy, and to ensure that women displaying depressive symptoms were sufficiently represented. Not surprisingly, women with experiences of depression were a relatively small group due to a higher rate of participation cancellation; however, purposive sampling may have prevented this group from being lost to attrition entirely.
Furthermore, although focus group interviews are traditionally conducted in person, findings from recent studies substantiate that output, engagement, and participant satisfaction are not affected by engaging remotely [ 47 , 48 ]. In fact, remote interviews were especially suited for our population, as they allowed us to recruit from a more diverse pool of participants in Sweden and build a more representative sample. Face-to-face participation would have been challenging for our participants, as most were either in the late stages of pregnancy or newly delivered mothers, and the inconvenience of unnecessarily traveling even short distances would have likely led to far more dropouts.
Ultimately, the number of participants in most focus groups was still smaller than what is generally considered ideal (5-8 participants) [ 32 ]. However, this proved advantageous for this particular group, as it afforded each participant more time to share and discuss their experiences given their limited availability. Nevertheless, it is possible that the smaller focus groups did not achieve the same quality of discussion as larger groups. One focus group interview turned into an individual interview because the other participants dropped out, which resulted in a lack of group discussion. We decided to include this interview anyway, as the participant shared unique insights on their experience that we considered important.
Moreover, since an open invitation to participate was sent to Mom2B participants, we considered the possibility that the interviewees may have predominantly been more technologically savvy and frequent users; however, based on our conversations with interview participants, this did not seem to be the case. Interviews were conducted by a female research assistant, which was considered important to allow the participating women to feel at ease and to reduce participant bias.
One important limitation to consider is the lack of usability testing in this study. Having participants actively use and explore the Mom2B app during the interview, as well as giving them tasks to perform, such as answering a survey, checking their monthly activity, or reading the most recent weekly report, might have enhanced the detail and specificity of their feedback and triggered memories of past experiences. Future studies are encouraged to combine usability testing with focus group interviews when exploring user experiences of such apps.
This study adds to the limited literature examining user experiences and attitudes toward digital phenotyping apps in the area of mental health research, particularly in the perinatal period. Participants shared their insights on barriers and facilitators of app use and study participation as well as suggestions for the improvement of features and user experience. These results serve as a foundation for app developers and health care researchers in creating apps for research and contribute to our understanding of the opportunities and challenges in designing and implementing apps to support longitudinal research using digital phenotyping.
The authors thank all the participants for their time and contribution. The study was supported by grants from Uppsala Region, the Swedish Association of Local Authorities and Regions, the Swedish Research Council (grant 2020-01965), as well as the Swedish Brain Foundation, the Swedish Medical Association (Söderström-Königska, grant SLS-940670), and Uppsala University WOMHER School. The authors would also like to acknowledge the Swedish Network for National Clinical Studies for their support with the dissemination of study information as well as the National Academic Infrastructure for Supercomputing in Sweden and the Swedish National Infrastructure for Computing for providing resources that enabled the data handling for the Mom2B and associated studies.
The data sets generated and analyzed during this study are available from the corresponding author on reasonable request.
FCP and AS conceived of the study, and FCP and CÖ designed the interview guide. KP interviewed participants with the help of AMB. AMB analyzed and interpreted data, with the assistance of CÖ and KP. AMB drafted the final manuscript, and all authors participated in critical revisions of the manuscript. All authors have read and approved the final manuscript.
None declared.
Interview guide (English translation).
Abbreviations:
COREQ: Consolidated Criteria for Reporting Qualitative Studies
EPDS: Edinburgh Postnatal Depression Scale
mHealth: mobile health
PND: perinatal depression
Edited by A Mavragani; submitted 11.10.23; peer-reviewed by C Barnum, J Brooke; comments to author 08.12.23; revised version received 27.02.24; accepted 26.05.24; published 08.08.24.
©Ayesha-Mae Bilal, Konstantina Pagoni, Stavros I Iliadis, Fotios C Papadopoulos, Alkistis Skalkidou, Caisa Öster. Originally published in JMIR Formative Research (https://formative.jmir.org), 08.08.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.
Published on 7.8.2024 in Vol 26 (2024)

Author affiliations:
1 School of Nursing, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
2 Hong Kong Lutheran Social Service, Kowloon, China (Hong Kong)
3 Department of Health, The Government of the Hong Kong Special Administrative Region, Hong Kong Island, China (Hong Kong)

Corresponding author:
Arkers Kwan Ching Wong, PhD
School of Nursing
The Hong Kong Polytechnic University
China (Hong Kong)
Phone: 852 34003805
Email: [email protected]
Background: The use of wearable monitoring devices (WMDs), such as smartwatches, is advancing support and care for community-dwelling older adults across the globe. Despite existing evidence of the importance of WMDs in preventing problems and promoting health, significant concerns remain about the decline in use after a period of time, which warrant an understanding of how older adults experience the devices.
Objective: This study aims to explore and describe the experiences of community-dwelling older adults after receiving our interventional program, in which they used a smartwatch with support from community health workers, nurses, and social workers. We explored the challenges that they experienced while using the device, the perceived benefits, and strategies to promote their sustained use of the device.
Methods: We used a qualitative descriptive approach in this study. Older adults who had taken part in an interventional study involving the use of smartwatches and who were receiving regular health and social support were invited to participate in focus group discussions at the end of the trial. Purposive sampling was used to recruit potential participants. Older adults who agreed to participate were assigned to focus groups based on their community. The focus group discussions were facilitated and moderated by 2 members of the research team. All discussions were recorded and transcribed verbatim. We used the constant comparison analytical approach to analyze the focus group data.
Results: A total of 22 participants, assigned to 6 focus groups, took part in the study. Their experiences fell into 3 themes: (1) challenges associated with the use of WMDs, (2) the perceived benefits of using WMDs, and (3) strategies to promote the use of WMDs. In addition, the findings demonstrate a hierarchical pattern of health-seeking behaviors among older adults: seeking assistance first from older adult volunteers, then from social workers, and finally from nurses.
Conclusions: Ongoing use of the WMDs is potentially possible, but it is important to ensure the availability of technical support, maintain active professional follow-ups by nurses and social workers, and include older adult volunteers to support other older adults in such programs.
Technological advancements have facilitated the self-management of chronic diseases among community-dwelling older adults. Wearable monitoring devices (WMDs), such as smartwatches, are among the common technological tools that assist older adults with health monitoring, physical and cognitive training, medication reminders, and fall prevention [ 1 , 2 ]. The literature shows that WMDs are effective at reducing the risk of developing cardiovascular diseases [ 3 ], increasing the physical activity levels [ 4 ], and improving the quality of life [ 5 ] of older adults. However, despite the benefits and high adoption rate of these wearable devices, there is a lack of studies demonstrating the adherence rate of older adults in maintaining consistent use of WMDs [ 6 - 8 ]. A survey with a sample of >4000 Canadian adults revealed that 33% of the participants did not use WMDs to monitor their health on a regular basis [ 9 ]. Similarly, another survey conducted in Australia reported an abandonment rate of 29% for WMDs, without specifying the population [ 10 ]. Physical disability, a lack of knowledge about the functions of wearable devices, and technological anxiety were summarized as notable reasons for poor adherence to these devices among older adults [ 11 - 13 ].
Self-determination theory has highlighted that long-term behavioral change is determined by one’s intrinsic motivation, which is defined as one’s action driven by the enjoyment and interest in the activity itself and is affected by 3 factors: competence, autonomy, and relatedness [ 14 ]. When an individual has a sense of competence and autonomy in adopting a new behavior and has someone who is socially and psychologically connected (relatedness) to support the behavior, they are more likely to adhere to, and maintain, the behavior over the long term. Recent studies have focused on providing training sessions to help older adults familiarize themselves with the functions of WMDs and enhance their competence and autonomy. However, the results showed no difference in adherence between the participants who received the training sessions and those who did not [ 15 , 16 ]. Older adults have expressed in a qualitative study that 1 preintervention training session is not sufficient to enhance their knowledge of WMDs or resolve their technological anxiety [ 17 ]. It was suggested that nursing or peer support, with the simultaneous provision of social support, might be necessary throughout the health program to increase the intrinsic motivation of older adults to adopt WMDs [ 15 ]. However, there have been limited studies on the offering of nursing or peer support for older adults in the use of WMDs. In the study by Farivar et al [ 11 ], nursing feedback was provided to older adults when their real-time step counts, which were displayed on the WMDs, were unsatisfactory. The program was found to be feasible and acceptable to older adults, but it encountered challenges such as infrequent updates of the WMDs and low engagement and retention rates. Another study, which designed a similar program that provided nursing support to older adults when abnormal vital signs were detected in the WMDs, demonstrated a high dropout rate of 21% and short-term adherence to the WMDs [ 18 ]. Recent studies emphasize the importance of implementing a clear nursing service model, such as a case management model, that encompasses problem identification, goal setting, and regular follow-up. This model aims to enhance the intrinsic motivation of older adults to consistently use new technologies, such as mobile health apps and WMDs [ 7 , 19 ], rather than relying solely on providing training sessions to them or intervening only when abnormalities in vital signs are detected through WMDs by the older adults.
Because of the perception that they might be causing trouble to others, older adults tended not to actively seek help from health care professionals and peers, even when they faced technical problems or did not comprehend the medical jargon displayed on the device [ 11 , 13 ]. They were also concerned about their health data being transferred from WMDs to health care professionals without their receiving regular feedback [ 20 ]. In view of this, in this study, a nurse case manager (NCM) worked with the older adults to identify factors that could facilitate or hinder their use of a smartwatch, recommended the features of the smartwatch linked to their health and social problems, provided suggestions on the duration and frequency of smartwatch use, and gave instructions on how to incorporate these features into their daily routines during the 3-month intervention period. Older adults had the autonomy to adjust or modify their own schedules to ensure that they could use the features of the smartwatch efficiently and effectively. The NCM also encouraged the family members or primary caregivers of the older adults to participate and provide feedback and support. This paper describes the perceptions and experiences of community-dwelling older adults after receiving our interventional program. More specifically, we explored the challenges that they experienced during their use of the WMD, the benefits of using the WMD, and suggestions on how to promote its sustained use. The results may provide useful insights for developing a program that can promote the continued adoption of WMDs and, in turn, improve their long-term benefits for health self-management among older adults.
A qualitative descriptive design was adopted for this study [ 21 ]. This approach is not associated with any philosophical or theoretical orientation but draws on naturalistic inquiry to understand and describe how people experience a phenomenon [ 21 ]. The qualitative descriptive study is the method of choice when straight descriptions of phenomena are desired, which made it appropriate for this study. This study is reported according to the SRQR (Standards for Reporting Qualitative Research) checklist [ 22 ].
This study was conducted between June 2022 and March 2023 in collaboration with 5 community centers run by a local nongovernmental organization in Hong Kong. Using a purposive sampling approach, members of these community centers who were interested in this program were screened and recruited into the study if they (1) were aged ≥60 years [ 23 ], (2) owned a smartphone, (3) were able to communicate in Cantonese or Mandarin, and (4) had internet access. They were excluded if they (1) had been diagnosed with cognitive impairment, (2) were bedbound, (3) already owned a smartwatch, or (4) were involved in other studies using WMDs.
Staff working at the collaborating community centers invited their members to join the program using Facebook Live. Those who were interested provided their name to the staff, and trained research assistants screened them via telephone. Eligible members were invited to meet the research assistants at the community centers to receive an explanation of the details of the study and to give their written consent to take part in it. All participants received a health monitoring package that included a smartwatch with an alarm setting, a prepaid SIM card, a blood pressure monitor, and a pulse oximeter.
Before the program, a 1-hour web-based training session and a practical test were delivered to all participants to explain the basic operation of the WMD. Participants were given the number of a telephone line staffed during office hours by community center staff, which they could call if they faced any technical problems during use.
The participants were provided with a package that included a WMD (ProVista Care smartwatch), a prepaid SIM card, a blood pressure monitor, and a pulse oximeter. ProVista Care was selected as the WMD for this study due to its validated performance, affordability, and comparable functionality to other similar devices. These functions encompass fall detection; location and activity tracking; blood pressure, pulse, and oxygen saturation monitoring; medication and appointment reminders; and calls to preset numbers and SOS calls. This selection enhances the applicability of the study’s findings to real-world implementation. Data collected from ProVista Care can be synchronized and transferred to the server via the ProVista Care app installed on participants’ personal smartphones. The WMD was designed to be worn on the wrist, securely fastened with an elastic band. Participants were instructed to wear the WMD as frequently and for as long as possible throughout the study duration.
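The paragraph above describes a typical wearable data pipeline: readings are captured on the watch, relayed through the companion app on the participant’s smartphone, and synchronized to the server. As a schematic sketch only, with an invented endpoint and payload layout (the ProVista Care protocol itself is not documented here), such an upload could look like this in Python:

```python
# Schematic sketch of syncing wearable readings to a study server.
# The endpoint URL and payload fields are invented for illustration.
import json
import urllib.request
from datetime import datetime, timezone

ENDPOINT = "https://example.org/api/v1/readings"  # hypothetical endpoint

def build_payload(participant_id: str, bp_sys: int, bp_dia: int,
                  pulse: int, spo2: int, steps: int) -> bytes:
    """Bundle one set of vital-sign readings as a JSON document."""
    record = {
        "participant": participant_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "blood_pressure": {"systolic": bp_sys, "diastolic": bp_dia},
        "pulse": pulse,
        "oxygen_saturation": spo2,
        "steps": steps,
    }
    return json.dumps(record).encode("utf-8")

def upload(payload: bytes) -> None:
    """POST the readings; in the study, the companion app performs this sync."""
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    urllib.request.urlopen(req)  # requires a reachable server

payload = build_payload("P-007", bp_sys=128, bp_dia=82, pulse=71,
                        spo2=97, steps=4200)
print(payload.decode())  # upload(payload) would then transmit it
```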
Ten trained community health workers (CHWs), NCMs, and social workers were the interventionists in this 3-month program. The participants in the intervention group received a home visit by a CHW and the NCM in the first month and biweekly telephone calls by the CHW from the 3rd to the 12th weeks. In the first home visit, using the Omaha System, the NCM explored the features of the smartwatch that each participant might find beneficial. The Omaha System is a comprehensive assessment-intervention-evaluation instrument for community-based practice [ 24 ]. There were 21 health and social problems listed in the Omaha System that were relevant to the features of the smartwatch used in this study; for example, the feature of fall detection in the smartwatch might help participants with musculoskeletal problems or lower limb weakness. The NCM empowered the participants to set goals and action plans in the first meeting, while the CHWs followed up, recalling the goals and action plans with the participants and, in subsequent telephone calls, motivating the participants to regularly use the smartwatch. The NCM also monitored the vital signs of the participants that were automatically uploaded by the smartwatches at the backend. When abnormal vital signs were detected by the smartwatch, the NCM would call the participants via telephone and provide appropriate interventions, such as a referral to a social worker, based on the validated protocols.
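The NCM workflow described above amounts to rule-based triage over the uploaded vital signs: normal readings require no action, while abnormal ones trigger a call and possibly a referral. The Python sketch below illustrates the shape of such a rule; the thresholds are generic ballpark values chosen for illustration and do not reproduce the study’s validated protocols.

```python
# Rule-based triage sketch; thresholds are illustrative assumptions,
# not the study's validated protocols.

def triage(bp_sys: int, bp_dia: int, pulse: int, spo2: int) -> str:
    """Map one set of readings to a follow-up action for the NCM."""
    if spo2 < 90 or bp_sys >= 180 or bp_dia >= 110:
        return "urgent: NCM calls the participant immediately"
    if spo2 < 94 or bp_sys >= 140 or bp_dia >= 90 or not 50 <= pulse <= 100:
        return "abnormal: NCM phones the participant, may refer to a social worker"
    return "normal: no call needed"

print(triage(bp_sys=150, bp_dia=92, pulse=76, spo2=96))
# abnormal: NCM phones the participant, may refer to a social worker
```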
A total of 6 in-depth focus group discussions with 22 participants were conducted at the end of this program. In-depth focus group interviews are conducted to evaluate participants’ experiences after an intervention through group interactions [ 25 ]. For studies using focus group discussions, it has been suggested that groups ranging from 2 to 40 may be adequate to attain data saturation depending on the phenomenon under investigation [ 25 ]. Thus, 6 groups were considered adequate for this study to attain data saturation. All discussions were conducted with a guide developed by the research team. The focus group interviews were conducted in Cantonese and each session lasted for 25 to 65 minutes. All interviews were audio recorded with the consent of the participants. Interview transcripts were written up by members of the research team. To ensure the consistency of coding and interpretation of data, an audit trail was conducted, and all discrepancies were resolved through discussion and consensus.
All data collected from the focus group discussions were analyzed inductively using the approach to constant comparison analysis formulated by Maykut and Morehouse [ 26 ], who proposed a 4-step approach to the constant comparison of focus group data: inductive categorization, refinement of categories, exploring relationships across the categories, and integration of data [ 26 ]. In the inductive categorization step, AKCW and JB read and reread the interview transcripts in English (JB) and Cantonese (AKCW) to identify recurring concepts independently. Next, overlapping concepts across the groups were categorized and combined by the 2 independent coders (AKCW and JB) to formulate provisional codes. In the second stage, refining the categories of codes, the provisional list of codes was examined concurrently alongside a review of the interview transcripts. The categorization process was undertaken through discussion with the wider research group to attain consensus. Subsequently, similar codes were grouped to formulate categories for each group. The emerging categories were then compared concurrently across the groups, with recurring categories further refined and grouped. In the third stage, we further refined the categories by grouping them under distinct umbrella themes: categories with common elements were combined into broader groups or emerging themes. Working with these themes, we went through each group and its associated categories to attain a complete understanding and create patterns of meaning from the data.
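Constant comparison is analyst-driven rather than computational, but the bookkeeping it involves, mapping concepts to provisional codes, merging overlapping codes into categories, and then comparing categories across groups, can be made concrete with simple data structures. The codes and categories in the following Python sketch are invented examples for illustration, not the study’s actual codebook.

```python
# Illustrative bookkeeping for constant comparison analysis;
# the codes and categories are invented, not the study's codebook.
from collections import defaultdict

# Stage 1: inductive categorization - provisional codes per focus group.
provisional = {
    "FG1": ["battery drains fast", "hard to learn at first"],
    "FG2": ["step count wrong", "battery drains fast"],
    "FG3": ["hard to learn at first", "watch too big"],
}

# Stage 2: refinement - merge overlapping codes into categories.
category_of = {
    "battery drains fast": "system-related challenges",
    "step count wrong": "system-related challenges",
    "watch too big": "system-related challenges",
    "hard to learn at first": "individual-related challenges",
}

# Stage 3: compare categories across groups to see which ones recur.
groups_per_category = defaultdict(set)
for group, codes in provisional.items():
    for code in codes:
        groups_per_category[category_of[code]].add(group)

for category, groups in sorted(groups_per_category.items()):
    print(f"{category}: recurs in {len(groups)} of {len(provisional)} groups")
```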
The trustworthiness of this study was evaluated according to four criteria: (1) credibility, (2) dependability, (3) confirmability, and (4) transferability [ 27 ]. To enhance credibility and dependability, the summarized results were sent to those participants who had agreed to check them for further clarification and to give feedback on the researchers’ interpretation. Audit trails and peer debriefings were also conducted during the analysis of data to ensure the consistency of the interpreted data to achieve confirmability. A thick description was ensured in reporting the study to enhance transferability. To attain analytical rigor, we ensured that analyses were undertaken in both Cantonese and English and compared to ensure consistency. The authors responsible for this section were fluent in Cantonese and English. The iterative approach to analysis also ensured consistent coding, with an audit trail on the decisions on coding and categorization. In addition, the constant comparison approach ensured that our focus was not only on individual-level analyses but also on analyses within and across the groups.
This study was conducted under the principles of the Declaration of Helsinki and approved by the ethics committee of the Hong Kong Polytechnic University (HSEARS20220429001). All eligible participants were given the right to refuse participation and the right to withdraw from the interview at any time. Written informed consent was collected from all participants. To protect the participants’ privacy, all data collected from this program were kept confidential and anonymized and were only accessible to the members of the research team.
A total of 22 community-dwelling older adults were involved in 6 focus group discussions. Of these, 5 (23%) were male, and 17 (77%) were female, with ages ranging from 62 to 78 years. Only 1 (5%) participant had experience in using a smartwatch before inclusion in the study. A total of 17 (77%) had a primary level of education, and 5 (23%) had a secondary level of education or higher. The clinical characteristics of the participants have been reported in a previous study [ 19 ].
Three themes and 7 categories emerged from the focus group data, as shown in Textbox 1 .
Challenges associated with the use of the wearable monitoring device (WMD)
Perceived benefits of using the WMD
Strategies to promote use of the WMD
This theme describes challenges and concerns that affected the participants’ use of the smartwatch. The emerging categories were (1) individual-related challenges and (2) system-related or technical challenges.
Participants across all groups emphasized that they were slow in learning to use the WMD and required a great deal of face-to-face instruction to be fully oriented to the device and its functionalities before being able to use it effectively. This issue particularly resonated with those who were using such a device for the first time. Some of the older adults either could not understand the instructions or needed more time to assimilate them. It took weeks to months for the older adults to become familiar with the device:
This is my first time wearing a smartwatch. When you wear the watch for the first time, you will definitely not know how to use all the functions. So, I wanted to ask everyone if they have experienced this situation before. [Participant 1]
Well...at first, it was difficult to use. But after using it for a while, it was basically okay, and we can use it on our own.... Hmm yes, actually, if someone teaches you face-to-face, you can learn it clearly first. [Participant 15]
How much time? I think three to four months to learn to use it well. [Participant 20]
For me, at first, it took a long time to use it. Sometimes, I just could not get it to work. But after a while, it became much better. For example, measuring blood oxygen and blood pressure readings became much easier for me with time. [Participant 4]
The first time I tried, I did not know how to turn on the device or turn it off. It was difficult at first. [Participant 7]
The participants also highlighted the issue of forgetfulness, which affected how well they used the device. They noted that with their increasing age, forgetfulness was a common occurrence. Some of the participants mentioned forgetting how to operate the device and the functions available during the initial period of use, although over time they were able to become better at using the device consistently:
I’m so dumb sometimes that I forget what I am doing. I sometimes cannot even figure out how to wrap a scarf around my head, not to mention how to use the watch. [Participant 13]
I am already in my late years. If you even ask me what I ate yesterday, I cannot remember. [Participant 2]
System-related or technical challenges were encountered across all groups. The size of the WMD was considered an issue. Participants described the WMD as big, which made it difficult to wear regularly. Occasionally, the size of the watch was considered a source of ridicule. Despite the potential for being ridiculed, some of the older adults noted that they were more concerned about the functionality and capacity of the watch than its size. In addition, the smooth, glass surface of the watch’s touchscreen became slippery and unresponsive when used by the older adults in cold weather, creating usability issues:
The watch can measure blood pressure and blood oxygen levels. Your watch looks much better and looks great. Our watches are big, like big turtles, and sometimes people make fun of it. [Participant 9]
You can see that your watch is smaller than ours. Our watch is bigger, and it obstructs a lot. But even if it’s still bulky and unattractive, I think we can still wear it because it will help us. [Participant 20]
It feels really troublesome to use during cold weather. There is no problem in hot weather. [Participant 7]
The power capacity of the WMD presented a significant challenge for the participants. Participants wanted to use the WMD, but the need for frequent charging made doing so rather inconvenient. In some cases, when they wanted to use it while going out, they noticed that the WMD was low on battery; coupled with the previously mentioned forgetfulness, this meant that they had missed the opportunity to charge it beforehand. The participants also reported that the need to frequently charge the WMD prevented them from using it for longer periods throughout the day. On some occasions, this issue deterred some of the older adults from using the WMD altogether:
Hmmm, if you know everything, the main problem with the watch is the frequent charging. That battery needs to be charged frequently—like every day. If you don’t charge it, it will just run down fast. Yes, it is so fast and when there is no electricity, things will become difficult. The need to charge is too frequent for us. [Participant 5]
Oh, so you realize that the battery is down when you wear it and then you must put it back to charge for a while. Yes, that’s right. It is very troublesome to do this every day before going out. The battery runs out quickly all the time. [Participant 9]
Another technical issue identified was that some of the participants felt the WMD had several functions they did not know how to use. Interestingly, other older adults still struggled to navigate even the few functions that they had been taught to use, and they occasionally experienced digital fatigue after constant use:
There may be some functions we cannot use. The watch seems to have many functions, but we do not know them all and also don’t know how to use them all. [Participant 16]
But, I realized there are so many functions on that watch that we cannot use them all. Also, some functions that were possible to use before, people found it annoying to continue to use them. That is why we do not use them frequently anymore. [Participant 9]
All participants were enthusiastic about the ability of the device to count their steps as they walked about. However, the older adults mentioned that the device gave them incorrect step counts. In 1 group, the participants mentioned that the step count function also did not display correctly. Occasionally, they used their mobile phones instead to obtain correct step counts. In addition, some of the participants reported occasional difficulties with uploading or transmitting data on their vital parameters:
The pedometer was malfunctioning and gave incorrect figures. When you count how many steps you take yourself and then check the watch, it doesn’t match at all. The watch and the phone both have incorrect counts all the time. [Participant 8]
It shows only a few steps, even though I walked quite a lot. Yes, our watches cannot measure many steps. My phone shows 10,000 steps, but my watch shows 2000 or 3000 steps. To be honest, the watch is not accurate when it comes to the step count. The step count displayed on my phone is not the same as the one on my watch. Yes, that is how it is. There are some differences, yes. [Participant 1]
Actually the step count is important, but it is not accurate at all. I often check it myself. Usually, I check how many steps I have taken, especially since I sit in an office for most of the day. But it is not correct when I check the watch and the phone. [Participant 16]
I tell him about my blood pressure on that day. I tell him about my blood pressure and how many steps I took that day. Sometimes, the watch cannot display the values correctly. [Participant 9]
Some of the participants also found the device to be extremely sensitive, which occasionally caused discomfort because the alarm went off immediately when it sensed a slight movement:
But the watch is too sensitive. Sometimes when I move my hands or feet, it shakes and triggers the alarm. And then it keeps telling me how long it has been and what to do. [Participant 5]
Despite the notable challenges, participants highlighted the benefits of using the WMD. These were (1) self-monitoring and health promotion and (2) convenience.
Participants across all groups stated that the WMDs offered them an opportunity to self-monitor some vital parameters, such as blood pressure and oxygen saturation levels. The older adults found this feature to be particularly helpful because it helped them to record their parameters, track them, and share them with health care professionals and to ascertain whether they were maintaining a good health status. Indeed, the use of the device boosted the confidence of older adults across all groups in their ability to actively participate in self-management, particularly because the NCM actively followed up to enable them to attain their health goals:
Um, measure blood pressure and blood oxygen levels at the same time. Well, we know now. We know our blood pressure and blood oxygen levels. It helps us to maintain our health by making us aware of the condition of our body and whether it is normal or not.... At night, I have a blood pressure machine and I can measure my blood pressure every night. [Participant 2]
Also, it gives a different perspective on managing your health with more information available to you. For instance, I know how much I walked today. [Participant 8]
Yes, definitely. Using the watch gave me a lot of confidence. I wear it at home and when I go out. The nurse also reminded me to walk a certain amount of time every day, and even though I forget, I still try my best to walk more. The most important thing is to try and walk more. [Participant 4]
And at home, I don’t know how high or low the blood sugar is. If I know, I can control it by myself at home. If it is high, I will eat less. It is good to be clear about the blood sugar. For the nurses, it would be helpful if they could find my place and remind me of something regularly. [Participant 14]
The best function would be to be able to monitor your health and detect any potential illnesses. [Participant 19]
The participants expressed a desire for more regular follow-ups by the nurses and an option to monitor their blood glucose levels in addition to blood pressure and blood oxygen levels:
I just think it would have been helpful if the device can also help you to monitor your blood sugar levels just like it helps to monitor blood pressure and blood oxygen levels. [Participant 5]
Oh, she sometimes follows up on us with home visits and phone interviews. Yes, but what about the rest of the time? If the nurse does not contact you, you won’t actively look for her, right? Besides, the nurse does not come to the center every day. The nurse is also busy with her work, so where would she have the time? So, in some instances, if you are not feeling better, you go and see a doctor. [Participant 13]
Although the step count feature of the WMD was described as inaccurate, the participants felt that it was still helpful to know how many steps they had taken because this motivated them to go out more often rather than staying at home. Being able to compare their step counts with others gave them a sense of accomplishment, especially when their counts were higher than those of their peers:
But I don’t really care so much about how many steps I take in a day. However, it can still calculate something for you. For example, if the doctor tells you how many steps you need to take in a day, the watch can help you to keep track of it. Maybe we don’t really need it because our phones can also count the steps. [Participant 2]
I take so many steps every day. Many people can vouch for me. I am the best here; I take so many steps. After finishing my chores at home, I come down and do some healthy dancing, and walk around the center. According to them, I am the best. [Participant 14]
Because you can show off to others, like the person you are exercising with, and say, look I have burned this much fat, right? [Participant 9]
Participants also mentioned that the device helped them to not only record their vital parameters but also to view these records regularly. Regarding the promotion of health, the participants noted that the device helped them to participate in regular exercises and to build the confidence they need to meet health-related goals:
So, wearing a watch can make you want to do more exercise, right? Because when you wear a watch, you want to see how many steps you have taken, which makes you want to move more. [Participant 6]
For participants across the groups in this study, the WMD offered a sense of convenience in being able to monitor their vital parameters, record the values, track them, and share them with the nurse if required. The notion of convenience was also related to the ease with which the older adults navigated the device to inform their self-management strategies. In addition, the fact that they did not have to be in a hospital to assess these basic vital parameters was something the older adults considered very convenient. They could monitor them from the comfort of their home and even while moving about in the community:
With the watch, there is a guide, and I am afraid to be lazy about moving around and not walking around. But when I think about the watch, I have the confidence to do it. In the past, I just sat at home all day, talking on the mobile phone about how many thousands of steps I have to walk, and now I just go out and do it myself. [Participant 10]
In addition, being able to reach out to and interact with a nurse, or having a nurse follow up when an older adult’s parameters were outside the normal range, was considered convenient. This may be because the participants felt that they were not only using the device but also being professionally supported by a nurse:
Yes, if the nurse thinks the blood oxygen is low, she will remind you to do it before and again. Then if something happens to you, you will know to see a doctor. [Participant 5]
At least, the blood pressure can be seen by the nurse. And the blood oxygen levels can be seen with a press of the finger. However, the step count is not accurate. [Participant 15]
The social aspect of the watch, such as being able to take pictures and share these with families and friends, was considered helpful and made life more convenient for the older adults. In other words, it added a bit of fun to using the device:
I discovered a new function or new feature. It is completely possible to use the watch to take a photo and share. Yes, so it is so much more convenient. [Participant 3]
It is best if there is nothing wrong with it. The best thing to do is to take a photo of that watch and the stick together after we finish the test, and it will be the most accurate. It is comfortable and makes life more convenient, I think. [Participant 5]
Another source of convenience was noted to be related to the fact that the wearable devices afforded older adults or their families a unique opportunity to track their whereabouts. The older adults found this feature particularly helpful because they considered themselves to be forgetful on occasion, and this feature helped them to retrace their steps to their original location or helped others to know where they were:
The best feature of the device is the tracking. Some people have a poor memory, or they may not be able to find their way home. In that case, their family members can locate them using the tracking feature. [Participant 21]
Mr Choi once tracked us. I got lost and could not find my way home. I got scared and started sweating. Mr Choi tracked my watch and found me at the Che Kung Temple. [Participant 2]
This is where technology has advanced. The most useful thing is when someone is lost. If he wears the watch, you can find him and track where he has been. Then you can find him using the tracking function. [Participant 14]
Sometimes when I go somewhere far, I don’t know where I am, and I cannot see clearly due to my glaucoma. One time, I had to go to the other side for the lunar new year, but I took the wrong bus and did not know where I was. Luckily, I was able to use the watch to track my way. [Participant 6]
This emerging theme discusses approaches observed in the data that highlight strategies to sustain continual use of the device. The following categories were captured: (1) availability of technical support, (2) ongoing follow-up professional support, and (3) peer and family support.
The plethora of technical issues emerging from the use of the device warrants the ongoing availability of technical support. This need was mentioned by participants in all groups and was felt particularly when the device developed a fault or broke down, or when participants needed more assistance in navigating its functions:
The watch broke down and we did not know how to fix it. Someone at the center said he knew how to turn it back on. We tried for a while, but it still did not work. So, I said forget it. I did not wear it. I only wore it for ten days before, and just for measuring blood pressure at home. [Participant 14]
Although some of the participants sought assistance from the social workers, most older adults hesitated to disturb the personnel and therefore avoided seeking assistance altogether, despite the technical challenges they were facing:
So, it is changed. Actually, you also changed and regarded it as a planned situation, and I did not dare to worry the nurse or the supervisor. So, if there is a problem with the watch, I must handle it on my own. [Participant 10]
In addition, the participants mentioned that they needed more technical support to access other functions on the watch because they found it difficult to perform this task:
And I don’t understand why so many functions need to be locked, except the panic button. I wondered if there was help for us to unlock these functions on the watch. [Participant 7]
We tried to figure it out but could not do it and we needed lots of help. In the end, it suddenly made a sound, and we could not figure out how long it had been, it just happened. [Participant 16]
There are too many things to handle. If you suddenly introduce ten functions for us to use, how can we remember them? You are not teaching a class, you won’t be able to remember them either. [Participant 11]
Although the device was helpful in various ways, the older adults still preferred to have nurses follow up with them actively. For participants across all groups, this form of support was generally limited, and they wished they had interacted more with the nurses so as to be able to interpret the values they obtained and to seek more health-related information. This may be because nurse support centered on following up with older adults who had abnormal readings; thus, those whose readings stayed within normal ranges had limited contact with the nurses. The participants also felt that the limited support they received from the nurses might have affected how well they met their health-related goals:
They [the nurses] do a good job when they call or visit you. With the watch, you set a goal with the nurse, which motivates you to do more. But they are not always there. It is helpful if they can find my place so that they can remind me of something I don’t know. [Participant 19]
Aside from ongoing professional support from nurses to keep the participants motivated in meeting their health-related goals, support from social workers was equally important in promoting continued use of the devices. Social workers played critical roles by offering troubleshooting support, helping the participants navigate the device, and offering encouragement. In fact, the older adults who participated in the study appeared to trust the social workers more than the nurses and were always willing to seek assistance from them. The older adults seemed to have built a strong relationship with the social workers, which made it easier to seek assistance when required:
They do help us a lot and encourage us. Whenever there is a problem, we always look for him to help us out. He is the most reliable. He is very responsible, and he is always willing to help us. [Participant 5]
I did not even know how to turn off my phone. He said to turn off my phone, do it this way. He really taught us a lot of things. [Participant 10]
Peer support from the CHWs also emerged as a critical factor in sustaining the ongoing use of the WMD. These older adult volunteers, or older adult ambassadors, often encouraged the participants to continue using the device, record their values, and work toward meeting their health-related goals. Participants across the groups highlighted that it was occasionally difficult to gain access to a nurse; thus, the older adult volunteers or ambassadors became the first point of contact for assistance before the participants reached out to the social workers:
It is not so easy to find or see a nurse on some days. The volunteers have done this before, so we can reach out to them. There are days when you will forget to write the values, and they will remind you to do so. [Participant 16]
Aside from peer support, family support was also observed to be helpful in encouraging the older adults to use the WMD as required. Conversely, older adults who lived alone with limited or no family support found it difficult to monitor their readings and to use the device continually to promote their health:
They said that I fall frequently and have fallen several times before. I must be careful now that I am getting older. If anything happens to me, it would be troublesome because I live alone. [FG2]
Emerging technologies such as wearable devices are advancing care and support for older adults in communities across the globe. Despite the plethora of literature highlighting the importance of wearable devices, significant concerns remain about the decline in their use over time. The world’s aging population is growing rapidly, yet only limited work has been done to unearth the experiences of older adults with wearable devices. This critical gap informed this study, which was part of a large trial program. The findings bring to the fore the challenges older adults experience with wearable devices, identified here as individual- and system-related challenges. The findings further highlight the perceived benefits of the devices, particularly in the areas of self-monitoring, health promotion, and convenience. In addition, the study identified a hierarchical pattern in the health-seeking behaviors of older adults when using the devices. Taken together, the findings indicate that ongoing use of the devices is possible, although there is a critical need to ensure the availability of technical support and ongoing active professional follow-up by the health care team (notably nurses and social workers) and to include older adult volunteers to support other older adults in such programs.
Previous studies have uncovered various technical issues associated with using wearable devices. In a recent study, the authors identified interoperability, battery issues, the bulky nature of the device, a lack of personalization, and a lack of support as key issues affecting use [ 28 ]. Our study noted similar technical issues. However, we also found that, regardless of these issues, older adults were willing to continue using the device because they believed that doing so was to their benefit. In addition, we observed that individual-related issues can affect the use of wearable devices among older adults. For most of the older adults in this study, this was the first time they had used a wearable device, and they needed more time to become acquainted with it. Although issues such as forgetfulness may be considered part of the aging process, these findings suggest that, aside from intensive training on how to use the device, ongoing technical support is still needed to boost its use. In addition, instruction manuals need to be more user-friendly and easily comprehensible for older adults. Comprehensibility is essential; we observed that the user manuals were unclear, which may have affected how well the participants used the WMD.
The inclusion of both professional and peer support in this study is particularly noteworthy. An existing study showed that it might be necessary to provide nursing or peer support throughout the duration of a health program to increase the intrinsic motivation of older adults to adapt to WMDs while also providing social support [ 17 ]. In our study, which combined professional and peer support, we observed that older adults did not want to disturb the nurses. Rather, they felt more comfortable consulting the older adult volunteers first, before reaching out to the social workers and, last of all, to the nurses if necessary. This hierarchical pattern of health-seeking behaviors may indicate that older adults viewed the older adult volunteers as peers sharing similar experiences and conditions, which made them easier to relate to than the professionals. Nurses were perceived by older adults as busy professionals; thus, the participants preferred to seek support from social workers, although they wished they had more interactions with the nurses. Taken together, the findings suggest that nurses may need to take an active role in reaching out to older adults and being available when needed, regardless of whether the older adults are using the wearable device. The concept of peer support also needs to be promoted further by engaging older adults as volunteers to support peers who are transitioning to wearable devices. A recent study has shown that peer-to-peer support for community-dwelling older adults has the potential not only to promote adherence to therapeutic regimens but also to improve quality of life, which warrants further exploration [ 29 ].
Furthermore, we observed that the ongoing availability of technical support and family support is also essential to promoting the use of wearable devices. Technical support may be available but unknown to the older adults, or they may not want to disturb others. Thus, older adults need to be encouraged to seek help when needed and should know where to obtain it. Family support, for its part, remains a major cornerstone of support for older adults [ 30 ]. The absence of this critical form of support may lead to loneliness, which can exacerbate health issues and interfere with therapeutic regimens, including the use of wearable devices [ 31 ]. Although a fuller exploration of family support is beyond the scope of this study, we recommend that older adults with limited or no family support be identified and that appropriate strategies be devised to assist them.
Moreover, we identified both individual- and system-related issues that can adversely affect the use of WMDs. Individual-related factors such as slow learning patterns and forgetfulness were highlighted by the participants as affecting how they initially navigated the WMD. Aging is not a disease, although it can be associated with forgetfulness, which can affect activities of daily living [ 32 ]. Forgetfulness, coupled with slow learning patterns, emphasizes the need for continuous, gradual education to enable older adults to use WMDs effectively [ 33 ]. System-related challenges such as the size of the WMD and its limited power capacity are concerns that need to be addressed in subsequent design studies. Concerns regarding the WMD generating incorrect readings also emerged as a system-related challenge. Previous studies have reported that a common problem with wearable devices is automatic loss of synchronization, which makes it difficult or impossible to update data or results in incorrect reports [ 34 , 35 ]. Although loss of synchronization was not examined in this study, it may have contributed to the incorrect readings observed by the older adults.
The strength of this study lies in the use of a rigorous approach to collecting and analyzing data, with a focus on individual, within-group, and across-group variations to attain a thick description of what it means to experience the use of a wearable device. This strength notwithstanding, some limitations are noteworthy. First, the participants’ experiences relate to a particular wearable device with distinct features. Thus, the findings may not be transferable to all wearable devices, although they offer a useful resource on how older adults are likely to experience such devices. Second, the study was undertaken in a region with distinct sociocultural features, and the findings should be interpreted with these features in mind. In addition, the nature of the program required the older adults to have some technological abilities. Thus, the findings may not be transferable to older adults who find it difficult to use technological applications.
Emerging technologies, such as wearable devices, for supporting community-dwelling older adults warrant more work on how users experience these devices. The findings from this study bring to the fore the barriers to and benefits of wearable devices and offer insight into strategies that can be considered to improve their use. Given the issues that might emerge, it may be helpful to ensure the availability of ongoing technical support, professional follow-up support, peer support, and family support. Ultimately, a personalized approach is needed to promote the use of wearable devices among older adults.
The authors would like to thank Hong Kong Lutheran Social Service for providing the smartwatches and participating in and contributing to this study. The study was funded by the Departmental General Research Fund, The Hong Kong Polytechnic University (G-UAQ2).
The data sets generated and analyzed during this study are available from the corresponding author on reasonable request.
AKCW and FKYW conceptualized the interventional program. AKCW, JB, JJS, FKYW, KKSC, BPW, SMW, and AYLL provided intellectual input on the study design, methodology, and evaluation. AKCW and JB drafted the manuscript. AKCW analyzed the data. All authors contributed to, reviewed, and approved the manuscript.
None declared.
Abbreviations:
CHW: community health worker
NCM: nurse case manager
SRQR: Standards for Reporting Qualitative Research
WMD: wearable monitoring device
Edited by T de Azevedo Cardoso; submitted 27.05.23; peer-reviewed by M Keivani, I Madujibeya, A AL-Asadi; comments to author 06.12.23; revised version received 14.01.24; accepted 24.05.24; published 07.08.24.
©Arkers Kwan Ching Wong, Jonathan Bayuo, Jing Jing Su, Karen Kit Sum Chow, Siu Man Wong, Bonnie Po Wong, Athena Yin Lam Lee, Frances Kam Yuet Wong. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 07.08.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.