Creating a Culture of Continuous Improvement

  • Aravind Chandrasekaran
  • John S. Toussaint


How one Wisconsin health system did it.

A number of health systems have scored impressive gains in improving outcomes and patient satisfaction and lowering costs by applying the Toyota Production System (TPS) to redesign “lean” clinical and administrative processes, eliminating waste and boosting quality. But in all too many cases, when the leader who championed TPS left his or her organization, these efforts began slipping. The authors know this firsthand: it happened at Wisconsin-based ThedaCare. When one of the authors (John Toussaint) left in 2008, its performance in terms of quality (as measured by the Centers for Medicare & Medicaid Services’ metrics for Next Generation accountable care organizations) fell from best in the nation to middle of the pack. Through their research in health care and other industries, the authors have identified a set of practices that can stop this backsliding and sustain a culture of continuous improvement after the departure of a leader who was passionate about TPS. These include incorporating TPS in succession planning for the CEO and board members, instilling lean behaviors in managers at all levels, creating your own success stories, and establishing a TPS operating system.



  • Aravind Chandrasekaran is an associate professor of operations and academic director of the Master of Business Operational Excellence (MBOE) program at the Ohio State University’s Fisher College of Business.
  • John S. Toussaint M.D., is the founder and executive chairman of Catalysis, a nonprofit educational institute, and an adjunct professor at Ohio State University’s Fisher College of Business. He is the former CEO of a health care system and coaches teams on Toyota Production System principles.

Research and Quality Improvement: How Can They Work Together?

Affiliations

  • 1 Director, Data Science, Quality Insights, Williamsburg, VA.
  • 2 Chair, ANNA Research Committee.
  • 3 President, ANNA's Old Dominion Chapter.
  • 4 Instructor, Case Western Reserve University, Cleveland, OH.
  • 5 Associate Degree Nursing Instructor, Northeast Wisconsin Technical College, Green Bay, WI.
  • PMID: 35503694

Research and quality improvement provide a mechanism to support the advancement of knowledge, and to evaluate and learn from experience. The focus of research is to contribute to developing knowledge or gather evidence for theories in a field of study, whereas the focus of quality improvement is to standardize processes and reduce variation to improve outcomes for patients and health care organizations. Both methods of inquiry broaden our knowledge through the generation of new information and the application of findings to practice. This article in the "Exploring the Evidence: Focusing on the Fundamentals" series provides nephrology nurses with basic information related to the role of research and quality improvement projects, as well as some examples of ways in which they have been used together to advance clinical knowledge and improve patient outcomes.

Keywords: kidney disease; nephrology; quality improvement; research.

Copyright© by the American Nephrology Nurses Association.


Conflict of interest statement

The authors reported no actual or potential conflict of interest in relation to this nursing continuing professional development (NCPD) activity.


Grants and funding

  • K23 NR019744/NR/NINR NIH HHS/United States

Research versus practice in quality improvement? Understanding how we can bridge the gap


Lisa R Hirschhorn, Rohit Ramaswamy, Mahesh Devnani, Abraham Wandersman, Lisa A Simpson, Ezequiel Garcia-Elorrio, Research versus practice in quality improvement? Understanding how we can bridge the gap, International Journal for Quality in Health Care, Volume 30, Issue suppl_1, April 2018, Pages 24–28, https://doi.org/10.1093/intqhc/mzy018


The gap between implementers and researchers of quality improvement (QI) has hampered the degree and speed of change needed to reduce avoidable suffering and harm in health care. Underlying causes of this gap include differences in goals and incentives, preferred methodologies, the level and types of evidence prioritized, and targeted audiences. The Salzburg Global Seminar on ‘Better Health Care: How do we learn about improvement?’ brought together researchers, policy makers, funders, implementers, and evaluators from low-, middle- and high-income countries to explore how to increase the impact of QI. In this paper, we describe some of the reasons for this gap and offer suggestions to better bridge the chasm between researchers and implementers. Effectively bridging this gap can increase the generalizability of QI interventions and accelerate the spread of effective approaches, while also strengthening the local work of implementers. Increasing the effectiveness of research and work in the field will support the knowledge translation needed to achieve quality Universal Health Coverage and the Sustainable Development Goals.

After mixed results from the Millennium Development Goals (MDGs) strategy, the global agenda recognized the critical role of ensuring not just access to but the quality of health care delivery. As a result, quality and improvement have become a core focus within the Universal Health Coverage movement to achieve the goal of better population health and the Sustainable Development Goals (SDGs) [1-3]. In low- and middle-income countries, quality improvement (QI) is used to identify performance gaps and implement improvement interventions to address these problems at the local, subnational and national levels. Methods used by these improvement interventions range from process improvements using incremental, cyclically implemented changes appropriate to the local context, to system-level interventions and policies to improve and sustain quality. Regardless of the scope of improvement efforts and methods employed, the impact and spread of QI has often fallen short. Causes of these lost opportunities include how decisions about improvement interventions are made, the methodology for measuring the effectiveness of the intervention, what data are collected and used, and how the information on both the implementation and the intervention is communicated to drive spread and knowledge translation [4, 5]. Practitioners engaged in improvement in their organizations are frustrated by research reviews which often show a lack of conclusiveness about the effectiveness of QI when many of them see the local benefits from their work. Researchers complain about the lack of rigor in the application of QI methods in practice settings and about poor documentation of the implementation process [6].

There is a growing realization of the need for common ground between implementers and researchers that promotes the use of more systematic and rigorous methods to assess the effectiveness of improvement interventions when appropriate, but does not demand that all QI implementations be subject to the experimental methods commonly considered to be the gold standard of evidence. To explore the causes of this gap, how to bridge it, and how to better engage the targeted consumers of the generated knowledge, including communities, governments and funders, a session ‘Better Health Care: How do we learn about improvement?’ was organized by the Salzburg Global Seminar (SGS) [7]. The session brought together experts from a range of fields and organizations, including researchers, improvement implementers from the field, policy makers, and representatives from countries and international organizations.

For a partnership between researchers and implementers to become more consistent in improvement projects and studies, the incentives and priorities of each of these groups need to be better aligned in QI work and its evaluation. In this paper, we build on the Salzburg discussions, existing literature, and our own experience to explore the barriers to collaboration and offer suggestions on how to start to address these barriers. In the spirit of quality improvement, we hope that these recommendations are adopted and tried by groups interested in advancing the research and the practice of QI.

Both implementers and researchers use data to evaluate whether improvements have taken place and are interested in the question ‘did it work?’. However, the gaps between them have arisen in part because of differences in goals, in the evidence needed and the methods used, and in incentives for results and dissemination.

Selected participants and stakeholders in quality improvement work and research, and their goals and incentives:

  • QI team members and institutional champions. Goals: implement effective QI projects and promote and support change in their institutions through good improvement practice. Incentives: local improvement, and disseminating the best local knowledge about what works.
  • Policy makers. Goals: prioritize investment in improvement projects based on the best available evidence from academic research and practical wisdom. Incentives: making effective yet timely and practical decisions, given constraints on time and knowledge, to choose and spread efficient, effective and sustainable improvement.
  • Embedded (practice-based) researchers and QI implementers engaged in research. Goals: drive improvement in their own settings, advance the best improvement methods in their own settings, and create generalizable knowledge making a plausible case that links QI activities to observed outcomes for broader dissemination. Incentives: creating practical yet generalizable knowledge linking improvement activities to observed outcomes, for dissemination to both practice and research audiences.
  • Academic and other researchers. Goals: establish strong causal relationships between QI and outcomes, promoting more rigorous experimental research in QI. Incentives: use of rigorous science that can be published in peer-reviewed journals, and establishing objective standards of evidence.

Incentives for results and dissemination

The differences in goals and evidence are related to often competing incentives. Implementers are incentivized to improve quality and meet the demands of stakeholders, whether local communities, government or funders. Researchers are rewarded through dissemination of evidence in high-impact peer-reviewed journals, research grants and academic promotions. Policy makers are rewarded by timely response to gaps with broad visible changes in their populations. Timeframes of these incentives are also often different, with the most rigorous studies taking years to measure impact, followed by careful analysis and dissemination. Implementers and policy makers, however, are often under pressure to show short-term change and respond to new and emerging issues even as they continue with existing improvement work.

The goals of documentation and dissemination of projects can also differ between researchers and implementers and their stakeholders. There is a strong recognition that the evidence generated by even the best QI efforts is not effectively translated into further spread and adoption [8]. This is because implementers working on QI interventions in their organizations are incentivized by improvement and do not usually have a demand to document their work beyond communication with organizational leaders. While there are growing venues for sharing case reports through learning collaboratives and local meetings designed to facilitate peer learning, this documentation typically involves a description of the process of implementation, but not at a level of detail or rigor of value to researchers and the broader community. There are a number of disincentives for implementers to increase the rigor and detail of their local work, including competing demands to deliver services and ongoing improvement, and the paucity of journals interested in publishing even well-documented local results because they prioritize rigorous results of evaluations with strong designs involving carefully constructed QI research studies. Researchers are incentivized by more academic dissemination through these peer-reviewed journals and presentations at conferences. This misalignment results in practitioners being deprived of access to broader venues to disseminate their work and researchers losing rich contextual data that are critically important to evaluate the effectiveness of QI.

Evidence needed and methods prioritized

The differences in the goals and incentives of different stakeholders lead to differences in the amount of evidence that is considered adequate and the methods used to generate this evidence. Implementers are interested in the evidence of change in their local projects, with less emphasis on transferring or generalizing what they did for use in other settings. They may rely on a combination of pre- and post-intervention data, QI statistical methods such as run charts, and tacit organizational knowledge to assess the evidence of change in their projects. Policy makers have an interest in evidence from QI that is robust enough to inform resource allocation, but may still focus on a specific geography rather than on generalizability at scale. They are interested in generalizable knowledge about successful QI methods, but are sensitive to the burden, cost, and time that requiring rigorous research methods places on implementing groups.
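To make the contrast concrete, the following is a minimal sketch of the kind of local evidence implementers often rely on: a run chart of hypothetical pre- and post-intervention measurements plotted against the baseline median. The data, the measure, and the matplotlib-based plotting are assumptions for illustration, not drawn from the article.

```python
# Minimal run chart sketch: hypothetical weekly measurements before and after a
# change idea is introduced, plotted against the baseline median.
# All data, labels, and the measure itself are illustrative assumptions.
import matplotlib.pyplot as plt
from statistics import median

weeks = list(range(1, 17))
values = [62, 58, 65, 60, 63, 59, 61, 64,   # weeks 1-8: baseline
          55, 52, 50, 48, 51, 47, 49, 46]   # weeks 9-16: after the change

baseline_median = median(values[:8])        # reference line drawn from baseline only

plt.plot(weeks, values, marker="o", label="weekly measurement")
plt.axhline(baseline_median, linestyle="--", label=f"baseline median = {baseline_median}")
plt.axvline(8.5, linestyle=":", color="grey", label="change introduced")
plt.xlabel("Week")
plt.ylabel("Minutes to treatment (hypothetical)")
plt.title("Run chart: local evidence of change")
plt.legend()
plt.show()
```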

Researchers aim for evidence which is robust enough to provide globally relevant conclusions with limited threats to internal validity. This group is most supportive of the use of rigorous experimental research designs to generate the highest possible standards of evidence. Traditionally, this has been limited to a small set of rigid experimental designs with appropriate controls or comparison groups, driven in part by research funders and academic standards requiring that change be attributable to the improvement interventions. This set of designs has been expanding in the past few years as a better understanding of the value of quasi-experimental methods has emerged [9, 10].

QI interventions differ from many fixed clinical or public health interventions [11]. In this supplement, Ramaswamy and others describe QI interventions as complex (multi-pronged and context-specific) interventions in complex systems (non-linear pathways and emergent behaviors). For better learning from QI, implementers, policy makers and researchers all need to know not just effectiveness (the focus of local measurement, outcomes research and impact evaluation) but also how and why the change happened (implementation), its cost and its sustainability, ensuring that the evidence produced will be more relevant to stakeholders at the local and broader levels. Therefore, finding common ground through the ‘development of a culture of partnership’ [12] to co-identify appropriate methods and data collection to understand and disseminate implementation strategies is critical to informing how to create the different knowledge products: generalizable evidence for dissemination (researchers), insights into how to scale (policy makers) and how to sustain the improvements (implementers) [13]. A well-known and commonly cited example is the Surgical Safety Checklist, which was found to improve adherence to evidence-based practices and save lives across a range of settings [14]. However, attempts to replicate these successes were not always effective, since capturing generalizable knowledge on how to introduce and support the implementation of this intervention with fidelity was not part of the original research dissemination [15], a lesson understood by the original researchers and addressed through accompanying toolkits [16].

Another important area where collaboration between implementers and researchers is needed to improve learning from QI is in understanding the impact of different contextual factors, to identify which aspects of an improvement intervention are generalizable, which are context-specific and which are critical to address when planning replication. During the seminar, a study of antenatal corticosteroids (ANCS), an intervention found in higher-income settings to reduce death among premature infants, was discussed to identify how contextual factors can be better addressed through local knowledge to inform implementation [17]. The randomized controlled trial showed that implementation of ANCS in low-resource settings resulted in increased mortality among some of the infants who were given steroids; the published conclusion was that ANCS was not a recommended improvement intervention in these settings. The group identified that the translation of ANCS use from resource-richer settings did not consider the different contextual factors requiring adaptation, such as the lack of capacity to accurately determine prematurity, which is needed to establish eligibility for the steroids.

Aligning project goals and joint planning: Before QI projects are launched, the initial work must start with implementers and researchers discussing and agreeing on the goals and objectives of the work, including and beyond local improvement. In addition to alignment of improvement goals, all stakeholders must be engaged at the start of the QI project to agree on the purposes and uses of the results: local learning, broader dissemination, or both. This work needs to happen at the design phase and continue with ongoing planned communication throughout the work. This will ensure that all stakeholders are jointly engaged in identifying the most appropriate research questions and the most appropriate methods to answer them.

Choosing the right research design: The joint framing of goals and research questions can lead to the selection of evaluation and research designs at an appropriate, mutually agreed upon level of rigor, including the right research methodology for success [18]. This balancing of rigor versus flexibility, described in the meeting as a ‘bamboo scaffold that bends in the wind’, can only be accomplished when there is an open discussion of trade-offs between investments in data collection for research and data collection for demonstrating local improvements. Detailed documentation of implementation approaches is time consuming and resource intensive, and cannot be routinely expected for every project. On the other hand, some improvement in documentation as part of routine practice will benefit practitioners by providing important insights about local sustainability, and can be used by researchers to assess generalizability, attribution and scale.

The need to understand both process and context in the evaluation and study of QI interventions also cannot be met without engaging both researchers and practitioners in the process [13]. The knowledge about how the project was implemented, and what was relevant to the context, often resides with those responsible for implementation. However, as mentioned previously, the implementers often have neither the incentives nor the support to systematically document and disseminate this knowledge in a way that makes it available for general use. Researchers can play a key role in integrating QI and research by supporting systematic documentation of the implementation process in addition to an evaluation of outcomes, and by partnering with implementers to make this happen. The introduction of adaptive designs such as SMART trials into improvement research may also offer a common ground where improvement implementers and researchers can collaborate, introducing the use of data to make mid-course changes to the implementation design.

Building implementer research capacity: Building the capacity of implementers as potential producers, and better consumers, of research and evaluation results is another important approach to bridging the gap. For example, empowerment evaluation is designed to increase the likelihood that programs will achieve results by increasing the capacity of program stakeholders to plan, implement and evaluate their own program [19]. Building capacity within implementing organizations through technical support provided by researchers for interested implementers can establish a viable infrastructure for practitioners and researchers to work together more effectively. For example, multi-year research-practice partnerships in facilities in Kenya have led to sustainable QI programs with dissemination of methods and results through co-authored peer-reviewed articles and conference presentations [20]. Similar results were seen for research capacity building targeting implementers in the Africa Health Initiative in five countries in Africa [21]. Support for practice-based researchers to build their capacity in QI and in process evaluation using implementation science methods can also increase the potential of improvement projects to produce the knowledge needed about the implementation to spread learning within and beyond their organization.

Aligning incentives to drive collaboration: Creating areas of shared incentives will require initiatives from funders and universities to recognize the higher value of co-produced research, reward capacity building of researchers in the field, and fund innovative models of embedded research in which researchers are part of, or embedded into, the implementing organization [22]. In addition, offering opportunities for meaningful participation in research and building capacity for this work among implementers has also been associated with better improvement and dissemination [23].

Simplifying documentation for dissemination of learning: As mentioned earlier, it is useful for both implementers and researchers if documenting the implementation of QI programs becomes part of routine practice. However, this will not happen without simplifying documentation standards. The SQUIRE and TIDieR guidelines are very helpful for academic publications. However, they are not always a good fit for projects whose primary purpose is not research but which have the potential to add to the knowledge needed to improve QI [24, 25]. Researchers could partner with implementers to develop simpler, practice-based reporting guidelines and to create other venues, such as through existing organizations focused on quality and improvement, where methods and results could be posted using these guidelines without a formal peer-review process. Templates and examples could be provided to improve the quality of documentation, as well as editorial staff to assist with structure and formatting. The incentive for implementers is to get their stories told, while at the same time providing an opportunity for researchers to get data on where to focus further research. In addition, there are growing options to share knowledge and research findings, such as the WHO Global Learning Laboratory for Quality UHC, which provides a forum for implementers to disseminate work to the broader community [26].

Improving learning from, and the effectiveness of, QI work requires involvement and collaboration between researchers and practitioners. Researchers can advance the field by creating generalizable knowledge on the effectiveness of interventions and on implementation strategies, and practitioners can improve outcomes on the ground by implementing QI interventions. Increased collaboration, more systematic evaluations of interventions in local contexts, and better design of research will produce the generalizable knowledge needed to increase the impact of QI. For this to take place, there needs to be an intentional effort to address the gaps that keep researchers and practitioners from working together. This can occur by aligning incentives, increasing the value and utility of produced research to implementers, and, as a shared community, developing new guidance to bring these different groups into more effective collaboration. The growing experience in QI and improvement science offers many opportunities for better collaboration between researchers and implementers, increasing the value of this partnership in accelerating progress toward quality Universal Health Coverage and the Sustainable Development Goals.

M.D. received financial support from SGS to attend this seminar.

References

1. World Health Organization. Why quality UHC? http://www.who.int/servicedeliverysafety/areas/qhc/quality-uhc/en/ (5 January 2018, date last accessed).
2. Leatherman S, Ferris TG, Berwick D et al. The role of quality improvement in strengthening health systems in developing countries. Int J Qual Health Care 2010;22:237-43. http://www.ncbi.nlm.nih.gov/pubmed/20543209.
3. Victora CG, Requejo JH, Barros AJD et al. Countdown to 2015: a decade of tracking progress for maternal, newborn, and child survival. Lancet 2016;387:2049-59.
4. Kruk ME, Pate M, Mullan Z. Introducing the Lancet Global Health Commission on high-quality health systems in the SDG era. Lancet Glob Health 2017;5:e480-1.
5. Dixon-Woods M, Martin GP. Does quality improvement improve quality? Future Hosp J 2016;3:191-4.
6. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff 2005;24:138-50.
7. Salzburg Global Seminar. Better Health Care: How do we learn about improvement? Salzburg Global Seminar Series, Vol. 55, 2016. http://ovidsp.ovid.com/ovidweb.cgi?T=JS&NEWS=N&PAGE=fulltext&D=medl&AN=19346632
8. Tabak RG, Khoong EC, Chambers DA et al. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med 2012;43:337-50.
9. Parry GJ, Carson-Stevens A, Luff DF et al. Recommendations for evaluation of health care improvement initiatives. Acad Pediatr 2013;13:S23-30. http://dx.doi.org/10.1016/j.acap.2013.04.007.
10. Kairalla JA, Coffey CS, Thomann MA et al. Adaptive trial designs: a review of barriers and opportunities. Trials 2012;13:145.
11. Davidoff F. Improvement interventions are social treatments, not pills. Ann Intern Med 2014;161:526-7.
12. Marshall MN. Bridging the ivory towers and the swampy lowlands; increasing the impact of health services research on quality improvement. Int J Qual Health Care 2014;26:1-5. http://intqhc.oxfordjournals.org/cgi/doi/10.1093/intqhc/mzt076.
13. Vindrola-Padros C, Pape T, Utley M et al. The role of embedded research in quality improvement: a narrative review. BMJ Qual Saf 2017;26:70-80.
14. Haynes A, Weiser T, Berry W et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009;360:491-9.
15. Gillespie BM, Marshall A. Implementation of safety checklists in surgery: a realist synthesis of evidence. Implement Sci 2015;10:137.
16. World Health Organization. WHO surgical safety checklist and implementation manual. http://www.who.int/patientsafety/safesurgery/ss_checklist/en/ (2 October 2014, date last accessed).
17. Althabe F, Belizán JM, McClure EM et al. A population-based, multifaceted strategy to implement antenatal corticosteroid treatment versus standard care for the reduction of neonatal mortality due to preterm birth in low-income and middle-income countries: the ACT cluster-randomised trial. Lancet 2015;385:629-39. doi:10.1016/S0140-6736(14)61651-2.
18. Colquhoun H, Leeman J, Michie S et al. Towards a common terminology: a simplified framework of interventions to promote and integrate evidence into health practices, systems and policies. Implement Sci 2014;9:154.
19. Fetterman DM, Wandersman A. Empowerment evaluation. Eval Pract 1994;15:1-15.
20. Ramaswamy R, Rothschild C, Alabi F et al. Quality in practice: using value stream mapping to improve quality of care in low-resource facility settings. Int J Qual Health Care 2017;29:961-5.
21. Hedt-Gauthier BL, Chilengi R, Jackson E et al. Research capacity building integrated into PHIT projects: leveraging research and research funding to build national capacity. BMC Health Serv Res 2017;17:825. https://doi.org/10.1186/s12913-017-2657-6.
22. Ghaffar A, Langlois E, Rasanathan K et al. Strengthening health systems through embedded research. Bull World Health Organ 2017;95:87.
23. Sherr K, Requejo JH, Basinga P. Implementation research to catalyze advances in health systems strengthening in sub-Saharan Africa: the African Health Initiative. BMC Health Serv Res 2013;13:S1.
24. Revised Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0) guidelines. http://squire-statement.org/index.cfm?fuseaction=Page.ViewPage&PageID=471 (24 December 2017, date last accessed).
25. Hoffmann TC, Glasziou PP, Boutron I et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014;348:1-12.
26. World Health Organization. WHO Global Learning Laboratory for Quality UHC. http://www.who.int/servicedeliverysafety/areas/qhc/gll/en/ (15 December 2017, date last accessed).

Keywords: quality improvement; sustainable development.

Conclusion and suggestions for further research

  • First Online: 06 September 2019


  • Maximilian Schosser

Part of the book series: Schriftenreihe der HHL Leipzig Graduate School of Management ((SHL))


This final chapter draws conclusions for the four research questions (sections 8.1.1 to 8.1.4) and provides general insights from across the study (section 8.1.5). The contribution to the scientific body of knowledge is summarized in section 8.1.6, followed by the second sub-chapter (8.2), which suggests methodological enhancements (section 8.2.1) and content extensions (section 8.2.2) for future research.


Author information

Authors and affiliations

HHL Leipzig Graduate School of Management, Heinz-Nixdorf Chair of IT-based Logistics, Leipzig, Germany

Maximilian Schosser


Corresponding author

Correspondence to Maximilian Schosser.


Copyright information

© 2020 Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter

Schosser, M. (2020). Conclusion and suggestions for further research. In: Big Data to Improve Strategic Network Planning in Airlines. Schriftenreihe der HHL Leipzig Graduate School of Management. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-27582-2_8


DOI: https://doi.org/10.1007/978-3-658-27582-2_8

Published: 06 September 2019

Publisher Name: Springer Gabler, Wiesbaden

Print ISBN: 978-3-658-27581-5

Online ISBN: 978-3-658-27582-2

eBook Packages: Business and Management, Business and Management (R0)


Quality improvement into practice


  • Adam Backhouse, quality improvement programme lead 1
  • Fatai Ogunlayi, public health specialty registrar 2
  • 1 North London Partners in Health and Care, Islington CCG, London N1 1TH, UK
  • 2 Institute of Applied Health Research, Public Health, University of Birmingham, B15 2TT, UK
  • Correspondence to: A Backhouse adam.backhouse{at}nhs.net

What you need to know

Thinking of quality improvement (QI) as a principle-based approach to change provides greater clarity about (a) the contribution QI offers to staff and patients, (b) how to differentiate it from other approaches, and (c) the benefits of using QI together with other change approaches

QI is not a silver bullet for all changes required in healthcare: it has great potential to be used together with other change approaches, either concurrently (using audit to inform iterative tests of change) or consecutively (using QI to adapt published research to local context)

As QI becomes established, opportunities for these collaborations will grow, to the benefit of patients.

The benefits to front line clinicians of participating in quality improvement (QI) activity are promoted in many health systems. QI can represent a valuable opportunity for individuals to be involved in leading and delivering change, from improving individual patient care to transforming services across complex health and care systems. 1

However, it is not clear that this promotion of QI has created greater understanding of QI or widespread adoption. QI largely remains an activity undertaken by experts and early adopters, often in isolation from their peers. 2 There is a danger of a widening gap between this group and the majority of healthcare professionals.

This article will make it easier for those new to QI to understand what it is, where it fits with other approaches to improving care (such as audit or research), and when best to use a QI approach, so that they can appreciate the relevance and usefulness of QI in delivering better outcomes for patients.

How this article was made

AB and FO are both specialist quality improvement practitioners and have developed their expertise working in QI roles for a variety of UK healthcare organisations. The analysis presented here arose from AB and FO’s observations of the challenges faced when introducing QI, with healthcare providers often unable to distinguish between QI and other change approaches, making it difficult to understand what QI can do for them.

How is quality improvement defined?

There are many definitions of QI (box 1). The BMJ's Quality Improvement series uses the Academy of Medical Royal Colleges definition. 6 Rather than viewing QI as a single method or set of tools, it can be more helpful to think of QI as based on a set of principles common to many of these definitions: a systematic continuous approach that aims to solve problems in healthcare, improve service provision, and ultimately provide better outcomes for patients.

Definitions of quality improvement

Improvement in patient outcomes, system performance, and professional development that results from a combined, multidisciplinary approach in how change is delivered. 3

The delivery of healthcare with improved outcomes and lower cost through continuous redesigning of work processes and systems. 4

Using a systematic change method and strategies to improve patient experience and outcome. 5

To make a difference to patients by improving safety, effectiveness, and experience of care by using understanding of our complex healthcare environment, applying a systematic approach, and designing, testing, and implementing changes using real time measurement for improvement. 6

In this article we discuss QI as an approach to improving healthcare that follows the principles outlined in box 2; this may be a useful reference to consider how particular methods or tools could be used as part of a QI approach.

Principles of QI

Primary intent— To bring about measurable improvement to a specific aspect of healthcare delivery, often with evidence or theory of what might work but requiring local iterative testing to find the best solution. 7

Employing an iterative process of testing change ideas— Adopting a theory of change which emphasises a continuous process of planning and testing changes, studying and learning from comparing the results to a predicted outcome, and adapting hypotheses in response to results of previous tests. 8 9

Consistent use of an agreed methodology— Many different QI methodologies are available; commonly cited methodologies include the Model for Improvement, Lean, Six Sigma, and Experience-based Co-design. 4 Systematic review shows that the choice of tools or methodologies has little impact on the success of QI provided that the chosen methodology is followed consistently. 10 Though there is no formal agreement on what constitutes a QI tool, it would include activities such as process mapping that can be used within a range of QI methodological approaches. NHS Scotland’s Quality Improvement Hub has a glossary of commonly used tools in QI. 11

Empowerment of front line staff and service users— QI work should engage staff and patients by providing them with the opportunity and skills to contribute to improvement work. Recognition of this need often manifests in drives from senior leadership or management to build QI capability in healthcare organisations, but it also requires that frontline staff and service users feel able to make use of these skills and take ownership of improvement work. 12

Using data to drive improvement— To drive decision making by measuring the impact of tests of change over time and understanding variation in processes and outcomes. Measurement for improvement typically prioritises this narrative approach over concerns around exactness and completeness of data. 13 14 A minimal illustrative sketch of this kind of measurement appears after these principles.

Scale-up and spread, with adaptation to context— As interventions tested using a QI approach are scaled up and the degree of belief in their efficacy increases, it is desirable that they spread outward and be adopted by others. Key to successful diffusion of improvement is the adaption of interventions to new environments, patient and staff groups, available resources, and even personal preferences of healthcare providers in surrounding areas, again using an iterative testing approach. 15 16
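As a concrete illustration of using data to understand variation (see the note in the "Using data to drive improvement" principle above), the sketch below applies one commonly used run-chart heuristic: a shift is flagged when six or more consecutive points fall on the same side of the baseline median. The data, the function name, and the default threshold are assumptions for illustration only, not part of the article.

```python
# Hypothetical sketch of a simple run-chart "shift" rule: six or more consecutive
# points on the same side of the baseline median suggest non-random change.
from statistics import median

def detect_shift(baseline, follow_up, run_length=6):
    """Return True if follow_up contains >= run_length consecutive points
    strictly above or strictly below the median of the baseline data."""
    centre = median(baseline)
    run, last_side = 0, 0
    for value in follow_up:
        side = (value > centre) - (value < centre)  # +1 above, -1 below, 0 on the median
        run = run + 1 if (side != 0 and side == last_side) else (1 if side != 0 else 0)
        last_side = side
        if run >= run_length:
            return True
    return False

# Hypothetical example: post-change weeks sit consistently below the baseline median.
baseline = [62, 58, 65, 60, 63, 59, 61, 64]
follow_up = [55, 52, 50, 48, 51, 47, 49, 46]
print(detect_shift(baseline, follow_up))  # True
```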

What other approaches to improving healthcare are there?

Taking considered action to change healthcare for the better is not new, but QI as a distinct approach to improving healthcare is a relatively recent development. There are many well established approaches to evaluating and making changes to healthcare services in use, and QI will only be adopted more widely if it offers a new perspective or an advantage over other approaches in certain situations.

A non-systematic literature scan identified the following other approaches for making change in healthcare: research, clinical audit, service evaluation, and clinical transformation. We also identified innovation as an important catalyst for change, but we did not consider it an approach to evaluating and changing healthcare services so much as a catch-all term for describing the development and introduction of new ideas into the system. A summary of the different approaches and their definitions is shown in box 3. Many have elements in common with QI, but there are important differences in both intent and application. To be useful to clinicians and managers, QI must find a role within healthcare that complements research, audit, service evaluation, and clinical transformation while retaining the core principles that differentiate it from these approaches.

Alternatives to QI

Research— The attempt to derive generalisable new knowledge by addressing clearly defined questions with systematic and rigorous methods. 17

Clinical audit— A way to find out if healthcare is being provided in line with standards and to let care providers and patients know where their service is doing well, and where there could be improvements. 18

Service evaluation— A process of investigating the effectiveness or efficiency of a service with the purpose of generating information for local decision making about the service. 19

Clinical transformation— An umbrella term for more radical approaches to change; a deliberate, planned process to make dramatic and irreversible changes to how care is delivered. 20

Innovation— To develop and deliver new or improved health policies, systems, products and technologies, and services and delivery methods that improve people’s health. Health innovation responds to unmet needs by employing new ways of thinking and working. 21

Why do we need to make this distinction for QI to succeed?

Improvement in healthcare is 20% technical and 80% human. 22 Essential to that 80% is clear communication, clarity of approach, and a common language. Without this shared understanding of QI as a distinct approach to change, QI work risks straying from the core principles outlined above, making it less likely to succeed. If practitioners cannot communicate clearly with their colleagues about the key principles and differences of a QI approach, there will be mismatched expectations about what QI is and how it is used, lowering the chance that QI work will be effective in improving outcomes for patients. 23

There is also a risk that the language of QI is adopted to describe change efforts regardless of their fidelity to a QI approach, either due to a lack of understanding of QI or a lack of intention to carry it out consistently. 9 Poor fidelity to the core principles of QI reduces its effectiveness and makes its desired outcome less likely, leading to wasted effort by participants and decreasing its credibility. 2 8 24 This in turn further widens the gap between advocates of QI and those inclined to scepticism, and may lead to missed opportunities to use QI more widely, consequently leading to variation in the quality of patient care.

Without articulating the differences between QI and other approaches, there is a risk of not being able to identify where a QI approach can best add value. Conversely, we might be tempted to see QI as a “silver bullet” for every healthcare challenge when a different approach may be more effective. In reality it is not clear that QI will be fit for purpose in tackling all of the wicked problems of healthcare delivery and we must be able to identify the right tool for the job in each situation. 25 Finally, while different approaches will be better suited to different types of challenge, not having a clear understanding of how approaches differ and complement each other may mean missed opportunities for multi-pronged approaches to improving care.

What is the relationship between QI and other approaches such as audit?

Academic journals, healthcare providers, and “arms-length bodies” have made various attempts to distinguish between the different approaches to improving healthcare. 19 26 27 28 However, most comparisons do not include QI or compare QI to only one or two of the other approaches. 7 29 30 31 To make it easier for people to use QI approaches effectively and appropriately, we summarise the similarities, differences, and crossover between QI and other approaches to tackling healthcare challenges ( fig 1 ).

Fig 1: How quality improvement interacts with other approaches to improving healthcare

QI and research

Research aims to generate new generalisable knowledge, while QI typically involves a combination of generating new knowledge or implementing existing knowledge within a specific setting. 32 Unlike research, including pragmatic research designed to test effectiveness of interventions in real life, QI does not aim to provide generalisable knowledge. In common with QI, research requires a consistent methodology. This method is typically used, however, to prove or disprove a fixed hypothesis rather than the adaptive hypotheses developed through the iterative testing of ideas typical of QI. Both research and QI are interested in the environment where work is conducted, though with different intentions: research aims to eliminate or at least reduce the impact of many variables to create generalisable knowledge, whereas QI seeks to understand what works best in a given context. The rigour of data collection and analysis required for research is much higher; in QI a criterion of “good enough” is often applied.

Relationship with QI

Though the goal of clinical research is to develop new knowledge that will lead to changes in practice, much has been written on the lag time between publication of research evidence and system-wide adoption, leading to delays in patients benefitting from new treatments or interventions. 33 QI offers a way to iteratively test the conditions required to adapt published research findings to the local context of individual healthcare providers, generating new knowledge in the process. Areas with little existing knowledge requiring further research may be identified during improvement activities, which in turn can form research questions for further study. QI and research also intersect in the field of improvement science, the academic study of QI methods which seeks to ensure QI is carried out as effectively as possible. 34

Scenario: QI for translational research

Newly published research shows that a particular physiotherapy intervention is more clinically effective when delivered in short, twice-daily bursts rather than longer, less frequent sessions. A team of hospital physiotherapists wish to implement the change but are unclear how they will manage the shift in workload and how they should introduce this potentially disruptive change to staff and to patients.

Before continuing reading think about your own practice— How would you approach this situation, and how would you use the QI principles described in this article?

Adopting a QI approach, the team realise that, although the change they want to make is already determined, the way in which it is introduced and adapted to their wards is for them to decide. They take time to explain the benefits of the change to colleagues and their current patients, and ask patients how they would best like to receive their extra physiotherapy sessions.

The change is planned and tested for two weeks with one physiotherapist working with a small number of patients. Data are collected each day, including reasons why sessions were missed or refused. The team review the data each day and make iterative changes to the physiotherapist’s schedule, and to the times of day the sessions are offered to patients. Once an improvement is seen, this new way of working is scaled up to all of the patients on the ward.

The findings of the work are fed into a service evaluation of physiotherapy provision across the hospital, which uses the findings of the QI work to make recommendations about how physiotherapy provision should be structured in the future. People feel more positive about the change because they know colleagues who have already made it work in practice.

QI and clinical audit

Clinical audit is closely related to QI: it is often used with the intention of iteratively improving the standard of healthcare, albeit in relation to a pre-determined standard of best practice. 35 When used iteratively, interspersed with improvement action, the clinical audit cycle adheres to many of the principles of QI. However, in practice clinical audit is often used by healthcare organisations as an assurance function, making it less likely to be carried out with a focus on empowering staff and service users to make changes to practice. 36 Furthermore, academic reviews of audit programmes have shown audit to be an ineffective approach to improving quality due to a focus on data collection and analysis without a well developed approach to the action section of the audit cycle. 37 Clinical audits, such as the National Clinical Audit Programme in the UK (NCAPOP), often focus on the management of specific clinical conditions. QI can focus on any part of service delivery and can take a more cross-cutting view which may identify issues and solutions that benefit multiple patient groups and pathways. 30

Audit is often the first step in a QI process and is used to identify improvement opportunities, particularly where compliance with known standards for high quality patient care needs to be improved. Audit can be used to establish a baseline and to analyse the impact of tests of change against the baseline. Also, once an improvement project is under way, audit may form part of rapid cycle evaluation, during the iterative testing phase, to understand the impact of the idea being tested. Regular clinical audit may be a useful assurance tool to help track whether improvements have been sustained over time.
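As a hypothetical sketch of using audit to establish a baseline and assess change against it, the snippet below compares compliance with a standard at a baseline audit and at a re-audit after tests of change. The counts and the helper function are illustrative assumptions, not from the article.

```python
# Hypothetical sketch: summarising compliance with a standard at a baseline audit
# and at a re-audit after iterative tests of change. All counts are illustrative.

def compliance(met, audited):
    """Proportion of audited cases that met the standard."""
    return met / audited

baseline = compliance(met=34, audited=60)   # baseline audit: ~56.7% compliant
re_audit = compliance(met=52, audited=58)   # re-audit after changes: ~89.7% compliant

print(f"baseline {baseline:.1%}, re-audit {re_audit:.1%}, "
      f"absolute change {re_audit - baseline:+.1%}")
```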

Scenario: Audit and QI

A foundation year 2 (FY2) doctor is asked to complete an audit of a pre-surgical pathway by looking retrospectively through patient documentation. She concludes that adherence to best practice is mixed and recommends: “Remind the team of the importance of being thorough in this respect and re-audit in 6 months.” The results are presented at an audit meeting, but a re-audit a year later by a new FY2 doctor shows similar results.

Before reading further, think about your own practice: how would you approach this situation, and how would you use the QI principles described in this article?

Contrast the above with a team-led, rapid cycle audit in which everyone contributes to collecting and reviewing data from the previous week, discussed at a regular team meeting. Though surgical patients are often transient, their experience of care and ideas for improvement are captured during discharge conversations. The team identify and test several iterative changes to care processes. They document and test these changes between audits, leading to sustainable change. Some of the surgeons involved work across multiple hospitals, and spread some of the improvements, with the audit tool, as they go.

QI and service evaluation

In practice, service evaluation is not subject to the same rigorous definition or governance as research or clinical audit, meaning that there are inconsistencies in the methodology for carrying it out. While the primary intent for QI is to make change that will drive improvement, the primary intent for evaluation is to assess the performance of current patient care. 38 Service evaluation may be carried out proactively to assess a service against its stated aims or to review the quality of patient care, or may be commissioned in response to serious patient harm or red flags about service performance. The purpose of service evaluation is to help local decision makers determine whether a service is fit for purpose and, if necessary, identify areas for improvement.

Service evaluation may be used to initiate QI activity by identifying opportunities for change that would benefit from a QI approach. It may also evaluate the impact of changes made using QI, either during the work or after completion to assess sustainability of improvements made. Though likely planned as separate activities, service evaluation and QI may overlap and inform each other as they both develop. Service evaluation may also make a judgment about a service’s readiness for change and identify any barriers to, or prerequisites for, carrying out QI.

QI and clinical transformation

Clinical transformation involves radical, dramatic, and irreversible change—the sort of change that cannot be achieved through continuous improvement alone. As with service evaluation, there is no consensus on what clinical transformation entails, and it may be best thought of as an umbrella term for the large scale reform or redesign of clinical services and the non-clinical services that support them. 20 39 While it is possible to carry out transformation activity that uses elements of a QI approach, such as effective engagement of the staff and patients involved, QI, which rests on iterative tests of change, cannot itself deliver a transformational approach—that is, one-off, irreversible change.

There is opportunity to use QI to identify and test ideas before full-scale clinical transformation is implemented. This has the benefit of engaging staff and patients in the clinical transformation process and increasing the degree of belief that clinical transformation will be effective or beneficial. Transformation activity, once completed, could be followed up with QI activity to drive continuous improvement of the new process or allow adaptation of new ways of working. As interventions made using QI are scaled up and spread, the line between QI and transformation may seem to blur. The shift from QI to transformation occurs when the intention of the work shifts away from continuous testing and adaptation into the wholesale implementation of an agreed solution.

Scenario: QI and clinical transformation

An NHS trust’s human resources (HR) team is struggling to manage its junior doctor placements, rotas, and on-call duties, which is causing tension and has led to concern about medical cover and patient safety out of hours. A neighbouring trust has launched a smartphone app that supports clinicians and HR colleagues to manage these processes, with great success.

This problem feels ripe for a transformation approach—to launch the app across the trust, confident that it will solve the trust’s problems.

Before reading further, think about your own organisation: what do you think will happen, and how would you use the QI principles described in this article in this situation?

Outcome without QI

Unfortunately, the HR team haven’t taken the time to understand the underlying problems with their current system, which revolve around poor communication and a lack of clarity from the HR team: staff don’t know whom to contact, and their questions go unanswered. HR assume that because the app has been a success elsewhere, it will work here as well.

People get excited about the new app and the benefits it will bring, but no consideration is given to the processes and relationships that need to be in place to make it work. The app is launched with a high profile campaign and adoption is high, but the same issues continue. The HR team are confused as to why things didn’t work.

Outcome with QI

Although the app has worked elsewhere, rolling it out without adapting it to local context is a risk – one which application of QI principles can mitigate.

HR pilot the app in a volunteer specialty after spending time speaking to clinicians to better understand their needs. They carry out several tests of change, ironing out issues with the process as they go, using issues logged and clinician feedback as a source of data. When they are confident the app works for them, they expand to a directorate, then a division, and finally take the transformational step of an organisation-wide rollout.

Education into practice

Next time you are faced with what looks like a quality improvement (QI) opportunity, consider asking:

How do you know that QI is the best approach to this situation? What else might be appropriate?

Have you considered how to ensure you implement QI according to the principles described above?

Is there opportunity to use other approaches in tandem with QI for a more effective result?

How patients were involved in the creation of this article

This article was conceived and developed in response to conversations with clinicians and patients working together on co-produced quality improvement and research projects in a large UK hospital. The first iteration of the article was reviewed by an expert patient, and, in response to their feedback, we have sought to make clearer the link between understanding the issues raised and better patient care.

Contributors: This work was initially conceived by AB. AB and FO were responsible for the research and drafting of the article. AB is the guarantor of the article.

Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.

Provenance and peer review: This article is part of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ , including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ ’s quality improvement editor post are funded by the Health Foundation.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .


How to study improvement interventions: a brief overview of possible study types

  • Margareth Crisóstomo Portela 1 , 2 ,
  • Peter J Pronovost 3 ,
  • Thomas Woodcock 4 ,
  • Pam Carter 1 ,
  • Mary Dixon-Woods 1
  • 1 Social Science Applied to Healthcare Research (SAPPHIRE) Group, Department of Health Sciences , School of Medicine, University of Leicester , Leicester , UK
  • 2 Department of Health Administration and Planning , National School of Public Health, Oswaldo Cruz Foundation , Rio de Janeiro, RJ , Brazil
  • 3 Departments of Anesthesiology, Critical Care Medicine, and Surgery , Armstrong Institute for Patient Safety and Quality, School of Medicine, and Bloomberg School of Public Health, Johns Hopkins University , Baltimore, Maryland , USA
  • 4 NIHR CLAHRC for Northwest London, Imperial College London, Chelsea and Westminster Hospital , London , UK
  • Correspondence to Dr Margareth C Portela, Departamento de Administração e Planejamento em Saúde, Escola Nacional de Saúde Pública, Fundação Oswaldo Cruz, Rua Leopoldo Bulhões 1480, sala 724—Manguinhos, Rio de Janeiro, RJ 21041-210, Brazil; mportela{at}ensp.fiocruz.br

Improvement (defined broadly as purposive efforts to secure positive change) has become an increasingly important activity and field of inquiry within healthcare. This article offers an overview of possible methods for the study of improvement interventions. The choice of available designs is wide, but debates continue about how far improvement efforts can be simultaneously practical (aimed at producing change) and scientific (aimed at producing new knowledge), and whether the distinction between the practical and the scientific is a real and useful one. Quality improvement projects tend to be applied and, in some senses, self-evaluating. They are not necessarily directed at generating new knowledge, but reports of such projects if well conducted and cautious in their inferences may be of considerable value. They can be distinguished heuristically from research studies, which are motivated by and set out explicitly to test a hypothesis, or otherwise generate new knowledge, and from formal evaluations of improvement projects. We discuss variants of trial designs, quasi-experimental designs, systematic reviews, programme evaluations, process evaluations, qualitative studies, and economic evaluations. We note that designs that are better suited to the evaluation of clearly defined and static interventions may be adopted without giving sufficient attention to the challenges associated with the dynamic nature of improvement interventions and their interactions with contextual factors. Reconciling pragmatism and research rigour is highly desirable in the study of improvement. Trade-offs need to be made wisely, taking into account the objectives involved and inferences to be made.

  • Statistical process control
  • Social sciences
  • Quality improvement methodologies
  • Health services research
  • Evaluation methodology

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/

https://doi.org/10.1136/bmjqs-2014-003620


Introduction

Literature search strategies employed.

Search in institutional sites:

The Health Foundation (http://www.health.org.uk)

Institute of Healthcare Improvement (http://www.ihi.org)

Improvement Science Research Network (http://www.isrn.net)

Bibliographic search in PubMed (articles published in English from 2005):

Based on terms:

‘improvement science’; ‘implementation science’; ‘translational research’; ‘science of quality improvement’; ‘quality improvement research’; ‘improvement science and context’; ‘improvement science and theories’; ‘healthcare quality improvement interventions’; ‘designing and evaluating complex interventions’; ‘quality improvement evaluation’; ‘improvement science methods’; ‘implementation science methods’; ‘healthcare quality improvement intervention clinical trials’; ‘healthcare quality improvement intervention effectiveness’; ‘healthcare quality improvement intervention observational studies’; ‘healthcare quality improvement intervention economic evaluations’; ‘healthcare quality improvement intervention cost-effectiveness’; ‘healthcare quality improvement intervention literature reviews’; ‘healthcare quality improvement intervention sustainability’.

Based on authors with extensive production in the field

References identified in the papers selected based on the other strategies, independently of their date.

Studying improvement in healthcare

We begin by noting that a significant body of work in the area of improvement has taken the form of editorial commentary, narrative review, or philosophical analysis rather than empirical studies. 4–8 It has sought, among other things, to lay out a manifesto (or manifestos) for what improvement efforts might achieve, and to produce operational definitions of key terms within the field, such as those relating to quality improvement, 7 complex interventions, 9–11 context, 12–14 and so on. An overlapping corpus of work is dedicated to developing the theoretical base for studies of improvement, including organisational, innovation, social and behavioural theories, 15–20 as well as the mechanisms of change associated with quality improvement interventions. 12 , 14 , 21–32 A small but important stream of work focuses on developing and testing tools to be used as part of improvement efforts, such as measurement instruments or analytical frameworks for characterisation of contexts, assessment of the impact of interventions, 33 or determination of organisational readiness for knowledge translation. 34

These pieces of literature make clear that the study of improvement interventions is currently an emergent field characterised by debate and diversity. One example of this is the use of the term improvement science which, though widely employed, is subject to multiple understandings and uses. 35 The term is often appropriated to refer to the methods associated with Edwards Deming, 36 including techniques, such as Plan-Do-Study-Act (PDSA) cycles and use of statistical process control (SPC) methods, 37 , 38 but that is not its only meaning. The science of improvement can also be used to refer to a broad church of research grounded in health services research, social science, evaluation studies and psychology and other disciplines. Here, Deming's methods and other established techniques for pursuing improvement may be treated as objects for inquiry, not as necessarily generating scientific knowledge in their own right. 39 A rich social science literature is now beginning to emerge that offers important critiques of modes of improvement, including their ideological foundations 40 , 41 and social, ethical, professional and organisational implications, 42 but this work is not the primary focus of this review. Instead, we offer an overview of some of the available study designs, illustrated with examples in table 1 .

Table 1 Principles, strengths, weaknesses and opportunities for study designs for improvement interventions

In exploring further how improvement efforts might be studied, it is useful to distinguish, albeit heuristically, between quality improvement projects, where the primary goal is securing change, and other types of studies, where the primary goal is directed at evaluation and scientific advance ( table 1 ). Of course, the practical and the scientific are not necessarily opposites nor in conflict with each other, and sometimes the line dividing them is blurry. Many studies will have more than one aim: quality improvement projects may seek to determine whether something ‘works’, and effectiveness studies may also be interested in producing improvement. The differences lie largely in the primary motives, aims and choice of designs.

Quality improvement projects

A defining characteristic of quality improvement projects is that they are established primarily (though not necessarily exclusively) as improvement activities rather than research directed towards generating new knowledge: their principal aim and motive is to secure positive change in an identified service. Such projects are typically focused on a well-defined problem, are oriented towards a focused aim, and are highly practical and often, though not exclusively, local in character.

Many, though by no means all, quality improvement projects use process improvement techniques adapted from industry, such as Lean, Six Sigma and so on. Such projects are often based on incremental, cyclically implemented changes 4 with PDSA cycles a particularly popular technique. PDSA aims to select, implement, test and adjust a candidate intervention 4 , 43 , 44 to identify what works in a local context, allow interventions that do not work to be discarded, and to enable those that appear promising to be optimised and customised. The interventions themselves may be based on a range of inputs (eg, the available evidence base, clinical experience and knowledge of local context). Interventions derived from PDSA cycles can, in principle, be tested in different settings in order to produce knowledge about implementation and outcomes beyond the context of origin. 7

In a typical quality improvement project (including those based on PDSA), measurement and monitoring of the target of change is a key activity, thus enabling quality improvement (QI) projects, if properly conducted, to be self-evaluating in some sense. SPC is often the method of choice for analysis of data in quality improvement work. 45 SPC maps variations over time, 46 seeking to combine ‘the power of statistical significance tests with chronological analysis of graphs of summary data as they are produced’. 47 It is usually designed into an improvement effort prospectively, but can also be used retrospectively to evaluate time-series data for evidence of change over time.

SPC, in brief, comprises an approach to measurement in improvement initiatives as well as a set of statistical tools (control charts, run charts, frequency plots and so on) to analyse and interpret data with a view to taking action. It is especially well-suited to dealing with the dynamic, iteratively evolving nature of improvement work, in contrast with methods more oriented towards statistical hypothesis-testing relating to clearly defined and bounded interventions. It recognises that many clinical and organisational processes are characterised by some inherent random variation, and, in the context of an improvement initiative, it seeks to identify whether any observed change is due to this inherent variation (known as ‘common-cause variation’) or something different (such as the intervention, and known as ‘special-cause variation’).

Among the tools, control charts are popular for picturing the data trend and providing explicit criteria for making decisions about common-cause and special-cause variations. Different types of control charts are constructed based on different statistical distributions to account for different types of data, 48 , 49 but in their simplest form they plot the values of a variable of interest from measurements made regularly over time, and are typically annotated to show when various events occurred (such as the baseline period and the introduction of an intervention). They include a horizontal line showing the average of a measure over particular periods of time. Control limits, lower and upper, are set usually at ±3 SDs of the distribution the data is assumed to follow. Attention is then given to determining whether values outside the control limit indicate (with very small probability of error) that a change has occurred in the system, 47 , 50 , 51 using ‘rules’ that allow detection of deviations in the measure that are unlikely to be due to normal variation. For example, baseline measurement may show that the time between prescription and dispensing medicines to take home demonstrates inherent variability that can be described as ‘common cause’; it is the normal level of variability in the process. When a rule is broken (indicating that a deviation has occurred) an investigation may reveal the underlying special cause. For example, the special cause might be the introduction of an intervention (such as staff training) that appears to be implicated in improvement or deterioration. If no rules are broken, the system is said to be in statistical control: only common-cause variation is being exhibited.
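To make the control-chart logic above concrete, the short Python sketch below uses made-up dispensing-time data (minutes from prescription to dispensing) and applies only the simplest rule, a single point beyond the 3 SD limits. It is a minimal illustration, not a substitute for proper SPC software: among other simplifications, it estimates sigma from the baseline standard deviation, whereas individuals (XmR) charts normally use the average moving range.

# Minimal sketch of control-limit logic with hypothetical data.
# Simplifications: sigma taken as the baseline standard deviation (real XmR
# charts use the average moving range), and only the "one point beyond 3 SD"
# rule is checked; real SPC practice uses several run rules and chart types.
from statistics import mean, stdev

baseline = [42, 38, 45, 40, 44, 39, 41, 43, 40, 42, 38, 44]   # before the change
post_change = [41, 39, 37, 30, 28, 27, 29, 26, 28, 27]        # after staff training

centre = mean(baseline)
sigma = stdev(baseline)
upper, lower = centre + 3 * sigma, centre - 3 * sigma
print(f"centre line {centre:.1f} min, control limits [{lower:.1f}, {upper:.1f}]")

for day, value in enumerate(post_change, start=1):
    if value < lower or value > upper:
        print(f"day {day}: {value} min -> outside limits (possible special cause)")
    else:
        print(f"day {day}: {value} min -> within limits (common-cause variation)")

In this toy example the later, lower dispensing times fall below the lower control limit, prompting an investigation into whether the training (a special cause) explains the shift.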

Guidance on the number of data points required is available, including the minimum number of events as a function of average process performance, as well as on the types of control charts needed to deal with infrequent events, and on the construction and interpretation of rules and rule breaks. 45 , 49 This is important, because care has to be taken to ensure that a sufficient number of data points are available for proper analysis, and that the correct rules are used: a control chart with 25 time points using 3SD control limits has an overall false positive probability of 6.5%. 47 A control chart with too few data points may incur a type I error, suggesting that an intervention produced an effect on the system when it did not. Type II errors, where it is mistakenly concluded that no improvement has occurred, are also possible. Care is also needed in using SPC across multiple sites, where there may be a need for adjusting for differences among sites (requiring more formal time-series analysis), and in the selection of baseline and postintervention time periods: this should not be done arbitrarily or post hoc, as it substantially increases the risk of bias.
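The 6.5% figure quoted above can be checked directly, assuming the 25 in-control points are independent and normally distributed: the chance of any single point falling outside ±3 SD is about 0.27%, so the chance that at least one of 25 points does so is 1 − (1 − 0.0027)^25. A quick check:

# Overall false-positive risk of the "one point beyond 3 SD" rule over
# 25 independent, normally distributed in-control points.
p_single = 0.0027                      # P(point outside +/- 3 SD) for a normal distribution
p_overall = 1 - (1 - p_single) ** 25
print(round(p_overall, 3))             # ~0.065, i.e. about 6.5%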

Attribution of any changes seen to the intervention may be further complicated by factors other than the intervention that may interfere with the system under study and disrupt the pattern of data behaviour. Qualitative or quantitative investigations may be needed to enable understanding of the system under study. Qualitative inquiry may be especially valuable in adding to the understanding of the mechanisms of change, and identifying the reasons why particular interventions did or did not work. 52

Quality improvement projects may be published as quality improvement reports. These reports are a distinctive form of publication, taking a different form and structure from most research reports in the biomedical literature and guided by their own set of publication guidelines. 53 QI reports provide evidence of the potential of quality improvement projects to produce valuable results in practice, particularly in local settings. 54–58 They may be especially useful in providing ‘proof of concept’ that can then be tested in larger studies or replicated in new settings. However, quality improvement projects, and their reports, are not unproblematic. Despite their popularity, the fidelity and quality of reporting of PDSA cycles remain problematic, 59 and the quality of measurement and interpretation of data in quality improvement projects is often strikingly poor. Further, the claims made for improvement are sometimes far stronger than is warranted: 60 control charts and run charts are designed not to assume a sample from a fixed population, but rather a measurement of a constantly changing cause system. It is this property that makes them well suited to evaluation of improvement initiatives, 38 but caution is needed in treating the outputs of quality improvement projects as generalisable new knowledge. 2 , 35 , 44

A further limitation is that many improvement projects tend to demonstrate relatively little concern with the theoretical base for prediction and explanation of the mechanisms of change involved in the interventions. Theories of change in quality improvement reports are often represented in fairly etiolated form, for example, as logic models or driver diagrams that do not make clear the underlying mechanisms. The lack of understanding of what makes change happen is a major challenge to learning and replication. 61

Evaluative studies

Evaluative studies can be distinguished from quality improvement projects by their characteristic study designs and their explicit orientation towards evaluation rather than improvement alone. Some are conceived from the outset as research projects: they are motivated by and set out explicitly to test a hypothesis or otherwise generate new knowledge. Other studies are evaluations of improvement projects where the study is effectively ‘wrapped around’ the improvement project, perhaps commissioned by the funder of the improvement project and undertaken by evaluators who are external to and independent of the project. 62 These two categories of evaluative projects are, of course, not hard and fast, but they often constrain which kind of study design can be selected. The available designs vary in terms of their goals, their claims to internal and external validity, and the ease with which they are feasible to execute given the stubborn realities of inner and outer contexts of healthcare.

Randomised controlled trials (RCTs) randomly allocate participants to intervention and control groups, which are then treated identically apart from the intervention. Valued for their potential ability to allow for direct inferences about causality, trials in the area of improvement are typically pragmatic in character, since the interventions are generally undertaken in ‘real world’ service settings. RCTs may be especially suitable whenever interventions are being considered for widespread use based on their face validity and early or preliminary evidence. 63 For improvement work, they are often costly and not always necessary, but they remain highly relevant to quality improvement for their ability, through randomisation, to deal with the effects on the outcomes of important unknown confounders related to patients, providers and organisations. 64 They may be especially important when being wrong about the effectiveness of an intervention that is likely to be widely deployed or mandated would be highly consequential, either because of the cost or the possible impact on patients.

RCTs are, of course, rarely straightforward to design and implement, 65–68 and features of trials that may be critical in the context of medicinal products, such as randomisation and single or double blinding, may be either impractical or irrelevant when intervening in health service delivery, while others, such as blinding of assessors, will remain essential. RCTs in health services also encounter problems with contamination within and between institutions, and with persuading sites to take part or to engage in randomisation, especially if they have strong previous beliefs about the intervention. Though some of these problems can be dealt with through study design, they remain non-trivial.

Cluster randomised trials have been advocated by some as an alternative to the classical RCT design for studying improvement interventions. 69–72 These designs seek to randomise centres or units rather than individuals, thus helping to avoid some of the contamination that might occur when randomisation occurs within settings. The design does, for technical reasons, require a larger sample size. 73 Other things being equal, a large number of small clusters is better than a small number of large clusters, but increasing the number of clusters may be very expensive. The design also makes analyses of results more complex, since the assumption of independence among observations, on which classical statistical methods rely, is not secure. 64 , 65 , 74

Variants such as stepped wedge and others may also be used, each with strengths and disadvantages in terms of their practical operationalisation and the inferences that can be made. 64 , 65 , 75 The stepped wedge trial design is especially promising as an approach to evaluating improvement interventions. A highly pragmatic design, it consists of a sequential roll-out of an intervention to clusters (organisations) so that all clusters receive the intervention by the end of the study. 76 The stepped wedge design has many strengths, including its reassurance to organisations that none will be deprived of the intervention, reducing resistance to being randomised to a control group. It is particularly advantageous when logistical, practical, or financial constraints mean that implementing the intervention in a phased way will be helpful, and it can even be used as part of a pragmatic, non-funded approach to intervention implementation. On the more negative side, it is likely to lead to a longer duration of trial period than more conventional designs, and additional statistical complexity. 75

Despite the promise of trial designs for evaluating quality improvement interventions, the quality of studies using these methods has often been disappointing. A relatively recent systematic review of 142 trials of quality improvement strategies or financial incentives to improve the management of adult outpatients with diabetes identified that nearly half the trials were judged to have high risk of bias, and it emphasised the need to improve reporting of quality improvement trials. 77 One major challenge to the deployment of trials in the study of improvement is that improvement interventions may tend to mutate over time in response to learning, but much trial methodology is based on the assumption of a stable, well-defined intervention, and may not give sufficient recognition to the interchange between intervention and context.

Quasi-experimental designs 64 , 65 may be an attractive option when trials are not feasible, though they do mean that investigators have less control over confounding factors. Quasi-experimental designs often found in studies of improvement 64 , 65 include uncontrolled and controlled before-and-after studies, and time-series designs.

Uncontrolled before-and-after studies are simple. They involve the measurement of the variables of interest before and after the intervention in the same-study sites, on the assumption that any difference in measurement ‘after’ compared with ‘before’ is due to the intervention. 64 , 65 Their drawback is that they do not account for secular trends that might be occurring at the same time, 66 something that remains an important problem determining whether a particular intervention or programme has genuinely produced improvement over change that was occurring anyway. 78 , 79

Controlled before-and-after studies offer important advantages over uncontrolled ones. Their many strengths in the study of improvement 66 , 80 include an increased ability to detect the effects of an intervention, and to control for confounders and secular trends, particularly when combined with difference-in-difference analyses. 62 , 81 However, finding suitable controls is often not straightforward. 64–66 , 80 , 82 A frequent problem resulting in inadequate controls is selection solely on the basis of the most superficial structural characteristics of healthcare units, such as size, teaching status, location, etc. The choice of relevant characteristics should also be made based on the anticipated hypotheses concerning the mechanisms of change involved in the intervention, and the contextual influences on how they work (eg, informatics, organisational culture, and so on). Looking at the baseline quality across organisations is also fundamental, since non-comparable baselines or exposure to secular trends may result in invalid attribution of effects to the intervention(s) under evaluation.
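The difference-in-differences logic mentioned above can be made concrete with a small, hypothetical calculation: the estimated effect is the change in the intervention sites minus the change in the control sites over the same period, which nets out a shared secular trend (under the assumption that both groups would otherwise have followed parallel trends).

# Difference-in-differences with hypothetical compliance rates (%) at
# intervention and control sites, before and after the intervention.
intervention_pre, intervention_post = 62.0, 81.0
control_pre, control_post = 60.0, 68.0      # controls also improve: a secular trend

did_estimate = (intervention_post - intervention_pre) - (control_post - control_pre)
print(did_estimate)   # (81-62) - (68-60) = 19 - 8 = 11 percentage points
                      # attributable to the intervention, assuming parallel trends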

Quasi-experimental time-series designs and observational longitudinal designs rely on multiple successive measurements with the aim of separating the effect of the intervention from secular trends. 83 , 84 One question that often arises is whether and when it might be more advantageous to use time-series analysis instead of the SPC methods characteristic of QI projects that we discussed earlier. SPC techniques can indeed monitor trends, but are challenging in studies involving multiple sites given the difficulty of adjusting for confounding variables among sites. A QI project in a small microsystem (eg, a hospital ward) usually has small sample sizes, which are offset by taking many measurements. A large-scale effort, such as a QI collaborative deploying a major QI intervention, might, however, be better off leveraging its larger sample sizes and using conventional time-series techniques. Other statistical techniques for longitudinal analysis may also allow for identifying changes in the trends attributable to the intervention, accounting for the autocorrelation among observations and concurrent factors. 64–66 , 85 , 86 Observational longitudinal designs may be especially useful in the study of sustainability of quality improvement. 87

Systematic reviews of improvement studies, whether or not they include meta-analyses, are now beginning to appear, 88–92 and are likely to play an important role in providing overviews of the evidence supporting particular interventions or methods of achieving change. Such reviews will require considerable sophistication; low quality and contradictory systematic reviews may result without thoughtful, non-mechanical appraisal of the studies incorporated, detailed descriptions of the interventions and implementation contexts, and consideration of combinations of multiple components and their interactions. Use of methods for synthesis that allow more critique and conceptual development may be especially useful at this stage in the emergence of the field. 93 , 94

The study of improvement interventions should not, of course, be limited to quantitative assessments of the effectiveness of interventions. The field of programme evaluation is a rich but underused source of study designs and insights for the study of improvement interventions. Dating back to the 1960s, this field has identified both the benefits and the challenges of deploying traditional, epidemiologically derived experimental methods in the evaluation of social interventions. 95 , 96 It developed mainly in the context of evaluating social programmes (including those in the area of welfare, justice and education), and it tends to be pragmatic about what is feasible when the priority is programme delivery rather than answering a research question, about the influence of external contexts, and about the mutability of interventions over time. As one classic account of programme evaluation puts it: “Programs are nowhere near as neat and accommodating as the evaluator expects. Nor are outside circumstances as passive and unimportant as he might like. Whole platoons of unexpected problems spring up.” 97

Carol Weiss's logic of analysis in evaluation 99

  • What went on in the programme over time? Describing.
    B. Activities and services
    C. Conditions of operation
    D. Participants’ interpretation
  • How closely did the programme follow its original plan? Comparing.
  • Did recipients improve? Comparing.
    A. Differences from preprogramme to postprogramme
    B. (If data were collected at several time periods) Rate of change
    C. What did the improvement (or lack of improvement) mean to the recipients?
  • Did recipients do better than non-recipients? Comparing.
    A. Checking original conditions for comparability
    B. Differences in the two groups preprogramme to postprogramme
    C. Differences in rates of change
  • Is observed change due to the programme? Ruling out rival explanations.
  • What was the worth of the relative improvement of recipients? Cost-benefit or cost-effectiveness analysis.
  • What characteristics are associated with success? Disaggregating.
    A. Characteristics of recipients associated with success
    B. Types of services associated with success
    C. Surrounding conditions associated with success
  • What combinations of actors, services and conditions are associated with success and failure? Profiling.
  • Through what processes did change take place over time? Modelling.
    A. Comparing events to assumptions of programme theory
    B. Modifying programme theory to take account of findings
  • What unexpected events and outcomes were observed? Locating unanticipated effects.
  • What are the limits to the findings? To what populations, places and conditions do conclusions not necessarily apply? Examining deviant cases.
  • What are the implications of these findings? What do they mean in practical terms? Interpreting.
  • What recommendations do the findings imply for modifications in programme and policy? Fashioning recommendations.
  • What new policies and programmatic efforts to solve social problems do the findings support? Policy analysis.

Process evaluations are an especially important feature of the evaluation of improvement interventions. Such evaluations make possible the exploration of the components of interventions and the fidelity and uniformity of implementation, as well as testing hypotheses concerning mechanisms of change associated with intervention components, refining theory and improving strategy effectiveness. 70 Ideally, they should be embedded in studies of effectiveness, adding information to clarify whether the target population actually received the planned activities, to capture the experiences of those charged with delivering the intervention as well as those receiving it, and to identify what factors inhibited or promoted effectiveness. 70 Process evaluations can combine a range of study methods and cross-sectional or longitudinal designs, including surveys among managers, frontline healthcare professionals and patients, and the measurement of variables, through interviews, direct observation or medical record review.

Use of qualitative methods is invaluable in enabling the understanding of what form a quality improvement intervention takes in practice, as well as providing data about why and how the planned activities succeed or not. 100 Using methods such as interviews, ethnographic observation, and documentary analysis, qualitative studies may be able to capture the extent to which the interventions are implemented with fidelity at different organisational levels, and to explicate the mechanisms of change involved. The ‘triangulation’ of data collection and interpretation using quantitative and qualitative approaches makes the findings more reliable and powerful. 62 An explicit grounding in formal theory is likely to support fuller understanding of how the interventions are expected to make a difference, and to contribute to building a knowledge base for improvement. Social science theory combined with the use of qualitative methods is particularly useful for bringing to the surface implicit theories of change held by practitioners, and for distinguishing empirical facts from normative judgements. 101

Finally, economic evaluations of quality improvement interventions, such as those focused on clinical interventions or healthcare programmes, are mainly concerned with appraising whether the differential investment in an intervention is justifiable in the face of the differential benefit it produces. 102–106 Quality improvement investments compete with other possible applications of healthcare resources, and economic analyses are necessary to inform rational decisions about interventions to invest in to produce the greatest benefits, and even whether the resources would be better allocated to other social purposes. Contrary to commonly held assumptions, quality improvement efforts, especially those focused on safety, may not be cost-saving, possibly because of the fixed costs of a typical healthcare setting; QI may generate additional capacity rather than savings. 107 Such studies are, however, still lacking: there are, for example, few good-quality comparative economic analyses of safety improvement strategies in the acute care setting, possibly in part because of the additional methodological challenges associated with their evaluation. 108 , 109 , 110

Conclusions

This review has identified a wide range of study designs for studying improvement in healthcare. Small-scale quality improvement projects remain a dominant approach, but need to be conducted and reported better, and appropriate caution exercised in treating the data from such projects as equivalent to research-standard evidence. The epidemiological paradigm offers a range of experimental, quasi-experimental, and observational study designs that can help in determining effectiveness of improvement interventions. Studies using these designs typically seek to determine whether an improvement has occurred, and if so, whether it can be attributed to the intervention(s) under study; these methods are less well suited to investigating questions of ‘why’ or ‘how’ any change occurred. They are most powerful when they allow for measurements over time and control for confounding variables. But such studies, particularly those using more experimental designs, are often difficult to conduct in the context of many improvement activities. Interventions that are purposefully evolving over time, as is a common feature of quality improvement interventions, lack many of the stable characteristics generally assumed for studies of effectiveness. Trial-based designs may under-recognise the weak boundaries separating context and intervention, and the multiple interactions that take place between them. Given the complex role played by context in quality improvement, external validity may be very difficult to establish. Quantitative and qualitative methodological approaches can play complementary roles in assessing what works, how, and in what contexts, 111 and the field of programme evaluation has remained under-exploited as a source of methods for studying improvement. Programme evaluation is especially important in stressing the need for theoretically sound studies, and for attention to implementation and fidelity of interventions.

Much could be achieved by improving the rigour with which existing designs are applied in practice, as can be seen from the example of PDSA cycles. Too often, PDSA cycles are contrived as a form of pilot testing rather than formal steps guided by explicit a priori theories about interventions, too often they are reported as a ‘black box’, too often measurement strategies are poor and do not comply with even basic standards of data collection and interpretation, and too often reported claims about the magnitude of improvement are not supported by the design. These limitations act as threats both to internal and external validity, and risk the reputation of the field as well as thwarting learning. At the very least, great care needs to be taken in making claims about the generalisability or achievements of such projects.

As the study of improvement develops, reconciling pragmatism and scientific research rigour is an important goal, but trade-offs need to be made wisely, taking into account the objectives involved and the inferences to be made. There is still much to explore, and quantitative and qualitative researchers will have important and complementary roles in dealing with many yet-unanswered questions. 90 , 100 , 111–114

Acknowledgments

MCP’s stay at the University of Leicester was funded by the Brazilian Science without Borders Programme, through a fellowship given by the Coordination for the Improvement of Higher Education Personnel—CAPES—(reference 17943-12-4). Mary Dixon-Woods’ contribution to this paper was supported by a Wellcome Trust Senior Investigator award (reference WT097899) and by University of Leicester study leave at the Dartmouth Institute for Health Policy and Clinical Practice. TW is supported by an Improvement Science Fellowship with The Health Foundation. We thank Christine Whitehouse for help in editing the manuscript.


Contributors MCP conceived the idea for the study, conducted the searches, and synthesised the findings. MD-W advised on study design and approach. MCP and MD-W led on the drafting. TW, PJP, and PC contributed to identifying suitable references and led the drafting of specific sections. All authors contributed substantially to writing the paper and all reviewed and approved the final draft.

Funding Brazilian Science without Borders Programme, Coordination for the Improvement of Higher Education Personnel – CAPES – (reference 17943-12-4); Wellcome Trust (WT097899).

Competing interests None.

Provenance and peer review Not commissioned; externally peer reviewed.




How to Write Recommendations in Research | Examples & Tips

Published on September 15, 2022 by Tegan George. Revised on July 18, 2023.

Recommendations in research are a crucial component of your discussion section and the conclusion of your thesis , dissertation , or research paper .

As you conduct your research and analyze the data you collected, perhaps there are ideas or results that don’t quite fit the scope of your research topic. Or maybe your results suggest further implications of your findings, or causal relationships between previously studied variables, that are not covered in extant research.



Recommendations for future research should be:

  • Concrete and specific
  • Supported with a clear rationale
  • Directly connected to your research

Overall, strive to highlight ways other researchers can reproduce or replicate your results to draw further conclusions, and suggest different directions that future research can take, if applicable.

Relatedly, when making these recommendations, avoid:

  • Undermining your own work, but rather offer suggestions on how future studies can build upon it
  • Suggesting recommendations actually needed to complete your argument, but rather ensure that your research stands alone on its own merits
  • Using recommendations as a place for self-criticism, but rather as a natural extension point for your work


There are many different ways to frame recommendations, but the easiest is perhaps to follow the formula of research question → conclusion → recommendation. Here’s an example.

Conclusion: An important condition for controlling many social skills is mastering language. If children have a better command of language, they can express themselves better and are better able to understand their peers. Opportunities to practice social skills are thus dependent on the development of language skills.

As a rule of thumb, try to limit yourself to only the most relevant future recommendations: ones that stem directly from your work. While you can have multiple recommendations for each research conclusion, it is also acceptable to have one recommendation that is connected to more than one conclusion.

These recommendations should be targeted at your audience, specifically toward peers or colleagues in your field that work on similar subjects to your paper or dissertation topic . They can flow directly from any limitations you found while conducting your work, offering concrete and actionable possibilities for how future research can build on anything that your own work was unable to address at the time of your writing.

See below for a full research recommendation example that you can use as a template to write your own.

Recommendation in research example



While it may be tempting to present new arguments or evidence in your thesis or dissertation conclusion, especially if you have a particularly striking argument you’d like to finish your analysis with, you shouldn’t. Theses and dissertations follow a more formal structure than this.

All your findings and arguments should be presented in the body of the text (more specifically in the discussion section and results section). The conclusion is meant to summarize and reflect on the evidence and arguments you have already presented, not introduce new ones.

The conclusion of your thesis or dissertation should include the following:

  • A restatement of your research question
  • A summary of your key arguments and/or results
  • A short discussion of the implications of your research

For a stronger dissertation conclusion , avoid including:

  • Important evidence or analysis that wasn’t mentioned in the discussion section and results section
  • Generic concluding phrases (e.g. “In conclusion …”)
  • Weak statements that undermine your argument (e.g., “There are good points on both sides of this issue.”)

Your conclusion should leave the reader with a strong, decisive impression of your work.

In a thesis or dissertation, the discussion is an in-depth exploration of the results, going into detail about the meaning of your findings and citing relevant sources to put them in context.

The conclusion is shorter and more general: it concisely answers your main research question and makes recommendations based on your overall findings.

Cite this Scribbr article


George, T. (2023, July 18). How to Write Recommendations in Research | Examples & Tips. Scribbr. Retrieved August 21, 2024, from https://www.scribbr.com/dissertation/recommendations-in-research/


  • Systematic review
  • Open access
  • Published: 04 May 2020

How and under what circumstances do quality improvement collaboratives lead to better outcomes? A systematic review

  • Karen Zamboni (ORCID: orcid.org/0000-0003-3478-8636) 1,
  • Ulrika Baker 2, 3,
  • Mukta Tyagi 4,
  • Joanna Schellenberg 1,
  • Zelee Hill 5 &
  • Claudia Hanson 1, 2

Implementation Science, volume 15, Article number: 27 (2020)


Quality improvement collaboratives are widely used to improve health care in both high-income and low and middle-income settings. Teams from multiple health facilities share learning on a given topic and apply a structured cycle of change testing. Previous systematic reviews reported positive effects on target outcomes, but the role of context and mechanism of change is underexplored. This realist-inspired systematic review aims to analyse contextual factors influencing intended outcomes and to identify how quality improvement collaboratives may result in improved adherence to evidence-based practices.

We built an initial conceptual framework to drive our enquiry, focusing on three context domains: health facility setting; project-specific factors; wider organisational and external factors; and two further domains pertaining to mechanisms: intra-organisational and inter-organisational changes. We systematically searched five databases and grey literature for publications relating to quality improvement collaboratives in a healthcare setting and containing data on context or mechanisms. We analysed and reported findings thematically and refined the programme theory.

We screened 962 abstracts of which 88 met the inclusion criteria, and we retained 32 for analysis. Adequacy and appropriateness of external support, functionality of quality improvement teams, leadership characteristics and alignment with national systems and priorities may influence outcomes of quality improvement collaboratives, but the strength and quality of the evidence is weak. Participation in quality improvement collaborative activities may improve health professionals’ knowledge, problem-solving skills and attitude; teamwork; shared leadership and habits for improvement. Interaction across quality improvement teams may generate normative pressure and opportunities for capacity building and peer recognition.

Our review offers a novel programme theory to unpack the complexity of quality improvement collaboratives by exploring the relationship between context, mechanisms and outcomes. There remains a need for greater use of behaviour change and organisational psychology theory to improve design, adaptation and evaluation of the collaborative quality improvement approach and to test its effectiveness. Further research is needed to determine whether certain contextual factors related to capacity should be a precondition to the quality improvement collaborative approach and to test the emerging programme theory using rigorous research designs.


Contribution to the literature

Quality improvement collaboratives are a widely used approach. However, solid evidence of their effectiveness is limited and research suggests that achievement of results is highly contextual.

Previous research on the role of context in quality improvement collaboratives has not explored the dynamic relationship between context, mechanisms and outcomes. We systematically explore these through a review of peer-reviewed and grey literature.

Understanding contextual factors influencing intended quality improvement collaborative outcomes and the mechanisms of change can aid implementation design and evaluation. This systematic review offers a novel programme theory to unpack the complexity of quality improvement collaboratives.

Improving quality of care is essential to achieve Universal Health Coverage [ 1 ]. One strategy for quality improvement is quality improvement collaboratives (QIC) defined by the Breakthrough Collaborative approach [ 2 ]. This entails teams from multiple health facilities working together to improve performance on a given topic supported by experts who share evidence on best practices. Over a short period, usually 9–18 months, quality improvement coaches support teams to use rapid cycle tests of change to achieve a given improvement aim. Teams also attend “learning sessions” to share improvement ideas, experience and data on performance [ 2 , 3 , 4 ]. Collaboration between teams is assumed to shorten the time required for teams to diagnose a problem and identify a solution and to provide an external stimulus for innovation [ 2 , 3 ].

QICs are widely used in high-income countries and proliferating in low- and middle-income countries (LMICs), although solid evidence of their effectiveness is limited [ 5 , 6 , 7 , 8 , 9 , 10 , 11 ]. A systematic review on the effects of QICs, largely focused on high-income settings, found that three quarters of studies reported improvement in at least half of the primary outcomes [ 7 ]. A previous review suggested that evidence on QICs effectiveness is positive but highly contextual [ 5 ], and a review of the effects of QICs in LMICs reported a positive and sustained effect on most indicators [ 12 ]. However, there are important limitations. First, with one exception [ 11 ], systematic reviews define QIC effectiveness on the basis of statistically significant improvement in at least one, or at least half of “primary” outcomes [ 7 , 12 ] neglecting the heterogeneity of outcomes and the magnitude of change. Second, studies included in the reviews are weak, most commonly before-after designs, while most randomised studies give insufficient detail of randomisation and concealment procedures [ 7 ], thus potentially overestimating the effects [ 13 ]. Third, most studies use self-reported clinical data, introducing reporting bias [ 8 , 9 , 10 ]. Fourth, studies generally draw conclusions based on facilities that completed the programme, introducing selection bias. Recent well-designed studies support a cautious assessment of QIC effectiveness: a stepped wedge randomised controlled trial of a QIC intervention aimed at reducing mortality after abdominal surgery in the UK found no evidence of a benefit on survival [ 14 ]. The most robust systematic review of QICs to date reports little effect on patient health outcomes (median effect size (MES) less than 2 percentage points), large variability in effect sizes for different types of outcomes, and a much larger effect if QICs are combined with training (MES 111.6 percentage points for patient health outcomes; and MES of 52.4 to 63.4 percentage points for health worker practice outcomes) [ 11 ]. A review of group problem-solving including QIC strategies to improve healthcare provider performance in LMICs, although mainly based on low-quality studies, suggested that these may be more effective in moderate-resource than in low-resource settings and their effect smaller with higher baseline performance levels [ 6 ].

Critiques of quality improvement suggest that the mixed results can be partly explained by a tendency to reproduce QIC activities without attempting to modify the functioning, interactions or culture in a clinical team, thus overlooking the mechanisms of change [ 15 ]. QIC implementation reports generally do not discuss how changes were achieved, and lack explicit assumptions on what contextual factors would enable them; the primary rationale for using a QIC often being that it has been used successfully elsewhere [ 7 ] . In view of the global interest in QICs, better understanding of the influence of context and of mechanisms of change is needed to conceptualise and improve QIC design and evaluation [ 6 , 7 ]. In relation to context, a previous systematic review explored determinants of QIC success, reporting whether an association was found between any single contextual factor and any effect parameter. The evidence was inconclusive, and the review lacked an explanatory framework on the role of context for QIC success [ 16 ]. Mechanisms have been documented in single case studies [ 17 ] but not systematically reviewed.

In this review, we aim to analyse contextual factors influencing intended outcomes and to identify how quality improvement collaboratives may result in improved adherence to evidence-based practices, i.e. the mechanisms of change.

This review is inspired by the realist review approach, which enables researchers to explore how, why and in what contexts complex interventions may work (or not) by focusing on the relationships between context, mechanisms and outcomes [ 18 , 19 , 20 ]. The realist review process consists of 5 methodological steps (Fig. 1 ). We broadly follow this methodological guidance with some important points of departure from it. We had limited expert engagement in developing our theory of change, and our preliminary conceptual framework was conceived as a programme theory [ 21 ] rather than as a set of context-mechanism-outcomes configurations (step 1) [ 22 ]. We followed a systematic search strategy driven by the intervention definition with few iterative searches [ 19 ], and we included a quality appraisal of the literature because the body of evidence on our questions is generally limited by self-reporting of outcomes, selection and publication bias [ 7 , 9 , 15 ].

Figure 1: Realist review process, adapted from Pawson R. et al. 2015 [ 18 ]

Clarifying scope of the review

We built an initial conceptual framework to drive our enquiry (Fig. 2) in the form of a preliminary programme theory [ 21 , 23 ]. We adapted the Medical Research Council process evaluation framework [ 24 ] using findings from previous studies [ 8 , 16 , 25 , 26 ] to conceptualise relationships between contextual factors, mechanisms of change and outcomes. We defined context as “factors external to the intervention which may influence its implementation” [ 24 ]. We drew from Kaplan’s framework for understanding context for quality improvement (MUSIQ), which is widely used in high-income countries and shows promise for LMIC settings [ 27 , 28 ]. We identified three domains for analysis: the healthcare setting in which a quality improvement intervention is introduced; the project-specific context, e.g. characteristics of quality improvement teams, leadership in the implementing unit, and nature of external support; and the wider organisational context and external environment [ 29 ].

Figure 2: Review conceptual framework (adapted from MRC process evaluation framework)

We defined mechanisms of change as the “underlying entities, processes, or structures which operate in particular contexts to generate outcomes of interest” [ 30 ]. Our definition implies that mechanisms are distinct from, but linked to, intervention activities: intervention activities are a resource offered by the programme to which participants respond through cognitive, emotional or organisational processes, influenced by contextual factors [ 31 ]. Based on seminal publications on QICs [ 2 , 3 ], we conceptualised the collaborative approach as a structured intervention or resource to embed innovative practices into healthcare organisations and accelerate the diffusion of innovations. Strategies described in relation to implementation of a change, e.g. “making a change the normal way” that an activity is done [ 3 ], implicitly relate to normalisation process theory [ 17 , 32 ]. Spreading improvement is explicitly inspired by diffusion of innovation theory, which attributes to early adopters the role of assessing and adapting innovations to facilitate their spread, and to champions for innovation the role of exercising positive peer pressure in the collaborative [ 3 , 17 , 33 ]. Therefore, we identified two domains for analysis of mechanisms of change: we postulated that QIC outcomes may be generated by mechanisms activated within each organisation (intra-organisational mechanisms) and through their collaboration (inter-organisational mechanisms). When we refer to QIC outcomes, we refer to measures which an intervention aimed to influence, including measures of clinical processes, perceptions of care, patient recovery, or other quality measures, e.g. self-reported patient safety climate.

KZ and JS discussed the initial programme theory with two quality improvement experts acknowledged at the end of this paper. They suggested alignment with the MUSIQ framework and commented on the research questions, which were as follows:

In what kind of health facility settings may QICs work (or not)? (focus on characteristics of the health facility setting)

What defines an enabling environment for QICs? (focus on proximate project-specific factors and on wider organisational context and external environment)

How may engagement in QICs influence health workers and the organisational context to promote better adherence to evidence-based guidelines? (focus on intra-organisational mechanisms)

What is it about collaboration with other facilities that may lead to better outcomes? (focus on inter-organisational mechanisms)

Search strategy

The search strategy is outlined in Fig. 3 and detailed in Additional file 1. Studies were included if they (i) referred to the quality improvement collaborative approach [ 2 , 5 , 8 , 16 ], defined in line with previous reviews as consisting of all the following elements: a specified topic; clinical and quality improvement experts working together; multi-professional quality improvement teams in multiple sites; using multiple rapid tests of change; and a series of structured collaborative activities in a given timeframe involving learning sessions and visits from mentors or facilitators; (ii) were published in English, French or Spanish, from 1997 to June 2018; and (iii) referred to a health facility setting, as opposed to a community, administrative or educational setting.

Figure 3: Search strategy

Studies were excluded if they focused on a chronic condition, palliative care or administrative topics, and if they did not contain primary quantitative or qualitative data on the process of implementation, i.e. the search excluded systematic reviews, protocol papers, editorials, commentaries, methodological papers, and studies either reporting exclusively QIC outcomes or exclusively describing implementation without consideration of context or mechanisms of change.
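Read together, the inclusion and exclusion rules amount to a single screening predicate applied first to titles and abstracts and then to full texts. The sketch below is our illustration only; the record structure and field names are hypothetical and not part of the review’s protocol, which relied on manual screening.

```python
from dataclasses import dataclass

@dataclass
class Record:
    # Hypothetical screening fields used only for this illustration.
    is_qic: bool             # meets all QIC definition elements under criterion (i)
    language: str            # publication language
    year: int                # publication year (criterion spans 1997 to June 2018)
    facility_setting: bool   # health facility setting, not community/administrative/educational
    excluded_topic: bool     # chronic condition, palliative care or administrative topic
    has_process_data: bool   # primary data on context or mechanisms of change

def include(r: Record) -> bool:
    """Return True if a record passes the stated inclusion and exclusion criteria."""
    included = (r.is_qic
                and r.language in {"English", "French", "Spanish"}
                and 1997 <= r.year <= 2018
                and r.facility_setting)
    excluded = r.excluded_topic or not r.has_process_data
    return included and not excluded
```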

We applied inclusion and exclusion criteria to titles and abstracts and subsequently to the full text. We identified additional studies through references of included publications and backward and forward citation tracking.

Data collection

We developed and piloted data extraction forms in MS Excel. We classified studies based on whether they focused on context or mechanisms of change and captured qualitative and quantitative data under each component. Data extraction also captured the interaction between implementation, context and mechanisms, anticipating that factors may not fit neatly into single categories [ 18 , 19 ].

KZ and MT independently conducted a structured quality appraisal using the STROBE checklist for quantitative observational studies, the Critical Appraisal Skills Programme checklist for qualitative studies and the Mixed Methods Appraisal Tool for mixed-methods studies [ 34 , 35 , 36 , 37 ], resolving disagreements by consensus. To aid comparability, given the heterogeneity of study designs, a score of 1 was assigned to each item in the checklist, and a total score was calculated for each paper. Quality was rated low, medium or high for papers scoring in the bottom half, between 50 and 80%, or above 80% of the maximum score, respectively. We did not exclude studies because of low quality: in all such cases, both authors agreed on the study’s relative contribution to the research questions [ 19 , 38 ].
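The rating rule can be read as a simple thresholding of the share of checklist items met. The following sketch is ours, not the authors’ (the review reports no code); the function name, the one-point-per-item scoring and the handling of the exact 50% boundary are illustrative assumptions.

```python
def rate_quality(items_met: int, items_total: int) -> str:
    """Illustrative sketch of the appraisal rule described above:
    one point per checklist item met, rated against the checklist's
    maximum score. Boundary handling at exactly 50% is our assumption."""
    share = items_met / items_total
    if share > 0.8:
        return "high"    # above 80% of the maximum score
    if share >= 0.5:
        return "medium"  # between 50 and 80% of the maximum score
    return "low"         # bottom half of the maximum score

# Example: a qualitative study meeting 7 of 10 CASP checklist items
print(rate_quality(7, 10))  # -> "medium"
```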

Synthesis and reporting of results

Analysis was informed by the preliminary conceptual framework (Fig. 2 ) and conducted thematically by framework domain by the lead author. We clustered studies into context and mechanism. Under context, we first analysed quantitative data to identify factors related to the framework and evidence of their associations with mechanisms and outcomes. Then, from the qualitative evidence, we extracted supportive or dissonant data on the same factors. Under mechanisms, we identified themes under the two framework domains using thematic analysis. We generated a preliminary coding framework for context and mechanism data in MS Excel. UB reviewed a third of included studies, drawn randomly from the list stratified by study design, and independently coded data following the same process. Disagreements were resolved through discussion. We developed a final coding framework, which formed the basis of our narrative synthesis of qualitative and quantitative data.

We followed the RAMESES reporting checklist, which is modelled on the PRISMA statement [ 39 ] and tailored for reviews aiming to highlight relationships between context, mechanisms and outcomes [ 40 ] (Additional file 2 ). All included studies reported having received ethical clearance.

Search results

Searches generated 1,332 results. After removal of duplicates (370), 962 abstracts were screened, of which 88 met the inclusion criteria. During the eligibility review process, we identified 15 further papers through bibliographies of eligible papers and authors’ suggestions. Of the 103 papers reviewed in full, 32 met the inclusion criteria and were retained for analysis (Table 1). Figure 4 summarises the search results.

Figure 4: Search flowchart

Characteristics of included studies

Included studies comprised QIC process evaluations using quantitative, qualitative, and mixed methods designs, as well as case descriptions in the form of programme reviews by implementers or external evaluators, termed internal and independent programme reviews, respectively. While the application of QIC has grown in LMICs, evidence remains dominated by experiences from high-income settings: only 9 out of 32 studies were from a LMIC setting of which 4 were in the grey literature (Table 2 ).

Most papers focused on mechanisms of change, either as a sole focus (38%) or in combination with implementation or contextual factors (72%); mechanisms were explored mostly through qualitative studies or programme reviews. The relative paucity of evidence on the role of context in relation to QICs reflects the gaps identified by other systematic reviews [ 7 ]. We identified 15 studies containing data on context, of which 8 quantitatively tested the association between a single contextual factor and outcomes. Most studies were rated as medium quality (53%), with low ratings attributed to all internal and independent programme reviews (Additional file 3). However, these were retained for analysis because of their rich accounts of the relationship between context, mechanisms and outcomes and the relative scarcity of higher-quality evaluations taking this complexity into account [ 41 ].

We present results by research question in line with the conceptual framework (Fig. 2 ). We identified two research questions to explore three types of contextual factors (Table 3 ).

In what kind of facility setting may QICs work (or not)?

The literature explored four healthcare setting characteristics: facility size, voluntary or compulsory participation in the QIC programme, baseline performance and factors related to health facility readiness. We found no conclusive evidence that facility size [ 42 ], voluntary or compulsory participation in the QIC programme [ 44 ], or baseline performance [ 43 ] influence QIC outcomes. For each of these aspects, we identified only one study, and those identified were not designed to demonstrate causality and lacked a pre-specified hypothesis on why the contextual factors studied would influence outcomes. As for health facility readiness, this encompassed multiple factors perceived as programme preconditions, such as health information systems [ 42 , 45 , 47 ], human resources [ 42 , 45 , 46 , 48 ] and senior-level commitment to the target [ 42 , 45 ]. There was inconclusive evidence on the relationships between these factors and QIC outcomes: the studies exploring this association quantitatively had mixed results and generally explored one factor each. A composite organisational readiness construct, combining the above-mentioned programme preconditions, was investigated in two cross-sectional studies from the same collaborative in a high-income setting. No evidence of an association with patient safety climate and capability was found, but this may have been due to limitations of the statistical model or of data collection on the composite construct and outcome measures [ 42 , 45 ]. However, qualitative evidence from programme reviews and mixed-methods process evaluations of QIC programmes suggests that negative perceptions of the adequacy of available resources, low staff morale and limited availability of relevant clinical skills may contribute to negative perceptions of organisational readiness, particularly in LMIC settings. High-intensity support and partnership with other programmes may be necessary to fill clinical knowledge gaps [ 46 , 48 ]. Bottom-up leadership may foster positive perceptions of organisational readiness for quality improvement [ 42 , 46 , 48 ].

What defines an enabling environment for QICs?

This question explored two categories in our conceptual framework: project-specific and wider organisational contextual factors. Project-specific contextual factors relate to the immediate unit in which a QIC intervention is introduced, and the characteristics of the QIC intervention that may influence its implementation [ 29 ]. We found mixed evidence that adequacy and appropriateness of external support for QIC and functionality of quality improvement teams may influence outcomes.

Medium- to high-quality quantitative studies suggest that the quality, intensity and appropriateness of quality improvement support may contribute to perceived improvement of outcomes, but not, where measured, actual improvement [ 42 , 46 , 48 , 49 , 50 , 51 ]. This may be partly explained by the number of ideas for improvement tested [ 49 ]. In other words, the more quality improvement teams perceive the approach to be relevant, credible and adequate, the more willing they may be to use the quality improvement approach, which in turn contributes to a positive perception of improvement. In relation to attributes of quality improvement teams, studies stress the importance of team stability, multi-disciplinary composition, involvement of opinion leaders and previous experience in quality improvement, but there is inconclusive evidence that these attributes are associated with better outcomes [ 49 , 52 , 53 , 54 ]. Particularly in LMICs, alignment with existing supervisory structures may be key to achieving a functional team [ 46 , 48 , 51 , 57 , 58 ].

Wider organisational contextual factors refer to characteristics of the organisation in which a QIC intervention is implemented and the external system in which the facility operates [ 29 ]. Two factors emerge from the literature. Firstly, the nature of leadership plays a key role in motivating health professionals to test and adopt new ideas and is crucial to developing “habits for improvement”, such as evidence-based practice, systems thinking and team problem-solving [ 49 , 51 , 54 , 55 , 56 ]. Secondly, alignment with national priorities, quality strategies, financial incentive systems or performance management targets may mobilise leadership and promote facility engagement in QIC programmes, particularly in LMIC settings [ 46 , 48 , 50 , 51 ]; however, the quality of this evidence is medium to low.

Mechanisms of change

In relation to mechanisms of change, we identified two research questions to explore one domain each.

How may engagement in QICs influence health workers and the organisational context to promote better adherence to evidence-based practices?

We identified six mechanisms of change within an organisation (Table 4). First, participation in QIC activities may increase health workers’ commitment to change by increasing their confidence in using data to make decisions and in identifying clinical challenges and potential solutions within their reach [ 17 , 49 , 51 , 55 , 56 , 60 , 61 , 62 ]. Second, it may improve accountability by making standards explicit, thus enabling constructive challenge among health workers when these are not met [ 17 , 62 , 64 , 65 , 66 ]. A relatively high number of qualitative and mixed-methods studies of medium to high quality support these two themes. Other mechanisms, supported by fewer and lower-quality studies, include improving health workers’ knowledge and problem-solving skills by providing opportunities for peer reflection [ 46 , 48 , 64 , 67 ]; improving organisational climate by promoting teamwork, shared responsibility and bottom-up discussion [ 60 , 61 , 62 , 67 ]; strengthening a culture of joint problem solving [ 48 , 63 ]; and supporting an organisational cultural shift through the development of “habits for improvement” that promote adherence to evidence-based practices [ 17 , 56 , 62 ].

The available literature highlights three key contextual enablers of these mechanisms: the appropriateness of mentoring and external support, leadership characteristics and the adequacy of clinical skills. The literature suggests that external mentoring and support is appropriate if it includes a mix of clinical and non-clinical coaching, which ensures the support is acceptable and valued by teams, and if it is highly intensive, particularly in low-income settings that are relatively new to using data for decision-making and may have low data literacy [ 46 , 48 , 51 , 58 ]. For example, in Nigeria, Osibo et al. suggest that reducing resistance to the use of data for decision-making may be an intervention in itself and a precondition for use of quality improvement methods [ 58 ]. As for leadership characteristics, the literature stresses the role of hospital leadership in fostering a culture of performance improvement and promoting open dialogue and bottom-up problem solving, which may facilitate a collective sense of responsibility and engagement in quality improvement. Alignment with broader strategic priorities and previous success in quality improvement may further motivate leadership engagement [ 46 , 48 , 50 , 51 ]. Adequacy of clinical skills emerges as an enabler particularly in LMICs, where implementation reports observed limited scope for problem-solving given the low competences of health workers [ 46 ] and the need for partnership with training programmes to address clinical skills gaps [ 48 ].

What is it about collaboration with other hospitals that may lead to better outcomes?

This question explored inter-organisational mechanisms of change. Four themes emerged from the literature (Table 5). Firstly, collaboration may create or reinforce a community of practice, which exerts a normative pressure on hospitals to engage in quality improvement [ 17 , 46 , 50 , 63 , 67 , 68 , 69 ]. Secondly, it may promote friendly competition and create isomorphic pressures on hospital leaders, i.e. pressure to imitate other facilities’ success because they would find it damaging not to. Conversely, sharing performance data with other hospitals offers a potential reputational gain for well-performing hospitals and for individual clinicians seeking peer recognition [ 17 , 46 , 63 , 68 , 69 , 72 ]. A relatively high number of medium- to high-quality studies support these two themes. Thirdly, collaboration may provide a platform for capacity building by disseminating success stories and methodologies for improvement [ 51 , 67 , 68 , 69 , 70 ]. Finally, collaboration with other hospitals may demonstrate the feasibility of improvement to both hospital leaders and health workers. This, in turn, may galvanise action within each hospital by reinforcing the intra-organisational change mechanisms outlined above [ 51 , 63 , 71 ]. However, evidence for this comes from low-quality studies.

Key contextual enablers for these inter-organisational mechanisms include adequate external support to facilitate sharing of success stories in contextually appropriate ways and alignment with systemic pressures on hospital leadership. For example, a study of a Canadian QIC in intensive care units found that pressure to centralise services undermined collaboration because hospitals’ primary goal, and hidden agenda, for collaboration was to access information on their potential competitors [ 72 ]. The activation of isomorphic pressures also assumes that a community of practice exists or can be created. This may not necessarily be the case, particularly in LMICs where isolated working is common: a study in Malawi attributed the disappointing QIC outcomes partly to the intervention’s inability to activate friendly competition mechanisms due to the weakness of clinical networks [ 46 ].

The relative benefit of collaboration was questioned in both high- and low-income settings: participants in a study in Tanzania attached less importance to learning sessions than to mentoring [ 57 ]. Hospitals may fear exposure and reputational risks [ 68 ], and high-performing hospitals may see little advantage in participating in a collaborative [ 68 , 72 ]. Hospitals may also make less effort when working collaboratively, or use collaboration for self-interest rather than for sharing their learning [ 69 ].

Figure 5 offers a visual representation of the identified intra- and inter-organisational mechanisms of change and their relationship to the intervention strategy and expected outcomes.

Figure 5: Programme theory

To the best of our knowledge, this is the first review to systematically explore the role of context and the mechanisms of change in QICs, which can aid their implementation design and evaluation. This is particularly important for a complex intervention, such as QICs, whose effectiveness remains to be demonstrated [ 6 , 7 , 11 ]. We offer an initial programme theory to understand whose behaviours ought to change, at what level, and how this might support the creation of social norms promoting adherence to evidence-based practice. Crucially, we also link intra-organisational change to the position that organisations have in a health system [ 33 ].

The growing number of publications on mechanisms of change highlights interest in the process of change. We found that participation in quality improvement collaborative activities may improve health professionals’ knowledge, problem-solving skills and attitude; teamwork; shared leadership and habits for improvement. Interaction across quality improvement teams may generate normative pressure and opportunities for capacity building and peer recognition. However, the literature generally lacks reference to any theory in the conceptualisation and description of mechanisms of change [ 7 ]. This is surprising given the clear theoretical underpinnings of the QIC approach, including normalisation process theory in relation to changes within each organisation, and diffusion of innovation theory in relation to changes arising from collaborative activities [ 32 , 33 ]. We see three key opportunities to fill this theoretical gap. First, the Theoretical Domains Framework could be applied more systematically in the design and evaluation of QICs and in future reviews. This framework is a synthesis of over 120 constructs from 33 behaviour change theories and is highly relevant because the emerging mechanisms of change pertain to seven of its domains: knowledge, skills, reinforcement, intentions, behaviour regulation, social influences, and environmental context and resources [ 73 , 74 ]. Its use would allow specification of target behaviours to change, i.e. who should do what differently, where, how and with whom, consideration of the influences on those behaviours, and prioritisation of behaviours that are modifiable as well as central to achieving change in clinical practice [ 75 ]. Second, we recognise that emphasis on individual behaviour change theories may mask the complexity of change [ 76 ]. Organisational and social psychology offer important perspectives for theory building, for example, postulating that motivation is the product of intrinsic and extrinsic factors [ 77 , 78 ], or that group norms that discourage dissent, for example by not encouraging or not rewarding constructive criticism, act as a key barrier to individual behaviour change [ 79 ]. This warrants further exploration. Third, engaging with the broader literature on learning collaboratives may also help develop the programme theory further and widen its application.

Our findings on contextual enablers complement previous reviews [ 16 , 80 ]. We highlight that activating mechanisms of change may be influenced by the appropriateness of external support, leadership characteristics, quality improvement capacity and alignment with systemic pressures and incentives. This has important implications for QIC implementation. For example, for external support to be of high intensity, the balance of clinical and non-clinical support to quality improvement teams will need contextual adaptation, since different skill mixes will be acceptable and relevant in different clinical contexts. Particularly in LMICs, alignment with existing supervisory structures may be key to achieving a functional quality improvement team [ 46 , 48 , 51 , 57 , 58 ].

Our review offers a more nuanced understanding of the role of leadership in QICs compared with previous concepts [ 8 , 25 ]. We suggest that the activation of the mechanisms of change, and therefore potentially QIC success, rests on the ability to engage leaders; leadership engagement can therefore be viewed as a key part of the QIC intervention package. In line with organisational learning theory, the leaders’ role is to facilitate a data-informed analysis of practice and act as “designers, teachers and stewards” to move closer to a shared vision [ 81 ]. This requires considerable new skills and a shift away from traditional authoritarian leadership models [ 81 ]. This may be more easily achieved where some of the “habits for improvement” already exist (13), or where organisational structures, for example decentralised decision-making or non-hierarchical teams, allow bottom-up problem solving. Leadership engagement in QIC programmes can be developed through alignment with national priorities or quality strategies, or with financial incentive systems or facility performance management targets, particularly as external pressures may compete with QIC aims. Therefore, QIC design and evaluation would benefit from situating these interventions in the health system in which they occur.

Improving skills and competencies in using quality improvement methods is integral to the implementation of QIC interventions; however, the analysis of contextual factors suggests that efforts to strengthen quality improvement capacity may also need to consider the following factors. First, the availability and usability of health information systems. Second, health workers’ data literacy, i.e. their confidence, skills and attitudes towards the use of data for decision-making. Third, the adequacy of health workers’ clinical competences. Fourth, leaders’ attitudes to team problem solving and open debate, particularly in settings where organisational culture may be a barrier to individual reflection and initiative. The specific contextual challenges emerging from studies in LMICs, such as low staffing levels and low competence of health workers, poor data systems, and lack of leadership, echo findings on the limitations of quality improvement approaches at facility level in resource-constrained health systems [ 1 , 82 ]. These may explain why group problem-solving strategies, including QICs, may be more effective in moderate-resource than in low-resource settings, and their effect larger when combined with training [ 11 ]. The analysis of the role of context in activating mechanisms for change suggests the need for more explicit assumptions about context-mechanism-outcome relationships in QIC intervention design and evaluation [ 15 , 83 ]. Further analysis is needed to determine whether certain contextual factors related to capacity should be a precondition to justify the QIC approach (an “investment viability threshold”) [ 84 ], and what aspects of quality improvement capacity a QIC intervention can realistically modify in the relatively short implementation timeframes available.

While we do not suggest that our programme theory is relevant to all QIC interventions, in realist terms it may be generalisable at the level of theory [ 18 , 20 ], offering context-mechanism-outcome hypotheses that can inform QIC design and be tested through rigorous evaluations, for example through realist trials [ 85 , 86 ]. In particular, there is a need for quantitative analysis of the hypothesised mechanisms of change of QICs, since the available evidence is primarily from qualitative or cross-sectional designs.

Our review balances principles of systematic reviews, including a comprehensive literature search, double abstraction and quality appraisal, with the reflective realist review approach [ 19 ]. The realist-inspired search methodology allowed us to identify a higher number of papers than a previous review with similar inclusion criteria [ 16 ], through active searching of qualitative studies and grey literature and inclusion of low-quality literature that would otherwise have been excluded [ 41 ]. This also allowed us to interrogate what did not work, as much as what did work [ 19 , 22 ]. By reviewing literature with a wide range of designs against a preliminary conceptual framework, by including literature spanning both high- and low-resource settings and by exploring dissonant experiences, we contribute to understanding QICs as “disruptive events within systems” [ 87 ].

Our review may have missed some papers, particularly because QIC programme descriptions are often limited [ 7 ]; however, we used a stringent QIC definition aligned with previous reviews, and we are confident that thematic saturation was achieved with the available studies. We encountered a challenge in categorising data as “context” or “mechanism”. This is not unique and was anticipated [ 88 ]. Double review of papers in our research team minimised subjectivity of interpretation and allowed a deep reflection on the role of the factors that appeared under both dimensions.

We found some evidence that the appropriateness of external support, the functionality of quality improvement teams, leadership characteristics and alignment with national systems and priorities may influence QIC outcomes, but the strength and quality of the evidence is weak. We explored how QIC outcomes may be generated and found that health professionals’ participation in QIC activities may improve their knowledge, problem-solving skills and attitude; teamwork; shared leadership and the development of habits for improvement. Interaction across quality improvement teams may generate normative pressure and opportunities for capacity building and peer recognition. Activation of mechanisms of change may be influenced by the appropriateness of external support, leadership characteristics, the adequacy of clinical skills and alignment with systemic pressures and incentives.

There is a need for explicit assumptions about context-mechanism-outcome relationships in QIC design and evaluation. Our review offers an initial programme theory to aid this. Further research should explore whether certain contextual factors related to capacity should be a precondition to justify the QIC approach, test the emerging programme theory through empirical studies and refine it through greater use of individual behaviour change and organisational theory in intervention design and evaluation.

Abbreviations

IQR: Inter-quartile range

LMIC: Low and middle-income country

MES: Median effect size

MUSIQ: Model for understanding success in improving quality

QIC: Quality improvement collaborative

STROBE: Strengthening the reporting of observational studies in epidemiology

Kruk ME, Gage AD, Arsenault C, Jordan K, Leslie HH, Roder-DeWan S, et al. High-quality health systems in the Sustainable Development Goals era: time for a revolution. Lancet Glob Health. 2018;6(11):E1196–E252.


Kilo CM. A framework for collaborative improvement: lessons from the Institute for Healthcare Improvement's Breakthrough Series. Qual Manag Health Care. 1998;6(4):1–13.


Langley GJ, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide: a practical approach to enhancing organizational performance: Wiley; 2009.


Wilson T, Berwick DM, Cleary PD. What do collaborative improvement projects do? Experience from seven countries. Jt Comm J Qual Saf. 2003;29(2):85–93.


Schouten LMT, Hulscher MEJL, van Everdingen JJE, Huijsman R, Grol RPTM. Evidence for the impact of quality improvement collaboratives: systematic review. BMJ. 2008;336(7659):1491.

Rowe AK, Rowe SY, Peters DH, Holloway KA, Chalker J, Ross-Degnan D. Effectiveness of strategies to improve health-care provider practices in low-income and middle-income countries: a systematic review. Lancet Glob Health. 2018;6(11):E1163–E75.

Wells S, Tamir O, Gray J, Naidoo D, Bekhit M, Goldmann D. Are quality improvement collaboratives effective? A systematic review. BMJ Qual Saf. 2018;27(3):226–40.


Øvretveit J, Bate P, Cleary P, Cretin S, Gustafson D, McInnes K, et al. Quality collaboratives: lessons from research. Qual Saf Health Care. 2002;11(4):345–51.

Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff. 2005;24(1):138–50.


Mittman BS. Creating the evidence base for quality improvement collaboratives. Ann Intern Med. 2004;140(11):897–901.

Garcia-Elorrio E, Rowe SY, Teijeiro ME, Ciapponi A, Rowe AK. The effectiveness of the quality improvement collaborative strategy in low- and middle-income countries: a systematic review and meta-analysis. PLoS One. 2019;14(10):e0221919.


Franco LM, Marquez L. Effectiveness of collaborative improvement: evidence from 27 applications in 12 less-developed and middle-income countries. BMJ Qual Saf. 2011;20(8):658–65.

Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273(5):408–12.

Peden CJ, Stephens T, Martin G, Kahan BC, Thomson A, Rivett K, et al. Effectiveness of a national quality improvement programme to improve survival after emergency abdominal surgery (EPOCH): a stepped-wedge cluster-randomised trial. Lancet. 2019.

Dixon-Woods M, Martin GP. Does quality improvement improve quality? Future Hosp J. 2016;3(3):191–4.

Hulscher MEJL, Schouten LMT, Grol RPTM, Buchan H. Determinants of success of quality improvement collaboratives: what does the literature show? BMJ Qual Saf. 2013;22(1):19–31.

Dixon-Woods M, Bosk CL, Aveling EL, Goeschel CA, Pronovost PJ. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q. 2011;89(2):167–205.

Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review--a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(Suppl 1):21–34.

Rycroft-Malone J, McCormack B, Hutchinson AM, DeCorby K, Bucknall TK, Kent B, et al. Realist synthesis: illustrating the method for implementation research. Implement Sci. 2012;7(1):33.

Pawson R, Tilley N. Realistic evaluation. London: Sage; 1997.

De Silva MJ, Breuer E, Lee L, Asher L, Chowdhary N, Lund C, et al. Theory of Change: a theory-driven approach to enhance the Medical Research Council’s framework for complex interventions. Trials. 2014;15.

Blamey A, Mackenzie M. Theories of Change and Realistic Evaluation. Evaluation. 2016;13(4):439–55.

Breuer E, Lee L, De Silva M, Lund C. Using theory of change to design and evaluate public health interventions: a systematic review. Implement Sci. 2016;11.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350.

de Silva D. Improvement collaboratives in health care. Evidence scan July 2014. London: The Health Foundation; 2014. Available from: http://www.health.org.uk/publication/improvement-collaboratives-health-care .

Kringos DS, Sunol R, Wagner C, Mannion R, Michel P, Klazinga NS, et al. The influence of context on the effectiveness of hospital quality improvement strategies: a review of systematic reviews. BMC Health Serv Res. 2015;15:277.

Kaplan HC, Provost LP, Froehle CM, Margolis PA. The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf. 2012;21(1):13–20.

Reed J, Ramaswamy R, Parry G, Sax S, Kaplan H. Context matters: adapting the Model for Understanding Success in Quality Improvement (MUSIQ) for low and middle income countries. Implement Sci. 2017;12((Suppl 1)(48)):23.

Reed JE, Kaplan HC, Ismail SA. A new typology for understanding context: qualitative exploration of the model for understanding success in quality (MUSIQ). BMC Health Serv Res. 2018;18.

Astbury B, Leeuw FL. Unpacking black boxes: mechanisms and theory building in evaluation. Am J Eval. 2010;31(3):363–81.

Dalkin SM, Greenhalgh J, Jones D, Cunningham B, Lhussier M. What's in a mechanism? Development of a key concept in realist evaluation. Implement Sci. 2015;10.

May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalisation process theory. Sociology. 2009;43.

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet. 2007;370(9596):1453–7.

Critical appraisal skills programme. CASP Qualitative Checklist 2007 [8th January 2019]. Available from: https://casp-uk.net/wp-content/uploads/2018/01/CASP-Qualitative-Checklist-2018.pdf .

Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada; 2018 [accessed 8th January 2019]. Available from: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf .

Hong QN, Gonzalez-Reyes A, Pluye P. Improving the usefulness of a tool for appraising the quality of qualitative, quantitative and mixed methods studies, the Mixed Methods Appraisal Tool (MMAT). J Eval Clin Pract. 2018;24(3):459–67.

Hannes K. Chapter 4: Critical appraisal of qualitative research. In: Noyes J, Booth A, Hannes K, Harden A, Harris J, Lewin S, Lockwood C, editors. Supplementary Guidance for Inclusion of Qualitative Research in Cochrane Systematic Reviews of Interventions. Version 1 (updated August 2011): Cochrane Collaboration Qualitative Methods Group; 2011.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Wong G, Greenhalgh T, Westhorp G, Buckingham J, Pawson R. RAMESES publication standards: realist syntheses. BMC Med. 2013;11(1):21.

Pawson R. Digging for nuggets: how ‘bad’ research can yield ‘good’ evidence. Int J Soc Res Methodol. 2006;9(2):127–42.

Benn J, Burnett S, Parand A, Pinto A, Vincent C. Factors predicting change in hospital safety climate and capability in a multi-site patient safety collaborative: a longitudinal survey study. BMJ Qual Saf. 2012;21(7):559–68.

Linnander E, McNatt Z, Sipsma H, Tatek D, Abebe Y, Endeshaw A, et al. Use of a national collaborative to improve hospital quality in a low-income setting. Int Health. 2016;8(2):148–53.

McInnes DK, Landon BE, Wilson IB, Hirschhorn LR, Marsden PV, Malitz F, et al. The impact of a quality improvement program on systems, processes, and structures in medical clinics. Med Care. 2007;45(5):463–71.

Burnett S, Benn J, Pinto A, Parand A, Iskander S, Vincent C. Organisational readiness: exploring the preconditions for success in organisation-wide patient safety improvement programmes. Qual Saf Health Care. 2010;19(4):313–7.

Colbourn T, Nambiar B, Costello A. MaiKhanda - Final evaluation report. The impact of quality improvement at health facilities and community mobilisation by women’s groups on birth outcomes: an effectiveness study in three districts of Malawi. London: Health Foundation; 2013.

Amarasingham R, Pronovost PJ, Diener-West M, Goeschel C, Dorman T, Thiemann DR, et al. Measuring clinical information technology in the ICU setting: application in a quality improvement collaborative. J Am Med Inform Assoc. 2007;14(3):288–94.

Sodzi-Tettey S, Twum-Danso N, Mobisson-Etuk N, Macy LH, Roessner J, Barker PM. Lessons learned from Ghana’s Project Fives Alive! A practical guide for designing and executing large-scale improvement initiatives. Cambridge: Institute for Healthcare Improvement; 2015.

Duckers ML, Spreeuwenberg P, Wagner C, Groenewegen PP. Exploring the black box of quality improvement collaboratives: modelling relations between conditions, applied changes and outcomes. Implement Sci. 2009;4:74.

Catsambas TT, Franco LM, Gutmann M, Knebel E, Hill P, Lin Y-S. Evaluating health care collaboratives: the experience of the quality assurance project. Bethesda: USAID Health Care Improvement Project; 2008.

Marquez L, Holschneider S, Broughton E, Hiltebeitel S. Improving health care: the results and legacy of the USAID Health Care Improvement Project. Bethesda: University Research Co., LLC (URC). USAID Health Care Improvement Project; 2014.

Schouten LM, Hulscher ME, Akkermans R, van Everdingen JJ, Grol RP, Huijsman R. Factors that influence the stroke care team’s effectiveness in reducing the length of hospital stay. Stroke. 2008;39(9):2515–21.

Mills PD, Weeks WB. Characteristics of successful quality improvement teams: lessons from five collaborative projects in the VHA. Jt Comm J Qual Saf. 2004;30(3):152–62.

Carlhed R, Bojestig M, Wallentin L, Lindstrom G, Peterson A, Aberg C, et al. Improved adherence to Swedish national guidelines for acute myocardial infarction: the Quality Improvement in Coronary Care (QUICC) study. Am Heart J. 2006;152(6):1175–81.

Duckers MLA, Stegeman I, Spreeuwenberg P, Wagner C, Sanders K, Groenewegen PP. Consensus on the leadership of hospital CEOs and its impact on the participation of physicians in improvement projects. Health Policy. 2009;91(3):306–13.

Horbar JD, Plsek PE, Leahy K; NIC/Q 2000. NIC/Q 2000: establishing habits for improvement in neonatal intensive care units. Pediatrics. 2003;111(4 Pt 2):e397–410.

Baker U, Petro A, Marchant T, Peterson S, Manzi F, Bergstrom A, et al. Health workers’ experiences of collaborative quality improvement for maternal and newborn care in rural Tanzanian health facilities: a process evaluation using the integrated 'promoting action on research implementation in health services’ framework. PLoS One. 2018;13:12.

Osibo B, Oronsaye F, Alo OD, Phillips A, Becquet R, Shaffer N, et al. Using small tests of change to improve PMTCT services in northern Nigeria: experiences from implementation of a continuous quality improvement and breakthrough series program. J Acquir Immune Defic Syndr. 2017;75(Suppl 2):S165–s72.

Pinto A, Benn J, Burnett S, Parand A, Vincent C. Predictors of the perceived impact of a patient safety collaborative: an exploratory study. Int J Qual Health Care. 2011;23(2):173–81.

Benn J, Burnett S, Parand A, Pinto A, Iskander S, Vincent C. Perceptions of the impact of a large-scale collaborative improvement programme: experience in the UK Safer Patients Initiative. J Eval Clin Pract. 2009;15(3):524–40.

Rahimzai M, Naeem AJ, Holschneider S, Hekmati AK. Engaging frontline health providers in improving the quality of health care using facility-based improvement collaboratives in Afghanistan: case study. Confl Heal. 2014;8:21.

Stone S, Lee HC, Sharek PJ. Perceived factors associated with sustained improvement following participation in a multicenter quality improvement collaborative. Jt Comm J Qual Patient Saf. 2016;42(7):309–15.

Feldman-Winter L, Ustianov J. Lessons learned from hospital leaders who participated in a national effort to improve maternity care practices and breastfeeding. Breastfeed Med. 2016;11(4):166–72.

Ament SM, Gillissen F, Moser A, Maessen JM, Dirksen CD, von Meyenfeldt MF, et al. Identification of promising strategies to sustain improvements in hospital practice: a qualitative case study. BMC Health Serv Res. 2014;14:641.

Duckers ML, Wagner C, Vos L, Groenewegen PP. Understanding organisational development, sustainability, and diffusion of innovations within hospitals participating in a multilevel quality collaborative. Implement Sci. 2011;6:18.

Parand A, Benn J, Burnett S, Pinto A, Vincent C. Strategies for sustaining a quality improvement collaborative and its patient safety gains. Int J Qual Health Care. 2012;24(4):380–90.

Jaribu J, Penfold S, Manzi F, Schellenberg J, Pfeiffer C. Improving institutional childbirth services in rural Southern Tanzania: a qualitative study of healthcare workers’ perspective. BMJ Open. 2016;6:9.

Nembhard IM. Learning and improving in quality improvement collaboratives: which collaborative features do participants value most? Health Serv Res. 2009;44(2 Pt 1):359–78.

Carter P, Ozieranski P, McNicol S, Power M, Dixon-Woods M. How collaborative are quality improvement collaboratives: a qualitative study in stroke care. Implement Sci. 2014;9(1):32.

Nembhard IM. All teach, all learn, all improve?: the role of interorganizational learning in quality improvement collaboratives. Health Care Manag Rev. 2012;37(2):154–64.

Duckers ML, Groenewegen PP, Wagner C. Quality improvement collaboratives and the wisdom of crowds: spread explained by perceived success at group level. Implement Sci. 2014;9:91.

Dainty KN, Scales DC, Sinuff T, Zwarenstein M. Competition in collaborative clothing: a qualitative case study of influences on collaborative quality improvement in the ICU. BMJ Qual Saf. 2013;22(4):317–23.

Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A, et al. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33.

Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7.

Atkins L, Francis J, Islam R, O'Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12.

Herzer KR, Pronovost PJ. Physician motivation: listening to what pay-for-performance programs and quality improvement collaboratives are telling us. Jt Comm J Qual Patient Saf. 2015;41(11):522–8.


Herzberg F. One more time: how do you motivate employees? Harv Bus Rev. 1987;65(5):109–20.

Nickelsen NCM. Five Currents of Organizational Psychology-from Group Norms to Enforced Change. Nord J Work Life Stud. 2017;7(1):87–106.

Dixon-Woods M. The problem of context in quality improvement. In: Health Foundation, editor. Perspectives on context. London: Health Foundation; 2014. p. 87–101.

Senge P. Building learning organizations. In: Pugh DS, editor. Organization Theory - Selected Classic Readings. 5th ed. London: Penguin; 2007. p. 486–514.

Waiswa P, Manzi F, Mbaruku G, Rowe AK, Marx M, Tomson G, et al. Effects of the EQUIP quasi-experimental study testing a collaborative quality improvement approach for maternal and newborn health care in Tanzania and Uganda. Implement Sci. 2017;12(1):89.

Rowe AK, Labadie G, Jackson D, Vivas-Torrealba C, Simon J. Improving health worker performance: an ongoing challenge for meeting the sustainable development goals. BMJ Br Med J. 2018;362.

Colbourn T, Nambiar B, Bondo A, Makwenda C, Tsetekani E, Makonda-Ridley A, et al. Effects of quality improvement in health facilities and community mobilization through women's groups on maternal, neonatal and perinatal mortality in three districts of Malawi: MaiKhanda, a cluster randomized controlled effectiveness trial. Int Health. 2013:iht011.

Bonell C, Warren E, Fletcher A, Viner R. Realist trials and the testing of context-mechanism-outcome configurations: a response to Van Belle et al. Trials. 2016;17(1):478.

Hanson C, Zamboni K, Prabhakar V, Sudke A, Shukla R, Tyagi M, et al. Evaluation of the Safe Care, Saving Lives (SCSL) quality improvement collaborative for neonatal health in Telangana and Andhra Pradesh, India: a study protocol. Glob Health Action. 2019;12(1):1581466.

Moore GF, Evans RE, Hawkins J, Littlecott H, Melendez-Torres GJ, Bonell C, et al. From complex social interventions to interventions in complex social systems: future directions and unresolved questions for intervention development and evaluation. Evaluation (Lond). 2019;25(1):23–45.

Moore GF, Evans RE. What theory, for whom and in which context? Reflections on the application of theory in the development and evaluation of complex population health interventions. SSM Popul Health. 2017;3:132–5.

Shaw J, Gray CS, Baker GR, Denis JL, Breton M, Gutberg J, et al. Mechanisms, contexts and points of contention: operationalizing realist-informed research for complex health interventions. BMC Med Res Methodol. 2018;18(1):178.

Download references

Acknowledgements

We thank Alex Rowe, MD, MPH, at the Centers for Disease Control and Prevention and Commissioner at the Lancet Global Health Commission for Quality Health Systems, for the informal discussions that helped conceptualise the study and frame the research questions in the early phases of this work, and for access to the Healthcare Provider Database. We also thank Will Warburton at the Health Foundation, London, UK, for his input in refining the research questions in the light of QIC experience in the UK, and the Safe Care Saving Lives implementation team from ACCESS Health International for their reflections on the implementation of a quality improvement collaborative, which helped refine the theory of change.

Data availability statement

The datasets analysed during the current study are available in the LSHTM repository, Data Compass.

Funding

This research was made possible by funding from the Medical Research Council [Grant no. MR/N013638/1] and the Children’s Investment Fund Foundation [Grant no. G-1601-00920]. The funders had no role in the design, collection, analysis and interpretation of data, the writing of the manuscript, the commissioning of the study, or the decision to submit this manuscript for publication.

Author information

Authors and affiliations

Department of Disease Control, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK

Karen Zamboni, Joanna Schellenberg & Claudia Hanson

Department of Public Health Sciences, Karolinska Institutet, Stockholm, Sweden

Ulrika Baker & Claudia Hanson

Department of Family Medicine, College of Medicine, University of Malawi, Blantyre, Malawi

Ulrika Baker

Public Health Foundation, Kavuri Hills, Madhapur, Hyderabad, India

Mukta Tyagi

Institute for Global Health, University College London, London, UK


Contributions

KZ, CH, ZH and JS conceived and designed the study. KZ performed the searches. KZ, UB and MT analysed data. MT and KZ completed quality assessment of included papers. KZ, UB, CH, MT, ZH and JS wrote the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Karen Zamboni .

Ethics declarations

Ethics approval and consent to participate; consent for publication

Consent for publication was received from all individuals mentioned in the acknowledgement section.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Search terms used.

Additional file 2.

Systematic review alignment with RAMESES publication standards checklist.

Additional file 3.

Quality appraisal of included studies.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Zamboni, K., Baker, U., Tyagi, M. et al. How and under what circumstances do quality improvement collaboratives lead to better outcomes? A systematic review. Implementation Science 15, 27 (2020). https://doi.org/10.1186/s13012-020-0978-z


Received: 22 June 2019

Accepted: 02 March 2020

Published: 04 May 2020

DOI: https://doi.org/10.1186/s13012-020-0978-z


Keywords

  • Quality improvement
  • Realist synthesis
  • Mechanism of change

Implementation Science

ISSN: 1748-5908



Integration of continuous improvement strategies with Industry 4.0: a systematic review and agenda for further research

The TQM Journal

ISSN : 1754-2731

Article publication date: 20 August 2020

Issue publication date: 9 February 2021

Purpose

The purpose of this paper is to provide a review of the history, trends and needs of continuous improvement (CI) and Industry 4.0. Four strategies are reviewed, namely Lean, Six Sigma, Kaizen and Sustainability.

Design/methodology/approach

Digitalization and CI practices contribute to a major transformation in industrial practices. There is a need to amalgamate Industry 4.0 technologies with CI strategies to ensure significant benefits. A systematic literature review methodology was followed to review CI strategy and Industry 4.0 papers (n = 92).

Findings

Various frameworks of Industry 4.0, with their advantages and disadvantages, were explored. A conceptual framework integrating CI strategies and Industry 4.0 is presented in this paper.

Practical implications

The benefits and practical applications of the developed framework are presented.

Originality/value

The article reviews CI strategies in the context of Industry 4.0. A conceptual framework for their integration is also presented.

Keywords

  • Continuous improvement
  • Industry 4.0
  • Sustainability

Vinodh, S., Antony, J., Agrawal, R. and Douglas, J.A. (2021), "Integration of continuous improvement strategies with Industry 4.0: a systematic review and agenda for further research", The TQM Journal, Vol. 33 No. 2, pp. 441-472. https://doi.org/10.1108/TQM-07-2020-0157

Emerald Publishing Limited

Copyright © 2020, Emerald Publishing Limited



The future of feedback: Motivating performance improvement through future-focused feedback

Jackie Gnepp

1 Humanly Possible, Inc., Oak Park, Illinois, United States of America

Joshua Klayman

2 Booth School of Business, University of Chicago, Chicago, Illinois, United States of America

Ian O. Williamson

3 Wellington School of Business and Government, Victoria University of Wellington, Wellington, New Zealand

Sema Barlas

4 Masters of Science in Analytics, University of Chicago, Chicago, Illinois, United States of America

Associated Data

All relevant data are within the manuscript and its Supporting Information files (S1 Dataset).

Abstract

Managerial feedback discussions often fail to produce the desired performance improvements. Three studies shed light on why performance feedback fails and how it can be made more effective. In Study 1, managers described recent performance feedback experiences in their work settings. In Studies 2 and 3, pairs of managers role-played a performance review meeting. In all studies, recipients of mixed and negative feedback doubted the accuracy of the feedback and the providers’ qualifications to give it. Disagreement regarding past performance was greater following the feedback discussion than before, due to feedback recipients’ increased self-protective and self-enhancing attributions. Managers were motivated to improve to the extent they perceived the feedback conversation to be focused on future actions rather than on past performance. Our findings have implications for the theory and practice of performance management.

Introduction

Once again, Taylor Devani is hoping to be promoted to Regional Manager. Chris Sinopoli, Taylor’s new boss, has arranged a meeting to provide performance feedback, especially regarding ways Taylor must change to succeed in a Regional Manager position. Like Taylor’s previous boss, Chris is delighted with Taylor’s award-winning sales performance. But Taylor was admonished in last year’s performance appraisal about cavalier treatment of customers and intolerant behavior toward employees. Taylor was very resistant to that message then and there have been no noticeable improvements since. What can Chris say to get through to Taylor?

This vignette highlights three points that will be familiar to theorists, researchers, and practitioners of performance feedback. First, the vignette reflects that performance feedback often includes a mix of both positive and negative feedback. Second, it reflects the common experience that the recipients do not always accept the feedback they get, let alone act on it. Third, it raises the question of what a feedback provider should say (and perhaps not say) in order to enable and motivate the feedback recipient to improve.

The present research focuses on feedback conversations in the context of work and career, but it has implications far beyond those contexts. Giving feedback about performance is one of the key elements of mentorship, coaching, supervision, and parenting. It contributes to conflict resolution in intimate relationships [ 1 ] and it is considered one of the most powerful activities in education [ 2 ]. In all these instances, the primary goal is to motivate and direct positive behavior change. Thus, a better understanding of where performance feedback conversations go wrong and how they can be made more effective is an important contribution to the psychology of work and to organizational psychology, but also to a broad range of psychological literatures, including education, consulting, counseling, and interpersonal communications.

Across three studies, we provide the first evidence that performance feedback discussions can have counterproductive effects by increasing the recipient’s self-serving attributions for past performance, thereby decreasing agreement between the providers and recipients of feedback. These unintended effects are associated with lower feedback acceptance and with lower motivation to change. Our studies also provide the first empirical evidence that feedback discussions promote intentions to act on the feedback to the extent they are perceived as focusing on future performance, rather than past performance. These findings suggest a new line of investigation for a topic with a long and venerable history.

Performance feedback in the workplace

Performance feedback can be distinguished from other types of managerial feedback (e.g., “production is up 12% from last quarter”) by its focus on the recipients’ conduct and accomplishments–doing the right things the right way with the right results. It is nearly universal in the modern workplace. Even the recent trend toward doing away with annual performance reviews has come with a directive for managers to have more frequent, if less formal, performance feedback conversations [ 3 ].

Psychologists have known for decades that the effects of performance feedback on performance are highly variable and not always beneficial: A meta-analysis by Kluger and DeNisi found that the modal impact on performance is none [ 4 ]. Such findings fostered a focus on employee reactions to performance appraisals and the idea that employees would be motivated to change behavior only if they accepted the feedback and believed there was a need to improve [ 5 – 7 ]. Unfortunately, unfavorable feedback is not easily accepted. People have been shown to cope with negative feedback by disputing it, lowering their goals, reducing commitment, misremembering or reinterpreting the feedback to be more positive, and engaging in self-esteem repair, none of which are likely to motivate efforts to do a better job next time [ 8 – 16 ].

We are not recommending that feedback providers avoid negative feedback in favor of positive. Glossing over discrepancies between actual performance and desired standards of performance is not a satisfactory solution: Both goal-setting theory and ample evidence support the idea that people need summary feedback comparing progress to goals in order to adjust their efforts and strategies to reach those standards or goals [ 17 , 18 ]. The solution we propose is feedback that focuses less on diagnosing past performance and more on designing future performance.

Diagnosing the past

Managers talk to employees about both the nature and the determinants of their performance, often with the goal of improving that performance. Indeed, feedback theorists have long argued that managers must diagnose the causes of past performance problems in order to generate insight into what skills people need to improve and how they should change [ 19 ]. Understanding root causes is believed to help everyone decide future action.

Yet causality is ambiguous in performance situations. Both feedback providers and feedback recipients make causal attributions for performance that are biased, albeit in different ways. Whereas the correspondence bias leads the feedback provider to over-attribute success and failure alike to qualities of the employee [ 20 – 22 ], this bias is modified by a self-serving bias for the feedback recipient. Specifically, feedback recipients are more inclined to attribute successes to their positive dispositional qualities, and failures to external forces such as bad luck and situational constraints [ 23 – 26 ]. These self-enhancing and self-protective attributions benefit both affect and feelings of self-worth [ 27 , 28 ].

Organizational scholars have theorized since the 1970’s that such attribution differences between leaders and subordinates are a likely source of conflict and miscommunication in performance reviews [ 12 , 29 – 31 ]. Despite this solid basis in social psychological theory, little evidence exists regarding the prevalence and significance of attribution misalignment in the context of everyday workplace feedback. In the workplace, where people tend to trust their colleagues, have generally positive supervisor-supervisee relations they wish to maintain, and where feedback often takes place within a longer history of interaction, there may be more agreement about the causes of past events than seen in experimental settings. In Study 1, we explored whether attribution disagreement is indeed prevalent in the workplace by surveying hundreds of managers working in hundreds of different settings in which they gave or received positive or negative feedback. (In this paper, “disagreement” refers to a difference of opinion and is not meant to imply an argument between parties.) If workplace results mirror experimental findings and the organizational theorizing reviewed above, then our survey should reveal that when managers receive negative feedback, they make more externally focused attributions and they view that feedback as lacking credibility.

Can feedback discussions lead the two parties to a consensual understanding of the recipient’s past performance, so that its quality can be sustained or improved? One would be hard pressed these days to find a feedback theorist who did not advocate two-way communication in delivering feedback. Shouldn’t the two parties expect to converge on the “truth” of the matter through a sharing of perspectives? Gioia and Sims asked managers to make attributions for subordinates’ performance both before and after giving feedback [ 32 ]. Following the feedback conversation, managers gave more credit for success and less blame for failure. However, Gioia and Sims did not assess whether the recipients of feedback were influenced to think differently about their performance, and that, after all, is the point of giving feedback.

Should one expect the recipients of workplace feedback to meet the providers halfway, taking less credit for success and/or more responsibility for failure following the feedback discussion? There are reasons to suspect not. The self-serving tendency in attributions is magnified under conditions of self-threat, that is, when information is conveyed that questions, contradicts, or challenges a person’s favorable view of the self [ 33 ]. People mentally argue against threatening feedback, rejecting what they find refutable [ 11 , 34 ]. In Studies 2 and 3, we explored the effects of live feedback discussions on attributions, feedback acceptance, and motivation to improve. We anticipated that feedback recipients would find their self-serving tendencies magnified by hearing feedback that challenged their favorable self-views. We hypothesized that the very act of discussing performance would create or exacerbate differences of opinion about what caused past performance, rather than reduce them. We expected this divergence in attributions to result in recipients rejecting the feedback and questioning the legitimacy of the source, conditions that render feedback ineffective for motivating improvement [ 7 , 14 , 35 ].

Focusing on the future

Given the psychological obstacles to people’s acceptance of negative feedback, how can managers lead their subordinates to want to change their behavior and improve their performance? This question lies at the heart of the challenge posed by feedback discussions intended both to inform people and motivate them, sometimes referred to as “developmental” feedback. Despite its intended focus on learning and improvement [ 36 , 37 ], developmental feedback may nonetheless explicitly include a diagnostic focus on the past [ 38 ], such as “why the subjects thought that they had done so poorly, what aspects of the task they had difficulty with, and what they thought their strong points were” (p. 32). In contrast, we propose that the solution lies in focusing on the future: We suggest that ideas generated by a focus on future possibilities are more effective at motivating change than are ideas generated by diagnosing why things went well or poorly in the past. This hypothesis is based on recent theory and findings regarding prospective thinking and planning.

Much prospection (mentally simulating the future) is pragmatic in that it involves thinking about practical actions one can take and behavioral changes one can make to bring about desirable future outcomes [ 39 ]. In the context of mixed or negative performance feedback, such desirable outcomes might include improved performance, better results, and greater rewards. Research comparing forward to backward thinking suggests that people find it easier to come up with practical solutions to problems in the future than to imagine practical ways problems could have been avoided in the past: People are biased toward seeing past events as inevitable, finding it difficult to imagine how things might have turned out differently [ 40 – 42 ]. When thinking about their past failures, people tend to focus on how things beyond their control could have been better (e.g., they might have had fewer competing responsibilities and more resources). In contrast, when thinking about how their performance could be more successful in the future, people focus on features under their control, generating more goal-directed thoughts [ 43 ]. Thinking through the steps needed to achieve desired goals makes change in the future feel more feasible [ 44 ]. And when success seems feasible, contrasting the past with the future leads people to take more responsibility, initiate actions, engage in effortful striving, and achieve more of their goals, as compared to focusing on past difficulties [ 45 ]. For all these reasons, we hypothesize that more prospective, forward looking feedback conversations will motivate intentions toward positive change.

Overview of studies

We report three studies. The first explored the prevalence and consequences of differing attributional perspectives in the workplace. Managers described actual, recently experienced incidents of work-related feedback and the degree to which they accepted that feedback as legitimate. The second study was designed to examine and question the pervasive view that a two-way feedback discussion leads the parties to a shared explanation of past performance and a shared desire for behavior change. We hypothesized instead that the attributions of feedback providers and recipients diverge as a consequence of reviewing past performance. In that study, businesspeople role-played a performance review meeting based on objective data in a personnel file. The third study is a modified replication of the second, with an added emphasis on the developmental purpose of the feedback. Finally, we used data from Studies 2 and 3 to model the connections among provider-recipient attribution differences, future focus, feedback acceptance, and intentions to change. Our overarching theory posits that in the workplace (and in other domains of life), feedback conversations are most beneficial when they avoid the diagnosis of the past and instead focus directly on implications for future action.

We conducted an international survey of managers who described recent work-based incidents in which they either provided or received feedback, positive or negative. We explored how the judgmental biases documented in attribution research are manifested in everyday feedback conversations and how those biases relate to acceptance of feedback. Given well-established phenomena of attribution (correspondence bias, actor-observer differences, self-serving bias), we expected managers to favor internal attributions for the events that prompted the feedback, except for incidents in which they received negative feedback. We hypothesized that managers who received negative feedback would, furthermore, judge the feedback as less accurate and the feedback providers as less qualified, when compared to managers who received positive feedback or who provided feedback of either valence.

Participants

Respondents to this survey were 419 middle and upper managers enrolled in Executive MBA classes in Chicago, Barcelona, and Singapore. They represented a mix of American, European, and Asian businesspeople. Females comprised 18% of participants. For procedural reasons (see Results), the responses of 37 participants were excluded from analysis, leaving a sample of 382. This study was approved by the Institutional Review Board at the University of Chicago, which waived the requirement for written consent as was its customary policy for studies judged to be of minimal risk, involving only individual, anonymized survey responses.

Managers completed the survey online, using the Cogix ViewsFlash survey platform. When they accessed the survey, they were randomly assigned to one of four conditions. Each participant was instructed to think of one recent work-related incident in which they gave another person positive feedback (provider-positive condition), gave another person negative feedback (provider-negative condition), received positive feedback from another person (recipient-positive condition), or received negative feedback from another person (recipient-negative condition). They were asked to describe briefly the incident and the feedback.

The managers were then asked to complete the statement, “The feedback was __% accurate,” and to rate the qualification of the feedback provider on a scale from 0 = unqualified to 10 = completely qualified. Providers were asked, “How qualified were you to give the feedback?” whereas recipients were asked, “The person who gave you the feedback—how qualified was he or she to give the feedback?”

Lastly, the managers were instructed to make causal attributions for the incident. They were told, “Looking back now at the incident, please assign a percentage to each of the following causes, such that they sum to 100%.” Two of the causes corresponded to Weiner’s internal attribution categories (ability and effort) [ 28 ]. The other two causes corresponded to Weiner’s external attribution categories (task and luck). The wording of the response choices varied with condition. For example, in the provider-positive condition, the response choices were __% due to abilities he or she possessed, __% due to the amount of effort he or she put in, __% due to the nature of what he or she had to do, __% due to good luck, whereas for the recipient-negative condition, the attribution choices were __% due to abilities you lacked, __% due to the amount of effort you put in, __% due to the nature of what you had to do, __% due to bad luck. (Full text is provided in S1 Text .)

A review of the incidents and feedback the participants described revealed that 25 managers had violated instructions by writing about incidents that were not work-related (e.g., interactions with family members) and 12 had written about incidents inconsistent with their assigned condition (e.g., describing feedback received when assigned to a feedback provider condition). The data from these 37 managers were excluded from further analysis, leaving samples of 96, 92, 91, and 103 in the provider-positive, provider-negative, recipient-positive, and recipient-negative conditions, respectively. We tested the data using ANOVAs with role (providing vs. receiving feedback) and valence (positive vs. negative feedback) as between-subjects variables.

There were three dependent variables: managers’ ratings of feedback accuracy, of provider qualifications, and of internal vs. external causal attributions (ability + effort vs. task + luck). Analyses of the attribution variable used the arcsine transformation commonly recommended for proportions [ 46 ]. For all three dependent measures, there were significant main effects of role and valence and a significant interaction between them (see Table 1 and Fig 1 ).

Fig 1. Results for each dependent variable are shown by role (provider vs. recipient of feedback) and valence (positive vs. negative feedback). Error bars show standard errors.

Table 1. ANOVA results for the three dependent measures in Study 1 (F and partial η²).

Effect            Feedback accuracy     Provider qualifications    Internal attributions
                  F       partial η²    F       partial η²         F       partial η²
Role              78.0    .171          41.2    .098               49.4    .115
Valence           46.8    .110          22.0    .055               41.4    .099
Role x Valence    39.6    .095          21.5    .054               44.9    .106

All F(1, 378), all p < .001; effect size measures are partial η². Correlations among dependent measures are shown in S1 Table.
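For readers who want to see how this style of analysis is carried out in code, here is a minimal sketch in Python (this is not the authors' analysis script; the file and column names study1_survey.csv, role, valence, and internal_pct are hypothetical). It applies the arcsine square-root transformation to the internal-attribution proportions and fits a two-way between-subjects ANOVA with role, valence, and their interaction, reporting partial η² alongside each F.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical data: one row per manager, with the condition labels and the
    # percentage of the incident attributed to internal causes (ability + effort).
    df = pd.read_csv("study1_survey.csv")  # columns: role, valence, internal_pct

    # Arcsine square-root transformation commonly recommended for proportions.
    df["internal_asin"] = np.arcsin(np.sqrt(df["internal_pct"] / 100.0))

    # Two-way between-subjects ANOVA: role x valence.
    model = smf.ols("internal_asin ~ C(role) * C(valence)", data=df).fit()
    aov = anova_lm(model, typ=2)

    # Partial eta squared for each effect: SS_effect / (SS_effect + SS_residual).
    aov["partial_eta_sq"] = aov["sum_sq"] / (aov["sum_sq"] + aov.loc["Residual", "sum_sq"])
    print(aov)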

Providers of feedback reported that the incidents in question were largely caused by the abilities and efforts of the feedback recipients. They reported that their feedback was accurate and that they were well qualified to give it. These findings held for both positive and negative feedback. Recipients of feedback made similar judgments when the feedback was positive: They took personal credit for incidents that turned out well and accepted the positive feedback as true. However, when the feedback was negative, recipients judged the failures as due principally to causes beyond their control, such as task demands and bad luck. They did not accept the negative feedback received, judging it as less accurate ( t (192) = 7.50, p < .001) and judging the feedback provider less qualified to give it ( t (192) = 5.25, p < .001). One manager who defended the reasonableness of these findings during a group debrief put it this way: “We are the best there is. If we get negative feedback for something bad that happened, it probably wasn’t our fault!”

Study 1 confirms that attributional disagreement is prevalent in the workplace and associated with the rejection of negative feedback. Across a large sample of real, recent, work-related incidents, providers and recipients of feedback formed very different impressions of both the feedback and the incidents that prompted it. Despite the general tendency of people to attribute the causes of performance to internal factors such as ability and effort, managers who received negative feedback placed most of the blame outside themselves. Our survey further confirmed that, across a wide variety of workplace settings, managers who received negative feedback viewed it as lacking credibility, rating the feedback as less accurate and the source as less qualified to provide feedback.

These results are consistent with attribution theory and the fact that feedback providers and recipients have access to different information: Whereas providers have an external perspective on the recipients’ observable behavior, feedback recipients have unique access to their own thoughts, feelings, and intentions, all of which drove their performance and behavior [ 24 , 47 ]. For the most part, feedback recipients intend to perform well. When their efforts pay off, they perceive they had personal control over the positive outcome; when their efforts fail, they naturally look for causes outside themselves [ 48 , 49 ]. For their part, feedback providers are prone to paying insufficient attention to situational constraints, even when motivated to give honest, accurate, unbiased, and objective feedback [ 20 ].

In this survey study, every incident was unique: Providers and recipients were not reporting on the same incidents. Thus, the survey method permits an additional mechanism of self-protection, namely, biased selection of congenial information [ 50 ]. When faced with a request to recall a recent incident that resulted in receipt of negative feedback, the managers may have tended to retrieve incidents for which they were not to blame and that did not reflect poorly on their abilities. Such biased recall often occurs outside of conscious awareness [ 51 , 52 ]. For the recipients of feedback, internal attributions for the target incident have direct implications for self-esteem. Thus, they may have tended to recall incidents aligned with their wish to maintain a positive self-view, namely, successes due to ability and effort, and failures due to task demands and bad luck. It is possible, of course, that providers engaged in selective recall as well: They may have enhanced their sense of competence and fairness by retrieving incidents in which they were highly qualified and provided accurate feedback. Biased selection of incidents is not possible in the next two studies which provided all participants with identical workplace-performance information.

In Study 2 we investigated how and how much the feedback conversation itself alters the two parties’ judgments of the performance under discussion. This study tests our hypotheses that feedback discussions do not lead to greater agreement about attributions and may well lead to increased disagreement, that attributional misalignment is associated with rejection of feedback, and that future focus is associated with greater feedback effectiveness, as measured by acceptance of feedback and intention to change. The study used a dyadic role-play simulation of a performance review meeting in which a supervisor (newly hired Regional Manager Chris Sinopoli) gives performance feedback to a subordinate (District Manager Taylor Devani, being considered for promotion). The simulation was adapted from a performance feedback exercise that is widely used in management training. Instructors and researchers who use similar role-play exercises report that participants find them realistic and engaging, and respond as they would to the real thing [ 32 , 53 ].

The decision to use a role-play method involves trade-offs, especially when compared to studying in vivo workplace performance reviews. We chose this method in order to gain greater experimental control and a cleaner test of our hypotheses. In our study, all participants were given identical information, in the form of a personnel file, ensuring that both the providers and recipients of feedback based their judgements on the same information. This control would not be possible inside an actual company, where the two parties might easily be influenced by differential access to organizational knowledge and different exposure to the events under discussion. Additionally, participants in our study completed questionnaires that assessed their perceptions of the feedback-recipient’s performance, the discussion of that performance, and the effects of the feedback discussion. Because this study was a simulation, participants were able to respond honestly to these questionnaires. Participants in an actual workplace performance review might need to balance honesty with concerns for appearances or repercussions; for example, feedback recipients might be hesitant to admit having little intention to change in response to feedback. On the other hand, there are a variety of conditions and motivations that exist in the workplace that cannot be easily simulated in a role-play, such as the pre-existing relationship between the feedback provider and recipient, and the potential long-term consequences of any performance review. Further work will be required to determine how findings from this study apply in workplace settings.

This study comprised two groups that received the same scenarios, but differed with regard to the timing and content of the questionnaires. Recall that the primary goal of Study 2 was to explore how the feedback discussion affects participants’ judgments. For this, we analyzed data from the pre-post group. Participants in this group completed questionnaires both before and after the discussion. Their post-discussion questionnaire included questions evaluating the conduct and consequences of the feedback discussion, including ratings of future focus and intention to change. A second group of participants (the post-only group) completed only a questionnaire after the feedback discussion that did not include future-focus or intention-to-change items. This group allowed us to test whether answering the same questions twice (pre and post the feedback discussion) affected the results.

Participants were 380 executives and MBA students enrolled in advanced Human Resources classes in Australia. They represented an international mix of businesspeople: 59% identified their “main cultural identity” as Australian, 20% as a European nationality or ethnicity, 24% Asian, and 12% other; 5% did not indicate any. (Totals sum to more than 100% because participants were able to choose two identities if they wished.) They averaged 35 years of age, ranging from 23 to 66. Females comprised 35% of the sample. Participants worked in pairs. Five pairs were excluded from analysis because one member of the dyad did not complete the required questionnaires, leaving a sample of 117 dyads in the pre-post group and 68 in the post-only group. This study was approved by the Institutional Review Board at the University of Melbourne. Participants’ written consent was obtained.

Each participant received a packet of materials consisting of (a) background on a fictional telecommunications company called the DeltaCom Corporation, (b) a description of both their role and their partner’s role, (c) task instructions for completing the questionnaires and the role-play itself, (d) a copy of the personnel file for the subordinate, and (e) the questionnaire(s). The names of the role-play characters were pre-tested to be gender neutral. (The full text of the materials is provided in S2 – S7 Texts .)

Personnel file. The personnel file documented a mixed record including both exemplary and problematic aspects of the District Manager’s performance. On the positive side were superior, award-winning sales performance and consistently above-average increases in new customers. On the negative side were consistently below-average ratings of customer satisfaction and a falling percentage of customers retained, along with high turnover of direct reports, some of whom complained of the District Manager’s “moody, tyrannical, and obsessive” behavior. Notes from the prior year’s performance discussion indicated that the District Manager did not fully accept the developmental feedback received at that time, instead defending a focus on sales success and the bottom line.

Questionnaires. Participants in the pre-post group completed a pre-discussion questionnaire immediately following their review of the District Manager’s personnel file. They rated the quality of the District Manager’s job performance on sales, customer retention, customer satisfaction, and ability to manage and coach employees, using 7-point scales ranging from 1 = Very Low to 7 = Very High. They then rated the importance of these four aspects of the recipient’s job performance on 7-point scales ranging from 1 = Not Important to 7 = Very Important. Lastly, participants gave their “opinion about the causes of Taylor Devani’s successes by assigning a percentage to each of the following four causes, such that the four causes together sum to 100%.” They did the same for “Taylor Devani’s failures.” Two response categories described internal attributions: “% due to Taylor’s abilities and personality” and “% due to the amount of effort and attention Taylor applied.” The other two described external attributions: “% due to Taylor’s job responsibilities, DeltaCom’s expectations, and the resources provided” and “% due to chance and random luck.” (We chose the expression “random luck” to imply uncontrollable environmental factors in contrast to a trait or feature of a lucky or unlucky person [ 54 ].) Participants chose a percentage from 0 to 100 for each cause, using scales in increments of 5 percentage points. In 4.4% of cases, participants’ four attribution ratings summed to a total, T, that did not equal 100. In those cases, all the ratings were adjusted by multiplying by (100 / T).
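The rescaling step just described (multiplying each of the four ratings by 100/T when they do not sum to 100) is straightforward to express in code. Below is a minimal sketch, assuming hypothetical column names for the four attribution ratings.

    import pandas as pd

    # Hypothetical column names for the four attribution ratings (0-100 each).
    ATTRIBUTION_COLS = ["ability_personality", "effort_attention",
                        "responsibilities_resources", "chance_luck"]

    def normalise_attributions(df: pd.DataFrame) -> pd.DataFrame:
        """Rescale each respondent's four attribution ratings so they sum to 100."""
        out = df.copy()
        total = out[ATTRIBUTION_COLS].sum(axis=1)            # T for each respondent
        out[ATTRIBUTION_COLS] = out[ATTRIBUTION_COLS].mul(100.0 / total, axis=0)
        return out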

Participants in both the pre-post group and the post-only group completed a post-discussion questionnaire following their feedback discussion. This questionnaire asked the participants to rate the favorability of the feedback given, on an 11-point scale from 0 = “Almost all negative” to 10 = “Almost all positive”; the accuracy of the feedback, on a scale from 0% to 100% in increments of 5%; and how qualified the provider was to give the feedback, on an 11-point scale from 0 = “Unqualified” to 10 = “Completely qualified.” It continued by asking all of the pre-discussion questionnaire items, allowing us to assess any rating changes that occurred in the pre-post group as a consequence of the intervening feedback discussion. Next, for those in the pre-post group, the questionnaire presented a series of 7-point Likert-scale items concerning the conduct and consequences of the feedback. These included items evaluating future focus and intention to change. Additionally, the post-discussion questionnaires of both groups contained exploratory questions about the behaviors of the individual role-players; these were not analyzed. On the final page, participants provided demographic information about themselves.

Participants were randomly assigned to dyads and to roles within each dyad. They were sent to private study rooms to complete the procedure. Instructions indicated (a) 15 minutes to review the personnel file, (b) 5 minutes to complete the pre-discussion questionnaire (pre-post group only), (c) 20 minutes to hold the feedback discussion, and (d) 15 minutes to complete the post-discussion questionnaire. Participants were instructed to stay in role during the entire exercise, including completion of the questionnaires. They were told to complete all steps individually without consulting their partner except, of course, for the feedback discussion. The feedback provider was directed by the task instructions to focus on the recipient’s “weaknesses as a manager–those aspects of performance Taylor must change to achieve future success if promoted.” The reason for this additional instruction was to balance the discussion of successes and failures. Prior pilot testing showed that without this instruction there was a tendency for role-players to avoid discussing shortcomings at all, a finding consistent with research showing that people are reluctant to deliver negative feedback and sometimes distort it to make it more positive [ 35 , 55 – 57 ]. When they finished, the participants handed in all the materials and took part in a group debrief of the performance review simulation.

We used analyses of variance to study differences in how the participants interpreted the past performance of the feedback recipient. The dependent variables were participant judgments of (a) internal vs. external attributions for the feedback recipient’s performance, (b) the quality of various aspects of job performance, and (c) the importance of those aspects. One set of ANOVAs used post-feedback questionnaire data from both the pre-post and post-only groups to check whether completing a pre-discussion questionnaire affected post-discussion results. The independent variables were role (provider or recipient of feedback), outcomes (successes or failures of the feedback recipient), and group (pre-post or post-only). A second set of ANOVAs used data from the pre-discussion and post-discussion questionnaires of the pre-post group to test our hypothesis that feedback discussions tend to drive providers’ and recipients’ interpretations of performance further apart rather than closer together. The independent variables in these analyses were role, outcomes, and timing (before or after feedback conversation). In all the ANOVAs, the dyad was treated as a unit (i.e., as though a single participant) because the responses of the two members of a dyad can hardly be considered independent of one another. Accordingly, role, outcomes, group, and timing were all within-dyad variables.
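As a rough illustration of the second set of ANOVAs (all factors within-dyad, with the dyad as the unit of analysis), a repeated-measures ANOVA along the following lines could be used. This is a sketch only, not the authors' code; the long-format file and column names (study2_prepost_long.csv, dyad_id, role, outcomes, timing, internal_asin) are hypothetical, and the data would need one arcsine-transformed internal-attribution score per dyad for every role x outcomes x timing cell.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format data: one row per dyad x role x outcomes x timing.
    long_df = pd.read_csv("study2_prepost_long.csv")

    # Fully within-dyad repeated-measures ANOVA on the transformed attribution score.
    res = AnovaRM(
        data=long_df,
        depvar="internal_asin",
        subject="dyad_id",
        within=["role", "outcomes", "timing"],
    ).fit()
    print(res)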

A third set of analyses provided tests of our hypotheses that provider-recipient disagreement about attributions interferes with feedback effectiveness, and that a focus on future behavior, rather than past behavior, improves feedback effectiveness. We conducted regression analyses using data from the pre-post group, whose questionnaires included the set of Likert-scale items concerning the conduct and consequences of the feedback discussion. The dependent variables for these regressions were two measures of feedback effectiveness derived from recipient responses: the recipients’ acceptance of the feedback as legitimate and the recipients’ expressed intention to change. The predictors represented five characteristics measured from the post-feedback questionnaire: provider-recipient disagreement about attributions, about performance quality, and about performance importance; how favorable the recipient found the feedback to be; and the extent to which the recipient judged the conversation to be future focused.

Role differences in the interpretation of past performance before and after feedback discussion

Given the results of Study 1 and established phenomena in social psychology, we expected feedback recipients to make internal attributions for their successes and external for their failures more than feedback providers do, to hold more favorable views of their job performance quality than providers do, and to see their successes as more important and/or their failures as less important than providers do. Analyses of the post-discussion ratings in the pre-post and post-only groups ( S1 Analyses ) confirm those expectations for attributions and for performance quality, but not for performance importance. There were no differences between the pre-post and post-only groups on any of those measures, with all partial η 2 < .02. Beyond that, we hypothesized that feedback conversations do not reduce provider-recipient differences in interpretation, and may well make them larger. Accordingly, we report here the analyses that include the timing variable, using data from the pre-post group ( Table 2 ).

Table 2. ANOVA results for the pre-post group in Study 2 (F, p, and partial η² for each dependent measure).

                              Internal attributions     Performance quality      Performance importance
Effect                        F       p       η²        F       p       η²       F       p       η²
Role                          0.27    .605    .002      14.13   <.001   .110*    1.26    .264    .011
Outcomes                      1.17    .281    .010      3403.5  <.001   .968*    50.34   <.001   .306*
Timing                        0.03    .871    ~0        3.65    .059    .031     0.23    .636    .002
Role x Outcomes               12.43   .001    .097*     0.89    .347    .008     4.28    .041    .036
Role x Timing                 2.61    .109    .022      2.16    .144    .019     3.32    .071    .028
Outcomes x Timing             20.97   <.001   .153*     <0.01   .951    ~0       1.29    .258    .011
Role x Outcomes x Timing      6.46    .012    .053*     <0.01   .967    ~0       6.43    .013    .053*

F(1, 116) for internal attributions, F(1, 114) for performance quality and importance. η² values are partial η²; * marks effects with p < .05 and partial η² > .05 (underlined in the original table).

Internal vs. external attributions. Participants in both roles provided attribution ratings before and after the discussion, separately “about the causes of Taylor Devani’s successes” and “about the causes of Taylor Devani’s failures.” There were three significant effects, all of which were interactions. Those were Role x Outcomes, Outcomes x Timing, and Role x Outcomes x Timing. As shown in Fig 2, the three-way interaction reflects the following pattern: The parties began with only minor (and not statistically significant) differences in attributional perspective. Following the feedback discussion, however, those differences were much greater. There were no significant effects involving timing for feedback providers: Their attributions changed only slightly from pre- to post-discussion. Feedback recipients, in contrast, showed a highly significant Outcomes x Timing interaction, F (1, 116) = 19.6, p < .001, η2 = .14. Following the feedback conversation, recipients attributed their successes more to internal factors than they did before the conversation and they attributed their failures more to external factors than before ( t (116) = 4.5, p < .001 and t (116) = 3.3, p = .001, respectively). At the end, the two parties’ attributions were well apart on both successes and failures ( t (116) = 2.3, p = .024 and t (116) = 3.0, p = .003). In sum, the performance review discussion led to greater disagreement between the feedback providers and recipients due to the recipients of feedback making more self-enhancing and self-protecting performance attributions.

Fig 2. Results are shown by role (provider vs. recipient of feedback), outcomes (successes vs. failures), and timing (before vs. after feedback). Error bars show standard errors.

Performance quality. There were main effects of outcomes and role, but no interactions. As intended, participants rated performance on sales much more highly than they rated the other job aspects (6.72 vs. 3.32 out of 7). Overall, recipients evaluated their performances slightly more positively than the providers did (5.13 vs. 4.91).

Performance importance. There was a main effect of outcome, modified by significant Role x Outcomes and Role x Outcomes x Timing interactions. To understand these effects, we followed up with analyses of role and timing for successes and for failures, separately. Feedback recipients rated their successes as more important than feedback providers did (6.41 and 6.12, respectively; F (1, 115) = 6.20, p = .014, η2 = .05), with no significant effects of time. In contrast, importance ratings for failures showed a Role x Timing interaction ( F (1, 114) = 7.77, p = .006, η2 = .06): Providers rated failures as more important before discussion, becoming more lenient following discussion (5.75 vs. 5.42; t (114) = 2.22, p = .028), consistent with the findings of Gioia and Sims [ 32 ]. Recipient ratings showed no significant change as a consequence of discussion.

These analyses suggest that in performance conversations, feedback providers do not lead recipients to see things their way: Recipient interpretations of past performance do not become more like provider interpretations. In fact, following discussion, recipients’ causal attributions are further from those of the providers. Moreover, across dyads, there was no correlation between the recipient’s ratings and the provider’s ratings following discussion: Although a ceiling effect limits the potential for correlations on the quality of sales performance (success), the other measures, especially attributions, show considerable variation in responses across dyads but still no provider-recipient correlations ( S2 and S3 Tables). For performance quality, performance importance, and attributions, for successes and for failures, all | r | < .12 ( p > .22, N = 115 to 117).

Effects of attribution disagreement and future focus on recipients’ acceptance of feedback and intention to change

We hypothesized that provider-recipient disagreement about attributions negatively impacts feedback in two ways, by reducing the extent to which recipients accept the feedback as legitimate, and by reducing the recipient’s intentions to change in response to the feedback. We further hypothesized that a focus on future behavior, rather than past behavior, would engender greater acceptance of feedback and greater intention to change. The present study provides evidence for both of those hypotheses.

We measured feedback acceptance by averaging ratings on feedback accuracy and provider qualifications, both scaled 0 to 100 ( r = .448). We measured intention to change as the average of recipients’ responses to three of the Likert questions in the post-feedback-discussion questionnaire (α = .94):

  • Based on the feedback, you are now motivated to change your behavior.
  • You see the value of acting on Chris’s suggestions.
  • You will likely change your behavior, based on the feedback received.
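A minimal sketch of how the two composite measures might be computed follows (not the authors' code; the recipient-level column names are hypothetical). Feedback acceptance is the mean of the accuracy and qualification ratings on a common 0-100 scale, and intention to change is the mean of the three items above, with Cronbach's alpha computed from the item variances as a check on internal consistency.

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for a set of item columns (one row per respondent)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical recipient-level data.
    recipients = pd.read_csv("study2_recipients.csv")

    # Feedback acceptance: mean of accuracy (0-100) and provider qualifications (rescaled to 0-100).
    recipients["feedback_acceptance"] = recipients[["accuracy_pct", "qualified_pct"]].mean(axis=1)

    # Intention to change: mean of the three Likert items listed above.
    change_items = recipients[["motivated_to_change", "value_of_suggestions", "will_change_behavior"]]
    print("Cronbach's alpha:", round(cronbach_alpha(change_items), 2))
    recipients["intention_to_change"] = change_items.mean(axis=1)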

We analyzed these two measures of feedback effectiveness using regressions with five variables that might predict the outcome of the discussion: post-feedback disagreement about attributions, performance quality, and performance importance (all scored such that positive numbers indicate that the recipient made judgments more favorable to the recipient than did the provider); how favorable the recipient found the feedback to be (rated from 0 = almost all negative to 10 = almost all positive); and the extent to which the recipient thought the conversation was future focused. This last measure is the average of the recipient’s ratings on the following three Likert questions on the post-feedback questionnaire (α = .75):

  • You and Chris spent a large part of this session generating new ideas for your next steps.
  • The feedback conversation centered on what will make you most successful going forward.
  • The feedback discussion focused mostly on your future behavior.

We hypothesized that the recipients’ acceptance of feedback and intention to change would be affected by the recipients’ impressions of how future focused the discussion was. That said, we note that the provider’s and the recipient’s ratings of future focus were well correlated across dyads ( r (115) = .423, p < .001), suggesting that recipients’ ratings of future focus reflected characteristics of the discussion that were perceived by both parties.

As shown in Table 3 , recipients’ ratings of future focus proved to be the best predictor of their ratings of both feedback acceptance and intention to change. Recipients’ favorability ratings also significantly predicted their intention to change and, especially, their acceptance of the feedback. Attribution disagreement between providers and recipients predicted lower acceptance of feedback, but not intention to change. Differences of opinion regarding the quality and importance of various aspects of job performance had no significant effects and, as shown by Model 2 in Table 3 , removing them had almost no effect.

Table 3. Regressions predicting recipients’ feedback acceptance and intention to change (Beta, t, and p).

Feedback acceptance
                              Model 1 [.427]               Model 2 [.421]
Predictor                     Beta     t(109)   p          Beta     t(113)   p
Future focus                  .406     5.10     <.001      .424     5.39     <.001
Favorability                  .313     3.85     <.001      .284     3.63     <.001
Attribution disagreement      -.207    -2.68    .009       -.173    -2.39    .019
Quality disagreement          -.041    -.53     .596
Importance disagreement       .102     1.39     .166

Intention to change
                              Model 1 [.590]               Model 2 [.599]
Predictor                     Beta     t(109)   p          Beta     t(113)   p
Future focus                  .699     10.39    <.001      .709     10.84    <.001
Favorability                  .156     2.26     .025       .142     2.18     .031
Attribution disagreement      -.014    -.22     .828       -.004    -.07     .942
Quality disagreement          -.019    -.29     .773
Importance disagreement       .017     .27      .785

Model 1 includes all five predictor variables. Model 2 excludes the two that showed no significant effects in Model 1. Numbers in brackets are adjusted R²s.
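For illustration, the two regressions could be fit along the following lines. This is a sketch only, with hypothetical dyad-level column names; to obtain betas comparable to those in Table 3, the outcome and predictor variables would first be z-scored.

    import pandas as pd
    import statsmodels.formula.api as smf

    dyads = pd.read_csv("study2_dyad_level.csv")  # hypothetical dyad-level file

    predictors_model1 = ("future_focus + favorability + attribution_disagreement"
                         " + quality_disagreement + importance_disagreement")
    predictors_model2 = "future_focus + favorability + attribution_disagreement"

    for outcome in ["feedback_acceptance", "intention_to_change"]:
        # Note: z-score the outcome and predictors first to obtain standardized betas.
        m1 = smf.ols(f"{outcome} ~ {predictors_model1}", data=dyads).fit()
        m2 = smf.ols(f"{outcome} ~ {predictors_model2}", data=dyads).fit()
        print(outcome,
              "| Model 1 adj. R2:", round(m1.rsquared_adj, 3),
              "| Model 2 adj. R2:", round(m2.rsquared_adj, 3))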

As in Study 1, we again observe that the providers and recipients of feedback formed very different impressions about past performance. A new and important finding in this study is that feedback conversations did not merely fail to diminish provider-recipient disagreements about what led to strong and weak performance; they actually turned minor disagreements into major ones. Recipients made more self-enhancing and self-protective attributions following the performance discussion, believing more strongly than before that their successes were caused by internal factors (their ability, personality, effort, and attention) and their failures were caused by external factors (job responsibilities, employer expectations, resources provided, and bad luck). There were also modest disagreements regarding the quality and importance of different aspects of the recipient’s job performance, but these did not worsen following discussion. The most important source of disagreement between providers and recipients then, especially following the feedback conversation, was not about what happened, but about why it happened.

What led recipients of performance feedback to accept it as legitimate and helpful? The best predictor of feedback effectiveness was the extent to which the discussion was perceived as future focused. Unsurprisingly, feedback was also easier to accept when it was more favorable. As predicted, recipients were more likely to accept feedback when they and the feedback providers agreed more about what caused the past events. Greater attribution agreement, however, did not increase recipients’ intention to change. These findings suggest that reaching agreement on the causes of past performance is neither likely to happen (because feedback discussions widen causal attribution disagreement) nor is it necessary for fostering change. What does matter is the extent to which the feedback conversation focuses on generating new ideas for future success. We further explore the relations among all these variables following the reporting of Study 3.

Performance feedback serves goals other than improving performance. For example, performance reviews often serve as an opportunity for the feedback provider to justify promotion and compensation decisions. For the recipient, the conversation may provide an opportunity for image management and a chance to influence employment decisions. People may fail to distinguish between evaluation and improvement goals when providing and receiving feedback. In Study 2, the instructions were intended to direct participants explicitly toward the developmental goal of performance improvement, rather than accountability or rewards. Nevertheless, the providers’ wish to justify their evaluations and the recipients’ wish to influence them might have contributed to the differences we observed in attributions and in judgments about the feedback’s legitimacy. To address this concern, in Study 3 we added a page of detailed company guidelines that emphasized the primacy of the performance-improvement goal over the goals of expressing, justifying, or influencing evaluations. There were two versions of these guidelines, which did not differ in their effects.

Participants were 162 executives and MBA students enrolled in advanced Human Resources classes in Australia. They were an international mix of businesspeople: 74% said they grew up in Australia or New Zealand, 10% in Europe, 22% in Asia, and 7% elsewhere. (Totals sum to more than 100% because some participants indicated more than one region.) Participants averaged 39 years of age, ranging from 27 to 60. Females comprised 37% of the participants.

Participants read the same scenario and instructions as in Study 2, with an added page of guidelines for giving developmental feedback ( S8 Text ). They then completed the same post-discussion questionnaires used for the pre-post group of Study 2, minus the ratings of performance quality and importance for various aspects of the job, which showed no effects in Study 2. (The full text of the questionnaires is provided in S9 and S10 Texts). Taken together, these modifications kept the procedure to about the same length as in Study 2. This study was approved by the Institutional Review Board at the University of Melbourne. Written consent was obtained.

Role differences in the interpretation of past performance

As in Study 2, we calculated the sum of the percentages of attributions assigned to internal causes (ability and personality + effort and attention), applying an arcsine transformation. As before, we analyzed the internal attributions measure with a mixed-model ANOVA treating each dyad as a unit. There were two within-dyads variables, role (provider or recipient) and outcomes (successes or failures), and one between-dyads variable (guideline version). There were no effects involving guideline version (all F < 1). The main effects of role (F(1, 79) = 50.12, p < .001, η² = .39) and outcomes (F(1, 79) = 113.8, p < .001, η² = .59) and the interaction between them (F(1, 79) = 86.34, p < .001, η² = .52) are displayed in Fig 3, along with the parallel post-feedback results from the previous two studies. As in Study 2, the two parties’ post-discussion attributions were well apart on both successes and, especially, failures (t(80) = 3.3 and 9.4 respectively, both p ≤ .001). Again, the correlations between the provider’s and the recipient’s post-conversation performance attributions across dyads were not significant for either successes (r(79) = -.04, p > .69) or failures (r(79) = -.13, p > .23), suggesting that conversation does not lead the dyad to a common understanding of what led to good or poor performance.
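To make the preceding analysis concrete, here is a minimal sketch with simulated data and placeholder column names (dyad, role, outcome, internal_pct) of the arcsine transformation and a repeated-measures ANOVA with role and outcomes as within-dyad factors. The between-dyads guideline-version factor is omitted for brevity, so this illustrates the general approach rather than the exact model reported above.

```python
# Sketch: arcsine-transform the percentage of internal attributions and run a
# dyad-level repeated-measures ANOVA with role and outcome as within factors.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for dyad in range(80):
    for role in ("provider", "recipient"):
        for outcome in ("success", "failure"):
            # Simulated self-serving pattern: recipients attribute successes
            # more internally, and failures more externally, than providers do.
            base = 70 if (role == "recipient") == (outcome == "success") else 45
            rows.append({"dyad": dyad, "role": role, "outcome": outcome,
                         "internal_pct": float(np.clip(rng.normal(base, 15), 0, 100))})
df = pd.DataFrame(rows)

# Arcsine (angular) transformation of the proportion of internal attributions.
df["internal_asin"] = np.arcsin(np.sqrt(df["internal_pct"] / 100.0))

res = AnovaRM(df, depvar="internal_asin", subject="dyad",
              within=["role", "outcome"]).fit()
print(res)
```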

Fig 3. Results are shown by role (provider vs. recipient of feedback) and valence/outcomes (positive feedback for successes vs. negative feedback for failures), following feedback conversation. Error bars show standard errors.

We conducted regression analyses of the recipient’s feedback acceptance and intention to change as in Study 2. The regression models included three predictors: future focus, attribution disagreement, and feedback favorability. Results, shown in Table 4 , replicated our Study 2 finding that future focus is the best predictor of both feedback acceptance and intention to change. As before, attribution disagreement predicted lower acceptance, but in this study it also predicted less intention to change. We again found that feedback favorability ratings were associated with greater acceptance, but this time, not with intention to change. Recipients and providers were again significantly correlated in their judgments of how future focused the conversation was ( r (79) = .299, p = .007).

Table 4.

| Predictor | Feedback Acceptance [.373]: Beta | t(77) | p | Intention to Change [.323]: Beta | t(77) | p |
| --- | --- | --- | --- | --- | --- | --- |
| Future focus | .411 | 4.432 | < .001 | .549 | 5.697 | .001 |
| Attribution disagreement | -.193 | -2.131 | .036 | -.198 | -2.105 | .039 |
| Favorability | .284 | 3.017 | .003 | -.050 | -.516 | .607 |

Numbers in brackets are adjusted R²s.

Future focus, as perceived by the recipients of feedback, was once again the strongest predictor of both their acceptance of the feedback and their intention to change. In contrast, attribution disagreement between the provider and recipient of feedback was associated with lower feedback acceptance and weaker intention to change. As in Studies 1 and 2, recipients made more internal attributions for successes than providers did and, especially, more external attributions for failures. The added guidelines in this study, which emphasized performance-improvement goals over evaluative ones, did not alleviate provider-recipient attribution differences. Indeed, those differences were considerably larger in this study than in the previous one and more closely resembled those seen in Study 1 (see Fig 3).

Future focus, attributions, favorability, and the effectiveness of feedback

The strongest predictor of feedback effectiveness is the recipient’s perception that the feedback conversation focused on plans for the future rather than analysis of the past. We seek here to elucidate the relationship between future focus and feedback effectiveness by looking at the interrelations among the three predictors of effectiveness we studied: future focus, attribution disagreement, and feedback favorability.

The analyses that follow include data from all participants who were asked for ratings of future focus, namely those in Study 3 and in the pre-post group of Study 2. We included study as a variable in our analyses; no effects involving the study variable were significant. Nonetheless, because the two studies drew from different samples and used slightly different methods, inferential statistics could be impacted by intraclass correlation within each study. Therefore, we also tested for study-specific differences in parameter estimates using hierarchical linear modeling [ 58 , 59 ]. No significant differences between studies emerged, confirming the appropriateness of combining the data. (The HLM results are provided in S2 Analyses .)

The association between future focus and feedback effectiveness could be mediated by the effects of attribution disagreement and/or feedback favorability. Specifically, it could be that perceiving the conversation as more future focused is associated with closer agreement on attributions or with perceiving the feedback as more favorable, and one or both of those latter two effects leads to improved feedback effectiveness. Tests of mediation, following the methods of Kenny and colleagues [ 60 ], suggest otherwise (see Fig 4 ). These analyses partition the total associations of future focus with feedback acceptance and with intention to change into direct effects and indirect effects. Indirect effects via reduced attribution disagreement were 6.2% of the relation of future focus to feedback acceptance and 2.2% to intention to change. Indirect effects via improved perceptions of feedback favorability were 20.8% of the relation of future focus to feedback acceptance and 4.5% to intention to change. Thus, there is little to suggest that closer agreement on attributions or improved perceptions of feedback favorability account for the benefits of future focus on feedback effectiveness.
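The decomposition used here can be illustrated with a short sketch. The code below, using simulated data and placeholder variable names, computes the a, b, c, and c′ paths with ordinary regressions and expresses the indirect effect a·b as a share of the total effect. It shows only the arithmetic of the partitioning; the full analysis would also test the intention-to-change outcome and the attribution-disagreement mediator, and the indirect effects would ordinarily be tested by bootstrapping.

```python
# Sketch of a simple mediation decomposition: total effect c = direct c' + indirect a*b.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 198
future_focus = rng.normal(0, 1, n)
favorability = 0.3 * future_focus + rng.normal(0, 1, n)          # hypothetical mediator
acceptance = 0.5 * future_focus + 0.2 * favorability + rng.normal(0, 1, n)
df = pd.DataFrame({"future_focus": future_focus,
                   "favorability": favorability,
                   "acceptance": acceptance})
df = (df - df.mean()) / df.std()  # standardized coefficients, as in Fig 4

a = smf.ols("favorability ~ future_focus", df).fit().params["future_focus"]
b = smf.ols("acceptance ~ favorability + future_focus", df).fit().params["favorability"]
c = smf.ols("acceptance ~ future_focus", df).fit().params["future_focus"]
c_prime = smf.ols("acceptance ~ future_focus + favorability", df).fit().params["future_focus"]

indirect = a * b
print(f"total c = {c:.3f}, direct c' = {c_prime:.3f}, indirect a*b = {indirect:.3f}")
print(f"indirect effect as a share of the total: {100 * indirect / c:.1f}%")
```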

Fig 4. The two feedback effectiveness measures are feedback acceptance and intention to change. Following Kenny (2018), standardized regression coefficients are shown for the relations between future focus and two hypothesized mediators, attribution disagreement and feedback favorability (a), the mediators and the feedback effectiveness measures controlling for future focus (b), future focus and the effectiveness measures (c), and future focus and the effectiveness measures controlling for the mediator (c′). The total effect (c) equals the direct effect (c′) plus the indirect effect (a · b). Data are from Studies 2 and 3. a p = .072; * p = .028; ** p < .001.

Interactions

Future focus might have synergistic or moderating effects. In particular, we hypothesized that perceiving the conversation as more future focused may moderate the negative impact of attribution disagreement on feedback effectiveness. Alternatively, future focus may be especially beneficial when agreement about attributions is good, or when attribution differences are neither so big that they cannot be put aside, nor so small that the parties see eye to eye even when they focus on the past. Similarly, future focus may be especially beneficial when feedback is most unfavorable to the recipient, or when it’s most favorable, or when it is neither so negative that the recipients can’t move past it, nor so positive that the recipients accept it even when the conversation focuses on the past.

We conducted regression analyses with feedback acceptance and intention to change as dependent variables and future focus, feedback favorability, attribution disagreement, and their first-order interactions as predictors. Because some plausible interactions are nonlinear, we defined low, intermediate, and high values for each of the three predictor variables, dividing the 198 participants as evenly as possible for each. We then partitioned each predictor into linear and quadratic components with one degree of freedom each. With linear and quadratic components of three predictors plus a binary variable for Study 2 vs. Study 3, there were seven potential linear effects and 18 possible two-way interactions. We used a stepwise procedure to select which interactions to include in our regressions, using an inclusion parameter of p < .15. Results are shown in Table 5 .

Table 5.

| Predictor | Feedback acceptance: Beta | t | p | Intention to change: Beta | t | p |
| --- | --- | --- | --- | --- | --- | --- |
| Future focus—Linear | **0.487** | 5.09 | < .001 | **0.639** | 11.51 | < .001 |
| Future focus—Quadratic | 0.024 | 0.40 | .687 | -0.068 | -1.27 | .206 |
| Feedback favorability—Linear | **0.268** | 4.36 | < .001 | 0.096 | 1.74 | .083 |
| Feedback favorability—Quadratic | -0.067 | -1.12 | .265 | -0.029 | -0.55 | .584 |
| Attribution disagreement—Linear | **-0.226** | -3.57 | .001 | **-0.148** | -2.60 | .010 |
| Attribution disagreement—Quadratic | -0.094 | -1.62 | .108 | -0.088 | -1.69 | .093 |
| Study 2 vs. 3 | 0.073 | 1.13 | .259 | -0.078 | -1.34 | .182 |
| Future focus—Linear x Feedback favorability—Linear | -0.119 | -1.91 | .057 | **-0.116** | -2.09 | .038 |
| Future focus—Linear x Attribution disagreement—Linear | | | | -0.095 | -1.83 | .070 |
| Future focus—Linear x Study | -0.136 | -1.46 | .145 | | | |
| Feedback favorability—Quadratic x Attribution disagreement—Quadratic | | | | 0.084 | 1.60 | .112 |

Models include all main effects and those first-order interactions that met an entry criterion of p < .15, plus data source (Study 2 vs. Study 3). Statistically significant values (p < .05) are shown in bold.
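The steps behind the analysis summarized in Table 5 (tertile splits, one-degree-of-freedom linear and quadratic contrasts, and forward selection of interactions with a p < .15 entry criterion) can be sketched as follows. The data, variable names, and the details of the selection loop are illustrative assumptions; the software and exact options used for the reported analyses may differ.

```python
# Illustrative sketch: tertile-split predictors into linear/quadratic contrasts,
# then forward-select two-way interactions with an entry criterion of p < .15.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 198
df = pd.DataFrame({
    "future_focus": rng.normal(0, 1, n),
    "favorability": rng.normal(0, 1, n),
    "attr_disagreement": rng.normal(0, 1, n),
    "study": rng.integers(0, 2, n),  # Study 2 vs. Study 3 indicator
})
df["acceptance"] = 0.5 * df.future_focus - 0.2 * df.attr_disagreement + rng.normal(0, 1, n)

def contrasts(x, name):
    """Tertile-split a predictor and return 1-df linear and quadratic contrasts."""
    tert = pd.qcut(x, 3, labels=False)  # 0 = low, 1 = intermediate, 2 = high
    return pd.DataFrame({f"{name}_lin": tert.map({0: -1, 1: 0, 2: 1}),
                         f"{name}_quad": tert.map({0: 1, 1: -2, 2: 1})})

X = pd.concat([contrasts(df[c], c) for c in
               ("future_focus", "favorability", "attr_disagreement")], axis=1)
X["study"] = df["study"]

# Candidate two-way interactions (skip lin x quad pairs within the same predictor).
stem = lambda col: col.rsplit("_", 1)[0] if col.endswith(("_lin", "_quad")) else col
candidates, cols = {}, list(X.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        if stem(a) != stem(b):
            candidates[f"{a} x {b}"] = (X[a] * X[b]).rename(f"{a} x {b}")

included, added = X.copy(), True
while added and candidates:
    added = False
    # Refit with each remaining candidate; admit the best one if its p < .15.
    pvals = {name: sm.OLS(df["acceptance"],
                          sm.add_constant(pd.concat([included, term], axis=1))
                          ).fit().pvalues[name]
             for name, term in candidates.items()}
    best = min(pvals, key=pvals.get)
    if pvals[best] < 0.15:
        included[best] = candidates.pop(best)
        added = True

print(sm.OLS(df["acceptance"], sm.add_constant(included)).fit().summary())
```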

Future focus interacted with feedback favorability—marginally for feedback acceptance and significantly for intention to change. As shown in Fig 5 , recipients who gave low or intermediate ratings for future focus accepted the feedback less when it was most negative ( t (128) = 5.21, p < .001) and similarly, reported less inclination to change ( t (128) = 3.23, p = .002). In contrast, the recipients who rated the feedback discussion as most future focused accepted their feedback and indicated high intention to change at all levels of feedback favorability. These patterns suggest that perceiving future focus moderates the deleterious effect of negative feedback on feedback effectiveness.

Fig 5. Results for each measure of feedback effectiveness are shown by three levels of perceived future focus and three levels of perceived feedback favorability. Error bars show standard errors. Data are from Studies 2 and 3.

On the other hand, we find no evidence that future focus moderates the negative effect of attribution disagreement on feedback effectiveness. Future focus did interact marginally with attribution disagreement for intention to change. However, the benefits of perceiving high vs. low future focus may, in fact, be stronger when there is closer agreement about attributions: The increase in intention to change between low and high future focus groups was 2.30 with high disagreement, 2.37 with intermediate disagreement, and 3.24 in dyads with low disagreement, on a scale from 1 to 7.

Regression-tree analyses

Regression-tree analyses can provide additional insights into the non-linear relations among variables [ 61 ], with a better visualization of the best and worst conditions to facilitate feedback acceptance and intention to change. These analyses use the predictors (here, future focus, attribution disagreement, and feedback favorability) to divide participants into subgroups empirically, maximizing the extent to which values on the dependent measure are homogeneous within subgroups and different between them. We generated regression trees for each of our two effectiveness measures, feedback acceptance and intention to change. Fig 6 shows the results, including all subgroups (nodes) with N = 10 or more.
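As an illustration of the technique, the sketch below fits a regression tree to simulated data with scikit-learn. It is only a sketch: the variable names and data are placeholders, and scikit-learn's CART-style trees choose splits differently from the software typically used to produce published trees of this kind, so its nodes will not correspond to those in Fig 6.

```python
# Sketch: regression tree predicting a feedback-effectiveness measure from the
# three predictors, requiring at least 10 participants per terminal node.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(4)
n = 198
X = pd.DataFrame({
    "future_focus": rng.normal(0, 1, n),
    "attr_disagreement": rng.normal(0, 1, n),
    "favorability": rng.normal(0, 1, n),
})
y = 0.6 * X.future_focus - 0.2 * X.attr_disagreement + rng.normal(0, 1, n)

tree = DecisionTreeRegressor(min_samples_leaf=10, max_depth=3, random_state=0)
tree.fit(X, y)

print(export_text(tree, feature_names=list(X.columns)))
print("Relative importances:", dict(zip(X.columns, tree.feature_importances_.round(3))))
```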

Fig 6. The trees depict the effects of future focus, attribution disagreement, and feedback favorability on our two measures of feedback effectiveness. The width of branches is proportional to the number of participants in that branch. Node 0 is the full sample of 198. Values on the X axis are standardized values for each dependent measure. Data are from Studies 2 and 3.

Both trees show that future focus is the most important variable, dividing into lower and higher branches at Nodes 1 and 2, and further distinguishing the highest-future-focus groups at Nodes A8 and B6. These representations also reinforce the conclusion that perceived future focus does not operate mainly via an association with more positive feedback or with better agreement on attributions. However, attribution disagreement does play a role, with more agreement leading to better acceptance of feedback and greater intention to change, as long as future focus is at least moderately high (Nodes A3 vs. A4 and B7 vs. B8). (The lack of effect at Node B6 is likely a ceiling effect.) Unfavorable feedback makes matters worse under adverse conditions: when future focus is low (Nodes B3 vs. B4) or when future focus is moderate but attribution disagreement is large (Nodes A5 vs. A6).

General discussion

Our research was motivated by a need to understand why performance feedback conversations do not benefit performance to the extent intended and what might be done to improve that situation. We investigated how providers and recipients of workplace feedback differ in their judgements about the causes of performance and the credibility of feedback, and how feedback discussions impact provider-recipient (dis)agreement and feedback effectiveness. We were particularly interested in how interpretations of past performance, feedback acceptance, and intention to change are affected by the recipient’s perception of temporal focus, that is, the extent to which the feedback discussion focuses on past versus future behavior.

Management theorists typically advocate evaluating performance relative to established goals and standards, diagnosing the causes of substandard performance, and providing feedback so that people can learn from the past [ 19 ]. They also posit that feedback recipients must recognize there is a problem, accept the feedback as accurate, and find the feedback providers fair and credible in order for performance feedback to motivate improvement [ 7 , 14 , 35 ]. Unfortunately, we know that performance feedback often does not motivate improvement [ 4 ]. Our research contributes in several ways to understanding why that is and how feedback conversations might be made more effective.

Decades of attribution theory and research have elucidated the biases thought to produce discrepant explanations for performance between the providers and recipients of feedback. We show that for negative feedback, these discrepancies are prevalent in the workplace. We also show that larger attribution discrepancies are associated with greater rejection of feedback and, in our performance review simulations, with weaker intention to change. These findings support recent research and theory linking performance feedback, work-related decision making, and attribution theory: Instead of changing behavior in response to mixed or negative feedback, people make self-enhancing and self-protecting attributions and judgements they can use to justify not changing [ 8 , 14 , 62 ].

Our research suggests that the common practice of discussing the employees’ past performance, with an emphasis on how and why outcomes occurred and what that implies about the employees’ strengths and weaknesses, can be counterproductive. Although the parties to a feedback discussion may agree reasonably well about which goals and standards were met or unmet, they are unlikely to converge on an understanding of the causes of unmet goals and standards, even with engaged give and take. Instead, the feedback conversation creates or exacerbates disagreement about the causes of performance outcomes, leading feedback recipients to take more credit for their successes and less responsibility for their failures. This suggests that feedback conversations that attempt to diagnose past performance act as another form of self-threat that increases the self-serving bias [ 33 ]. Surely this runs counter to what the feedback provider intended.

At the same time, we find that self-serving attributions need not stand in the way of feedback acceptance and motivation to improve. A key discovery in our research is that the more recipients feel the feedback focuses on next steps and future actions, the more they accept the feedback and the more they intend to act on it. In fact, when feedback is perceived to be highly future focused, feedback recipients respond as well to predominantly negative feedback as to predominantly positive feedback. Future focus does not nullify self-serving attributions and their detrimental effects [see also 63 ], but it does enable productive feedback discussions despite them.

We used two complementary research methods. Study 1 used a more naturalistic and thus more ecologically valid method, collecting retrospective self-reports from hundreds of managers about actual feedback interactions in a wide variety of work situations [see 64 ]. Studies 2 and 3 used a role-play method that allowed us to give all participants identical workplace performance information, a good portion of which was undisputed and quantitative. With that design, response differences between the providers and recipients of feedback are due entirely to role, unconfounded by differences in knowledge and experience.

What role plays cannot establish is the magnitude of effects in organizational settings. Attribution misalignment and resistance to feedback might easily be much stronger in real workplace performance reviews where it would be rare for the parties to arrive with identical, largely unambiguous information. Moreover, managers’ investment in the monetary and career outcomes of performance reviews might lead feedback recipients to feel more threatened than in a role play and thus to disagree even more with unfavorable feedback. On the other hand, the desire to maintain employment and/or to maintain good relationships with supervisors might motivate managers to re-assess their past achievements, to change their private attributions, and to be more accepting of unfavorable feedback. Data from our role-play studies may not speak to the magnitude of resistance to feedback in work settings (although our survey results suggest it’s substantial), but they do show that feedback acceptance is increased when the participants perceive their feedback to be focused on the future.

Implications for future research and theory

There are few research topics more important to the study of organizations than performance management. Feedback conversations are a cornerstone of most individual and team performance management, yet there is still much we do not know about what should be said, how, and why. Based on research into the motivational advantages of prospective thinking, we hypothesized that feedback discussions perceived as future focused are the most effective kind for generating acceptance of feedback and fostering positive behavior change. Our findings support that hypothesis. The present research contributes to the literature on prospection by highlighting the role of interpersonal interactions in facilitating prefactual thinking and any associated advantages for goal pursuit [ 39 , 43 – 45 , 63 , 65 ]. In this section we suggest three lines of future research: (a) field studies and interventions; (b) research into the potential role of self-beliefs; and (c) exploration of the conversational dynamics associated with feedback perceived as past vs. future focused.

Field research and intervention designs

Testing feedback interventions in the workplace and other field settings is an important future step toward corroborating, elaborating, or correcting our findings. It will be necessary to develop effective means to foster a more future-focused style of feedback. Then, randomized controlled trials that contrast future-focused with diagnostic feedback can demonstrate the benefits that may accrue from focusing feedback more on future behavior and less on past behavior. Participant evaluations of the feedback discussions can be supplemented by those of neutral observers. Such evaluations are directly relevant to organizational goals, including employee motivation, positive supervisor-supervisee relations, and effective problem solving. Assessing subsequent behavior change and job performance is both important and complicated for evaluating feedback effectiveness: Seeing intentions through to fruition depends on many factors, including individual differences in self-regulation [ 66 , 67 ] and factors beyond people’s control, such as competing commitments, limited resources, and changing priorities [ 68 – 71 ]. Nevertheless, the ultimate proof of future-focused feedback will lie in performance improvement itself.

Self-beliefs and future focus

If future focus enhances feedback effectiveness, it may do so via self-beliefs. Growth mindset and self-efficacy, for example, are self-beliefs that influence how people think about and act on the future. Discussions that focus on what people can do in the future to improve performance may encourage people to view their own behavior as malleable and to view better results as achievable. If future focus helps people access this growth mindset, it should orient them toward mastering challenges and improving the self for the future: Whereas people exercise defensive self-esteem repair when in a fixed mindset, they prefer self-improvement when accessing a growth mindset [ 72 , 73 ]. Similarly, feedback conversations that focus on ways the feedback recipient can attain goals in the future may enhance people’s confidence in their ability to execute the appropriate strategies and necessary behaviors to succeed. Such self-efficacy expectancies have been shown to influence the goals people select, the effort and resources they devote, their persistence in the face of obstacles, and the motivation to get started [ 74 , 75 ]. Thus, research is needed to assess whether future focus alters people’s self-beliefs (or vice versa; see below) and if these, in turn, impact people’s acceptance of feedback and intention to change.

We found sizeable variation in the extent to which dyads reported focusing on the future. Pre-existing individual differences in self-beliefs may contribute to that variation. Recent research, for example, finds that professors with more growth mindsets have students who perform better and report being more motivated to do their best work [ 76 ]. In the case of a feedback conversation, we suspect that either party can initiate thinking prospectively, but both must participate in it to sustain the benefits.

Conversational dynamics and future focus

Unlike most studies of people’s reactions to mixed or negative feedback, our studies use face-to-face, real-time interaction, that is to say, two people in conversation. Might conversational dynamics associated with future-focused feedback contribute to its being better accepted and more motivating than feedback focused on the past? Do managers who focus more on the future listen to other people’s ideas and perspectives in ways that are perceived as more empathic and nonjudgmental? Do these more prospective discussions elicit greater cooperative problem solving? Research on conversation in the workplace is in its early stages [ 77 ], but some studies support the idea that high quality listening and partner responsiveness might reduce defensiveness, increase self-awareness, or produce greater willingness to consider new perspectives and ideas [ 78 , 79 ].

Practical implications

Our studies provide the first empirical evidence that managers can make feedback more effective by focusing it on the future. Future-focused feedback, as we define it, is characterized by prospective thinking and by collaboration in generating ideas, planning, and problem-solving. We assessed the degree of future focus by asking participants to rate the extent to which the feedback discussion focused on future behavior, the two parties spent time generating new ideas for next steps, and the conversation centered on how to make the recipient successful. This differs greatly from feedback research that distinguishes past vs. future orientation “using minimal rewording of each critique comment” (e.g., you didn’t always demonstrate awareness of… vs. you should aim to demonstrate more awareness of…) [ 80 p. 1866].

Because future-focused feedback is feedback, it also differs from both advice giving and “feedforward” (although it might be advantageous to incorporate these): It differs from Kluger and Nir’s feedforward interview, which queries how the conditions that enabled a person’s positive work experiences might be replicated in the future [ 81 ], and from Goldsmith’s feedforward exercise, which involves requesting and receiving suggestions for the future, without discussion or feedback [ 82 ].

The scenario at the very start of this article asks, “What can Chris say to get through to Taylor?” A future-focused answer might include the following: Chris first clarifies that the purpose of the feedback is to improve Taylor’s future performance, with the goal of furthering Taylor’s career. Chris applauds Taylor’s successes and is forthright and specific about Taylor’s shortcomings, while avoiding discussion of causes and explanations. Chris signals belief that Taylor has the motivation and competence to improve [ 83 ]. Chris then initiates a discussion in which they work together to develop ideas for how Taylor can achieve better outcomes in the future. (For a more detailed illustration of a future-focused conversation, see S11 Text .)

Conclusions

Our research supports the intriguing possibility that the future of feedback could be more effective and less aversive than its past. Performance management need not be tied to unearthing the determinants of past performance and holding people to account for past failures. Rather, performance may be managed most successfully by collaborating with the feedback recipient to generate next steps, to develop opportunities for interesting and worthwhile endeavors, and to enlarge the vision of what the recipient could accomplish. Most organizations and most managers want their workers to perform well. Most workers wish to succeed at their jobs. Everyone benefits when feedback discussions develop new ideas and solutions and when the recipients of feedback are motivated to make changes based on those. A future-focused approach to feedback holds great promise for motivating future performance improvement.

Supporting information

S1 Analyses

S2 Analyses

Acknowledgments

For helpful comments on earlier drafts of this paper, we are grateful to Pino Audia, Angelo Denisi, Nick Epley, Ayelet Fishbach, Brian Gibbs, Reid Hastie, Chris Hsee, Remus Ilies, David Nussbaum, Jay Russo, Paul Schoemaker, William Swann, and Kathleen Vohs.

Funding Statement

This research received funding from the Melbourne Business School while the first three authors were either visiting (JG, JK) or permanent (IOW) faculty there. While working on this research, the first two authors (JG, JK) also worked as owners and employees of management consulting firm Humanly Possible. Humanly Possible provided support in the form of salaries and profit-sharing compensation for authors JG and JK, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the “author contributions” section.

Data Availability


Decision Letter 0

PONE-D-20-05644

The future of feedback:  Motivating performance improvement

Dear Dr Klayman,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by May 22 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Paola Iannello

Academic Editor

Journal requirements:

When submitting your revision, we need you to address these additional requirements:

1.    Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.plosone.org/attachments/PLOSOne_formatting_sample_main_body.pdf and http://www.plosone.org/attachments/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please modify the title to ensure that it is meeting PLOS’ guidelines ( https://journals.plos.org/plosone/s/submission-guidelines#loc-title ). In particular, the title should be "specific, descriptive, concise, and comprehensible to readers outside the field" and in this case it is not informative and specific about your study's scope and methodology.

3. Thank you for stating the following in the Competing Interests section:

"The authors have declared that no competing interests exist."

We note that one or more of the authors are employed by a commercial company: Humanly Possible, Inc.

1.     Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement.

“The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.”

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.  

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests ) . If this adherence statement is not accurate and  there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: 1. I enjoyed reading this manuscript, but it appears to be unnecessary long in parts and readability would benefit of a more concise style. I would recommend condensing some parts, for example in the methods section for study 2 was overly long and lacked clarity in parts. The description of the second questionnaire was a little confusing in terms of the consistency in how items were measured and the hypothesis was not clear.

2. In the ethics statement for Study 1 (line 184), please explain the rationale behind the waiver of consent.

3. Procedure (line 187) please give details of the survey platform used.

4. Results -Please include the number of participants in each group.

5. Please comment on what normality checks were performed to assess the distribution of the data.

6. Line 470, correlations are discussed but I can’t see a table to support these.

7. The discussion did not address the results in relation to previous literature and lacked a theoretical explanation of the findings (See for example ‘Korn CW, Rosenblau G, Rodriguez Buritica JM, Heekeren HR (2016) Performance Feedback Processing Is Positively Biased As Predicted by Attribution Theory. PLoS ONE 11(2)’ for a discussion of attributional style and self-serving bias. I recommend some rewrite of the discussion with more reference to theory.

8. Some acknowledgement of the effect of individual differences in self-regulation would be useful to include as this may influence how feedback is received in terms of attributions. See for example, ‘Donovan, JJ, Lorenzet, SJ, Dwight, SA, Schneider, D. The impact of goal progress and individual differences on self‐regulation in training. J Appl Soc Psychol. 2018; 48: 661– 674’.

9. The suggestions for improvement at the end of the study would be better to be condensed to give a brief suggestion of methods.

Reviewer #2: The paper reports an interesting and comprehensive work about a relevant issue in organizational psychology. Both the theoretical frame and the applied methodology are original and thorough, though the use of role-play raises some doubts about the robustness of the results (some concerns are raised by the authors themselves (lines 752-760) ). This is, in my opinion, the main limitation of studies 2 and 3. I would suggest that the authors insert a wider reasoning about the choice of using this method to collect their data and the pros and cons.

In the "General Discussion" paragraph the authors state that "We investigated the sources of agreement and disagreement between feedback provider and recipient" (lines 712-713). I strongly suggest that this sentence is being modified, since it doesn't describe the aim nor the results in Study 1 correctly.

6. PLOS authors have the option to publish the peer review history of their article ( what does this mean? ). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy .

Reviewer #1: No

Reviewer #2: Yes: Federica Biassoni

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

Author response to Decision Letter 0

12 May 2020

Please see uploaded document Response to Reviewers. Text copied here.

Response to Reviewers

We wish to thank the reviewers for their very helpful and constructive comments. We especially appreciate the clarity and specificity with which they framed their suggestions. Below we respond to each reviewer recommendation.

Reviewer #1:

1. I enjoyed reading this manuscript, but it appears to be unnecessary long in parts and readability would benefit of a more concise style. I would recommend condensing some parts, for example in the methods section for study 2 was overly long and lacked clarity in parts. The description of the second questionnaire was a little confusing in terms of the consistency in how items were measured and the hypothesis was not clear.

We revised the methods section for Study 2 (former lines 274-279; 285-414, revision lines 276-281; 299-402). The new version is a full page shorter and, in line with the reviewer’s suggestion, we believe this more concise version is now more readable. It includes a revised description of the post-discussion questionnaires (former 346-367; revision 350-361), clarifying the sequence and types of questions provided to each group. It also includes revisions, mainly in the Design section (former 387-414; revision lines 377-402) to clarify how the various measures related to our hypotheses.

Study 1 was approved by the Institutional Review Board at the University of Chicago, which waived the requirement for written consent as was its customary policy for studies judged to be minimal risk, involving only individual, anonymized survey responses. Their decision cited US Code 45 CFR 46.101(b). Citing the code in our manuscript seemed overly legalistic, but we have added the rest of the rationale to the ethics statement (former lines 184-185; revision 184-186).

We now identify the platform as Cogix ViewsFlash (revision line 188).

We have added the requested information for Study 1 (revision lines 214-215). Following up on the suggestion, we also made it easier to locate the corresponding information for Study 2 (revision lines 316-317).

The general consensus is that the analyses we use, i.e. ANOVA and linear regression, are generally quite robust with regard to moderate violations of normality with Ns on the order of ours (e.g., Blanca, Alarcón, Arnau, Bono, & Bendayan, Psichothema, 2017; Schmidt & Finana, Journal of Clinical Epidemiology, 2018; Ali & Sharma, Journal of Econometrics, 1996; Schmider, Ziegler Danay, Beyer, & Bühner, Methodology, 2010). Nevertheless, we used an arcsine transformation on the variables a priori most likely to suffer from systematic deviations, namely the attribution proportions. Most authors recommend checking for major deviations from normality by plotting model-predicted values against residuals and against the normal distribution (using P-P or Q-Q plots). We did that for our analyses (graphs attached), and found no troublesome deviations, with the possible exception of one variable of minor importance to our main results or theory, namely performance quality ratings for successes in Study 2. We note in the paper that that variable may suffer from ceiling effects (former 468-469, revision 456-457). We did not add a discussion of normality to the paper because of the increased length and complexity that would involve and because it’s seldom an issue of concern with data and analyses like ours. However, we could include the graphs we’ve attached here as supplemental material if you tell us you would like us to do so.

Thank you for alerting us to this inadvertent omission. We now include complete correlation tables for all the variables analyzed in each Study in the supplemental materials: S2 Table for Study 1 (revision lines 224-225) and S11 Tables for Studies 2 and 3 separately and combined (revision lines 458-459), with provider-recipient correlations identified by color shading. (S2 was formerly the dataset for Study 1, but now data from all three studies are contained in S17.)

To better address our results in relation to previous attribution literature and theory, we have revised former lines 723-740 in the General Discussion. Now we more clearly discuss our findings in relation to self-serving bias, self-threat, and both historical and more recent formulations of attribution theory, including the helpful reference the reviewer provided (revision lines 708-735). We have also added a brief discussion of how our results relate to previous literature on future thinking (revision lines 760-762). We attempted to minimize redundancy with the Introduction section. The new material includes several new references.

We added mention in the General Discussion of individual differences in self-regulation, citing two references, including the one helpfully provided by Reviewer #1 (revision line 776). Additionally, we reworded former lines 798-799 (revision lines 793-794) to make it clearer that we are acknowledging individual differences there as well.

We condensed former lines 828-846 from 19 lines to 8 lines (revision lines 823-830), referring the interested reader to new Supporting Information S16 Text for the expanded version. We trust this solution meets the recommendation for a brief suggestion of methods, while also satisfying the interests of those seeking more detail.

Reviewer #2:

1. The paper reports an interesting and comprehensive work about a relevant issue in organizational psychology. Both the theoretical frame and the applied methodology are original and thorough, though the use of role-play raises some doubts about the robustness of the results (some concerns are raised by the authors themselves (lines 752-760)). This is, in my opinion, the main limitation of studies 2 and 3. I would suggest that the authors insert a wider reasoning about the choice of using this method to collect their data and the pros and cons.

We now include a wider reasoning about our choice to use a role-play method and the pros and cons. The new version comprises revision lines 282-298. (We also revised the subsequent paragraph for increased clarity, given the insertion of the new paragraph about the role-play method.)

2. In the "General Discussion" paragraph the authors state that "We investigated the sources of agreement and disagreement between feedback provider and recipient" (lines 712-713). I strongly suggest that this sentence is being modified, since it doesn't describe the aim nor the results in Study 1 correctly.

Thank you for your careful reading. We have re-written that sentence to more accurately capture the results of Study 1 as well as the other two studies (revised lines 697-700).

[Figures attached--please see uploaded document Response to Reviewers.]

Submitted filename: Response to Reviewers.docx

Decision Letter 1

27 May 2020

The future of feedback: Survey and role-play investigations into causal attributions, feedback acceptance, motivation to improve, and the potential benefits of future focus for increasing feedback effectiveness in the workplace

PONE-D-20-05644R1

Dear Dr. Klayman,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Additional Editor Comments (optional):

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

3. Has the statistical analysis been performed appropriately and rigorously?

4. Have the authors made all data underlying the findings in their manuscript fully available?

5. Is the manuscript presented in an intelligible fashion and written in standard English?

6. Review Comments to the Author

Reviewer #1: (No Response)

Reviewer #2: (No Response)

7. PLOS authors have the option to publish the peer review history of their article ( what does this mean? ). If published, this will include your full peer review and any attached files.

Acceptance letter

The future of feedback:  Motivating performance improvement through future-focused feedback 

Dear Dr. Klayman:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

PLOS ONE Editorial Office Staff

on behalf of

Dr. Paola Iannello

IMAGES

  1. Research & Improvement

    further research and improvement

  2. Research and Improvement Lunch and Learn

    further research and improvement

  3. Top 6 Ways to Improve your Research Skills

    further research and improvement

  4. Main steps for a research in innovation development

    further research and improvement

  5. Continuous Improvement Survey

    further research and improvement

  6. Future Research

    further research and improvement

COMMENTS

  1. Conclusions and recommendations for future research

    Similarly, further research might explore the (relatively rare) experiences of marginalised and seldom-heard groups involved in research. Payment for public involvement in research remains a contested issue with strongly held positions for and against; it would be helpful to further explore the value research partners and researchers place on ...

  2. Clinical Updates: Quality improvement into practice

    Areas with little existing knowledge requiring further research may be identified during improvement activities, which in turn can form research questions for further study. QI and research also intersect in the field of improvement science, the academic study of QI methods which seeks to ensure QI is carried out as effectively as possible. 34

  3. Creating a Culture of Continuous Improvement

    Creating a Culture of Continuous Improvement. by. Aravind Chandrasekaran. and. John S. Toussaint. May 24, 2019. michellealbert/Getty Images. Summary. A number of health systems have scored ...

  4. Bridging the Gap Between Research and Practice: Predicting What Will

    Research on school effectiveness and improvement conducted over the past few decades demonstrates the complex, dynamic nature of learning environments (e.g., Reynolds et al., 2014). Researchers in these fields argue that various factors within schools and broader education systems and outside school impact students' learning outcomes ( Lareau ...

  5. Conclusions and Directions for Further Research and Policy

    There are limitations to all sampling strategies and to qualitative research, in particular. The strength of this method was that the sample selection used input from a pool of reognized experts in the organization, delivery, and improvement of health care. Even with a pool of recognized experts, it is reasonable to expect that some high performing micro-systems were overlooked. It was also ...

  6. Research on Continuous Improvement: Exploring the Complexities of

    As a result of the frustration with the dominant "What Works" paradigm of large-scale research-based improvement, practitioners, researchers, foundations, and policymakers are increasingly embracing a set of ideas and practices that can be collectively labeled continuous improvement (CI) methods. This chapter provides a comparative review ...

  7. An introduction to quality improvement

    Improvement research projects which are typically well-designed and with some form of control groups and comparators can both address improvement priorities and generate generalisable research data at the same time. ... N.S.' research is further supported by the ASPIRES research programme (Antibiotic use across Surgical Pathways ...

  8. Research and Quality Improvement: How Can They Work Together?

    Research and quality improvement provide a mechanism to support the advancement of knowledge, and to evaluate and learn from experience. The focus of research is to contribute to developing knowledge or gather evidence for theories in a field of study, whereas the focus of quality improvement is to standardize processes and reduce variation to improve outcomes for patients and health care ...

  9. Research versus practice in quality improvement? Understanding how we

    Rather, those who are focused on improvement are part of a continuum and are driven by a range of goals, from driving and demonstrating local improvements to attributing these improvements to QI methods that can be generalized and spread, as illustrated in Table 1, which also describes differences in incentives, discussed further ...

  10. Conclusion and suggestions for further research

    This final chapter concludes with the four research questions (sections 8.1.1 to 8.1.4) and provides general insights from across the study (section 8.1.5). ... Conclusion and suggestions for further research. In: Big Data to Improve Strategic Network Planning in Airlines. Schriftenreihe der HHL Leipzig Graduate School of Management. Springer ...

  11. Quality improvement into practice

    Areas with little existing knowledge requiring further research may be identified during improvement activities, which in turn can form research questions for further study. QI and research also intersect in the field of improvement science, the academic study of QI methods which seeks to ensure QI is carried out as effectively as possible.34

  12. Evidence-Based Quality Improvement: a Scoping Review of the Literature

    First, a search using the exact terms ("evidence based quality improvement," "evidence-based quality improvement," or "EBQI") was employed to identify publications published to March 2020 that explicitly refer to EBQI in the title, abstract, or keyword of the publication (i.e., the elements that are searchable in research databases).

  13. How to study improvement interventions: a brief overview of possible

    Improvement (defined broadly as purposive efforts to secure positive change) has become an increasingly important activity and field of inquiry within healthcare. This article offers an overview of possible methods for the study of improvement interventions. The choice of available designs is wide, but debates continue about how far improvement efforts can be simultaneously practical (aimed at ...

  14. How to Write Recommendations in Research

    Recommendations for future research should be: Concrete and specific. Supported with a clear rationale. Directly connected to your research. Overall, strive to highlight ways other researchers can reproduce or replicate your results to draw further conclusions, and suggest different directions that future research can take, if applicable.

  15. (PDF) Concept and Design Developments in School Improvement Research

    enrich school improvement research and help further development thereof. Taken together, they also provide an overview that can be used to systematically select the ...

  16. How and under what circumstances do quality improvement collaboratives

    Quality improvement collaboratives are widely used to improve health care in both high-income and low- and middle-income settings. ... Further research is needed to determine whether certain contextual factors related to capacity should be a precondition to the quality improvement collaborative approach and to test the emerging programme theory ...

  17. Generating Improvement Through Research and Development in ...

    At that moment a teacher can further explore the student's thinking, signaling both the expectation that struggling will produce learning, and that the student is capable of thinking further about the problem. ... Generating Improvement Through Research and Development in Education Systems. Science 340, 317-319 (2013). DOI:10.1126/science ...

  18. Implementing Improvements: Opportunities to Integrate Quality

    Quality improvement and implementation science in cancer care: Identifying areas of synergy and opportunities for further integration. ... Quality improvement, clinical research, and quality improvement research: opportunities for integration. Pediatr Clin North Am. 2009;56(4):831-841.

  19. Integration of continuous improvement strategies with Industry 4.0: a

    Integration of continuous improvement strategies with Industry 4.0: a systematic review and agenda for further research. Authors: S. Vinodh, Jiju Antony, Rohit Agrawal, Jacqueline Ann Douglas. The purpose of this paper is to provide a review of the history, trends and needs of continuous improvement (CI) and Industry 4.0.

  20. Full article: Ten suggestions for improving academic research in

    As has been implied throughout these first three suggestions, the significance of technology in education centres on issues of change, progress and improvement. Indeed, most people are drawn to digital technology as a research topic precisely because of its association with progress, transformation and the allure of 'the new'.

  21. The effectiveness of continuous quality improvement for developing

    Continuous quality improvement (CQI), an approach used extensively in industrial and manufacturing sectors, has been used in the health sector. Despite the attention given to CQI, uncertainties remain as to its effectiveness given the complex and diverse nature of health systems. ... Further research into the effectiveness of CQI interventions ...

  22. The future of feedback: Motivating performance improvement through

    The data from these 37 managers were excluded from further analysis, leaving samples of 96, 92, 91, and 103 in the provider-positive, provider-negative, recipient-positive, and recipient-negative conditions, respectively. ... we know that performance feedback often does not motivate improvement. Our research contributes in several ways to ...

  23. Toward a Further Understanding of and Improvement in Measurement

    Toward a Further Understanding of and Improvement in Measurement Invariance Methods and Procedures. Robert J. Vandenberg ... In the hopes of stimulating further research on these topics, ideas are presented as to how this research may be undertaken.