The Relevance of Systematic Reviews to Educational Policy and Practice

  • September 2000
  • Oxford Review of Education 26(3-4)

  • DOI: 10.1080/713688543


  • Review Article
  • Open access
  • Published: 19 February 2019

The dos and don’ts of influencing policy: a systematic review of advice to academics

  • Kathryn Oliver (ORCID: orcid.org/0000-0002-4326-5258)
  • Paul Cairney

Palgrave Communications, volume 5, Article number: 21 (2019)

89k Accesses | 172 Citations | 873 Altmetric

  • Politics and international relations
  • Science, technology and society

A Correction to this article was published on 17 March 2020

This article has been updated

Many academics have strong incentives to influence policymaking, but may not know where to start. We searched systematically for, and synthesised, the ‘how to’ advice in the academic peer-reviewed and grey literatures. We condense this advice into eight main recommendations: (1) do high-quality research; (2) make your research relevant and readable; (3) understand policy processes; (4) be accessible to policymakers: engage routinely, flexibly, and humbly; (5) decide if you want to be an issue advocate or honest broker; (6) build relationships (and ground rules) with policymakers; (7) be ‘entrepreneurial’ or find someone who is; and (8) reflect continuously: should you engage, do you want to, and is it working? This advice seems like common sense. However, it masks major inconsistencies, reflecting different beliefs about the nature of the problem to be solved when using this advice. Furthermore, if not accompanied by critical analysis and insights from the peer-reviewed literature, it could provide misleading guidance for people new to this field.


Introduction

Many academics have strong incentives to influence policymaking, whether the extrinsic motivation to show the ‘impact’ of their work to funding bodies, or the intrinsic motivation to make a difference to policy. However, they may not know where to start (Evans and Cvitanovic, 2018). Although many academics have personal experience, or have attended impact training, there is a limited empirical evidence base to inform academics wishing to create impact. Although there is a significant amount of commentary about the processes and contexts affecting evidence use in policy and practice (Head, 2010; Whitty, 2015), the relative importance of different factors in achieving ‘impact’ has not been established (Haynes et al., 2011; Douglas, 2012; Wilkinson, 2017). Nor have common understandings of the concepts of ‘use’ or ‘impact’ themselves been developed. As pointed out by one of our reviewers, even empirical and conceptual papers routinely fail to define or unpack these terms, with some exceptions (Weiss, 1979; Nutley et al., 2007; Parkhurst, 2017). Perhaps because of this theoretical paucity, there are few empirical evaluations of strategies to increase the uptake of evidence in policy and practice (Boaz et al., 2011), and those that exist tend not to offer advice for the individual academic. How, then, should academics engage with policy?

There are substantial numbers of blogs, editorials, and commentaries that provide tips and suggestions for academics on how best to increase their impact, how to engage most effectively, or similar topics. We condense this advice into eight main tips: produce high-quality research, make it relevant, understand the policy processes in which you engage, be accessible to policymakers, decide if you want to offer policy advice, build networks, be ‘entrepreneurial’, and reflect on your activities.

Taken at face value, much of this advice is common sense, perhaps because it is inevitably bland and generic. When we interrogate it in more detail, we identify major inconsistencies in advice regarding: (a) what counts as good evidence; (b) how best to communicate it; (c) what policy engagement is for; (d) whether engagement is to frame problems or simply measure them according to an existing frame; (e) how far to go to be useful and influential; (f) whether you need, and can produce, ground rules or trust; (g) what ‘entrepreneurial’ means; and (h) how much choice researchers should have over whether to engage in policymaking.

These inconsistencies reflect different beliefs about the nature of the problem to be solved when using this advice, which derive from unresolved debates about the nature and role of science and policy. We focus on three dilemmas that arise from engagement—for example, should you ‘co-produce’ research and policy and give policy recommendations?—and reflect on wider systemic issues, such as the causes of unequal rewards and punishments for engagement. Perhaps the biggest dilemma reflects the fact that engagement is a career choice, not an event: how far should you go to encourage the use of evidence in policy if you began your career as a researcher? These debates are rehearsed more fully and regularly in the peer-reviewed literature (Hammersley, 2013; de Leeuw et al., 2008; Fafard, 2015; Smith and Stewart, 2015; Smith and Stewart, 2017; Oliver and Faul, 2018), which has spawned narrative reviews of policy theory and systematic reviews of the literature on the ‘barriers and facilitators’ to the use of evidence in policy. For example, we know from policy studies that policymakers seek ways to act decisively, rather than to produce more evidence until it speaks for itself, and that there is no simple way to link the supply of evidence to its demand in a policymaking system (see Cairney and Kwiatkowski, 2017). We draw on this literature to highlight inconsistencies and weaknesses in the advice offered to academics.

We assess how useful the ‘how to’ advice is for academics and the extent to which it reflects the reality of policymaking and evidence use (based on our knowledge of the empirical and theoretical literatures, described more fully in Cairney and Oliver, 2018), and explore the implications of any mismatch between the two. We map and interrogate the ‘how to’ advice by comparing it with the empirical and theoretical literature on creating impact, and on the policymaking context more broadly. We use these literatures to highlight key choices and tensions in engaging with policymakers, and to signpost more useful, informed advice for academics on when, how, and whether to engage with policymakers.

Methods: a systematic review of the ‘how to’ literature

Systematic review is a method to synthesise diverse evidence types on a clearly defined problem (Petticrew and Roberts, 2008). Although most commonly associated with statistical methods to aggregate effect sizes (more accurately called meta-analyses), systematic reviews can be conducted on any body of written evidence, including grey or unpublished literature (Tyndall, 2008). All systematic reviews take steps to make transparent the decisions made, the methods used to identify relevant evidence, and how this evidence was synthesised, so that the review is transparent, replicable, and exhaustive (resources allowing) (Gough et al., 2012). Primarily, they involve clearly defined searches, inclusion and exclusion processes, and a quality assessment/synthesis process.

We searched three major electronic databases (Scopus, Web of Science, Google Scholar) and selected websites (e.g., ODI, Research Fortnight, Wonkhe) and journals (including Evidence and Policy, Policy and Politics, and Research Policy), using a combination of terms. Terms such as ‘evidence’ and ‘impact’ were tested to search for articles explaining how to better ‘use’ evidence, or how to create policy ‘impact’. After testing, the search was conducted by combining the following terms, tailored to each database: (evidence OR science OR scientist OR researchers OR impact) AND (help OR advi* OR tip* OR "how to" OR relevan*) AND (policy* OR practic* OR government* OR parliament*). We screened studies on full text where available and added them to a database for data extraction. We conducted searches between 30 June and 3 August 2018. We identified studies for data extraction when they covered these areas: tips for researchers; tips for policymakers; types or characteristics of useful research; and other factors.
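To make the search strategy concrete, here is a minimal sketch (an illustration only, not the authors’ actual tooling) of how the boolean string above could be assembled and then wrapped in each database’s field syntax. The term lists come from the description above, while the helper function and the choice to demonstrate Scopus and Web of Science field codes are assumptions.

```python
# Minimal sketch, not the authors' actual search tooling: assemble the boolean
# query described in the Methods, then tailor it to each database's syntax.

CONCEPT_BLOCKS = [
    ["evidence", "science", "scientist", "researchers", "impact"],  # topic terms
    ["help", "advi*", "tip*", '"how to"', "relevan*"],              # advice terms
    ["policy*", "practic*", "government*", "parliament*"],          # audience terms
]

def build_query(blocks):
    """OR the terms within each block, then AND the blocks together."""
    return " AND ".join("(" + " OR ".join(block) + ")" for block in blocks)

query = build_query(CONCEPT_BLOCKS)

# Standard field codes: Scopus searches titles/abstracts/keywords with
# TITLE-ABS-KEY(...); Web of Science uses the topic field TS=(...).
print(f"Scopus:         TITLE-ABS-KEY({query})")
print(f"Web of Science: TS=({query})")
```

Running this prints the combined string, i.e., (evidence OR science OR scientist OR researchers OR impact) AND (help OR advi* OR tip* OR "how to" OR relevan*) AND (policy* OR practic* OR government* OR parliament*), wrapped in each database’s field code.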

We included academic, policy, and grey publications which offered advice to academics or policymakers on how to engage better with each other. We did not include: studies which explored the factors leading to evidence use; general commentaries on the roles of academics; empirical analyses of the various initiatives, interventions, structures and roles of academics and researchers in policy (unless they offered primary data and tips on how to improve); book reviews; or news reports. However, we use some of these publications to reflect more broadly on the historical changes to the academic–policy relationship.

We included 86 academic and non-academic publications in this review (see Table 1 for an overview). We found reports dating back to the 1950s on how governments and presidents (predominantly UK/US) do or do not use scientific advisors (Marshall, 1980; Bondi, 1982; Mayer, 1982; Lepkowski, 1984; Koshland Jr. et al., 1988; Sy, 1989; Krige, 1990; Srinivasan, 2000) and committees (Sapolsky, 1968; Wolfle, 1968; Editorial, 1972; Walsh, 1973; Nichols, 1988; Young and Jones, 1994; Lawler, 1997; Masood, 1999; Morgan et al., 2001; Oakley et al., 2003; Allen et al., 2012), but the earliest publication included was from 1971 (Aurum, 1971). Thirty-four were published in the last two years, reflecting ever-increasing interest in how academics can increase their impact on policy. Although some academic publications are included, we mainly found blogs, letters, and editorials, often in high-impact publications such as Cell, Science, Nature and the Lancet. Many were opinion pieces by people moving between policy and academic roles, or blogs by and for early career researchers on how to establish impactful careers.

The advice is very consistent over the last 80 years, and between disciplines as diverse as gerontology, ecology, and economics. As noted in an earlier systematic review, previous studies have identified hundreds of factors which act as barriers to the uptake of evidence in policy (Oliver et al., 2014), albeit largely unsupported by empirical evidence. Many of the advisory pieces address these barriers, assuming rather than demonstrating that their simple advice will help ease the flow of evidence into policy. The pieces also often cite each other, sometimes to the extent of using the exact same phrasing. Therefore, the combination of previous academic reviews with our survey of ‘how to’ advice reinforces our sense of ‘saturation’, in which we have identified all of the most relevant advice (available in written form). In our synthesis, using thematic analysis, we condense these tips into eight main themes. Then, we analyse these tips critically, with reference to wider discussions in the peer-reviewed literature.

Eight key tips on ‘how to influence policy’

Do high-quality research

Researchers are advised to conduct high-quality, robust research (Boyd, 2013; Whitty, 2015; Docquier, 2017; Eisenstein, 2017) and provide it in a way that is timely, policy relevant, and easy to understand, but not at the expense of accuracy (Havens, 1992; Norse, 2005; Simera et al., 2010; Bilotta et al., 2015; Kerr et al., 2015; Olander et al., 2017; POST, 2017). Specific research methods, metrics and/or models should be used (Aguinis et al., 2010), with systematic reviews/evidence synthesis considered particularly useful for policymakers (Lavis et al., 2003; Sutherland, 2013; Caird et al., 2015; Andermann et al., 2016; Donnelly et al., 2018; Topp et al., 2018), and often also randomised controlled trials, properly piloted and evaluated (Walley et al., 2018). Truly interdisciplinary research is required to identify new perspectives (Chapman et al., 2015; Marshall and Cvitanovic, 2017) and explore the “practical significance” of research for policy and practice (Aguinis et al., 2010). Academics must communicate scientific uncertainty and the strengths and weaknesses of a piece of research (Norse, 2005; Aguinis et al., 2010; Tyler, 2013; Game et al., 2015; Sutherland and Burgman, 2015), and be trained to “estimate probabilities of events, quantities or model parameters” (Sutherland and Burgman, 2015). They are also urged to be ‘policy-relevant’ (NCCPE, 2018; Maddox, 1996; Green et al., 2009; Farmer, 2010; Kerr et al., 2015; Colglazier, 2016; Tesar et al., 2016; Echt, 2017b; Fleming and Pyenson, 2017; Olander et al., 2017; POST, 2017), although this is rarely defined. Two exceptions are the advice for research programmes to be embedded within national and regional governmental programmes (Walley et al., 2018) and for researchers to provide policymakers with models estimating the harms and benefits of different policy options (Basbøll, 2018; Topp et al., 2018).

Communicate well: make your research relevant and readable

Academics should engage in more effective dissemination (NCCPE, 2018; Maddox, 1996; Green et al., 2009; Farmer, 2010; Kerr et al., 2015; Colglazier, 2016; Tesar et al., 2016; Echt, 2017b; Fleming and Pyenson, 2017; Olander et al., 2017; POST, 2017), make data public (Malakoff, 2017), and provide clear summaries and syntheses of problems and solutions (Maybin, 2016). They are advised to use a range of outputs (social media, blogs, policy briefs), to make sure that policy actors can contact them with follow-up questions (POST, 2017; Parry-Davies and Newell, 2014), and to write for generalist, but not ignorant, readers (Hillman, 2016). Avoid jargon but don’t over-simplify (Farmer, 2010; Goodwin, 2013); make simple and definitive statements (Brumley, 2014), yet communicate complexity (Fischoff, 2015; Marshall and Cvitanovic, 2017; Whitty, 2015).

Some blogs advise academics to use established storytelling techniques to persuade policymakers of a course of action or to better communicate scientific ideas. Produce good stories based on emotional appeals or humour to expand and engage your audience (Evans, 2013; Fischoff, 2015; Docquier, 2017; Petes and Meyer, 2018). Jones and Crow develop a point-by-point guide to creating a narrative through scene-setting, casting characters, establishing a plot, and equating the moral with a ‘solution to the policy problem’ (Jones and Crow, 2017; Jones and Crow, 2018).

Understand policy processes, policymaking context, and key actors

Academics are advised to get to know how policy works, and in particular to accept that the normative technocratic ideal of ‘evidence-based’ policymaking does not reflect the political nature of decision-making (Tyler, 2013; Echt, 2017a). Policy decisions are ultimately taken by politicians on behalf of constituents, and technological proposals are only ever going to be part of a solution (Eisenstein, 2017). Some feel that science should hold a privileged position in policy (Gluckman, 2014; Reed and Evely, 2016), but many recognise that research is unlikely to translate directly into an off-the-shelf, ready-to-wear policy proposal (Tyler, 2013; Gluckman, 2014; Prehn, 2018), and that policy rarely changes overnight (Marshall and Cvitanovic, 2017). Being pragmatic, and managing one’s expectations about the likely impact of research on policymaking (which bears little resemblance to the ‘policy cycle’), is advised (Sutherland and Burgman, 2015; Tyler, 2013).

Second, learn the basics, such as the difference between the role of government and parliament, and between other types of policymakers (Tyler, 2013). Note that your policy audience is likely to change on a yearly basis if not more frequently (Hillman, 2016), and that policymakers have busy and constrained lives, with their own career concerns and pathways (Lloyd, 2016; Docquier, 2017; Prehn, 2018). Do not guess what might work; take the time to listen and learn from policy colleagues (Datta, 2018).

Third, learn to recognise broader policymaking dynamics, paying particular attention to changing policy priorities (Fischoff, 2015; Cairney, 2017). Academics are good at placing their work in the context of the academic literature, but also need to situate it in the “political landscape” (Himmrich, 2016). To do so means taking the time to learn what, when, where and who to influence (NCCPE, 2018; Marshall and Cvitanovic, 2017; Tilley et al., 2017); getting to know audiences (Jones and Crow, 2018); and learning about, and maximising use of, established ways to engage, such as advisory committees and expert panels (Gluckman, 2014; Pain, 2014; Malakoff, 2017; Hayes and Wilson, 2018). Persistence and patience are advised—sticking at it, and changing strategy if it is not working (Graffy, 1999; Tilley et al., 2017).

Be ‘accessible’ to policymakers: engage routinely, flexibly, and humbly

Prehn uses the phrase ‘professional friends’, which encapsulates vague but popular concepts such as ‘build trust’ and ‘develop good relationships’ (Farmer, 2010; Kerr et al., 2015; Prehn, 2018). Building and maintaining long-term relationships takes effort, time and commitment (Goodwin, 2013; Maybin, 2016), and relationships can be easily damaged. It can take time to become established as a “trusted voice” (Goodwin, 2013), and may require a commitment to remaining non-partisan (Morgan et al., 2001). Therefore, build routine engagement on authentic relationships, developing a genuine rapport by listening and responding (Goodwin, 2013; Jo Clift Consulting, 2016; Petes and Meyer, 2018). Some suggest developing leadership and communication skills, but with reference to listening and learning (Petes and Meyer, 2018; Topp et al., 2018), and adopting a respectful, helpful, and humble demeanour, recognising that while academics are authorities on the evidence, we may not be the appropriate people to describe or design policy options (Nichols, 1972; Knottnerus and Tugwell, 2017), although many disagree (Morgan et al., 2001; Morandi, 2009). Behave courteously by acting professionally (asking for feedback; responding promptly; following up meetings and conversations swiftly) (NCCPE, 2018; Goodwin, 2013; Jo Clift Consulting, 2016). Several commentators also reference the idea of ‘two cultures’ of policy and research (Shergold, 2011), each with their own language, practices and values (Goodwin, 2013). Learning to speak this language would enable researchers to better understand all that is said and unsaid in interactions (Jo Clift Consulting, 2016).

Decide if you want to be an ‘issue advocate’ or ‘honest broker’

Reflecting on accessibility should prompt researchers to consider where to draw the line between providing information and making recommendations. One possibility is for researchers simply to disseminate their research honestly, clearly, and in a timely fashion, acting as an ‘honest broker’ of the evidence base (Pielke, 2007). In this mode, other actors may pick up and use evidence to influence policy in a number of ways—shaping the debate, framing issues, problematising the construction of solutions and issues, explaining the options (Nichols, 1972; Knottnerus and Tugwell, 2017)—while researchers seek to remain ‘neutral’. Another option is to recommend specific policy options or describe the implications for policy based on their research (Morgan et al., 2001; Morandi, 2009), perhaps by storytelling to indicate a preferred course of action (Evans, 2013; Fischoff, 2015; Docquier, 2017; Petes and Meyer, 2018). However, the boundary between these two options is very difficult to negotiate or identify in practice, particularly since policymakers often value candid judgements and opinions from people they trust, rather than new research (Maybin, 2016).

Build relationships (and ground rules) with policymakers

Getting to know policymakers better and building longer-term networks (Chapman et al., 2015; Evans and Cvitanovic, 2018) could give researchers better access to opportunities to shape policy agendas (Colglazier, 2016; Lucey et al., 2017; Tilley et al., 2017), give them more credibility within the policy arena (Prehn, 2018), help them to identify the correct policy actors or champions to work with (Echt, 2017a), and provide better insight into policy problems (Chapman et al., 2015; Colglazier, 2016; Lucey et al., 2017; Tilley et al., 2017). Working with policymakers as early as possible in the process helps develop shared interpretations of the policy problem (Echt, 2017b; Tyler, 2017) and agreement on the purpose of research (Shergold, 2011). Co-designing, or otherwise doing research-for-policy together, is widely held to be morally, ethically, and practically one of the best ways to achieve the elusive goal of getting evidence into policy (Sebba, 2011; Green, 2016; Eisenstein, 2017). Engaging publics more generally is also promoted (Chapman et al., 2015). Relationship-building activities require major investment and skills, and often go unrecognised (Prehn, 2018), but may offer the most likely route to get evidence into policy (Sebba, 2011; Green, 2016; Eisenstein, 2017). Initially, researchers can use blogs and social media (Brumley, 2014; POST, 2017) to increase their visibility to the policy community, combined with networking and direct approaches to policy actors (Tyler, 2013).

One of the few pieces built on a case study of impact argued that academics should build coalitions of allies, but also engage political opponents, and learn how to fight for their ideas (Coffait, 2017). However, collaboration can also lead to conflict and reputational damage (De Kerckhove et al., 2015). Therefore, when possible, academics should produce ground rules acceptable to both academics and policymakers. They should be honest and thoughtful about how, when, and why to engage, and recognise the labour and resources required for successful engagement (Boaz et al., 2018). Successful engagement may require all parties to agree about processes, including ethics, consent, and confidentiality, and about outputs, including data and intellectual property (De Kerckhove et al., 2015; Game et al., 2015; Hutchings and Stenseth, 2016). These networks and contacts take time and effort to develop organically, and should be recognised as assets, particularly when colleagues offer new contacts (Evans and Cvitanovic, 2018; Boaz et al., 2018).

Be ‘entrepreneurial’ or find someone who is

Much of the ‘how to’ advice projects an image of a daring, persuasive scientist, comfortable in policy environments and always available when needed (Datta, 2018), who builds networks through mentors or ‘cold calling’ (Evans and Cvitanovic, 2018). Some ideas and values need to be fought for if they are to achieve dominance (Coffait, 2017; Docquier, 2017), and multiple strategies may be required, from leveraging trust in academics to advocating more generally for evidence-based policy (Garrett, 2018). Academics are advised to develop “media-savvy” skills (Sebba, 2011), learn how to “sell the sizzle” (Farmer, 2010), become able to “convince people who think differently that shared action is possible” (Fischoff, 2015), but also be pragmatic, by identifying real, tangible impacts and delivering them (Reed and Evely, 2016). Such a range of requirements may imply that being constantly available, and becoming part of the scenery, makes it more likely for a researcher to be the person to hand in an hour of need (Goodwin, 2013). Or, it could prompt a researcher to recognise their relative inability to be persuasive, and to hire a ‘knowledge broker’ to act on their behalf (Marshall and Cvitanovic, 2017; Quarmby, 2018).

Reflect continuously: should you engage, do you want to, and is it working?

Academics may be a good fit in the policy arena if they ‘want to be in the real world’ or ‘enjoy finding solutions to complex problems’ (Echt, 2017a; Petes and Meyer, 2018), or are driven ‘by a passion greater than simply adding another item to your CV’ (Burgess, 2005). They should be genuinely motivated to take part in policy engagement, seeing it as a valuable exercise in its own right, as opposed to something instrumental merely to improve the stated impact of research (Goodwin, 2013). For example, scientists can “engage more productively in boundary work, which is defined as the ways in which scientists construct, negotiate, and defend the boundary between science and policy” (Rose, 2015). They can converse with policymakers about how science and scientific careers are affected by science policy, as a means of promoting more useful support within government (Pain, 2014). Or, they can use teaching to get students involved at an early stage in their careers, to train a new generation of impact-ready entrepreneurs (Hayes and Wilson, 2018). Such a profound requirement of one’s time should prompt constant reflection and refinement of practice. It is hard to know what our impact may be or how to sustain it (Reed and Evely, 2016). Therefore, academics who wish to engage must learn from, and reflect on, the consequences of their actions (Datta, 2018; Topp et al., 2018).

The wider literature on the policymaking context

Our observation is that this advice is rather vague and very broad, and that each theme contains a diversity of opinions. We also argue that much of this advice is based on misunderstandings about policy processes and the roles of researchers and policymakers. We summarise these misunderstandings below (see Table 2 for an overview), drawing on a wider range of sources, such as the policy studies literature (Cairney, 2016) and a systematic review of factors influencing evidence use in policy (Oliver et al., 2014), to identify the wider context in which to understand and use these tips. We also contextualise these discussions in the broader evidence and policy/practice literature.

Firstly, there is no consensus over what counts as good evidence for policy (Oliver and de Vocht, 2015), and therefore over how best to communicate good evidence. While we can probably agree on what constitutes high-quality research within each field, the criteria we use to assess it in many disciplines (such as generalisability and methodological rigour) have far lower salience for policymakers (Hammersley, 2013; Locock and Boaz, 2004). Policymakers do not adhere to the scientific idea of a ‘knowledge deficit’, in which our main collective aim is to reduce policymaker uncertainty by producing more of the best scientific evidence (Crow and Jones, 2018). Rather, evidence garners credibility, legitimacy and usefulness through its connections to individuals, networks and topical issues (Cash et al., 2003; Boaz et al., 2015; Oliver and Faul, 2018).

One way to understand the practical outcome of this distinction is to consider the profound consequences arising from the ways in which policymakers address their ‘bounded rationality’ (Simon, 1976; Cairney and Kwiatkowski, 2017). Individuals seek cognitive shortcuts that let them avoid decision-making ‘paralysis’ when faced with an overwhelming amount of possibly-relevant information, and that allow them to process information efficiently enough to make choices (Gigerenzer and Selten, 2001). They combine ‘rational’ shortcuts, including trust in expertise and scientific sources, and ‘irrational’ shortcuts, drawing on their beliefs, emotions, habits, and familiarity with issues to identify policy problems and solutions (see Haidt, 2001; Kahneman, 2011; Lewis, 2013; Baumgartner, 2017; Jones and Thomas, 2017; Sloman and Fernbach, 2017). Therefore, we need to understand how they use such shortcuts to interpret their world, pay attention to issues, define issues as policy problems, and become more or less receptive to proposed solutions. In this scenario, effective policy actors—including advocates of research evidence—frame evidence to address the many ways to interpret policy problems (Cairney, 2016; Wellstead et al., 2018) and compete to draw attention to one ‘image’ of a problem and one feasible solution at the expense of the competition (Kingdon and Thurber, 1984; Majone, 1989; Baumgartner and Jones, 1993; Zahariadis, 2007). This debate determines the demand for evidence.

Secondly, there is little empirical guidance on how to gain the wide range of skills that researchers and policymakers need to act collectively to address policymaking complexity, including to: produce evidence syntheses, manage expert communities, ‘co-produce’ research and policy with a wide range of stakeholders, and be prepared to offer policy recommendations as well as scientific advice (Topp et al., 2018). The list of skills includes the need to understand the policy processes in which you engage, such as by understanding the constituent parts of policymaking environments (John, 2003, p. 488; Cairney and Heikkila, 2014, pp. 364–366) and their implications for the use of evidence:

Many actors make and influence policy in many ‘venues’ across many levels and types of government. Therefore, it is difficult to know where the ‘action’ is.

Each venue has its own ‘institutions’, or rules and norms maintained by many policymaking organisations. These rules can be formal and well understood, or informal, unwritten, and difficult to grasp (Ostrom, 2007a, 2007b). Therefore, it takes time to learn the rules before being able to use them effectively.

These ‘rules of the game’ extend to policy networks, or the relationships between policymakers and influencers, many of which develop in ‘subsystems’ and contain relatively small groups of specialists. One can be a privileged insider in one venue but excluded from another, and the outcome may relate minimally to evidence.

Networks often reproduce dominant ‘ideas’ regarding the nature of the policy problem, the language we use to describe it, and the political feasibility of potential solutions (Kingdon and Thurber, 1984). Therefore, framing can make the difference between being listened to or ignored.

Policy conditions and events can reinforce or destabilise institutions. Evidence presented during crises or ‘focusing events’ (Birkland, 1997) can prompt lurches of attention from one issue to another, but this outcome is rare, and policy can remain unchanged for decades.

A one-size-fits-all model is unlikely to help researchers navigate this environment, in which different audiences and institutions have different cultures, preferences and networks. Gaining knowledge of the complex policy context can be extremely challenging, yet the implications are profoundly important. In that context, theory-informed studies recommend investing your time over the long term, to build up alliances, trust in the messenger, and knowledge of the system, and to exploit ‘windows of opportunity’ for policy change (Cairney, 2016, p. 124). However, they also suggest that this investment of time may pay off only after years or decades—or not at all (Cairney and Oliver, 2018).

This context could have a profound impact on the way in which we interpret the eight tips. For example, it may:

tip the balance from scientific to policy-relevant measures of evidence quality;

shift the ways in which we communicate evidence from a focus on clarity to an emphasis on framing;

suggest that we need to engage with policymakers to such an extent that the division between honest broker and issue advocate becomes blurred;

prompt us to focus less on the ‘entrepreneurial’ skills of individual researchers and more on the nature of their environment; and

inform reflection on our role, since successful engagement may feel more like a career choice than an event.

Throughout this process, we need to decide what policy engagement is for (whether it is to frame problems or simply to measure them according to an existing frame) and how far researchers should go to be useful and influential. While immersing oneself fully in policy processes may be the best way for researchers to achieve credibility and impact, there are significant consequences of becoming a political actor (Jasanoff and Polsby, 1991; Pielke, 2007; Haynes et al., 2011; Douglas, 2015). The most common consequences include criticism within one’s peer group (Hutchings and Stenseth, 2016), being seen as an academic ‘lightweight’ (Maynard, 2015), and being used to add legitimacy to a policy position (Himmrich, 2016; Reed and Evely, 2016; Crouzat et al., 2018). More serious consequences include the complete loss of one’s status—David Nutt famously lost his advisory role after publicly criticising UK government drug policy—and the loss of one’s safety if adopting an activist mindset (Zevallos, 2017). If academics need to go ‘all in’ to secure meaningful impact, we need to reflect on the extent to which they have the resources and support to do so.

Three major dilemmas in policy engagement

These misunderstandings matter, because well-meaning people are giving recommendations that are not based on empirical evidence and that may carry significant risks, such as reputational damage and wasted resources. Further, their audience may reinforce this problem by holding onto deficit models of science and policy, and equating policy impact with a simple linear policy cycle. When unsuccessful, despite taking the ‘how to’ advice to heart, researchers may blame politics and policymakers rather than reflecting on their own role in a process they do not fully understand.

Although it is possible to synthesise the ‘how to’ advice into eight main themes, many categories contain a wide range of beliefs or recommendations within a very broad description of qualities like ‘accessibility’ and ‘engagement’. We interrogate key examples to identify the wide range of (potentially contradictory) advice about the actual and desirable role of researchers in politics: whether to engage, how to engage, and the purpose of engagement.

Should academics try to influence policy?

A key area of disagreement was over the normative question of whether academics should advocate for policy positions, try to persuade policymakers of particular courses of action (e.g., Tilley et al., 2017), offer policy implications from their research (Goodwin, 2013), or be careful not to promote particular methods and policy approaches (Gluckman, 2014; Hutchings and Stenseth, 2016; Prehn, 2018). Aspects of the debate include:

The public duty to engage versus the need to protect science. Several pieces argued that publicly-paid academics should regard policy impact as a professional duty (Shergold, 2011; Tyler, 2017). If so, they should try to influence policy by framing evidence within dominant policy narratives or addressing issues that policymakers care about (Rose, 2015; Hillman, 2016; King, 2016), and engage in politics directly when needed (Farmer, 2010; Petes and Meyer, 2018). Others felt that this risked an academic’s main asset, their independence of advice (Whitty, 2015; Alberts et al., 2018; Dodsworth and Cheeseman, 2018), and that this political role should be left to specialists, such as scientific advisors (Hutchings and Stenseth, 2016). Others emphasise the potential costs of self-censorship (De Kerckhove et al., 2015), and the tension between being elite versus inclusive and accessible (Collins, 2011).

The potential for conflict and reputational damage. Some identify the tension between being able to provide rational advice to shape political discourse and the potential for conflict (De Kerckhove et al., 2015). Others rejected this as a false dichotomy, arguing that advocacy is a “continuous process of establishing relationships and creating a community of experts both in and outside of government who can give informed input on policies” (Himmrich, 2016).

The need to represent academics and academia. Some recommend discussing topics beyond your narrow expertise, almost as a representative for your field or profession (Petes and Meyer, 2018), while others caution against it, since speaking about one’s own expertise is the best way to maintain credibility (Marshall and Cvitanovic, 2017).

Such debates imply a choice to engage, and do not routinely consider the unequal effects built on imbalances of power (Cairney and Oliver, 2018). Many researchers are required to show impact, so engagement is not strictly a choice. Further, there are significant career costs to engagement, which are relatively difficult for more junior or untenured researchers to bear, while women and people of colour may be more subject to personal abuse or exploitation. The risk of burnout, or the opportunity cost of doing impact rather than conducting the main activities of teaching and research jobs, is too high for many (Graffy, 1999; Fischoff, 2015). Being constantly available, engaging with no clear guarantee of impact or success, and with no payment for time or even travel, is not possible for many researchers, even if that is the most likely way to achieve impact. This means that the diversity of voices available to policy is limited (Oliver and Faul, 2018). Much of the ‘how to’ advice is tailored to individuals without taking these systemic issues into account. The tips are mostly drawn from the experiences of people who consider themselves successful at influencing policy, and are likely to be useful mostly to a relatively similar group of people who are confident, comfortable in policy environments, and have both access and credibility within policy spaces. Thus, the current advice and structures may help reproduce and reinforce existing power dynamics and an underrepresentation of women, BAME researchers, and people who otherwise do not fit the very narrow mould (Cairney and Oliver, 2018)—even extending to the exclusion of academics from certain institutions or circles (Smith and Stewart, 2017).

How should academics influence policy?

A second dilemma is: how should academics try to influence policy? By merely stating the facts well, telling stories to influence our audience more, or working with our audience to help produce policy directly? Three main approaches were identified in the reviews. The first is to use specific tools, such as evidence syntheses or social media, to improve engagement (Thomson, 2013; Caird et al., 2015). This approach fits with the ‘deficit’ model of the evidence-policy relationship, whereby researchers merely provide content for others to work with. As extensively discussed elsewhere, this method, while safe, has not been shown to be effective at achieving policy change; and underpinning much of the advice in this vein are some serious misunderstandings about the practicalities, psychology and real-world nature of policy change and information flow (Sturgis and Allum, 2004; Fernández, 2016; Simis et al., 2016).

The second is to use emotional appeals and storytelling to craft attractive narratives with the explicit aim of shaping policy options (Jones and Crow, 2017; Crow and Jones, 2018). Leaving aside the normative question of the independence of scientific research, or researchers’ responsibilities to represent data fully and honestly (Pielke, 2007), this strategy makes practical demands on the researcher. It requires having the personal charisma to engage diverse audiences and seem persuasive yet even-handed. Some of the advice suggests that academics try to seem pragmatic and equable about the outcome of any such approach, although it is not always clear whether this is to help the researcher seem more worldly-wise and sensible, or simply a self-protective mechanism (King, 2016). Either way, deciding how to seem omnipotent yet credible, humble but authoritative, straightforward yet not over-simplifying—all while still appearing authentic—is probably beyond the scope of most of our acting abilities.

The third is to collaborate (Oliver et al., 2014). Co-production is widely hailed as the most likely way to promote the use of research evidence in policy, as it would enable researchers to respond to policy agendas, and enable more agile multidisciplinary teams to coalesce around topical policy problems. There are also trade-offs to this way of working (Flinders et al., 2016). Researchers have to cede control over the research agenda and interpretations. This can give rise to accusations of bias, partisanship, or at least partiality for one political view over another. There are significant reputational risks involved in collaboration, within the academic community and outside it. Pragmatically, there are practical and logistical concerns about how and when to maintain control of intellectual property and access to data. More broadly, collaboration may cloud one’s judgement about the research in hand, hindering one’s ability to think or speak critically without damaging working relationships.

What is the purpose of academics’ engagement in policymaking?

Authors do not always tell us the purpose of engagement before they tell us how to do it. Some warn against ‘tokenistic’ engagement, and there is plenty of advice for academics wanting to build ‘genuine’ rapport with policymakers to make their research more useful. Yet, it is not always clear whether researchers should merely try to seem authentically interested in policymakers as a means of achieving impact, or should actually listen, learn, and cede some control over the research process. The former can be damaging to the profession. As Goodwin points out, it is not just policymakers who may feel short-changed by transactional relationships: “by treating policy engagement as an inconvenient and time-consuming ‘bolt on’ you may close doors that could be left open for academics who genuinely care about this collaborative process” (Goodwin, 2013). The latter option is more radical. It involves a fundamentally different way of doing public engagement: one with no clear aim in mind other than to listen and learn, with the potential to transform research practices and outputs (Parry-Davies and Newell, 2014).

Although the literature helps us frame such dilemmas, it does not choose for us how to solve them. There are no clear answers on how scientists should act in relation to policymaking or the public (Mazanderani and Latour, 2018), but we can at least identify and clarify the dilemmas we face, and seek ways to navigate them. Therefore, it is imperative to move quickly from basic ‘how to’ advice towards a deeper understanding of the profound choices that shape careers and lives.

Conclusions

Academics are routinely urged to create impact from their research: to change policy, practice, and even population outcomes. There are, however, few empirical evaluations of strategies to enable academics to create impact. This lack of empirical evidence has not prevented people from offering advice based on their personal experience, rather than on concrete evaluations of strategies to increase impact. Much of the advice demonstrates a limited understanding or description of policy processes and the wider social aspects of ‘doing’ science and research. The interactions between knowledge production and use may be so complex that abstract ‘how to’ advice is of limited use. The ‘how to’ advice has a potentially immense range, from the very practical (how long should an executive summary be?) to the very profound (should I risk my safety to secure policy change?), but few authors situate themselves in the wider context in which they provide advice.

There are some more thoughtful approaches which recognise more complex aspects of the task of influencing policy: the emotional, practical and cognitive labour of engaging; that it often goes unrewarded by employers; that impact is never certain, so engagement may remain unrewarded; and that our current advice, structures and incentives have important implications for how we think about the roles and responsibilities of scientists when engaging with publics. Some of the ‘how to’ literature also considers the wider context of research production and use, noting that the risks and responsibilities are borne by individuals and that, for example, one individual cannot possibly get to know the whole policy machinery or predict the consequences of their engagement for policy or for themselves. For example, universities, funders and academics are advised to develop incentives and structures to make ‘impact’ happen more easily (Kerr et al., 2015; Colglazier, 2016), and to remove any actual or perceived penalisation of ‘doing’ public engagement (Maynard, 2015). Some suggest universities should move into the knowledge brokerage space, acting more like think-tanks (Shergold, 2011) by creating and championing policy-relevant evidence (Tyler, 2017), and providing “embedded gateways” which offer access to credible and high-quality research (Green, 2016). Similarly, governments have their own science advisory systems which, they are advised, should be independent as well as inclusive and accountable (Morgan et al., 2001; Malakoff, 2017). Government and Parliament need to be mindful of the diversity of the experts and voices on which they draw. For example, historians and ethicists could help policymakers question their assumptions and explore historical patterns of policies and policy narratives in particular areas (Evans, 2013; Haddon et al., 2015), but economics and law have more currency with policymakers (Tyler, 2013).

However, we were often struck by the limited range of advice offered to academics, many of whom are at the beginning of their careers. This gap may leave each generation of scientists to fight the same battles, and learn the same lessons, over again. In the absence of evidence about the effectiveness of these approaches, all one can do is suggest a cautious, learning approach to co-production and engagement, while recognising that there is unlikely to be a one-size-fits-all model leading to simple, actionable advice. Further, we do not detect a coherent vision for wider academy-policymaker relations. Since the impact agenda (in the UK, at least) is unlikely to recede any time soon, our best response as a profession is to interrogate it, to shape and frame it, and to find ways to navigate the complex practical, political, moral and ethical challenges associated with being researchers today. The ‘how to’ literature can help, but only if authors are cognisant of their wider role in society and of complex policymaking systems.

For some commentators, engagement is a safe choice tacked onto academic work. Yet, for many others, it is a more profound choice to engage for policy change, while accepting that the punishments (such as personal threats or abuse) and rewards (such as impact and career development opportunities) are shared highly unevenly across socioeconomic groups. Policy engagement is a career choice in which we seek opportunities for impact that may never arise, not an event in which an intense period of engagement produces results proportionate to effort.

Overall, we argue that the existing advice offered to academics on how to create impact is not based on empirical evidence, or on good understandings of key literatures on policymaking or evidence use. This leads to significant misunderstandings, and to advice which can have potentially costly repercussions for research, researchers and policy. These limitations matter, as they lead to advice which fails to address core dilemmas for academics—whether to engage, how to engage, and why—which have profound implications for how scientists and universities should respond to the call for increased impact. Most of these tips focus on individuals, whereas engagement between research and policy is driven by systemic factors.

Data availability

The datasets generated during and/or analysed during the current study are not publicly available but are available from the corresponding author on reasonable request.

Change history

17 March 2020

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

Aguinis H, Werner S, Lanza Abbott J, Angert C, Joon Hyung P, Kohlhausen D (2010) Customer-centric science: reporting significant research results with rigor, relevance, and practical impact in mind. Organ Res Methods 13(3):515–539. https://doi.org/10.1177/1094428109333339


Alberts B, Gold BD, Martin LL, Maxon ME (2018) How to bring science and technology expertise to state governments. Proc Natl Acad Sci USA 115(9):1952–1955. https://doi.org/10.1073/pnas.1800543115


Allen DD, Lauffenburger J, Law AV, Pete Vanderveen R, Lang WG (2012) Report of the 2011-2012 standing committee on advocacy: the relevance of excellent research: strategies for impacting public policy. Am J Pharmaceut Educ 76(6). https://doi.org/10.5688/ajpe766S6

Andermann A, Pang T, Newton JN, Davis A, Panisset U (2016) Evidence for health II: overcoming barriers to using evidence in policy and practice. Health Res Policy Syst 14(1):17. https://doi.org/10.1186/s12961-016-0086-3


Aurum (1971) Letter from London: science policy and the question of relevancy. Bull At Sci Routledge 27(6):25–26. https://doi.org/10.1080/00963402.1971.11455376

Basbøll T (2018) We need our scientists to build models that frame our policies, not to tell stories that shape them, LSE Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2018/07/30/we-need-our-scientists-to-build-models-that-frame-our-policies-not-to-tell-stories-that-shape-them/ . Accessed 1 Aug 2018

Baumgartner FR (2017) Endogenous disjoint change. Cogn Syst Res 44:69–73. https://doi.org/10.1016/j.cogsys.2017.04.001

Baumgartner FR, Jones BD (1993) Agendas and instability in American politics. University of Chicago Press: Chicago

Bilotta GS, Milner AM, Boyd IL (2015) How to increase the potential policy impact of environmental science research. Environ Sci Eur 27(1):9. https://doi.org/10.1186/s12302-015-0041-x

Birkland TA (1997) After disaster: agenda, public policy, and focusing events. American governance and public policy. Georgetown University Press, 178. http://press.georgetown.edu/book/georgetown/after-disaster . Accessed 17 July 2018

Boaz A, Baeza J, Fraser A (2011) Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes 4:212. https://doi.org/10.1186/1756-0500-4-212

Boaz A, Hanney S, Borst R, O’Shea A, Kok M (2018) How to engage stakeholders in research: design principles to support improvement. Health Res Policy Syst 16(1):60. https://doi.org/10.1186/s12961-018-0337-6

Boaz A, Locock L, Ward V (2015) Whose evidence is it anyway? Evidence and Policy. https://doi.org/10.1332/174426515X14313738355534

Bondi H (1982) Science adviser to government. Interdiscip Sci Rev 7(1):9–13. https://doi.org/10.1179/030801882789801269

Article   MathSciNet   Google Scholar  

Boyd I (2013) Research: a standard for policy-relevant science. Nature 501(7466):159–160. https://doi.org/10.1038/501159a

Article   PubMed   Google Scholar  

Brumley C (2014) Academia and storytelling are compatible–how to reduce the risks and gain control of your research narrative. LSE Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2014/08/27/academic-storytelling-risk-reduction/ . Accessed 1 Aug 2018

Burgess J (2005) Follow the argument where it leads: Some personal reflections on “policy-relevant” research. Trans Inst Br Geogr 30(3):273–281. https://doi.org/10.1017/S147474720500209X

Caird J, Sutcliffe K, Kwan I, Dickson K, Thomas J (2015) Mediating policy-relevant evidence at speed: are systematic reviews of systematic reviews a useful approach? Evid Policy 11(1):81–97. https://doi.org/10.1332/174426514X13988609036850

Cairney P (2016) The politics of evidence-based policy making, The Politics of Evidence-Based Policy Making. 1–137. https://doi.org/10.1057/978-1-137-51781-4

Google Scholar  

Cairney P, Heikkila T (2014) A comparison of theories of the policy process. Theor Policy Process. p. 301–324

Cairney P, Kwiatkowski R (2017) How to communicate effectively with policymakers: Combine insights from psychology and policy studies. Palgrave Communications 3(1):37. https://doi.org/10.1057/s41599-017-0046-8

Cairney P, Oliver K (2018) How should academics engage in policymaking to achieve impact? Polit Stud Rev https://doi.org/10.1177/1478929918807714

Cairney P (2017) Three habits of successful policy entrepreneurs|Paul Cairney: Politics and Public Policy, https://paulcairney.wordpress.com/2017/06/05/three-habits-of-successful-policy-entrepreneurs/ . Accessed 9 July 2018

Cash DW, Clark WC, Alcock F, Dickson NM, Eckley N, Guston DH, Jäger J, Mitchell RB (2003) Knowledge systems for sustainable development. Proc Natl Acad Sci USA 100(14):8086–8091. https://doi.org/10.1073/pnas.1231332100

Article   ADS   CAS   Google Scholar  

Chapman JM, Algera D, Dick M, Hawkins EE, Lawrence MJ, Lennox RJ, Rous AM, Souliere CM, Stemberger HLJ, Struthers DP, Vu M, Ward TD, Zolderdo AJ, Cooke SJ (2015) Being relevant: practical guidance for early career researchers interested in solving conservation problems. Glob Ecol Conserv 4:334–348. https://doi.org/10.1016/j.gecco.2015.07.013

Coffait L (2017) Academics as policy entrepreneurs? Prepare to fight for your ideas (if you want to win), Wonkhe. https://wonkhe.com/blogs/academics-as-policy-entrepreneurs-prepare-to-fight-for-your-ideas-if-you-want-to-win/ . Accessed 9 July 2018

Colglazier B (2016) Encourage governments to heed scientific advice. Nature 537(7622):587. https://doi.org/10.1038/537587a

Article   ADS   CAS   PubMed   Google Scholar  

Collins P (2011) Quality control in scientific policy advice: the experience of the Royal Society. Polit Scient Adv https://doi.org/10.1017/CBO9780511777141.018

Crouzat E, Arpin I, Brunet L, Colloff MJ, Turkelboom F, Lavorel S (2018) Researchers must be aware of their roles at the interface of ecosystem services science and policy. Ambio 47(1):97–105. https://doi.org/10.1007/s13280-017-0939-1

Crow D, Jones M (2018) Narratives as tools for influencing policy change. Policy Polit 46(2):217–234. https://doi.org/10.1332/030557318X15230061022899

Datta A (2018, July 11) Complexity and paradox: lessons from Indonesia. On Think Tanks https://onthinktanks.org/articles/complexity-and-paradox-lessons-from-indonesia/ . Accessed 1 Aug 2018

Docquier D (2017) Communicating your research to policy makers and journalists–Author Services. https://authorservices.taylorandfrancis.com/communicating-science-to-policymakers-and-journalists/ . Accessed 9 July 2018

Dodsworth S, Cheeseman N (2018) Five lessons for researchers who want to collaborate with governments and development organisations but avoid the common pitfalls. LSE Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2018/02/05/five-lessons-for-researchers-who-want-to-collaborate-with-governments-and-development-organisations-but-avoid-the-common-pitfalls/ . Accessed 9 July 2018

Donnelly CA, Boyd I, Campbell P, Craig C, Vallance P, Walport M, Whitty CJM, Woods E, Wormald C (2018) Four principles to make evidence synthesis more useful for policy. Nature 558(7710):361–364. https://doi.org/10.1038/d41586-018-05414-4

Douglas H (2012) Weighing complex evidence in a democratic society. Kennedy Inst Ethics J 22(2):139–162. https://doi.org/10.1353/ken.2012.0009

Article   MathSciNet   PubMed   Google Scholar  

Douglas H (2015) Politics and science: untangling values, ideologies, and reasons. Ann Am Acad Political Social Sci 658(1):296–306. https://doi.org/10.1177/0002716214557237

Echt L (2017a) “Context matters”: a framework to help connect knowledge with policy in government institutions, LSE Impact blog. http://blogs.lse.ac.uk/impactofsocialsciences/2017/12/19/context-matters-a-framework-to-help-connect-knowledge-with-policy-in-government-institutions/ Accessed 10 July 2018

Echt L (2017b) How can we make our research to be policy relevant? | Politics and Ideas: A Think Net, Politics and Ideas. http://www.politicsandideas.org/?p=3602 . Accessed 10 July 2018

Editorial (1972) Science research council advises the government. Nature 239(5370):243–243. https://doi.org/10.1038/239243a0 . Nature Publishing Group

Eisenstein M (2017) The needs of the many. Nature 551. https://doi.org/10.1038/456296a .

Evans J (2013, Feburary 19) How arts and humanities can influence public policy. HuffPost . https://www.huffingtonpost.co.uk/jules-evans/arts-humanities-influence-public-policy_b_2709614.html . Accessed 9 July 2018

Evans MC, Cvitanovic C (2018) An introduction to achieving policy impact for early career researchers. Palgrave Commun 4(1):88. https://doi.org/10.1057/s41599-018-0144-2

Fafard P (2015) Beyond the usual suspects: using political science to enhance public health policy making. J Epidemiol Commun Health 1129:1–4. https://doi.org/10.1136/jech-2014-204608 .

Farmer R (2010) How to influence government policy with your research: tips from practicing political scientists in government. Political Sci Polit 43(4):717–719. https://doi.org/10.1017/S1049096510001368

Fernández RJ (2016) How to be a more effective environmental scientist in management and policy contexts. Environ Sci & Policy 64:171–176. https://doi.org/10.1016/J.ENVSCI.2016.07.006

Fischoff M (2015) How can academics engage effectively in public and political discourse? At a 2015 conference, experts described how and why academics should reach out. Network for Business Sustainability

Fleming AH, Pyenson ND (2017) How to produce translational research to guide arctic policy. BioScience 67(6):490–493. https://doi.org/10.1093/biosci/bix002

Flinders M, Wood M, Cunningham M (2016) The politics of co-production: risks, limits and pollution. Evid Policy 12(2):261–279. https://doi.org/10.1332/174426415X14412037949967

Game ET, Schwartz MW, Knight AT (2015) Policy relevant conservation science. Conserv Lett 8(5):309–311. https://doi.org/10.1111/conl.12207

Garrett T (2018) Moving an Evidence-based Policy Agenda Forward: Leadership Tips from the Field. NASN Sch Nurse 33(3):158–159. https://doi.org/10.1177/1942602X18766481

Gigerenzer G, Selten R (2001) The adaptive toolbox. In: G. Gigerenzer, R. Selten (eds) Bounded rationality The adaptive toolbox. MIT Press: Cambridge, pp. 37–50

Gluckman P (2014) The art of science advice to the government. Nature 507:163–165. https://doi.org/10.1038/507163a

Goodwin M (2013) How academics can engage with policy: 10 tips for a better Conversation, The Guardian . https://www.theguardian.com/higher-education-network/blog/2013/mar/25/academics-policy-engagement-ten-tips

Gough D, Oliver S and Thomas J (2012) Introducing systematic reviews. In: An Introduction to Systematic Reviews. https://doi.org/10.1186/2046-4053-1-28

Graffy EA (1999) Enhancing policy-relevance without burning up or burning out: a strategy for scientists, in Science into policy: water in the public realm. The Association, pp. 293–298. http://apps.webofknowledge.com/full_record.do?product=UA&search_mode=AdvancedSearch&qid=3&SID=D3Y7AMjSYyfgCmiXBUw&page=21&doc=208 . Accessed 9 Jul 2018

Green D (2016) How academics and NGOs can work together to influence policy: insights from the InterAction report, LSE Impact blog. http://blogs.lse.ac.uk/impactofsocialsciences/2016/09/23/how-academics-and-ngos-can-work-together-to-influence-policy-insights-from-the-interaction-report/ . Accessed 10 July 2018

Green LW, Glasgow RE, Atkins D, Stange K (2009) Making evidence from research more relevant, useful, and actionable in policy, program planning, and practice. slips “Twixt Cup and Lip”. Am J Prev Med 37(6 SUPPL. 1):S187–S191. https://doi.org/10.1016/j.amepre.2009.08.017

Haddon C, Devanny J, Forsdick PC, Thompson PA (2015) What is the value of history in policymaking? https://www.instituteforgovernment.org.uk/publications/what-value-history-policymaking . Accessed 10 July 2018

Haidt J (2001) The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychol Rev 108(4):814–834. https://doi.org/10.1037/0033-295X.108.4.814

Article   CAS   PubMed   Google Scholar  

Hammersley M (2013) The myth of research-based policy and practice

Havens B (1992) Making research relevant to policy. Gerontologist 32(2):273. https://doi.org/10.1093/geront/32.2.273

Hayes S, Wilson C (2018) Being ‘resourceful’ in academic engagement with parliament | Wonkhe | Comment, Wonkhe. https://wonkhe.com/blogs/being-resourceful-in-academic-engagement-with-parliament/ . Accessed 12 July 2018

Haynes AS, Derrick GE, Chapman S, Redman S, Hall WD, Gillespie J, Sturk H (2011) From “our world” to the “real world”: Exploring the views and behaviour of policy-influential Australian public health researchers. Social Sci Med 72(7):1047–1055. https://doi.org/10.1016/j.socscimed.2011.02.004

Head BW (2010) Reconsidering evidence-based policy: key issues and challenges. Policy Soc 77–94. https://doi.org/10.1016/j.polsoc.2010.03.001

Hillman N (2016) The 10 commandments for influencing policymakers | THE Comment, Times Higher Education. https://www.timeshighereducation.com/comment/the-10-commandments-for-influencing-policymakers . Accessed 9 July 2018

Himmrich J (2016) How should academics interact with policy makers? Lessons on building a long-term advocacy strategy. LSE Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2016/06/20/how-should-academics-interact-with-policy-makers-lessons-on-building-a-longterm-advocacy-strategy/ . Accessed 10 July 2018

Hutchings JA, Stenseth NC (2016) Communication of science advice to government. Trends Ecol Evol 31(1):7–11. https://doi.org/10.1016/j.tree.2015.10.008

Jasanoff S, Polsby NW (1991) The fifth branch: science advisers as policymakers. Contemp Sociol 20(5):727. https://doi.org/10.2307/2072218

Jo Clift Consulting (2016) Are you trying to get your voice heard in Government?–Jo Clift’s Personal Website. http://jocliftconsulting.strikingly.com/blog/are-you-trying-to-get-your-voice-heard-in-government . Accessed 10 July 2018

John P (2003) Is there life after policy streams, advocacy coalitions, and punctuations: using evolutionary theory to explain policy change? Policy Stud J 31(4):481–498. https://doi.org/10.1111/1541-0072.00039

Jones BD, Thomas HF (2017) The cognitive underpinnings of policy process studies: Introduction to a special issue of Cognitive Systems Research. Cogn Syst Res 45:48–51. https://doi.org/10.1016/j.cogsys.2017.04.003

Jones M, Crow D (2018) Mastering the art of the narrative: using stories to shape public policy–Google Search, LSE Impact blog. https://www.google.co.uk/search?q=astering+the+art+of+the+narrative%3A+using+stories+to+shape+public+policy&rlz=1C1GGRV_en-GBGB808GB808&oq=astering+the+art+of+the+narrative%3A+using+stories+to+shape+public+policy&aqs=chrome..69i57.17213j0j4&sourceid=chrom Accessed 6 Aug 2018

Jones Michael D, Anderson Crow D (2017) How can we use the “science of stories” to produce persuasive scientific stories. Palgrave Commun 3(1):53. https://doi.org/10.1057/s41599-017-0047-7

Kahneman DC, Patrick E (2011) Thinking, fast and slow. Allen Lane. https://doi.org/10.4324/9781912453207

De Kerckhove DT, Rennie MD, Cormier R (2015) Censoring government scientists and the role of consensus in science advice: a structured process for scientific advice in governments and peer-review in academia should shape science communication strategies. EMBO Rep 16(3):263–266. https://doi.org/10.15252/embr.201439680

Article   CAS   PubMed   PubMed Central   Google Scholar  

Kerr EA, Riba M, Udow-Phillips M (2015) Helping health service researchers and policy makers speak the same language. Health Serv Res 50(1):1–11. https://doi.org/10.1111/1475-6773.12198

King A (2016) Science, politics and policymaking. EMBO Rep 17(11):1510–1512. https://doi.org/10.15252/embr.201643381

Kingdon J Thurber J (1984) Agendas, alternatives, and public policies. https://schar.gmu.edu/sites/default/files/current-students/Courses/Fall_2017/PUAD/Regan-PUAD-540-002-Fall-17.pdf . Accessed 31 Jan 2018

Knottnerus JA, Tugwell P (2017) Methodology of the “craft” of scientific advice for policy and practice. J Clin Epidemiol 82:1–3. https://doi.org/10.1016/j.jclinepi.2017.01.005

Koshland Jr. DE, Koshland Jr. DE, Koshland DE, Abelson PH (1988) Science advice to the president. Science 242(4885):1489. https://doi.org/10.1126/science.242.4885.1489

Article   ADS   PubMed   Google Scholar  

Krige J (1990) Scientists as Policy-makers - British Physicists Advice to Their Government on Membership of CERN (1951-1952). Science History Publications, U.S.A. http://apps.webofknowledge.com/full_record.do?product=UA&-search_mode=AdvancedSearch&qid=3&SID=D3Y7AMjSYyfgCmiXBUw&page=11&doc=105 Accessed 9 July 2018

Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J (2003) How can research organizations more effectively transfer research knowledge to decision makers? Milbank Q 81(2):221–248. https://doi.org/10.1111/1468-0009.t01-1-00052

Lawler A (1997) Academy seeks government help to fight openness law. Science 473. https://doi.org/10.1126/science.277.5325.473

de Leeuw E, McNess A, Crisp B, Stagnitti K (2008) Theoretical reflections on the nexus between research, policy and practice. Critical Public Health https://doi.org/10.1080/09581590801949924

Lepkowski W (1984) Heritage-foundation science policy advice for reagan. Chem Eng News 62(51):20–21. https://doi.org/10.1021/cen-v062n051.p020

Lewis PG (2013) Policy thinking, fast and slow: a social intuitionist perspective on public policy processes. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2300479 . Accessed 17 July 2018

Lloyd J (2016) Should academics be expected to change policy? Six reasons why it is unrealistic for research to drive policy change, LSE Impact Blod. http://blogs.lse.ac.uk/impactofsocialsciences/2016/05/25/should-academics-be-expected-to-change-policy-six-reasons-why-it-is-unrealistic/ . Accessed 9 July 2018

Locock L, Boaz A (2004) Research, policy and practice–worlds apart? Social Policy Soc https://doi.org/10.1017/S1474746404002003

Lucey JM, Palmer G, Yeong KL, Edwards DP, Senior MJM, Scriven SA, Reynolds G, Hill JK (2017) Reframing the evidence base for policy-relevance to increase impact: a case study on forest fragmentation in the oil palm sector. J Appl Ecol 54(3):731–736. https://doi.org/10.1111/1365-2664.12845

Maddox G (1996) Policy-relevant health services research: who needs it? J Health Serv Res Policy 1(3):167–168. https://doi.org/10.1177/135581969600100309

Majone G (1989) Evidence, argument, and persuasion in the policy process. Yale University Press. https://yalebooks.yale.edu/book/9780300052596/evidence-argument-and-persuasion-policy-process . Accessed 17 July 2018

Malakoff D (2017) A battle over the “best science. Science. Am Assoc Advan Sci 1108–1109. https://doi.org/10.1126/science.355.6330.1108

Marshall E (1980) Advising reagan on science policy. Science 210(4472):880–881. https://doi.org/10.1126/science.210.4472.880

Marshall N, Cvitanovic C (2017) Ten top tips for social scientists seeking to influence policy, LSE Impact Blog

Masood E (1999) UK panel formed to rebuild trust in government science advice. Nature 397(6719):458. https://doi.org/10.1038/17161

Maybin J (2016) How proximity and trust are key factors in getting research to feed into policymaking, LSE Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2016/09/12/how-proximity-and-trust-are-key-factors-in-getting-research-to-feed-into-policymaking/ . Accessed 1 Aug 2018

Mayer J (1982) Science advisers to the government. Science 215(4535):921. https://doi.org/10.1126/science.215.4535.921

Maynard, A. (2015) Is public engagement really career limiting? Times Higher Education

Mazanderani F and Latour B (2018) The Whole World is Becoming Science Studies: Fadhila Mazanderani Talks with Bruno Latour. Engaging Science, Technology, and Society 4(0): 284. https://doi.org/10.17351/ests2018.237

Morandi L (2009) Essential nexus. how to use research to inform and evaluate public policy. Am J Prev Med 36(2 SUPPL.):S53–S54. https://doi.org/10.1016/j.amepre.2008.10.005

Morgan MG, Houghton A, Gibbons JH (2001) Science and government: Improving science and technology advice for congress. Science . 1999–2000. https://doi.org/10.1126/science.1065128

NCCPE (2018) How can you engage with policy makers? https://www.publicengagement.ac.uk/do-engagement/understanding-audiences/policy-makers . Accessed 10 July 2018

Nichols RW (1972) Some practical problems of scientist-advisers. Minerva 10(4):603–613. https://doi.org/10.1007/BF01695907

Nichols RW (1988) Science and technology advice to government. To not know is no sin; To not ask is. Technol Soc 10(3):285–303. https://doi.org/10.1016/0160-791X(88)90011-5

Norse D (2005) The nitrogen cycle, scientific uncertainty and policy relevant science. Sci China Ser C, Life Sci / Chin Acad Sci 48(Suppl 2):807–817. https://doi.org/10.1007/BF03187120

Nutley SM, Walter I, Davies HTO (2007) Using evidence: how research can inform public services. Policy Press. https://www.press.uchicago.edu/ucp/books/book/distributed/U/bo13441009.html . Accessed 21 Jan 2019

Oakley A, Strange V, Toroyan T, Wiggins M, Roberts I, Stephenson J (2003) Using random allocation to evaluate social interventions: three recent U.K. examples. Ann Am Acad Political Social Sci 589(1):170–189. https://doi.org/10.1177/0002716203254765

Olander L, Polasky S, Kagan JS, Johnston RJ, Wainger L, Saah D, Maguire L, Boyd J, Yoskowitz D (2017) So you want your research to be relevant? Building the bridge between ecosystem services research and practice. Ecosyst Serv 26:170–182. https://doi.org/10.1016/j.ecoser.2017.06.003

Oliver KA, de Vocht F (2015) Defining “evidence” in public health: a survey of policymakers’ uses and preferences. Eur J Public Health. ckv082. https://doi.org/10.1093/eurpub/ckv082

Oliver K, Faul MV (2018) Networks and network analysis in evidence, policy and practice. Evidence and Policy 14(3): 369–379. https://doi.org/10.1332/174426418X15314037224597

Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J (2014) A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res 14(1):2. https://doi.org/10.1186/1472-6963-14-2

Ostrom E (2007a) Institutional rational choice: an assessment of the institutional analysis and development framework. Theor Policy Process . 21–64. https://doi.org/10.1017/CBO9781107415324.004

Ostrom E (2007b) Sustainable social-ecological systems: an impossibility. Presented at the 2007 Annual Meetings of the American Association for the Advancement of Science, “Science and Technology for Sustainable Well-Being ” . https://doi.org/10.2139/ssrn.997834

Pain E (2014) How scientists can influence policy. Science https://doi.org/10.1126/science.caredit.a1400042

Parkhurst J (2017) The politics of evidence: from evidence-based policy to the good governance of evidence. Routledge Studies in Governance and Public Policy. https://doi.org/10.4324/9781315675008

Book   Google Scholar  

Parry-Davies E, Newell P (2014, July, 21) 10 ways to make public engagement work for you | Higher Education Network | The Guardian. The Guardian. https://www.theguardian.com/higher-education-network/blog/2014/jul/21/10-ways-make-public-engagement-work-for-you . Accessed 10 July 2018

Petes LE, Meyer MD (2018) An ecologist’s guide to careers in science policy advising. Front Ecol Environ 16(1):53–54. https://doi.org/10.1002/fee.1761

Petticrew M, Roberts H (2008) Systematic reviews in the social sciences: a practical guide, systematic reviews in the social sciences: a practical guide. Sociol Health Illness. https://doi.org/10.1002/9780470754887

Pielke RA (2007) The honest broker: making sense of science in policy and politics. Honest Broker https://doi.org/10.1017/CBO9780511818110

POST (2017) Getting your research into parliament-Author Services. https://authorservices.taylorandfrancis.com/getting-your-research-into-parliament/ . Accessed 9 July 2018

Prehn T (2018, May 24) Thomas Prehn’s innovation diary: What I learned at MindLab. Apolitical

Quarmby S (2018) Evidence-informed policymaking: does knowledge brokering work? LSE Impact Blog. https://blogs.lse.ac.uk/politicsandpolicy/evidence-informed-policymaking-knowledge-brokers/

Reed, M. and Evely, A. (2016) How can your research have more impact? Five key principles and practical tips for effective knowledge exchange. LSE Impact blog. pp. 1–5. http://blogs.lse.ac.uk/impactofsocialsciences/2015/07/07/how-can-your-research-have-more-impact-5-key-principles-tips/ . Accessed 10 July 2018

Rose DC (2015) The case for policy-relevant conservation science. Conserv Biol 29(3):748–754. https://doi.org/10.1111/cobi.12444

Sapolsky HM (1968) Science advice for state and local government. Science 160(3825):280–284. https://doi.org/10.1126/science.160.3825.280

Sebba J (2011) Getting research into policy: the role of think tanks and other mediators. LSE Impact blog. http://blogs.lse.ac.uk/impactofsocialsciences/2011/03/07/getting-research-into-policy-the-role-of-think-tanks-and-other-mediators/. Accessed 10 July 2018

Shergold P (Interviewee) (2011, November 8) Let’s close the gap between academics and policy makers: Peter Shergold on changing the system. The Conversation

Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG (2010) Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med 24. https://doi.org/10.1186/1741-7015-8-24 .

Simis MJ, Madden H, Cacciatore MA, Yeo SK (2016) The lure of rationality: Why does the deficit model persist in science communication? Public Underst Sci 25(4):400–414. https://doi.org/10.1177/0963662516629749

Simon H (1976) Administrative behavior: A study of decision-making processes in administrative organization, PsycNET. 3rd edn. New York: Free Press. https://psycnet.apa.org/record/1976-21554-000 Accessed 5 Feb 2019

Sloman S, Fernbach P (2017) The knowledge illusion: why we never think alone

Smith KE, Stewart E (2015) “Black magic” and “gold dust”: the epistemic and political uses of evidence tools in public health policy making. Evid Policy 11(3):415–437. https://doi.org/10.1332/174426415X14381786400158

Smith KE, Stewart E (2017) We need to talk about impact: why social policy academics need to engage with the UK’s research impact agenda. J Social Policy 46(01):109–127. https://doi.org/10.1017/S0047279416000283

Srinivasan TN (2000) The Washington consensus a decade later: Ideology and the art and science of policy advice. World Bank Res Obs 15(2):265–270. https://doi.org/10.1093/wbro/15.2.265

Sturgis P, Allum N (2004) Science in society: re-evaluating the deficit model of public attitudes. Public Underst Sci 13(1):55–74. https://doi.org/10.1177/0963662504042690

Sutherland WJ (2013) Review by quality not quantity for better policy. Nature 503(7475):167. https://doi.org/10.1038/503167a

Sutherland WJ, Burgman MA (2015) Policy advice: se experts wisely, Nature 317–318. https://doi.org/10.1038/526317a .

Sy KJ (1989) As scientists and citizens: profiles and perspectives of academic advisers to state government. Sci Commun 10(4):280–303. https://doi.org/10.1177/107554708901000403

Tesar C, Dubois MA, Shestakov A (2016) Toward strategic, coherent, policy-relevant arctic science. Science 353(6306):1368–1370. https://doi.org/10.1126/science.aai8198

Thomson H (2013) Improving utility of evidence synthesis for healthy public policy: the three Rs (relevance, rigor, and readability [and resources]). Am J Public Health 103(8):e17–e23. https://doi.org/10.2105/AJPH.2013.301400

Tilley H, Shaxson L, Rea J, Ball L, Young J (2017) 10 things to know about how to influence policy with research. London. https://www.odi.org/publications/10671-10-things-know-about-how-influence-policy-research . Accessed 9 July 2018

Topp L, Mair D, Smillie L, Cairney P (2018) Knowledge management for policy impact: the case of the European Commission’s Joint Research Centre Introduction: why we need knowledge management for policy. Palgrave Commun 4(1):87. https://doi.org/10.1057/s41599-018-0143-3

Tyler C (2013, December) Top 20 things scientists need to know about policy-making. The Guarduna , pp. 1–7. https://doi.org/10.1136/bmjopen

Tyler C (2017) Wanted: academics wise to the needs of government. Nature 7. https://doi.org/10.1038/d41586-017-07744-1 .

Tyndall J (2008) How low can you go?: toward a hierarchy of grey literature, Flinders Academic Commons. http://www.alia2008.com . Accessed 21 Jan 2019

Walley J, Khan MA, Witter S, Haque R, Newell J, Wei X (2018) Embedded health service development and research: why and how to do it (a ten-stage guide). Health Res Policy Syst 16(1):67. https://doi.org/10.1186/s12961-018-0344-7

Walsh J (1973) Science policy: committee wants adviser to use active voice. Science 181(4098):421–4. https://doi.org/10.1126/science.181.4098.421

Weiss CH (1979) The many meanings of research utilization. Public Adm Rev 39(5):426. https://doi.org/10.2307/3109916

Wellstead A, Cairney P, Oliver K (2018) Reducing ambiguity to close the science-policy gap. Policy Des Pract 1(2):115–125. https://doi.org/10.1080/25741292.2018.1458397

Whitty CJM (2015) What makes an academic paper useful for health policy? BMC Med 13(1):301. https://doi.org/10.1186/s12916-015-0544-8

Article   MathSciNet   PubMed   PubMed Central   Google Scholar  

Wilkinson C (2017) Evidencing impact: a case study of UK academic perspectives on evidencing research impact. Stud Higher Educ. https://doi.org/10.1080/03075079.2017.1339028

Wolfle D (1968) Science advice for state governments. Science 160(3828):607–607. https://doi.org/10.1126/science.160.3828.607

Young A, Jones D (1994) The role of the public and federal advisory committees in providing advice to the government on science issues of papers, in American Chemical Society. Meeting. American Chemical Society. American Chemical Society. http://apps.webofknowledge.com/full_record.do?product=UA&search_mode=AdvancedSearch&qid=3&SID=D3Y7AMjSYyfgCmiXBUw&page=17&doc=162 Accessed 9 July 2018

Zahariadis N (2007) The multiple streams framework. Theor Policy Process https://doi.org/10.1081/E-EPAP2-120041405

Zevallos Z (2017) Protecting activist academics against public harassment. The Other Sociologist

Download references

Acknowledgements

The authors wish to thank the audiences of recent talks given by both authors, which helped to develop the ideas presented.

Author information

Authors and affiliations

Department of Public Health, Environments and Society, London School of Hygiene and Tropical Medicine, London, WC1H 9SR, UK

Kathryn Oliver

University of Stirling, Stirling, UK

Paul Cairney


Corresponding author

Correspondence to Kathryn Oliver.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Oliver, K., Cairney, P. The dos and don'ts of influencing policy: a systematic review of advice to academics. Palgrave Commun 5, 21 (2019). https://doi.org/10.1057/s41599-019-0232-y


Received: 08 August 2018

Accepted: 28 January 2019

Published: 19 February 2019

DOI: https://doi.org/10.1057/s41599-019-0232-y




The relevance of systematic reviews to educational policy and practice


This paper argues that educational policy and practice have much to gain from systematic review and other methods of research synthesis. Different types of reviews are considered, including narrative reviews, vote-counting reviews, meta-analyses, best evidence synthesis, and meta-ethnography. It is argued that systematic reviews allow researchers, and users of research, to go beyond the limitations of single studies and to discover the consistencies and variability in seemingly similar studies. This, in turn, allows for some degree of cumulative knowledge of educational research that is often missing in the absence of systematic reviews. Some limitations of systematic reviews and research synthesis for educational policy and practice are also discussed. The work of the Campbell Collaboration as an international organisation that promotes the use of systematic reviews in educational policy and practice is outlined.

Published abstract reprinted by permission of the copyright owner.

Authors: Davies, Philip

Published: London, England, Carfax Publishing, Taylor & Francis, 2000

Resource type: Article

Access item: Request Item from NCVER

Journal title: Oxford review of education

Journal volume: 26

Journal number: 3&4

Journal date: September-December 2000

Pages: pp.365-378

ISSN: 0305-4985

Statement of responsibility: Philip Davies

Peer reviewed: Yes

Document number: TD/TNC 65.201



Subjects: Evaluation Research Policy Industry

Keywords: Evaluation technique Policy formation Research method Educational research Educational policy Organisation



Systematic Reviews: What have they got to offer evidence based policy and practice?

Iveta Nagyova


I Nagyova, Systematic Reviews: What have they got to offer evidence based policy and practice?, European Journal of Public Health, Volume 25, Issue suppl_3, October 2015, ckv173.021, https://doi.org/10.1093/eurpub/ckv173.021


There is an increasing effort to translate research outcomes into policy decisions across a wide range of policy areas. Producers of systematic reviews use different methods to make their findings more accessible to decision-makers. These include plain language summaries, structured critical abstracts, overviews of reviews on a particular topic, and briefings that combine systematic reviews with other evidence sources. This presentation will contribute to the debate on extending the use of systematic reviews in public health and healthcare policy areas. It will examine the ways in which the systematic review presents a distinctive approach to synthesising research. It will discuss the barriers to knowledge translation and the challenges faced by researchers who use systematic reviews outside clinical medicine. It will also address the effectiveness of knowledge translation strategies aimed at policy makers and senior health service managers, as well as the wider impact of systematic reviewing on the quality of primary research, together with the tools and training resources available to support this activity.

  • evidence-based practice




Chapter 6: Systematic Review Overview

  • This ACIP GRADE handbook provides guidance to the ACIP workgroups on how to use the GRADE approach for assessing the certainty of evidence.

The evidence base must be identified and retrieved systematically before the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach is used to assess the certainty of the evidence and provide support for guideline judgements. A systematic review should be used to retrieve the best available evidence related to the Population, Intervention, Comparison, and Outcomes (PICO) question. All guidelines should be preceded by a systematic review to ensure that recommendations and judgements are supported by an extensive body of evidence that addresses the research question. This section provides an overview of the systematic review process, external to the GRADE assessment of the certainty of evidence.

Systematic methods should be used to identify and synthesize the evidence [1]. In contrast to narrative reviews, systematic methods address a specific question and apply a rigorous scientific approach to the selection, appraisal, and synthesis of relevant studies. A systematic approach requires documentation of the search strategy used to identify all relevant published and unpublished studies and the eligibility criteria for the selection of studies. Systematic methods reduce the risk of selective citation and improve the reliability and accuracy of decisions. The Cochrane Handbook provides guidance on searching for studies, including gray literature and unpublished studies (Chapter 4: Searching for and selecting studies) [1].

6.1 Identifying the evidence

Guidelines should be based on a systematic review of the evidence [2, 3]. A published systematic review can be used to inform the guideline, or a new one can be conducted. The benefits of identifying a previously conducted systematic review include the reduced time and resources compared with conducting a review from scratch [3]. Additionally, if a Cochrane or other well-done systematic review exists on the topic of interest, the evidence is likely presented in a well-structured format and meets certain quality standards, thus providing a good evidence foundation for guidelines. As a result, systematic reviews do not need to be developed de novo if a high-quality review of the topic exists. Updating a relevant and recent high-quality review is usually less expensive and requires less time than conducting a review de novo. Databases such as the Cochrane Library, MEDLINE (through PubMed or OVID), and EMBASE can be searched to identify existing systematic reviews that address the PICO question of interest. Additionally, the International Prospective Register of Systematic Reviews (PROSPERO) database can be searched to check for completed or ongoing systematic reviews addressing the research question of interest [3]. It is important to base an evidence assessment and recommendations on a well-done systematic review to avoid any potential for bias to be introduced into the review, such as the inability to replicate methods or the exclusion of relevant studies. The quality of a published systematic review can be assessed using A MeaSurement Tool to Assess systematic Reviews (AMSTAR 2) [3]. This instrument assesses the presence of the following characteristics in the review: relevance to the PICO question; deviations from the protocol; study selection criteria; search strategy; data extraction process; risk of bias assessments for included studies; and appropriateness of both quantitative and qualitative synthesis [4]. A Risk Of Bias In Systematic reviews (ROBIS) assessment may also be performed [5].

If a well-done systematic review is identified but the date of the last search is more than 6-12 months old, consider updating the search from the last date to ensure that all available evidence is captured to inform the guideline. In a well-done published systematic review, the search strategy will be provided, possibly as an online appendix or supplementary materials. Refer to the Evidence Retrieval section (6.3) for more information.

If a well-done published systematic review is not identified, then a de novo systematic review must be conducted. Once the PICO question(s) have been identified, conducting a systematic review includes the following steps:

  • Protocol development
  • Evidence retrieval and identification
  • Risk of bias assessment
  • A meta-analysis or narrative synthesis
  • Assessment of the certainty of evidence using GRADE

6.2 Protocol development

There are several in-depth resources available to support authors when developing a systematic review; this and the following sections therefore cover higher-level points and point to those resources. The Cochrane Handbook serves as a fundamental reference for the development of systematic reviews, and the PRISMA guidance provides detailed information on reporting requirements. To improve transparency and reduce the potential for bias to be introduced into the systematic review process, a protocol should be developed a priori to outline the methods of the planned systematic review. If the methods in the final systematic review deviate from the protocol (as is not uncommon), this must be noted in the final review with a rationale. Protocol development aims to reduce potential bias and ensure transparency in the decisions and judgements made by the review team. Protocols should document the predetermined PICO and study inclusion/exclusion criteria without the influence of the outcomes available in published primary studies [6]. The Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) framework can be used to guide the development of a systematic review protocol [7]. Details on the PRISMA-P statement and checklist are available at https://www.prisma-statement.org/protocols [7]. If the intention is to publish the systematic review in a peer-reviewed journal separately from the guideline, consider registering the systematic review in PROSPERO before beginning the review process [8].

To ensure the review is done well and meets the needs of the guideline authors, it is important to consider at the protocol stage, before the evidence is retrieved, what type of evidence will be searched for and included [9]. While randomized controlled trials (RCTs) are often considered the gold standard for evidence, there are several reasons why authors may choose to include nonrandomized studies (NRS) in their searches:

  • To address baseline risks
  • When RCTs are not feasible, ethical, or readily available
  • When it is predicted that RCTs will have very serious concerns with indirectness (refer to Table 12 for more information about indirectness)

NRS can serve as complementary, sequential, or replacement evidence to RCTs depending on the situation [10]. Section 9 of this handbook provides detailed information about how to integrate NRS evidence. At the protocol stage it is important to consider whether or not NRS should be included.

The systematic review team will scope the available literature to develop a sense of whether the systematic review should be limited to RCTs alone or whether reliance on NRS may also be necessary. Once these inclusion and exclusion criteria have been established, the literature can be searched and retrieved systematically.

6.3 Evidence retrieval and identification

6.3a. Searching databases

An expert librarian or information specialist should be consulted to create a search strategy that is applied to all relevant databases to gather primary literature [1]. The following databases are widely used when conducting a systematic review: MEDLINE (via PubMed or OVID); EMBASE; and the Cochrane Central Register of Controlled Trials (CENTRAL). The details of each strategy as actually performed should be recorded, including the search terms (keywords and/or Medical Subject Headings (MeSH) terms), the date(s) on which the search was conducted and/or updated, and the publication dates of the literature covered.
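As a concrete illustration, the sketch below runs and documents a PubMed search with Biopython's Entrez module. Biopython is an assumption (the handbook does not name a tool), and the search term and contact address are hypothetical, not an actual ACIP strategy.

    # Minimal sketch: run and document a PubMed search via NCBI Entrez.
    # Assumptions: Biopython is installed; the term and email are illustrative.
    from datetime import date
    from Bio import Entrez

    Entrez.email = "reviewer@example.org"  # hypothetical contact (required by NCBI)

    # Keywords and MeSH terms combined, as recommended above.
    term = ('"measles vaccine"[MeSH Terms] AND '
            '("arthralgia"[MeSH Terms] OR arthralgia[tiab])')

    handle = Entrez.esearch(db="pubmed", term=term, retmax=500)
    result = Entrez.read(handle)
    handle.close()

    # Record the strategy as actually performed: terms, date, and yield.
    print(f"Search run on: {date.today().isoformat()}")
    print(f"Term: {term}")
    print(f"Records identified: {result['Count']}")
    print(f"First PMIDs: {result['IdList'][:5]}")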

In addition to searching databases, the references of studies included in the review should also be examined to add anything relevant missed by the searches. It is also useful to examine clinical trials registries maintained by the federal government (www.clinicaltrials.gov) and by vaccine manufacturers, and to consult subject matter experts. Ongoing studies should be recorded as well, so that if the review or guideline is updated, these studies can be assessed for inclusion.

6.3b. Screening to identify eligible studies

The criteria for including and excluding evidence identified by the search, and the reasons for those decisions, should be described (e.g., population characteristics, intervention, comparison, outcomes, study design, setting, language). Screening is typically conducted independently and in duplicate by at least two reviewers. Title and abstract screening is done first, based on broader eligibility criteria; once relevant abstracts are selected, the full texts of those papers are retrieved. Full-text screening is also usually conducted by two reviewers, independently and in duplicate, against more specific eligibility criteria to decide whether each paper answers the PICO question. At both the title-and-abstract and the full-text stages, disagreements between reviewers can be resolved through discussion or the involvement of a third reviewer. The goal of the screening process is to sort through the literature and select the most relevant studies for the review. Covidence can be used to organize and manage each step of the screening process; other programs, such as DistillerSR or Rayyan, can also be used [11, 12]. The PRISMA Statement (www.prisma-statement.org) includes guidance on reporting the methods for evidence retrieval. A PRISMA flow diagram (Figure 3) presents the systematic review search process and results.

Figure 3. PRISMA flow diagram depicting the flow of information through the different phases of the systematic review evidence retrieval process, including the number of records identified, records included and excluded at each stage, and the reasons for exclusions [13].

*Consider, if feasible to do so, reporting the number of records identified from each database or register searched (rather than the total number across all databases/registers).

**If automation tools were used, indicate how many records were excluded by a human and how many were excluded by automation tools.

From: Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. doi: 10.1136/bmj.n71. For more information, visit: http://www.prisma-statement.org/
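Agreement between the two independent screeners described above is often summarized with Cohen's kappa. The following is a minimal sketch, not part of the handbook; the include/exclude decisions are hypothetical.

    # Minimal sketch: Cohen's kappa for two screeners' include/exclude calls.
    def cohens_kappa(rater_a, rater_b):
        n = len(rater_a)
        # Observed agreement: proportion of items where the raters agree.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected agreement under independence, from each rater's marginals.
        labels = set(rater_a) | set(rater_b)
        expected = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n)
                       for lab in labels)
        return (observed - expected) / (1 - expected)

    # Hypothetical decisions on ten abstracts (two disagreements).
    a = ["include", "include", "exclude", "exclude", "include",
         "exclude", "exclude", "include", "exclude", "exclude"]
    b = ["include", "exclude", "exclude", "exclude", "include",
         "exclude", "include", "include", "exclude", "exclude"]
    print(f"Cohen's kappa: {cohens_kappa(a, b):.2f}")  # 0.58 for these data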

6.3c. Data extraction

Once articles have been screened and selected, relevant information should be extracted systematically using a standardized and pilot-tested data extraction form. Table 3 provides an example of an ACIP data extraction form (data fields may differ by topic and scope); Microsoft Excel can be used to track and extract relevant details about each study. Data extraction forms typically capture information about: 1) study details (author, publication year, title, funding source, etc.); 2) study characteristics (study design, geographical location, population, etc.); 3) study population (demographics, disease severity, etc.); 4) intervention and comparisons (e.g., type of vaccine/placebo/control, dose, number in series, etc.); and 5) outcome measures. For example, for dichotomously reported outcomes, the number of people with the outcome per study arm and the total number of people in each study arm are noted. For continuous outcomes, the total number of people in each study arm, the mean or median, and the standard deviation or standard error are extracted. This is the information needed to conduct a quantitative synthesis. If this information is not provided in the study, reviewers may want to contact the authors for more information or consult a statistician about alternative approaches to quantifying the data. After extraction, the risk of bias of the included studies should be assessed using an appropriate tool, as described in Section 8.1 of this handbook.

Table 3. Example of a data extraction form for included studies

Author, Year; Name of reviewer; Date completed

  • Study characteristics: study design; number of participants enrolled*; number of participants analyzed*; loss to follow-up (for each outcome); country
  • Participants: age; sex (% female); race/ethnicity; inclusion criteria; exclusion criteria; equivalence of baseline characteristics
  • Interventions: intervention arm (dose, duration, cointerventions); comparison arm (dose, duration, cointerventions)
  • Outcomes: dichotomous outcomes recorded as n events/N for the intervention arm and n events/N for the control arm
  • Other fields: type of study (published/unpublished); funding source; study period; reported subgroup analyses

*total and per group
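A structured record mirroring the dichotomous fields above can help keep extraction consistent. This is a minimal sketch under stated assumptions: the class, field names, and study values are illustrative, not an official ACIP form.

    # Minimal sketch: one extraction record for a dichotomous outcome.
    from dataclasses import dataclass

    @dataclass
    class DichotomousOutcome:
        study: str                  # author, year
        outcome: str
        events_intervention: int    # n with the outcome, intervention arm
        n_intervention: int         # total N, intervention arm
        events_control: int
        n_control: int

        def risk_ratio(self) -> float:
            """Unadjusted risk ratio from the extracted counts."""
            risk_int = self.events_intervention / self.n_intervention
            risk_ctl = self.events_control / self.n_control
            return risk_int / risk_ctl

    # Hypothetical study values, purely for illustration.
    rec = DichotomousOutcome("Smith 2020", "arthralgia (0-42 days)",
                             events_intervention=12, n_intervention=250,
                             events_control=30, n_control=248)
    print(f"{rec.study}: RR = {rec.risk_ratio():.2f}")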

6.4 Conducting the meta-analysis

After the data have been extracted, they can, if appropriate, be statistically combined to produce a pooled estimate of the relative (e.g., risk ratio, odds ratio, hazard ratio) or absolute (e.g., mean difference, standardized mean difference) effect for the body of evidence for each outcome. A meta-analysis can be performed when there are at least two studies that report on the same outcome. Several software programs are available that can be used to perform a meta-analysis, including R, STATA, and Review Manager (RevMan).
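The core of the pooling step can be written in a few lines. The sketch below shows inverse-variance fixed-effect pooling of log risk ratios; the per-study estimates and standard errors are made-up values, not data from any ACIP review.

    # Minimal sketch: inverse-variance (fixed-effect) pooling of log risk ratios.
    import math

    log_rr = [-0.35, -0.10, -0.22]   # hypothetical per-study log risk ratios
    se = [0.15, 0.20, 0.12]          # hypothetical standard errors

    weights = [1 / s ** 2 for s in se]              # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"Pooled RR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")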

The results from a meta-analysis are presented in a forest plot, as in Figure 4. A forest plot presents the effect estimate and confidence interval for each individual study and a pooled estimate of all the studies included in the meta-analysis [14]. The square represents the effect estimate, and the horizontal line crossing the square indicates the confidence interval (CI; typically the 95% CI). The area the square covers reflects the weight given to the study in the analysis. The summary result is presented as a diamond at the bottom.

Figure 4. Estimates of effect for RCTs included in the analysis for the outcome of incidence of arthralgia (0-42 days) [15].
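A plot of this kind is straightforward to draw. The following is a minimal matplotlib sketch with made-up effect estimates, not a reproduction of Figure 4.

    # Minimal sketch: forest plot with per-study squares, CI lines, and a
    # pooled diamond. All values are illustrative.
    import matplotlib.pyplot as plt

    studies = ["Study A", "Study B", "Study C"]
    rr = [0.70, 0.90, 0.80]          # effect estimates (risk ratios)
    lo = [0.52, 0.61, 0.63]          # lower 95% CI bounds
    hi = [0.95, 1.33, 1.01]          # upper 95% CI bounds
    pooled, plo, phi = 0.79, 0.66, 0.94

    fig, ax = plt.subplots(figsize=(5, 3))
    rows = range(len(studies), 0, -1)
    for y, r, l, h in zip(rows, rr, lo, hi):
        ax.plot([l, h], [y, y], color="black")   # confidence interval
        ax.plot(r, y, "s", color="black")        # square = effect estimate
    # Pooled estimate drawn as a diamond at the bottom.
    ax.plot([plo, pooled, phi, pooled, plo], [0, 0.15, 0, -0.15, 0],
            color="black")
    ax.axvline(1.0, linestyle="--", color="grey")  # line of no effect
    ax.set_yticks(list(rows) + [0])
    ax.set_yticklabels(studies + ["Pooled"])
    ax.set_xscale("log")
    ax.set_xlabel("Risk ratio (log scale)")
    plt.tight_layout()
    plt.show()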

The two most popular statistical methods for conducting meta-analyses are the fixed-effect model and the random-effects model [14]. These two models typically generate similar effect estimates when used in meta-analyses. However, they are not interchangeable, and each model makes a different assumption about the data being analyzed.

A fixed-effect model assumes that there is one true effect size shared by all included studies; therefore, all observed differences between studies are attributed to sampling error. The fixed-effect model is used when all the studies are assumed to share a common effect size [16]. Before using the fixed-effect model in a meta-analysis, consider whether the results will be applied only to the populations in the included studies. Since the fixed-effect model provides the pooled effect estimate for the population in the studies included in the analysis, it should not be used if the goal is to generalize the estimate to other populations.

In contrast, under a random-effects model, some variability between the true effect sizes of the studies is accepted; these effect sizes are assumed to follow a normal distribution. The confidence intervals generated by the random-effects model are typically wider than those generated by the fixed-effect model, as they recognize that some variability in the findings can be due to differences between the primary studies. The weights of the studies are also more similar under the random-effects model. When variation in, for example, the participants or methods across the included studies is suspected, a random-effects model is suggested, because it weights the studies more evenly than the fixed-effect model does. The majority of analyses will meet the criteria for a random-effects model. One caveat about the selection of models: when the number of studies included in the analysis is small (<3), the random-effects model will produce an estimate of the between-study variance with poor precision. In this situation, a fixed-effect model will be a more appropriate way to conduct the meta-analysis [17].
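The between-study variance that distinguishes the two models can be estimated in several ways. The sketch below uses the DerSimonian-Laird estimator, an illustrative choice (the handbook does not prescribe one), applied to hypothetical log risk ratios chosen so that the estimated tau^2 is nonzero.

    # Minimal sketch: random-effects pooling with the DerSimonian-Laird
    # estimate of the between-study variance tau^2. Values are illustrative.
    import math

    log_rr = [-0.60, -0.05, -0.30]
    se = [0.15, 0.20, 0.12]

    w = [1 / s ** 2 for s in se]                     # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)

    # Cochran's Q and the DerSimonian-Laird tau^2 (truncated at zero).
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
    k = len(log_rr)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects weights add tau^2, so studies are weighted more evenly
    # and the pooled confidence interval widens.
    w_re = [1 / (s ** 2 + tau2) for s in se]
    pooled = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
    pooled_se = math.sqrt(1 / sum(w_re))
    print(f"tau^2 = {tau2:.4f}")
    print(f"Pooled RR = {math.exp(pooled):.2f}, "
          f"SE (log scale) = {pooled_se:.3f}")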

  1. Lefebvre C, Glanville J, Briscoe S, et al. Chapter 4: Searching for and selecting studies. In: Higgins J, Thomas J, Chandler J, et al, eds. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane; 2022. www.training.cochrane.org/handbook
  2. Committee on Standards for Developing Trustworthy Clinical Practice Guidelines, Board on Health Care Services, Institute of Medicine. Clinical Practice Guidelines We Can Trust. National Academies Press; 2011.
  3. World Health Organization. WHO handbook for guideline development, 2nd ed. World Health Organization; 2014.
  4. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008. doi:10.1136/bmj.j4008
  5. University of Bristol. ROBIS tool.
  6. Lasserson T, Thomas J, Higgins J. Chapter 1: Starting a review. In: Higgins J, Thomas J, Chandler J, et al, eds. Cochrane Handbook for Systematic Reviews of Interventions version 6.3. Cochrane; 2022. www.training.cochrane.org/handbook
  7. Moher D, Shamseer L, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1. doi:10.1186/2046-4053-4-1
  8. PROSPERO. University of York. https://www.crd.york.ac.uk/PROSPERO/
  9. Cuello-Garcia CA, Santesso N, Morgan RL, et al. GRADE guidance 24: optimizing the integration of randomized and non-randomized studies of interventions in evidence syntheses and health guidelines. J Clin Epidemiol. 2022;142:200-208. doi:10.1016/j.jclinepi.2021.11.026
  10. Schünemann HJ, Tugwell P, Reeves BC, et al. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Research Synthesis Methods. 2013;4(1):49-62. doi:10.1002/jrsm.1078
  11. DistillerSR | Systematic Review and Literature Review Software. DistillerSR.
  12. Rayyan - Intelligent Systematic Review. https://www.rayyan.ai/
  13. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. doi:10.1136/bmj.n71
  14. Deeks J, Higgins J, Altman D. Chapter 10: Analysing data and undertaking meta-analyses. In: Higgins J, Thomas J, Chandler J, et al, eds. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane; 2022. www.training.cochrane.org/handbook
  15. Choi MJ, Cossaboom CM, Whitesell AN, et al. Use of Ebola vaccine: recommendations of the Advisory Committee on Immunization Practices, United States, 2020. MMWR Recommendations and Reports. 2021;70(1):1.
  16. Borenstein M, Hedges LV, Higgins JP, Rothstein HR. Introduction to meta-analysis. John Wiley & Sons; 2021.
  17. Borenstein M, Hedges LV, Higgins JP, Rothstein HR. A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods. 2010;1:97-111. doi:10.1002/jrsm.12

ACIP GRADE Handbook

This handbook provides guidance to the ACIP workgroups on how to use the GRADE approach for assessing the certainty of evidence.

Introduction to Systematic Reviews

  • Reference work entry
  • First Online: 20 July 2022
  • pp 2159–2177

  • Tianjing Li 3 ,
  • Ian J. Saldanha 4 &
  • Karen A. Robinson 5  

A systematic review identifies and synthesizes all relevant studies that fit prespecified criteria to answer a research question. Systematic review methods can be used to answer many types of research questions. The type of question most relevant to trialists is the effects of treatments and is thus the focus of this chapter. We discuss the motivation for and importance of performing systematic reviews and their relevance to trialists. We introduce the key steps in completing a systematic review, including framing the question, searching for and selecting studies, collecting data, assessing risk of bias in included studies, conducting a qualitative synthesis and a quantitative synthesis (i.e., meta-analysis), grading the certainty of evidence, and writing the systematic review report. We also describe how to identify systematic reviews and how to assess their methodological rigor. We discuss the challenges and criticisms of systematic reviews, and how technology and innovations, combined with a closer partnership between trialists and systematic reviewers, can help identify effective and safe evidence-based practices more quickly.

Author information

Authors and Affiliations

Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, CO, USA

Tianjing Li

Department of Health Services, Policy, and Practice and Department of Epidemiology, Brown University School of Public Health, Providence, RI, USA

Ian J. Saldanha

Department of Medicine, Johns Hopkins University, Baltimore, MD, USA

Karen A. Robinson

Corresponding author

Correspondence to Tianjing Li .

Editor information

Editors and Affiliations

Department of Surgery, Division of Surgical Oncology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA

Steven Piantadosi

Department of Epidemiology, School of Public Health, Johns Hopkins University, Baltimore, MD, USA

Curtis L. Meinert

Section Editor information

Department of Epidemiology, University of Colorado Denver Anschutz Medical Campus, Aurora, CO, USA

The Johns Hopkins Center for Clinical Trials and Evidence Synthesis, Johns Hopkins University, Baltimore, MD, USA

Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA

Copyright information

© 2022 Springer Nature Switzerland AG

About this entry

Cite this entry

Li, T., Saldanha, I.J., Robinson, K.A. (2022). Introduction to Systematic Reviews. In: Piantadosi, S., Meinert, C.L. (eds) Principles and Practice of Clinical Trials. Springer, Cham. https://doi.org/10.1007/978-3-319-52636-2_194

DOI: https://doi.org/10.1007/978-3-319-52636-2_194

Published: 20 July 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-52635-5

Online ISBN: 978-3-319-52636-2

eBook Packages: Mathematics and Statistics; Reference Module Computer Science and Engineering

Share this entry

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research

Special issue of the International Review of Education published

The International Review of Education – Journal of Lifelong Learning has published a special issue entitled The Sustainable Development Goals and the Global Governance of Education, guest-edited by William C. Smith, Melanie C. M. Ehren and Sotiria Grek.

In their introduction, the guest editors note that ‘while global governance has been well developed in other fields, its discussion in the field of education is still limited’. For a long time, much attention has been given to the role played by the World Bank, the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organisation for Economic Co-operation and Development (OECD) in shaping national policy and global discourse on education.

However, research has begun to recognise the importance of intergovernmental organisations, civil society and the private sector in the global sphere, embracing a multilateral approach and an appreciation of multistakeholder governance. While this is not without risk of tensions in terms of power and influence, the editors note that the negotiation process of the Sustainable Development Goals (SDGs) has been “applauded as one of the most inclusive consultations in the history of the United Nations, largely due to the engagement with a wide spectrum of stakeholders during the creation of the 2030 Agenda”. This special issue examines the SDGs as both a product of global governance and a mechanism through which global governance is used to influence nations and regions.

Concentrating on the SDGs as a product of global governance, the first three articles address the aspects of gender, education and global governance in organisations; global summitry and the Transforming Education Summit (TES); and orchestration and the Global Education Cooperation Mechanism (GCM). The other three articles consider the SDGs as a mechanism of global governance. They look at voluntary national reviews (VNRs) as a tool of soft governance; the evolution of key themes around SDG 4 before and after 2015; and the adoption and adaption of the global goals for the African context with the African Union in the role of a regional translator of the SDGs.

COMMENTS

  1. The Relevance of Systematic Reviews to Educational Policy and Practice

    ABSTRACT This paper argues that educational policy and practice has much to gain from systematic reviews and other methods of research synthesis. Different types of reviews are considered, including narrative reviews, vote-counting reviews, meta-analyses, best evidence synthesis, and meta-ethnography. It is argued that systematic reviews allow ...

  2. The Relevance of Systematic Reviews to Educational Policy and Practice

    This, in turn, allows for some degree of cumulative knowledge of educational research that is often missing in the absence of systematic reviews. Some limitations of systematic reviews and research synthesis for educational policy and practice are also discussed. The work of the Campbell Collaboration as an international organisation that ...

  3. PDF The Relevance of Systematic Reviews to Educational Policy and Practice

    Oxford Review of Education, Vol. 26, Nos. 3&4, 2000 The Relevance of Systematic Reviews to Educational Policy and Practice PHILIP DAVIES ABSTRACT This paper argues that educational policy and practice has much to gain from systematic reviews and other methods of research synthesis. Different types of reviews

  4. The Relevance of Systematic Reviews to Educational Policy and Practice

    Abstract. This paper argues that educational policy and practice has much to gain from systematic reviews and other methods of research synthesis. Different types of reviews are considered ...

  5. Systematic Reviews in Educational Research: Methodology, Perspectives

    A review of reviews (sometimes called 'overviews' or 'umbrella' reviews) is a tertiary level of analysis. It is a systematic map and/or synthesis of previous reviews. The 'data' for reviews of reviews are previous reviews rather than primary research studies (see for example Newman et al. (2018).

  6. Reflections on the Methodological Approach of Systematic Reviews

    In an attempt to remedy this, not only were funds directed into increasing the amount of policy- and practice-relevant research on teaching and learning, but also into producing systematic reviews of findings relating to a wide range of educational issues (Davies 2000; Oakley et al. 2005).

  7. Systematic Reviews in Educational Research

    In this open access edited volume, international researchers of the field describe and discuss the systematic review method in its application to research in education. Alongside fundamental methodical considerations, reflections and practice examples are included and provide an introduction and overview on systematic reviews in education research.

  8. The trials of evidence-based practice in education: a systematic review

    Introduction. Since the late 1990s there has been an increasing shift towards the notion of evidence-based practice in education (Thomas and Pring 2004; Hammersley 2007; Bridges, Smeyers, and Smith 2009). A significant element of this has been concerned with research that has sought to identify and provide robust evidence of 'what works' in relation to educational ...

  9. The Place of Systematic Reviews in Education Research

    that wishes to inform and be informed by practice and policy. It proposes and discusses a model of educational research, showing how reviews relate to small or large-scale primary studies. Keywords: systematic research reviews, policy, practice 1. INTRODUCTION The re-emergence of systematic research reviews in the field of

  10. Systematic reviews of research in education: aims, myths and multiple

    Systematic reviews are still a controversial topic in some quarters, with the arguments for and against their use being well-rehearsed. In an attempt to advance a more nuanced approach to thinking about systematic reviewing, this paper illustrates the wide range of theoretical perspectives, methodologies and purposes that underpin the vast range of systematic review approaches now available ...

  11. Achieving Better Educational Practices Through Research Evidence: A

    A question commonly asked among educational policy makers and researchers is whether technology is "effective" for teaching and learning. In a previous paper, we examined how research is commonly designed and results interpreted to address this issue (Ross & Morrison, 2014). Some of the concerns that we raised mirror those discussed above ...

  12. Research Worth Using: (Re)Framing Research Evidence Quality for

    Relevance to practice as a criterion for rigor. Educational ... The use of research to improve professional practice: A systematic review of the literature. Oxford Review of ... to improve the use and usefulness of research in education. In Finnigan K., Daly A. (Eds.), Using research evidence in education: Policy implications of ...

  13. The Relevance of Systematic Reviews to Educational Policy and Practice

    This paper argues that educational policy and practice has much to gain from systematic reviews and other methods of research synthesis. Different types of reviews are considered, including narrative reviews, vote-counting reviews, meta-analyses, best evidence synthesis, and meta-ethnography. It is argued that systematic reviews allow researchers, and users of research, to go beyond the ...

  14. Evidence-Based Policies in Education: Initiatives and Challenges in

    The topic areas have since been expanded to include social care, public health, employment, social and economic development, as well as education. In addition to conducting systematic reviews in the educational and social sciences, the EPPI-Centre examines the research used in decision-making in policy, practice, and everyday life.

  15. The dos and don'ts of influencing policy: a systematic review of advice

    We summarise these misunderstandings below (see Table 2 for an overview), by drawing on a wider range of sources such as policy studies literature (Cairney, 2016) and a systematic review of factors ...

  16. A systematic review of screen-time literature to inform educational

    The inclusion criteria were guided by the research question (Xiao & Watson, 2019) and inspired by existing systematic reviews (Quin, 2017; Schott, van Roekel & Tummers, 2020). 1. Relevance - Only studies that were a good fit with the research question were included. This implied that studies must explicitly explore the relationship between time ...

  17. Ethical Considerations of Conducting Systematic Reviews in Educational

    Abstract. Ethical considerations of conducting systematic reviews in educational research are not typically discussed explicitly. However, systematic reviews are frequently read and cited in documents that influence educational policy and practice. Hence, ethical issues associated with what and how systematic reviews are produced and used have ...

  18. The relevance of systematic reviews to educational policy and practice

    The relevance of systematic reviews to educational policy and practice. This paper argues that educational policy and practice has much to gain from systematic review and other methods of research synthesis. Different types of reviews are considered, including narrative reviews, vote-counting reviews, and meta-analyses, best evidence synthesis ...

  19. Systematic Reviews: What have they got to offer evidence based policy

    These include plain language summaries, structured critical abstracts, overviews of reviews on a particular topic, and briefings that combine systematic reviews with other evidence sources. This presentation will contribute to the debate on extending the use of systematic reviews in public health and healthcare policy areas.

  20. A systematic literature review of barriers and supports: initiating

    The importance of educational change has prompted reviews of literature around the world. ... from the perspective of a particular stakeholder (e.g. Leithwood's (2016) systematic review of department head leadership for school improvement), or from a particular angle ... Educational Research for Policy and Practice 8 : ...

  21. Chapter 6: Systematic Review Overview

    A published systematic review can be used to inform the guideline, or a new one can be conducted. The benefits of identifying a previously conducted systematic review include reduced time and resources of conducting a review from scratch [3]. Additionally, if a Cochrane or other well-done systematic review exists on the topic of interest, the ...

  22. Introduction to Systematic Reviews

    Abstract. A systematic review identifies and synthesizes all relevant studies that fit prespecified criteria to answer a research question. Systematic review methods can be used to answer many types of research questions. The type of question most relevant to trialists is the effects of treatments and is thus the focus of this chapter.

  23. Special issue of the International Review of Education published

    The International Review of Education - Journal of Lifelong Learning has published a special issue entitled The Sustainable Development Goals and the Global Governance of Education, guest-edited by William C. Smith, Melanie C. M. Ehren and Sotiria Grek. In their introduction, the guest editors note that 'while global governance has been well developed in other fields, its discussion in ...

  24. Evidence-Informed Policy and Practice in the Field of Education: The

    THE ORGANIZATIONAL CHALLENGES OF KNOWLEDGE BROKERING. The fields of policy, science, and practice have different organizational characteristics, societal purposes and norms, and values systems that are known to be rather powerful and influential (Merton, 1973). As such, they can be seen as fields that are both producing and are themselves embedded in certain institutional logics ...

  25. Systematic Reviews in Political Science: What Can the Approach

    Davies P (2000) The Relevance of Systematic Reviews to Educational Policy and Practice. Oxford Review of Education 26 (3-4): 365-378. ... The Importance of Consociationalism for Twenty‐First Century Politics ...

  26. Social and Emotional Learning in U.S. Schools

    A large body of evidence indicates that well-implemented social and emotional learning (SEL) programs improve academic, social, and emotional outcomes for students and educators. Education policy has the potential to influence the high-quality implementation of SEL, from the school district, to the school, to the classroom.