1. Building capacity for evidence-informed policy-making: the need to connect supply with demand for evidence

Ensuring demand for evidence has become challenging in a context of global over-supply of knowledge and an increasingly complex political process. The amount of information policy-makers must consider is overwhelming and ever more complex, while individual and organisational capacity to process it can be limited and skewed by biases. At the same time, important evidence gaps remain on ‘what works’ in many policy areas. As a result, moving the frontiers of evidence, policy and people towards joint solutions involves difficult trade-offs just when evidence-informed policies are most needed. These challenges are compounded in a “post-truth” world, where the speed of reaction is dictated by a wide variety of media and where ‘facts’ may be presented without foundation or verification. Governments also face citizens’ anger, and political forces are responding to citizens’ perceptions in ways that may challenge some of the established arrangements. Maintaining the capacity of government to deliver effectively, responding to political priorities without prejudgement, is critical to meeting these new challenges.

This requires building new skills and capacity in the public sector. The challenge is to be able to foster informed judgement and to ensure that the public sector is equipped with the right skill-set to nurture evidence-informed policy-making (EIPM). New technologies and new possibilities with data analytics, a growing body of policy-relevant research and a diversity of citizen perspectives demand new skills for effective and timely policy-making.

Those interested in serving the public interest need the right skills to commission, understand and integrate evidence. Effective civil service capacity support should ideally encompass a range of interventions: from developing skills, values and norms to promote EIPM at an individual level, to supporting the adoption of procedures, incentives and resources (financial and human) that enhance the use of evidence. The civil service, particularly the Senior Civil Service, needs critical appraisal skills in order to assess the provenance of evidence, its robustness, its relevance and impact, and at the same time to meet ethical standards, while feeding into institutional set-ups that take wider political constraints into account.

Evidence has a critical role to play in improving the quality, responsiveness and accessibility of public services. It can play a role throughout the key stages of the policy cycle and is increasingly recognised as a critical part of good governance. Evidence-informed policy-making can be defined as a process whereby multiple sources of information, including statistics, data and the best available research evidence and evaluations, are consulted before making a decision to plan, implement and (where relevant) alter public policies and programmes, and to deliver quality public services (derived from Langer, Tripney and Gough, 2016[1]; OECD, 2018[2]). This report adopts a correspondingly broad definition of research evidence as ‘a systematic investigative process employed to increase or revise current knowledge’ (Langer, Tripney and Gough, 2016[1]), encompassing policy evaluation as well as scientific investigations.

Policy design benefits from ‘policy memory’, an understanding of what challenges have been experienced in the past and what previous good practices could be incorporated into the current reform effort. This underlines the importance of thorough stock taking of the existing evidence base to inform policy and programme design.

Evidence synthesis, such as systematic reviews, helps to prevent one-sided policy design, avoid duplication and ensure scarce resources are directed at areas of policy requiring further solutions. Evidence synthesis also helps to identify policies and practices that have been found to be ineffective, where caution should be exercised before further investment in the absence of further refinement and testing (Gough, Oliver and Thomas, 2013[3]; Torgerson and Torgerson, 2003[4]).

Evidence also has a critical contribution to make in policy implementation, which requires significant planning and management support. Implementation science provides an understanding of how to adapt policies to meet local needs, whilst guarding against changes that may affect outcomes: this can make the difference between a successful implementation of an intervention and one that is ineffective or potentially even harmful (Moore, Bumbarger and Cooper, 2013[5]). Gathering evidence on factors that help and hinder implementation also facilitates the dissemination of effective interventions at scale and the achievement of outcomes at the population level (Castro, Barrera and Holleran Steiker, 2010[6]).

Policy evaluation is also critical to understand why some (complex) policies work and why others do not. As one important source of policy-relevant knowledge, policy evaluation supports policy choices rooted in an evidence-informed policy-making process. Solid policy evaluation and its strategic use throughout the policy cycle can foster a range of objectives, such as policy effectiveness, value for money, accountability and the overall transparency of the policy-making process (OECD, 2018). Building evaluation capacity is an important component of international development aid, which is subject to strong accountability requirements; it also enables governments to assess how policies stimulate progress towards the Sustainable Development Goals (SDGs).

When used systematically and as a system-wide approach, regulatory impact analysis (RIA) is a critical tool to ensure greater quality of a particular type of government intervention: the use of regulatory and legislative tools (OECD, 2018[2]). RIA is an important tool to address the issue that government interventions do not always fully consider their likely effects at the time of their development. As a result, there are many instances of embarrassment, unintended consequences and ultimately negative impacts for citizens, businesses and society as a whole that could be better identified through a RIA process (OECD, 2018[2]). Carefully designed and executed RIA, undertaken at the inception of policy proposals, ensures that informed judgements can be made between policy options.

While this report starts from the premise that evidence-informed policy-making can lead to better outcomes, it also acknowledges the inherently complex conditions of the policy-making process, which necessitate multiple approaches to ensure sound public governance. Political decision makers consider many sources and forms of input, including economic, ideological, social and political factors (Newman, Fisher and Shaxson, 2012[7]), and listen to citizens and other stakeholder groups in order to make decisions in a timely manner. Alongside the civil service, ministerial advisors can also help government leaders in these areas. The OECD’s Ministerial Advisors Survey (OECD, 2011[8]) finds that advisors have a crucial role to play in helping ministers keep in touch with stakeholders and public opinion in an increasingly complex and fast-paced environment. Evidence will always be mediated through a political process that allows intuition to shape the final policy as part of a democratic process that fully respects political discretion.

There is also a need to address the potential for bias, as external voices often try to intervene in the policy-making process to preserve or promote specific interests. As a result, conflicts of interest have become another matter of public concern, one that can also affect the quality of evidence and the trust attached to it. The OECD’s ‘Guidelines for Managing Conflict of Interest in the Public Service’ respond to a growing demand to ensure integrity and transparency in the public sector (OECD, 2003[9]). The primary aim of the Guidelines is to help countries, at central government level, consider conflict-of-interest policy and practice relating to public officials. Demands for transparency in policy-making have also led to concerns over lobbying practices. In 2009, the OECD reviewed the data and experiences of government regulation, legislation and self-regulation, leading to the ‘10 Principles for Transparency and Integrity in Lobbying’. These issues are also pertinent to evidence-informed policy-making and capacity building. The commercialisation of capacity building activities can also create pressure to overstate benefits, leading to an erosion of confidence if expectations are not met (Leadbeater et al., 2018[10]). These wider issues of integrity and transparency at the interface between evidence production and policy-making do matter and need specific attention, even if this goes beyond the scope of the current report.

Despite the potential for policies to be based on evidence, in reality an effective connection with many types of research evidence in policy-making remains elusive (Newman, Cherney and Head, 2017[11]). For example, US estimates show that, under the two Obama administrations, only 1% of government funding was informed by evidence (Bridgeland and Orszag, 2013[12]). In the UK, there are also concerns about the generation and use of evidence by government. An enquiry into ‘missing evidence’ found that although the UK government spends around 2.5 billion GBP a year on research for policy, only 4 out of 24 departments maintain a database of commissioned research (Sedley, 2016[13]). A report by the National Audit Office on evaluation in government found little systematic information from the government on how it has used the evaluation evidence that it had commissioned or produced (NAO, 2013[14]).

A study of 2,084 public servants in Australia found that although public servants seem to have good access to academic research, they are not using it systematically in crafting policy analysis and advice (Newman, Cherney and Head, 2017[11]). A survey in South Africa also found that while 45% of senior policy-makers intended to use evidence during policy-making, in reality only 9% were able to do this in practice. Policy-makers identify a lack of skill and capacity as one of the reasons why they do not use research and the results of policy evaluation (Campbell et al., 2009[15]; Orton et al., 2011[16]).

One consequence of these challenges is that there often remains a discrepancy between what the evidence suggests is effective and what actually happens in practice on the ground. In health care, where this issue was first identified over two decades ago, failure to take advantage of evidence-based approaches to care, through overuse, underuse or misuse of health care, was estimated to account for as many as 91 000 deaths per year from chronic conditions, and between 44 000 and 98 000 deaths per year from preventable medical errors (Kohn, 2000[17]). Still today, one salient example concerns the importance of handwashing to prevent transmission of infection: a century after the relevant research, there remains chronic underuse of appropriate handwashing in both high-income countries and low- to middle-income countries, resulting in avoidable illness and deaths (Glasziou et al., 2017[18]). In social services, evidence-based interventions for families, which are effective in improving a range of outcomes for children, also remain underutilised globally (Kumpfer, Magalhães and Xie, 2017[19]). Underuse of effective interventions represents an inefficient use of resources and causes harm to citizens. Increasing governments’ capacity for an evidence-informed approach therefore remains a critical part of good public governance, strengthening the capacity to deliver quality public services and increase citizens’ wellbeing in cost-effective ways.

While objective evidence is one critical input to the policy-making process, advances in the behavioural sciences have demonstrated that decision-making is subject to fundamental constraints and biases. Cognitive biases, ideologies, and the competing interests of stakeholders all have the potential to influence the policy-making process. For example, motivated reasoning, the biased assessment of evidence in favour of a desired outcome, is a fundamental feature of cognitive reasoning (Mercier and Sperber, 2011[20]; Pennycook and Rand, 2018[21]). This can affect the work of analysts, as well as the approach of senior decision makers and politicians. Time constraints may mean that policy-makers use the ‘best available’ evidence, including the personal memories most directly accessible to the brain and familiar textbook approaches, rather than waiting for information from the latest scientific experiments and policy evaluations and working through the complex rationale of some scientific findings.

The OECD is addressing some of these cognitive aspects in its work on behavioural insights, examining how identifying psychological biases can help design more effective interventions, particularly on the regulatory side (OECD, 2016[22]; OECD, 2017[23]).

The role of cognitive approaches was also fully acknowledged in the OECD’s New Approaches to Economic Challenges (NAEC) initiative. NAEC has called for promoting a systemic perspective on interconnected challenges to identify the analytical and policy tools needed to understand them. For example, in 2017, NAEC organised a workshop with experts in the fields of economics, behavioural and cognitive sciences, psychology and philosophy to explore what the study of neuro-economics and of the neural processes involved in policy-making can reveal about those often less understood aspects of human behaviour (OECD, 2017[24]).

This is also an area where the EU Joint Research Centre is currently working to understand and explain the drivers that influence policy-making and political discourse, in order to optimise the way scientific evidence is used in policy-making (European Commission, 2018[25]). This can help to understand how a range of different factors including facts, values, interests and social relations affect the policy-making process, at individual and organisational level.

Building capacity for EIPM in the public sector will help to implement strategies that aim to increase the efficiency, effectiveness and responsiveness of government through better use of evidence (Harrow, 2001[26]). This is designed to improve core public governance outcomes, which depend on the efficiency, effectiveness and responsiveness of government and of public services. Building capacity for EIPM will also help the public sector to engage effectively with the plurality of evidence available in modern global economies (Newman, Cherney and Head, 2017[11]).

This report intentionally focuses on how to build capacity on the demand side of evidence, that is, how to generate effective demand for and use of evidence. This focus was chosen because there is a lack of research and of international comparative examples on how to build capacity on the demand side of evidence-informed policy-making (Newman, Fisher and Shaxson, 2012[7]). The report conceptualises the demand-supply relationship primarily in terms of the civil service/wider public sector on the one hand and the research and policy profession community on the other, recognising that members of that community, such as economists, statisticians and social researchers, can also be situated within government.

Nevertheless, this report recognises that evidence-informed policy-making necessitates consideration of both the supply of and demand for evidence. A reliable and high-quality supply of policy-relevant evidence is a necessary condition for the use of evidence. While the supply of scientific knowledge and of analysis is generally abundant, in many policy areas it may be limited or lacking for specific policy-relevant questions. Governments are also recognising that the data they hold is a strategic asset that can be used to generate evidence on the performance of policies. These supply-side issues are addressed in other OECD work, including an upcoming report on the Institutionalisation, Quality and Use of Evaluation (OECD, 2020[27]), as well as an upcoming report on the data-driven public sector (OECD, 2019[28]).

This report also addresses the structural and organisational level. A further motivation for building capacity for EIPM in a structural sense is that the majority of work on how to improve evidence use focuses on the individual level, such as training courses, linking schemes between policy-makers and researchers, and training individual researchers or policy-makers to be ‘knowledge brokers’. However, a sole focus on individuals places undue expectations on researchers and policy-makers, who may not see it as their role, or within their skill set, to transfer or make use of knowledge (Parkhurst, 2017[29]). A focus on individuals may also limit the potential to generate long-term and system-wide change. This is especially true in the context of the ‘churn’ in employees experienced in the civil service, whether as a result of standard staff rotations or of a change of government at senior levels (OECD, 2017[30]).


[17] Kohn, L., J. Corrigan and M. Donaldson (eds.) (2000), To Err is Human: Building a Safer Health System, National Academies Press, Washington, DC.

[12] Bridgeland, J. and P. Orszag (2013), Can Government Play Moneyball? - The Atlantic, https://www.theatlantic.com/magazine/archive/2013/07/can-government-play-moneyball/309389/ (accessed on 6 December 2018).

[15] Campbell, D. et al. (2009), “Increasing the use of evidence in health policy: practice and views of policy makers and researchers”, Australia and New Zealand Health Policy, Vol. 6/1, p. 21, https://doi.org/10.1186/1743-8462-6-21.

[6] Castro, F., M. Barrera and L. Holleran Steiker (2010), “Issues and Challenges in the Design of Culturally Adapted Evidence-Based Interventions”, Annual Review of Clinical Psychology, Vol. 6/1, pp. 213-239, https://doi.org/10.1146/annurev-clinpsy-033109-132032.

[25] European Commission (2018), Terms of Reference: Expert contributions to the JRC’s Enlightenment 2.0 Flagship Report, European Commission, Brussels, https://ec.europa.eu/jrc/sites/jrcsh/files/enlightenment_termsofreference-expert-contributions_180326.pdf.

[18] Glasziou, P. et al. (2017), “Evidence for underuse of effective medical services around the world”, The Lancet, Vol. 390/10090, pp. 169-177, https://doi.org/10.1016/S0140-6736(16)30946-1.

[3] Gough, D., S. Oliver and J. Thomas (2013), Learning from Research: Systematic Reviews for Informing Policy Decisions: A Quick Guide.

[26] Harrow, J. (2001), “‘Capacity building’ as a public management goal: Myth, magic or the main chance?”, Public Management Review, Vol. 3/2, pp. 209-230, https://doi.org/10.1080/14616670010029593.

[19] Kumpfer, K., C. Magalhães and J. Xie (2017), “Cultural Adaptation and Implementation of Family Evidence-Based Interventions with Diverse Populations”, Prevention Science, Vol. 18/6, pp. 649-659, https://doi.org/10.1007/s11121-016-0719-3.

[1] Langer, L., J. Tripney and D. Gough (2016), The Science of Using Science: Researching the Use of Research Evidence in Decision-Making.

[10] Leadbeater, B. et al. (2018), “Ethical Challenges in Promoting the Implementation of Preventive Interventions: Report of the SPR Task Force”, Prevention Science, pp. 1-13, https://doi.org/10.1007/s11121-018-0912-7.

[20] Mercier, H. and D. Sperber (2011), “Why do humans reason? Arguments for an argumentative theory”, Behavioral and Brain Sciences, Vol. 34/02, pp. 57-74, https://doi.org/10.1017/S0140525X10000968.

[5] Moore, J., B. Bumbarger and B. Cooper (2013), “Examining Adaptations of Evidence-Based Programs in Natural Contexts”, The Journal of Primary Prevention, Vol. 34/3, pp. 147-161, https://doi.org/10.1007/s10935-013-0303-6.

[14] NAO (2013), Evaluation in government, https://www.nao.org.uk/report/evaluation-government/.

[11] Newman, J., A. Cherney and B. Head (2017), “Policy capacity and evidence-based policy in the public service”, Public Management Review, Vol. 19/2, pp. 157-174, https://doi.org/10.1080/14719037.2016.1148191.

[7] Newman, K., C. Fisher and L. Shaxson (2012), “Stimulating Demand for Research Evidence: What Role for Capacity-building?”, IDS Bulletin, Vol. 43/5, pp. 17-24, https://doi.org/10.1111/j.1759-5436.2012.00358.x.

[27] OECD (2020), Institutionalisation, Quality and Use of Policy Evaluation: Governance Lessons from Countries’ Experiences.

[2] OECD (2018), OECD Best Practice Principles for Regulatory Policy: Regulatory Impact Assessment, OECD, Paris.

[23] OECD (2017), Behavioural Insights and Public Policy: Lessons from Around the World, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264270480-en.

[30] OECD (2017), Government at a Glance 2017, OECD Publishing, http://www.oecd-ilibrary.org/governance/government-at-a-glance-2017_gov_glance-2017-en.

[24] OECD (2017), The State of Mind in Economics, http://www.oecd.org/fr/naec/the-state-of-mind-in-economics.htm (accessed on 6 March 2019).

[22] OECD (2016), Protecting Consumers through Behavioural Insights., OECD Publishing.

[8] OECD (2011), Ministerial Advisors: Role, Influence and Management, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264124936-en.

[9] OECD (2003), Recommendation of the Council on Guidelines for Managing Conflict of Interest in the Public Service, http://www.oecd.org/governance/ethics/2957360.pdf (accessed on 3 October 2018).

[28] OECD (2019), The Path to Becoming a Data-Driven Public Sector, OECD Publishing, Paris, https://doi.org/10.1787/059814a7-en.

[16] Orton, L. et al. (2011), “The Use of Research Evidence in Public Health Decision Making Processes: Systematic Review”, PLoS ONE, Vol. 6/7, p. e21704, https://doi.org/10.1371/journal.pone.0021704.

[29] Parkhurst, J. (2017), The politics of evidence : from evidence-based policy to the good governance of evidence, Routledge, London, http://researchonline.lshtm.ac.uk/3298900/ (accessed on 23 November 2018).

[21] Pennycook, G. and D. Rand (2018), “Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning”, Cognition, https://doi.org/10.1016/J.COGNITION.2018.06.011.

[13] Sedley, S. (2016), Missing Evidence: Sir Stephen Sedley’s inquiry into delayed publication of government commissioned research, Sense about Science, London, https://www.cloisters.com/news/missing-evidence-sir-stephen-sedley-s-inquiry-into-delayed-publication-of-government-commissioned-research-report-out-now (accessed on 6 December 2018).

[4] Torgerson, C. and D. Torgerson (2003), “The Design and Conduct of Randomised Controlled Trials in Education: Lessons from health care”, Oxford Review of Education, Vol. 29/1, pp. 67-80, https://doi.org/10.1080/03054980307434.


© OECD 2020
