
Chapter 2. Making Policy Evaluation happen: What are the institutional underpinnings?

Abstract

Sound institutional set-ups can provide guidance and incentives to conduct evaluations across government in a systematic way. They can create conditions for transparency and accountability in the management of evaluations, and help promote the use of results in policy-making. This chapter presents the main aspects of the different institutional set-ups used by countries to promote evaluation as a practice within government. The chapter introduces countries’ legal and policy frameworks for evaluation and discusses the nature of policy evaluation guidelines. The chapter identifies the key institutional actors in charge of the management of evaluation within the executive, such as centres of government, ministries of finance and autonomous agencies, and underlines the role of supreme audit institutions beyond the executive. Finally, the chapter stresses the importance of co-ordination mechanisms to enable greater alignment and sharing of practices across institutions.

    

Key Findings

  • Institutionalisation matters for effective implementation and use of evaluation. It can provide useful incentives to ensure that evaluations can be conducted, as well as improve their quality through internal management and control tools.

  • Legal frameworks constitute the key basis for embedding the practice of evaluation across government in a systematic way. Around two-thirds of responding countries have created a legal basis for requiring and enabling policy evaluation.

  • Policy frameworks have also been developed in about half of the countries surveyed. These frameworks can give strategic direction for a specific sector or thematic area and can help to support the implementation of quality evaluation. They also have the potential to provide high-level guidance and clarity.

  • The centre of government (CoG) is the main actor that provides strategic direction for policy evaluation. Survey responses show that the CoG plays a crucial role in embedding a whole-of-government approach to policy evaluation and often has the widest mandates. It also provides incentives for other institutions to use evaluation findings.

  • Ministries of finance also play a very important role and have evaluation responsibilities in many countries. Policy evaluation can help enhance the quality of public expenditure and deliver improved results and performance.

  • Ministries of planning play a significant role in about a sixth of the sample. This trend can be explained by the relevance of national development plans in Latin American countries, and the mandate given to ministries of planning to evaluate these strategic plans.

  • In some countries, autonomous agencies have also taken up competences related to policy evaluation across government. They are well placed to conduct independent, transparent and accountable evaluations. Outside the executive, Supreme Audit Institutions often play an important role.

  • While institutions both within and outside the executive play key roles in establishing evaluation practices, a truly embedded evaluation system benefits from co-ordination mechanisms, which can enable greater alignment and sharing of practices across institutions.

  • There are trade-offs between ensuring the independence of evaluation, and increasing its influence when choosing where to locate the responsibility for evaluation: locating the responsibility close to political decision-making power may prove effective to commission evaluations and to follow up on commitments by different ministries. However, agencies endowed with technical autonomy may yield a perception of transparency, unbiased judgement and accountability, which is conducive to greater trust in the results.


Understanding the institutional set-up for policy evaluation

Why does institutionalisation of evaluation matter?

A sound institutional set-up can contribute to aligning isolated and unplanned programme evaluation efforts into more formal and systematic approaches, with the ability to prioritise and to set standards for methodologies and practices (Gaarder and Briceño, 2010[55]).

An institutional set-up can provide incentives to ensure that evaluations are effectively conducted. For instance, Mackay (2007[58]) describes such incentives as carrots (positive encouragement and rewards for conducting policy evaluation and utilising the findings); sticks (penalties for institutions or individual civil servants who fail to take performance and policy evaluation seriously); and sermons (high-level statements of endorsement and advocacy concerning the importance of evaluations).

Through the use and promotion of internal management and control tools in governmental institutions, a sound institutional set-up has the potential to promote the principles of transparency and accountability in the management of evaluations (Gaarder and Briceño, 2010[55]; Parkhurst, 2017[32]). It can thus help protect policy evaluation practices from undue political influence and from bureaucratic practices that undermine them. This is critical, as policy evaluation is a key component of ensuring accountability (Lázaro, 2015[54]). Finally, an institutional set-up can contribute to improving the comparability and consistency of results across time, institutions and disciplines, allowing continuity in the interpretation of data.

Nevertheless, laws or decrees, in and of themselves, do not ensure the effectiveness of a policy evaluation system. In some contexts, rigid institutional set-ups can even have adverse effects. For example, formal measures to undertake and use evaluation can create a fear of sanctions, which can prevent risk-taking, experimentation and innovation in policy and programme design (OECD, 2019[59]; Brown and Osborne, 2013[60]; Flemig, Osborne and Kinder, 2016[61]). Additionally, an excessively rigorous system may turn public institutions into formalistic bureaucracies (Schillemans and Bovens, 2011[62]). Hence, instead of being perceived as a learning tool, evaluations run the risk of legitimising or reinforcing prevailing power structures (OECD, 2019[59]).

Despite these limitations, the existence of a sound legal framework can be an important measure to promote policy evaluation and to clarify institutional responsibilities from a legal perspective. There is no single recipe for institutionalising policy evaluation across government. Policy evaluation is characterised by a high diversity of institutional approaches across countries. According to Jacob et al. (2015[63]), “few normative claims exist regarding how evaluation should be embedded in the architecture of governance”. Factors such as the political system, public administration cultures, and the rationale for evaluation shape the development and characteristics of evaluation cultures.

The subsequent sections map and identify the main institutionalisation trends, including legal and policy frameworks and key actors. The chapter focuses on two main dimensions: the existing legal and policy framework and its key features, which provide the legal basis to undertake policy evaluations and macro-level guidance on when and how to carry them out; and the identification of institutional actors with allocated resources and mandates to oversee or carry out evaluations.

While institutional set-ups differ, the analysis mostly focuses on institutions with responsibilities within the executive branch, even if it also considers the role of supreme audit institutions. Subnational governments, parliaments and civil society are certainly influential in institutionalising policy evaluation and critical in facilitating demand for evaluation. The fact that the report mainly draws on data concerning the executive branch therefore calls for caution before making any inferences about institutions outside the scope of the survey.

What does the institutionalisation of evaluation mean?

For the purpose of this report, institutionalisation is defined as the process of embedding evaluation practices into formal and systematic approaches (Gaarder and Briceño, 2010[55]; Lázaro, 2015[5]). Both exogenous and endogenous factors can trigger such a process. Studies have highlighted a variety of factors for the institutionalisation of policy evaluation, such as the existence of a democratic system with a vibrant and vocal opposition, or the presence of influential evaluation champions - such as Congress, the presidency or the minister of finance - to lead the process (Gaarder and Briceño, 2010[55]). Often, policy evaluation systems are modified following crises, whether financial or fiscal, or resulting from major disasters and pandemics, where the policy response requires some restructuring of the public administration, concerns public health and safety, or addresses a general lack of performance in a policy domain, thus calling for a better understanding of what works in that area.

In countries with longstanding traditions of evaluation, such as Australia, Canada, the United States, and the United Kingdom, government-wide evaluation cultures were initially developed with a main focus on improving the performance of public expenditure and were related to the diffusion of performance budgeting, as first initiated in the US (Jacob, Speer and Furubo, 2015[63]). In many European countries, the growth of the welfare state, particularly in the 1970s and 1980s, in a context of slow growth and constrained resources, spurred significant demand for policy evaluation. In others, such as Mexico, the need for a better understanding of the impact of sectoral policies seems to have been the primary driving force, in addition to the widely recognised need to better evaluate poverty, which was part of the impetus for setting up CONEVAL, the Mexican agency for policy evaluation.

In addition, international organisations and development banks have played an important role in the development of evaluation systems. European Union (EU) membership and EU Structural Funds, for instance, seem to have been crucial for the dissemination and promotion of policy evaluation in some European countries, given the strict accountability requirements related to the use of these funds (Olejniczak, Raimondo and Kupiec, 2016[64]).

Increasing demand by citizens for more openness, transparency and better services, and the necessity to improve public sector performance have been identified as additional factors for the development of policy evaluation systems in the recent past in countries without longstanding experience.

The institutional set-up can adopt many shapes and levels of robustness. While in some countries policy evaluations are promoted through a whole-of-government legal and/or policy framework and a central institution with responsibilities across government, in others policy evaluations are the sole responsibility of line ministries, with more fragmented approaches.

What has impeded institutionalisation so far?

Although institutionalisation is critical for building a policy evaluation system, countries face major challenges relating to: (1) the establishment of a whole-of-government strategy; (2) human resources (capacity and capabilities); (3) political interest in and demand for policy evaluation; (4) the financial resources required for policy evaluations; and (5) the availability and quality of data.

According to the survey, the greatest challenge that countries encounter in promoting evaluation across government is the absence of a strategy that promotes a whole-of-government approach to policy evaluation (Figure 1.8 in Chapter 1). The institutionalisation process involves a wide variety of actors, many of them subject to inertia or resistance to change. Without effective guidance – for instance on mandates, timing and resources – public organisations may fail to make coordinated decisions and agree on a common vision, mission and shared goals, which are all necessary steps in setting up a policy evaluation system (Cinar, Trott and Simms, 2018[65]). Successful institutionalisation can also benefit from the engagement of external stakeholders such as citizens and academia, whose participation relies on transparency and accountability mechanisms that might be difficult to put in place without proper planning (Viñuela, Ortega and Gomes, 2015[66]).

A second factor identified by countries is linked to human resources, in terms of capacity and capabilities for policy evaluation. This is partly related to the fact that civil servants lack the time to absorb the new practices associated with evaluation, especially when these are not directly related to their operational priorities (Cinar, Trott and Simms, 2018[65]; Bossuyt, Shaxson and Datta, 2014[67]).

A third major challenge perceived by respondents relates to low political interest in, and demand for, policy evaluation. Commitment at the highest political level is a key enabler of successful governance reforms (OECD, 2018[2]). Without strong political interest and demand for policy evaluation, it is difficult to find incentives for civil servants who are usually busy managing day-to-day responsibilities. It also becomes challenging for knowledge brokers, and for the advocates and transmitters of evaluations, to engage policymakers and civil servants in tasks outside their immediate area of responsibility (Liverani, Hawkins and Parkhurst, 2013[68]). The demand for evaluation may be caught in a vicious circle, where the lack of demand stems from insufficient understanding of evaluation practices and purposes, which itself comes from a lack of experience with evaluations, due to weak demand (Mackay, 2007[58]).

Another challenge concerns financial resources: institutionalising policy evaluation can be financially and labour intensive. While the production of consistent data and the dissemination of results play a crucial supporting role in the institutionalisation process (Maeda, Harrit and Mabuchi, 2012[69]), they demand capacity for consistent estimation methods, communication facilities and time (Zida et al., 2017[70]).

Finally, the limited availability and quality of data across government agencies and departments can also be a major challenge. Data is a strategic asset to improve policy design, service delivery and the operations of the machinery of government. Nonetheless, enabling the strategic use and quality of data requires human and technical capabilities, especially the willingness of public servants to use data, as well as investment in data analytical tools (van Ooijen, Ubaldi and Welby, 2019[71]).


Anchoring policy evaluation in legal and policy frameworks

An adequate legal and policy framework constitutes a solid basis to embed the practice of evaluations across government in a systematic way. However, there is no one-size-fits-all: countries have developed laws, policies and guidelines to promote evaluations in various ways.

Ensuring solid legal frameworks for policy evaluation

A majority of countries (29 countries, of which 23 are OECD members) have developed a legal framework that guides policy evaluation, as is the case in Chile, Norway, Poland, and Slovenia. The fact that over two-thirds of responding countries have created a legal basis for policy evaluation underlines the importance that OECD member and partner countries attribute to this practice across government (Figure 2.1).

Figure 2.1. Availability of a legal framework guiding policy evaluation across government

Note: n=42 (35 OECD member countries). Answers reflect responses to the question, “Is there a legal framework guiding policy evaluation across government?”

Source: OECD Survey on Policy Evaluation (2018).

The legal anchors of evaluation can vary substantially across countries. Some countries have specific stipulations in their constitutions while others focus on primary or secondary laws. An overview of the nature of the legal framework is provided in Table 2.1.

Table 2.1. Nature of legal framework for policy evaluation

Levels of legal embedding: Constitution; primary legislation (laws or equivalent); secondary/subordinate legislation.

OECD countries with a legal framework: Austria, Canada, Chile, Czech Republic, Germany, Estonia, France, Greece, Hungary, Italy, Japan, Korea, Latvia, Lithuania, Mexico, Netherlands, Norway, Poland, Slovakia, Slovenia, Spain, Switzerland, USA.

Partner countries with a legal framework: Argentina, Brazil, Colombia, Costa Rica, Kazakhstan, Romania.

OECD totals, by level of legal embedding:

  ● Yes: Constitution 4 (17%); primary legislation 21 (91%); secondary/subordinate legislation 17 (74%)

  ○ No: Constitution 19 (83%); primary legislation 2 (9%); secondary/subordinate legislation 6 (26%)
Note: n=29 (23 OECD member countries). 13 countries (12 OECD members) are not included as they answered that they do not have a legal framework guiding policy evaluation across the government. Answers reflect responses to the question, “At what level is policy evaluation across government legally embedded? (Check all that apply)”.

Source: OECD Survey on Policy Evaluation.
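As a purely illustrative check (not part of the survey methodology), the OECD-total percentages reported in Table 2.1 follow directly from the "Yes" counts over the 23 responding OECD members; a short sketch:

```python
# Illustrative check of the Table 2.1 OECD totals: each percentage is the
# "Yes" (or "No") count over the 23 OECD members reporting a legal framework.
n_oecd = 23
yes_counts = {
    "Constitution": 4,
    "Primary legislation": 21,
    "Secondary/subordinate legislation": 17,
}

for level, yes in yes_counts.items():
    no = n_oecd - yes
    # the .0% format rounds to the whole-percent figures shown in the table
    print(f"{level}: Yes {yes} ({yes / n_oecd:.0%}), No {no} ({no / n_oecd:.0%})")
```

Running this reproduces the table's rounded figures (e.g. 4/23 ≈ 17% and 21/23 ≈ 91%).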

Constitutional provisions for policy evaluation

Requirements for policy evaluation can be incorporated at the level of the constitution. This reflects a significant commitment and provides an important mandate to the government in this area. Moreover, it institutes policy evaluation as a long-term policy, as the incorporation of elements in a constitution reflects a great degree of consensus among different political actors, which usually goes beyond electoral mandates. Germany, France, Mexico, Switzerland, Colombia, and Costa Rica have specific provisions within their constitution relating to policy evaluation (Box 2.1).

Box 2.1. Examples of policy evaluation-related principles found in national constitutions

The Constitution of Switzerland requires the Federal Assembly to ensure the evaluation of federal measures in terms of their effectiveness.

The German Constitution states the necessity to conduct evaluations of financial assistance grants on a regular basis (Article 104b).

In France, Article 47-2 of the Constitution of the 5th Republic mandates the French Supreme Audit Institution (Cour des Comptes) to assist the Government and Parliament in the evaluation of public policies, among other duties (see Box 2.2 on France’s embedded policy evaluation framework).

Moreover, the Mexican Constitution’s Article 134 requires that economic resources be managed and used efficiently, effectively and transparently, and that the results of such use be assessed by technical agencies, in order to guarantee an optimal budget allocation.

Colombia’s Constitution contains several articles that establish evaluative activities, such as the requirement for the national planning entity to organise the evaluation of public administrations’ management and performance (Article 343).

Lastly, the Costa Rican Constitution’s Article 11 prescribes the evaluation of the results and accountability of all public institutions as well as the fulfilment of civil servants’ duties.

Source: OECD (2018) Survey on Policy Evaluation.

As Box 2.1 shows, constitutional provisions can give responsibilities to particular entities and can define approaches and scopes of evaluation practices. Constitutional provisions might differ in terms of who they mandate to conduct evaluations. For instance, the French constitution mandates the supreme audit institution to assist the government and the parliament in policy evaluation. In Mexico, the constitution states that technical agencies should evaluate the use of national resources, while in Colombia, the national planning entity is required to organise evaluations.

Constitutional mandates may also have specific provisions regarding the scope and object of evaluation. For example, the Swiss Constitution requires federal measures to be evaluated in terms of their effectiveness, whereas the French Constitution requires legislative proposals to be evaluated in terms of their impact. Even more specifically, the German Constitution mandates the regular evaluation of financial assistance grants, while the Colombian one requires the evaluation of public administrations’ management and performance. Constitutional provisions can also focus on the particular duties that civil servants have, as is the case in Costa Rica.

These provisions can largely shape the configuration of a country’s evaluation system. While in countries such as Colombia, the evaluation system is linked to development planning, the German system is closely oriented to spending reviews and the role of the Parliament in assessing the Federal Government’s performance. The main elements of the French legal framework are presented below (Box 2.2).

Box 2.2. France’s embedded policy evaluation framework

France implemented a legal framework for policy evaluation embedded at three different levels: the constitution, primary legislation and secondary legislation.

At the constitutional level, article 47-2 mandates the French Supreme Audit Institution (Cour des Comptes) to assist the parliament and the government in evaluating public policies. The results are made available to government and citizens through publication of the evaluations. Evaluative activities are also expressed in articles 39 and 48 of the Constitution.

In terms of primary legislation, Articles 8, 11 and 12 of Organic Law No. 2009-403 on the application of Article 34-1 of the Constitution require legislative proposals to be subject to ex ante impact assessment. Assessment results are then annexed to the legislative proposal as soon as it is sent to the Supreme Administrative Court (Conseil d’État).

At the level of secondary legislation, Article 8 of Decree No. 2015-510 states that all draft legal proposals affecting the missions and organisation of decentralised State services should be subject to an impact assessment. The main objective is to check the alignment between the objectives pursued by the proposal and the resources allocated to decentralised services.

Additionally, France has a number of circulars from the prime minister relating to evaluation: a circular of 12 October 2015 concerns the evaluation of norms, and one of May 2016 the impact evaluation of new draft laws and regulatory texts.

Source: OECD (2018) Survey on Policy Evaluation, Constitution de la Ve République, and the respective articles from Legifrance (https://www.legifrance.gouv.fr).

Primary and secondary legislation on policy evaluation

Primary laws or equivalent and secondary legislation (decrees, ministerial resolutions or equivalent) represent the most frequent legal basis for institutionalisation.

Primary legislation frameworks differ substantively across countries. Some countries have framed evaluation as part of larger public management laws. This is the case of the United States, with the Foundations for Evidence-Based Policymaking Act of 2018 (see Box 2.3). In the field of policy evaluation, the act mandates the Office of Management and Budget (OMB) to develop guidance and advice on policy evaluation. The law includes a provision that requires agencies to submit annual evaluation plans, which shall "describe key questions for each significant evaluation study that the agency plans to begin in the next fiscal year". It also mandates government agencies to: (1) designate a senior employee as evaluation officer to coordinate evidence-building activities; (2) develop capacity assessments, which "shall contain an assessment of the coverage, quality, methods, effectiveness, and independence of the statistics, evaluation, research, and analysis efforts of the agency”; (3) implement OMB guidance for programme evaluation; and (4) identify “key skills and competencies, establish or update an occupational series, and establish a new career path” on programme evaluation (115th Congress, 2019[7]).

Box 2.3. Building the institutional foundations for evidence-based policymaking in the US

The Foundations for Evidence-Based Policymaking Act of 2018, which resulted from the work of a Bipartisan Commission of Congress, was signed and enacted into law on January 14th, 2019. The Evidence Act aims for federal agencies to better acquire, access, and use evidence to inform decision-making. It includes three Titles and has a significant impact in terms of the institutionalisation of evidence across federal government:

  1. Federal Evidence-Building Activities

  2. Open Government Data Act

  3. Confidential Information Protection and Statistical Efficiency (CIPSEA)

Accordingly, the Act mandates evidence-generating activities across agencies, open government data, confidential information protection, as well as skills and capacity building. This Act matters in that it elevates programme evaluation as a key agency function, calling on agencies to strategically and methodically build evidence in a coordinated manner.

The implementation approach of this Act is phased and coordinated. Its first and foundational phase (“Learning Agendas, Personnel and Planning”) centres on developing learning agendas, identifying relevant personnel, their roles and responsibilities, and undertaking planning activities. The purpose of the learning agendas is to promote deliberate and strategic planning of evidence-building activities. In creating the learning agendas, agencies are required to identify and set priorities for evidence building, in consultation with various stakeholders. The second element – personnel – involves three newly designated positions (Chief Data Officer, Evaluation Officer, and Statistical Official), who spearhead activities pertaining to Phase 1 of the implementation, including reporting requirements. These individuals also serve on a Data Governance Body within their respective agency, which is concerned with managing data as a strategic asset to fulfil the agency’s mission as well as addressing the priorities identified in the agency’s learning agenda.

The last element of the first phase of implementation consists of various planning activities. This includes developing annual evaluation plans, which outline the specific evaluations that each agency intends to carry out to address its learning agenda priorities. Furthermore, agencies are required to undertake capacity assessments in order to assess their ability to carry out evidence-building activities like performance measurement, fact-finding, etc. Finally, agencies are also required to identify data to answer the questions outlined in their learning agendas.

The learning agenda activity is intended to drive all other evidence-building activities. Other aspects of the implementation of the Act include: “Open Data Access & Management”, “Data Access for Statistical Purposes”, and finally “Programme Evaluation”. The fourth phase of implementation will consist of the Office of Management and Budget (OMB) issuing guidance on Programme Evaluation Standards and Best Practices as well as on Evaluation Skills and Competencies (with the Office of Personnel Management).

Sources: (The Statistical Reform Promotion Council, 2017[72]), (The Committee on Promoting EBPM, 2017[73]), (The Cabinet Secretariat, 2019[74]), (United States Office of Management and Budget, 2019[75])

Other countries have issued specific legislation on policy evaluation, such as Japan with the Government Policy Evaluations Act (see Box 2.4) and Korea with the Framework Act on Government Performance Evaluation (see Box 2.5).

Box 2.4. Institutionalisation of Policy Evaluation in Japan

To provide the policy evaluation system with a clear-cut framework and improve its effectiveness, Japan enacted the Government Policy Evaluations Act in 2001, which provides an overarching framework for policy evaluation and clarifies the role of each ministry in the evaluation of policies. It requires the appropriate implementation of policy evaluations prior to the adoption of policies, and specifies how the Ministry of Internal Affairs and Communications (MIC) should conduct policy evaluations.

Under the act, the “Basic Guidelines for Implementing Policy Evaluation” guide each ministry in developing a basic evaluation plan, in order to promote a whole-of-government approach to evaluation. The ministries’ “Basic Plans for Policy Evaluation” cover a period of three to five years and incorporate policy evaluation into the public management cycle, such as “Plan, Do, Check, Act (PDCA)”.

The MIC has also developed various guidelines to support the use and publication of evaluations (e.g. the “Policy Evaluation Implementation Guidelines” (2005) and the “Guidelines for Publication of Information on Policy Evaluation” (2010)).

The Administrative Evaluation Bureau (AEB)

The AEB formulates standard rules and guidelines for conducting policy evaluations, aggregates all policy evaluation reports across the government, and conducts reviews to improve the quality of those evaluations. In 2012, the AEB introduced a standard format across ministries for the ex post evaluation of major policies, which made evaluation reports easier to read and compare. The AEB also set up a Portal Site for Policy Evaluation in FY 2012, which provides links to policy evaluation data, including analysis sheets and evaluation reports published by each ministry, to ensure transparency and accountability.

Sources: (Ministry of Internal Affairs and Communications, 2017[76]), (The Ministry of Internal Affairs and Communications, 2010[77]). (The Ministry of Internal Affairs and Communication, 2005[78])

Box 2.5. Korea’s Policy Evaluation System

The 100 Policy Tasks of the Republic of Korea

The Republic of Korea’s “Five-Year Plan”, set by the State Affairs Planning Advisory Committee, consists of an overarching vision, policy goals and strategies, and the 100 Policy Tasks. The tasks were selected through a review of over 200 pledges and nearly 900 breakdowns of pledges proposed to, and received from, the public during the presidential election campaign. The 100 policy tasks include a system for comprehensive monitoring and management, conducted in close cooperation between the Presidential Commission on Policy Planning, the Office for Government Policy Coordination, and the Government Performance Evaluation Framework. Government performance evaluation implementation plans are released annually, providing overviews of the evaluation structure and more detailed evaluation plans organised by level of governance, from central administrative agencies to local governments.

Framework Act on Government Performance Evaluation

The Government of Korea established a performance management system by enacting the Framework Act on Government Performance Evaluation (FAGPE) in 2006. This law aims to improve the efficiency, effectiveness, and accountability of government administration by establishing the fundamental principles, institutional foundation, management strategies and execution plans on which government performance evaluations can be implemented (Roh, 2018[79]).

Prior to 2006, the performance management and evaluation systems of the Korean government were dispersed across different programmes under various agencies (Yang and Torneo, 2016[80]). The act aimed to improve and integrate the different performance management systems of all government organisations, enabling their performance to be managed systematically under the Government Performance Evaluation Committee (GPEC). This committee oversees all government performance management and evaluation systems, and provides consistency and stability in government performance management (Roh, 2018[79]).

Sources: OECD (2018) Survey on Policy Evaluation, (Roh, 2018[79]), (Yang and Torneo, 2016[80]).

Finally, some countries have regulated policy evaluation as part of their budgetary governance framework, as is the case in Estonia, Germany and Italy, among others. In Germany, the Federal Budget Code of 19 August 1969, as last amended in 2017, provides the guidelines for presenting and authorising the annual budget. This code includes procedures for budget implementation, auditing and evaluation (OECD, 2014[81]). The Federal Budget Code is binding on the Federal Parliament, the Federal Government and individual federal ministries.

Secondary legislation can also support and streamline constitutional mandates or primary legislation. This type of legislation usually provides more detailed policy frameworks on evaluation, containing specific information concerning annual evaluation plans, timing, selection criteria, etc. This is the case in the Netherlands, where the Regulation of the Minister of Finance (15-03-2018) lays down rules for periodic evaluation based on the Accountability Act of 2016. In these arrangements, the Ministry of Finance establishes the scope, actors involved, timing, and minimum quality criteria required in an evaluation.

Some countries frame their evaluation system only through regulations and acts issued by the executive. This is the case of Norway, Slovakia, Argentina, Brazil, Kazakhstan and Romania. The cases of Norway and Argentina illustrate this situation:

  • Norway, through the Ministry of Finance’s Regulation on Financial Management in Central Government (2003), mandates all agencies to conduct evaluations to gather information on efficiency, objective achievement and results, in all or some of their areas of responsibility and activities.

  • Argentina’s Decree 292/2018 on monitoring and evaluation guidelines lays down two main objectives: (1) to mandate the National Council for the Coordination of Social Policies as the body responsible for preparing and executing the Annual Policy and Social Programmes Monitoring and Evaluation Plan; and (2) to provide technical assistance to ministries and national organisations for the evaluation and monitoring of policies, programmes, plans and projects with social impact.

Creating a supportive policy framework

A policy framework is generally a document or set of documents that provides strategic direction, guiding principles and courses of action to the government for a specific sector or thematic area. Policy frameworks can include different legislative acts, but this is not always the case; some ministries adopt evaluation policies internally, in the form of guidelines, without issuing any specific regulation. In any case, a clear policy framework can help to:

  • conduct different aspects of policy analysis in a credible and rigorous manner, which supports the implementation of quality evaluation.

  • provide high-level guidance and clarity for institutions by outlining overarching best practices and goals, generally taking the form of an institution-wide guidance document that describes implementation or standards for policies, establishes hierarchies and categories, and outlines the rationale behind stated goals.

Half of the countries surveyed (21 in total, including 17 OECD countries) have developed a policy framework for organising policy evaluation across government (Figure 2.2). Among those, a number of countries have implemented both a legal and a policy framework (19 in total, including 15 OECD countries), including Estonia, Japan, Korea, Colombia and Costa Rica.

Figure 2.2. Availability of a policy document on policy evaluation across government

Note: n=42 (35 OECD member countries). Answers reflect responses to the question, “Apart/under the umbrella of a legal framework, has your government developed a policy framework for organising policy evaluation across government?”

Source: OECD Survey on Policy Evaluation (2018).

The Czech Republic established a “Methodological guidance for Evaluation in the 2014-2020 programming period”. Taken together with its legal framework, which has a more operational focus, this methodological guidance contributes to creating a comprehensive evaluation framework. Korea’s Framework Act on Government Performance Evaluation (FAGPE) is complemented by two policy documents: a Basic Plan of the Government Performance Evaluation (2017-2019) and the Operational Rules of the Government Performance Evaluation Committee.

Policy frameworks tend to allocate institutional responsibilities for evaluation, with a total of 17 countries surveyed (of which 14 are OECD members) outlining this in their evaluation policy (see Table 2.2). In Germany, for instance, the Instructions for Economic Efficiency Investigations (Arbeitsanleitung Einführung in Wirtschaftlichkeitsuntersuchungen) are intended to guide the conduct of economic evaluations.

Table 2.2. Features of the framework for policy evaluation

Countries with a policy framework: Canada, the Czech Republic, Germany, Estonia, France, Great Britain, Greece, Japan, Korea, Latvia, Lithuania, Mexico, Poland, Slovakia, Spain and the USA (OECD members), as well as Argentina, Brazil, Colombia and Costa Rica.

OECD totals by feature (● Yes / ○ No):

  • Objectives or expected results of the evaluation policy: 13 (81%) / 3 (19%)

  • Policy areas (thematic) or programmes covered by the evaluation policy: 10 (63%) / 6 (37%)

  • Responsibilities of government institutions concerning policy evaluation: 14 (88%) / 2 (12%)

  • Requirement for government institutions to undertake regular evaluation of their policies: 12 (75%) / 4 (25%)

  • Standards for ethical conduct: 9 (56%) / 7 (44%)

  • Requirements related to the quality standards of evaluations: 9 (56%) / 7 (44%)

  • Requirements related to stakeholder engagement: 9 (56%) / 7 (44%)

  • Requirements related to evaluation reporting: 11 (69%) / 5 (31%)

  • Requirements related to the use of evaluation findings in policy making: 9 (56%) / 7 (44%)

Note: n=20 (16 OECD member countries). 21 countries (18 OECD member countries) are not included as they answered that they do not have a policy framework for organising policy evaluation across government. Data is not available for Ireland. Answers reflect responses to the question, “Which elements do(es) the document/s referred to under Q4 and Q5 cover concerning policy evaluation across government? (Check all that apply)”. The documents referred to under Q4 and Q5 are the ones stipulating a policy framework organising policy evaluation across government. The option "Other" is not included.

Source: OECD Survey on Policy Evaluation (2018).

Evaluation plans, or requirements for government institutions to undertake regular evaluation of their policies, are also common among respondents (16 overall, including 12 OECD countries). Such evaluation plans exist in Spain, where an Action Plan for the evaluation of the Spending Review, an Annual Plan of Normative Impact, and a Master Plan of Spanish Cooperation for 2018-2021 have been created. In Mexico, an Evaluation Programme has been published every year since 2007.

Interestingly, some countries that lack an overarching legal framework have nevertheless created a policy framework to promote evaluation. These include Canada, the United Kingdom and Ireland. The United Kingdom’s Treasury has a set of policy frameworks that give the government guidance on evaluation, such as the Green Book, which is particularly focused on the centre of government’s1 responsibilities concerning evaluation. The other guide, the Magenta Book, is to be used by policy analysts and policy makers at all levels of government, central and local, as well as the voluntary sector, to institutionalise good evaluation practices. Another key reference is the European Commission’s guidance, which presents key principles for the design and implementation of evaluations (Innovate UK, 2018[82]).

Many policy frameworks state objectives or expected results (17 surveyed countries overall, including 13 OECD members). Canada, for instance, which does not have a legal framework for policy evaluation, has a Policy on Results (see Box 2.6). One of its objectives is to improve the achievement of results across government, and it expects federal departments to measure and evaluate their performance to ultimately improve policies, programmes and services (Canada Treasury Board, 2016[8]).

Box 2.6. Canada's Policy on Results

In July 2016, the Government of Canada launched a Policy on Results, which seeks to improve the achievement of results across government and better understand the desired results and the resources used to achieve them.

The responsibility for the implementation of this policy falls mainly under the Treasury Board. This body is responsible for promoting the use of evaluation findings in policy making and for defining and updating the evaluation policy.

The policy establishes that all government departments should have an evaluation unit, while line ministries are responsible for establishing a departmental results framework. For the implementation of the policy, the Treasury Board of Canada has, among others, the following competences:

  • It can require departments to undertake specific evaluations and participate in centrally-led evaluations;

  • It can initiate or undertake resource alignment reviews;

  • It approves line ministries’ departmental results frameworks and any changes to their organisations’ core responsibilities.

This policy complements Canada’s Financial Administration Act, which requires the evaluation of grants and contributions programs every five years.

Source: (Canada Treasury Board, 2016[8]); https://laws-lois.justice.gc.ca/PDF/F-11.pdf.

Policy frameworks may focus on particular policy areas and programmes: 12 countries surveyed (including 10 OECD countries) have implemented thematic or sector-specific policy frameworks. Ireland, for example, does not have a formal legal framework for policy evaluation, and instead has guidelines for Regulatory Impact Analysis and a public spending code.

In addition to institutionalising responsibilities, policy frameworks may also include provisions regarding the quality and use of evaluation, such as standards for the ethical conduct of evaluators and requirements for stakeholder engagement, reporting, and the use of findings in policy-making. Country practices concerning the promotion of quality and use of evaluation in policy frameworks are discussed in Chapter 3. Germany, Korea, Greece and Costa Rica have comprehensive policy frameworks that include these elements on quality and use. Greece’s Manual of Inter-Ministerial Coordination includes all of the elements in the table above except standards for ethical conduct. Another example is Costa Rica’s National Evaluation Policy (Box 2.7).

Box 2.7. The National Evaluation Policy (PNE) in Costa Rica

The National Evaluation Policy (PNE) was established by the Ministry of National Planning and Economic Policy (Mideplan), in coordination with line ministries, academics and civil society. It provides a framework to strengthen the practice of evaluation in the public sector.

The PNE particularly aims at improving public management by promoting evaluation as an instrument for decision-making, learning, control of public resources, and accountability. The PNE focuses on four axes of action:

  1. Evaluation in the Management cycle for Development Results: this aims to increase the evaluability conditions of public interventions through joint work between Mideplan and the Ministry of Finance (e.g. through technical and methodological guidelines for the use of evaluations in social programmes).

  2. Institutionalisation of an evaluation framework: this aims to improve the design and management of public interventions based on evidence (e.g. a public repository with previews of evaluations in the public sector).

  3. Capacity building in evaluation: this aims to increase the quality of evaluations made in the public sector (e.g. training civil servants on the design and implementation of evaluation).

  4. Stakeholders’ participation: this aims to increase the participation of stakeholders in the evaluation process (e.g. spaces for dialogue and interaction between different actors within government and external ones such as civil society organisations).

Source: (Mideplan, 2018[83]).

The role of guidelines

Guidelines and other supporting documents, such as White Books on evaluation, can assist policy makers in conducting policy evaluation successfully. Evidence shows that the majority of countries (31 countries, including 26 OECD members) have guidelines to support the implementation of policy evaluation across government (see Table 2.4). Such guidelines generally intend to assist all those participating in the implementation of a policy in better planning, commissioning and managing its evaluation (OECD-DAC, 2009[84]). Effective implementation requires a structured approach to how the policy will successfully deliver services and programmes, considering risks and implementation issues (Department of the Prime Minister and Cabinet (Australia), 2013[85]). Countries such as Australia, Finland, New Zealand and Portugal only have guidelines, and do not report a policy or a legal framework.

Guidelines mostly refer to the reporting of evaluation results, followed by the identification and design of evaluation approaches, quality standards of evaluations, and use of evaluation evidence (See Chapter 3). Around half of countries’ toolkits refer to the design of data collection methods, independence of evaluations, and stakeholder engagement in the evaluation process. Canada presents a significant number of guidelines for the implementation and evaluation of policies (see Box 2.8) and the United States recently updated and consolidated its guidance on programme evaluation standards and practices as part of the implementation of the Foundations for Evidence Based Policy Making Act of 2018 (Box 2.3) (Office of Management and Budget, 2020[86]).

Box 2.8. The role of frameworks and guidelines for the promotion of evidence-informed policy making (EIPM) in Canada

The Results Division of the Secretariat, successor of the Centre of Excellence for Evaluation (CEE), is responsible for evaluation activities within the Government of Canada, under the 2016 Policy on Results [See Box 2.6 on Canada’s Policy on Results]. It offers useful resources, information and tools to government professionals and anyone else interested in evaluation at the federal level. Overall, the Secretariat has functional leadership regarding the implementation, use and development of evaluation practices across government. To support quality EIPM, the Results Division offers a number of useful guidelines:

  • Guide to Rapid Impact Evaluation (RIE): this practical guide gives a range of methods for conducting RIE and advice on when and how it can be used in government. More precisely, it defines RIE, the time and resources needed to conduct one, its key benefits and challenges, and support for planning, analysis and reporting of the results.

  • Assessing Programme Resource Utilization When Evaluating Federal Programmes: this document is made for evaluators of federal government programmes, programme and financial managers, and corporate planners. It helps them understand, plan and undertake evaluations that include the assessment of resource utilization. It provides them with methodological support to ensure that they have the knowledge and competencies to conduct quality and credible programme resource utilization assessments.

  • Theory-Based Approaches to Evaluation: Concepts and Practices: this document introduces key concepts of theory-based approaches to evaluation and their application to federal programmes. It should be complemented by additional readings and advice for step-by-step guidance on conducting evaluations.

  • Supporting Effective Evaluations: A Guide to Developing Performance Measurement Strategies: this guide supports departments, programme managers and heads of evaluation in developing performance measurement to support evaluation activities. It provides recommendations, tools and frameworks for conducting clear and concise performance measurement strategies as well as guidance regarding the roles of those in charge of developing such strategies.

Sources: (Treasury Board Secretariat, 2019[87]), (Treasury Board Secretariat, 2013[88]), (Treasury Board Secretariat, 2010[89]).

More specifically, a number of countries (11, of which 9 are OECD members) set out guidelines for the technical quality and good governance of evaluations, such as Japan (see Box 2.9). This is further explored in Chapter 3.

Box 2.9. Basic Guidelines for Implementing Policy Evaluation by the Ministry of Internal Affairs and Communications (MIC) of Japan

Japan enacted the Government Policy Evaluations Act (Act No. 86 of 2001). The act clarifies administrative organs’ obligation to evaluate policies after their adoption under a clear-cut plan, requires the appropriate implementation of policy evaluations prior to adoption, and specifies which policy evaluations should be conducted by the MIC.

For this purpose, the MIC presented the Basic Guidelines for Implementing Policy Evaluation to support the development of such plans by individual administrative organs and the Government’s policy evaluation activities, in accordance with Article 5 of the Act.

These guidelines include the purposes of both ex-ante and ex-post evaluation, different methods to measure policy impacts, recommendations regarding the use of insights from academic experts, the incorporation of evaluation results in policymaking, and the public reporting of those results.

The guidelines also indicate that the MIC shall host liaison meetings with representatives from each ministry in order to foster close communication among them and ensure the smooth and efficient implementation of a policy evaluation system.

Source: Adapted from the Ministry of Internal Affairs and Communications (2017[76]).


The principal institutions in charge of policy evaluation and their mandates

Institutions within the Executive

The survey results show that the centre of government is the principal institution in charge of policy evaluation across government. This is the case in 27 countries, including 23 OECD countries. The second most common actor is the ministry of finance, in 26 countries, including 22 OECD countries. Ministries of planning, development or equivalent have competences related to policy evaluation across government in 7 countries, including 4 OECD countries. Ministries of public sector reform or equivalent have such competences in 12 OECD countries.

In fact, there is often a dual arrangement: policy evaluation is carried out and institutionalised in all sectoral ministries and agencies, with some form of co-ordination from the centre, either through CoG processes or, for budget- and resource-related aspects, through the ministry of finance. The central institution therefore has a key role in managing the evaluation eco-system, making sure that evaluation can take place at the right time and in the right place and that it can feed into decision making. In some cases, the central institution can also develop its own capacity for evaluation, either through the evaluation of public spending in the ministry of finance, or the evaluation of cross-cutting government priorities and strategies at the centre of government.

The results show that 40 countries have at least one institution with responsibilities related to policy evaluation across government. Consequently, a great majority of countries have chosen to allocate the mandate of co-ordinating policy evaluation across the executive to either one or several institutions (Figure 2.3).

Figure 2.3. Institutions within the Executive that have competences related to policy evaluation across government

Note: n=42 (35 OECD member countries). Answers reflect responses to the question, “Which of the following institutions within the executive have competences related to policy evaluation across government? (Check all that apply)”. Answer option “other” is not displayed.

Source: OECD Survey on Policy Evaluation (2018).

In only one OECD country (two countries in total) are competences for policy evaluation across government centralised in a single institution. The fact that in the majority of OECD countries policy evaluation is conducted by more than one institution underlines the importance of steering and co-ordination capacities. As illustrated in Box 2.10, the institutions responsible for policy evaluation across government differ across OECD countries. In most countries, the centre of government has the broadest mandate and is thus well placed to conduct this horizontal task.

Box 2.10. Examples of institutions responsible for Policy Evaluation in OECD countries

Centre of government: Finland

The centre of government of Finland, which consists of the Ministry of Finance, the Ministry of Justice and the Prime Minister’s Office, exercises the competences related to policy evaluation. In order to enhance the use of evidence, the government established a policy analysis unit under the Prime Minister’s Office in 2014. The unit has the mandate to commission research projects and present evidence to support the government’s decisions on future strategic and economic policy.

Ministry of finance: Chile

The Budgets Directorate (Dirección de Presupuestos), a dependent body of the ministry of finance (Ministerio de Hacienda), is the technical body in charge of ensuring the efficient allocation and use of public funds. To do so, the directorate carries out ex ante, impact and value-for-money evaluations of different governmental policies and programmes. Moreover, it monitors the implementation of government programmes to collect performance information, which is then introduced into the budgetary process and communicated to stakeholders.

Autonomous agency: Mexico

The National Council of Social Development Policy Evaluation (Consejo Nacional de Evaluación de la Política de Desarrollo Social, CONEVAL) was created in 2004 as a decentralised body with budgetary, technical and management autonomy. It has the mandate (embedded in the Constitution in 2014) to set standards, co-ordinate the evaluation exercises of the National Social Development Policy and its subsidiary actions, and provide guidelines to define, identify and measure poverty. The agency carries out or contracts out evaluations of the social policies developed by the Mexican government.

Decentralised system: Norway

Norway has a decentralised evaluation system. The Norwegian Government Agency for Financial Management (DFØ) plays an important role in issuing guidelines and guidance materials. Evaluations are conducted by individual agencies, and a general portal, managed jointly by the DFØ and the national library, contains all public evaluations from 2005 until today. Norway has moreover established a network of evaluators, chaired by the DFØ (EVA-Forum). Norway has a strong tradition of evaluation in some sectors, such as development policy and education. However, ex post evaluations are only carried out for certain regulations, in response to requests from parliament, external groups or the audit office, or due to legal requirements.

Decentralised system: Sweden

The Swedish organisational structure includes specific sectoral agencies within the executive whose main task is to perform analyses and evaluations for the government’s needs. In addition to the Division for Structural Policy at the Ministry of Finance, there are also seven sector-specific evaluation agencies in Sweden2, in areas such as Growth, Transport and Crime Prevention. There is also an agency for public management (Statskontoret, the Swedish Agency for Public Management), which is the Government’s organisation for analyses and evaluations in all areas of state and state-funded activities.

Sources: Government Policy Analysis Unit (2017[90]) and Secretaria de Desarrollo Social (2015[91]).

The degree of involvement of institutions in different types of evaluations differs. The OECD report Budgeting and Public Expenditures in OECD Countries 2019 finds that line ministries and agencies have a very active role in both ex ante and ex post evaluations (OECD, 2019[12]). Supreme audit institutions take a more substantial role in ex post reviews (see Figure 2.4).

Figure 2.4. Governance of ex ante and ex post evaluation

Note: Data for Israel and the United States are not available. Information on data for Israel: http://dx.doi.org/10.1787/888932315602.

Source: OECD (2018), OECD Performance Budgeting Survey, Question 30, OECD, Paris.

Steering and co-ordinating policy evaluation: the role of the centre of government

The CoG is known by different names in different countries, such as the Chancellery, Cabinet Office, Office of the President, Office of the Government, etc. It plays an increasingly active role in policy development, co-ordination and monitoring across public administration. The CoG aims to secure a strong, coherent and collective strategic vision - especially as it relates to major cross-departmental policy initiatives (OECD, 2014[92]). Its role can be crucial in policy evaluation across government, as it requires co-ordination across different departments and ministries.

According to the OECD survey on the Organisation and Functions of the Centre of Government (2017), policy co-ordination across government and monitoring the implementation of government policy are among the five key responsibilities of the CoG across the OECD.

The OECD survey on policy evaluation also finds that the CoG plays a crucial role in embedding a whole-of-government approach to policy evaluation. An assessment of the mandate of countries’ CoG reflects its role as a guiding institution in policy evaluation across government (Table 2.3). In 16 OECD countries (18 countries in total), the CoG’s mandate includes defining and updating the evaluation policy, while in 15 countries (including 14 OECD countries) it includes providing incentives for carrying out policy evaluations. In 19 of the 23 OECD countries (21 of the 27 countries in total) in which the CoG has a role in policy evaluation across government, it is tasked with promoting the use of policy evaluation. The CoG is also in charge of providing guidelines for policy evaluation. Only seven OECD countries (10 in total) mentioned that the CoG is responsible for defining the course of action for commissioning policy evaluations.

Table 2.3. Mandate of Centre of Government for policy evaluation

Countries in which the CoG has a role in policy evaluation: Australia, Canada, Estonia, Finland, France, Germany, Great Britain, Greece, Hungary, Iceland, Israel, Italy, Korea, Latvia, Lithuania, the Netherlands, New Zealand, Portugal, Slovakia, Slovenia, Spain, Turkey and the United States (OECD members), as well as Argentina, Brazil, Costa Rica and Romania.

OECD totals by element of the mandate (● Yes / ○ No):

  • Defining and updating the evaluation policy: 16 / 6

  • Developing guideline(s): 14 / 8

  • Providing incentives for carrying out policy evaluations: 14 / 8

  • Undertaking policy evaluations: 12 / 10

  • Requiring government institutions to undertake specific policy evaluations: 14 / 8

  • Defining course of action for commissioning evaluations: 7 / 15

  • Developing skills, competences and/or qualifications of evaluators: 12 / 10

  • Developing standards for ethical conduct: 5 / 17

  • Ensuring quality standards of evaluations: 10 / 12

  • Promoting stakeholder engagement in evaluations: 11 / 11

  • Overseeing the evaluation calendar and reporting: 9 / 13

  • Promoting the use of evaluation: 19 / 3

  • Serving as a knowledge centre and providing a platform for exchange: 13 / 9

  • Following up on evaluation reports: 11 / 11

Note: n=27 (23 OECD member countries). Answers reflect responses to the question, "Please list the duties and responsibilities of this/these institution/s related to policy evaluation across government" for the Centre of Government / Presidency / Prime Minister’s Office / Cabinet Office or equivalent. Answer option "Other" is not included. The UK approach to policy evaluation splits these responsibilities amongst the Cabinet Office, the Treasury, and professional analysts across government (e.g. the Government Economic Service and the Government Social Research Service), with most institutions also developing their own supplementary guidance and some form of ministerial/management response to the results.

Source: OECD (2018) Survey on Policy Evaluation.

In sum, the CoG has a vital role to play in providing strategic direction for policy evaluation, as well as incentives for other institutions to use evaluation findings (see Chapter 3). CoG institutions can facilitate policy evaluation across government, due to the centre’s role in steering and co-ordination. This is consistent with the OECD cross-country analysis of CoG functions, which concludes that “more collaborative strategies for achieving policy goals suggest a role for the centre that is less about being a watchdog or internal auditor and more about providing active facilitation, support and implementation advice to ministries or groups of ministries. This is especially the case for meeting cross-cutting policy goals” (OECD, 2018[93], Centre Stage 2: The organisation and functions of the centre of government in OECD countries).

Close proximity to strategic decision making yields a number of benefits for the CoG’s role in promoting evaluation. Placing the principal institution in charge of policy evaluation close to political power can be interpreted as a sign of political commitment. In Germany, for example, the main institution in charge of policy evaluation is located in the Chancellery (Bundeskanzleramt), which has government-wide co-ordination powers. In addition, the CoG usually has the political leverage to ensure that the findings of evaluations are subsequently used in forthcoming steps of the policy cycle. Still, this may require balancing the trade-offs between influence and the perceived independence of the evaluation.

An assessment of the role of the CoG also requires discussing who takes responsibility for policy evaluation. In a majority of countries (17 countries, including 16 OECD countries), civil servants head the policy evaluation unit within the CoG. In only nine countries (of which six are OECD countries), such as Greece and Hungary, is this role given to political appointees. In all of these nine countries apart from Israel, the appointees heading evaluation are replaced when the government changes. By contrast, civil servants responsible for evaluation in the CoG are rarely replaced.

In most cases, the CoG finances its evaluation unit through its own budget, with 19 countries, including 15 OECD countries, doing so. Only five countries, such as Finland, directly and independently allocate funds from the national budget to their CoG’s evaluation units. Argentina is the only country where this unit (the National Directorate for Information System, Monitoring and Evaluation of Social Programs, SIEMPRO) is financed through both the CoG’s budget and the national budget. In contrast, the Slovak evaluation unit is financed by European structural and investment funds.

Ministry of Finance / Ministry of Economy / Ministry of the Treasury or equivalent

In many countries, the institutionalisation of policy evaluation originated in economic incentives, with the aim of enhancing the quality of public expenditure and improving the results of government spending. In Australia, for example, the ministry of finance is responsible for spending review procedures. In the OECD survey, the ministry of finance was the second most frequently mentioned institution (26 countries in total, including 22 OECD countries) with competences for policy evaluation across government (see Box 2.11 for details on the Netherlands).

Box 2.11. The role of the Ministry of Finance in institutionalising policy evaluation: the experience of the Netherlands

The Dutch government launched the “Insight into Quality” plan following the coalition agreement for 2017-21. This initiative aims to improve understanding of policies’ efficiency and effectiveness. Accordingly, the ministry of finance works in co-ordination with all departments to understand and strengthen the government-wide structure of the country’s evaluation system and the added value of its policies for citizens. To achieve these goals, the following initiatives have been established:

  • The ministry of finance started monitoring and ensuring the application of the article associated with the revised Budget Law of January 2018 to strengthen the evaluation system. It will work on a proposal to this effect with other ministries, explicitly involving parliament as a recipient of the information in this process.

  • The ministry of finance also stimulates mutual learning across ministries through interdepartmental seminars.

  • Following the initiatives of other ministries, the ministry of finance has set up a policy quality and evaluation committee, composed of core department officials, agencies and external experts, to increase internal attention to, and the quality of, policy evaluations.

  • The finance ministry worked with other ministries to update the Integrated Assessment Framework by focussing it on effectiveness and efficiency.

The ministry ensures compliance with rules on the evaluation of fiscal policy, guaranteeing that these evaluations are carried out, and co-ordinates interdepartmental policy studies.

Ministries of finance have been identified as the principal actors undertaking policy evaluations in 18 countries in total, including 15 OECD countries. They less frequently have the role of defining the course of action for commissioning evaluations, which was mentioned by only two countries. Ministries of finance have developed (or are developing) guidelines for policy evaluations across government in 13 countries, including 12 OECD countries. In 11 countries, including eight OECD countries, ministries of finance are tasked with following up on evaluation reports. These findings reflect the ministries’ role as co-ordinating institutions that provide guidance and, in some cases, supervision of other ministries’ evaluation activities.

Box 2.12. Key practices for value-for-money assessment: a key responsibility for the Ministry of Finance or equivalent

The 2019 OECD report Budgeting and Public Expenditures in OECD Countries assessed key practices for value-for-money assessments. The report finds the following:

“In principle, the general assessment of costs and benefits of an investment should be the driving force for the prudent evaluation of investment decisions (OECD, 2015). Value for money (VfM) can be defined as what a government judges to be an optimal combination of quantity, quality, features and price (i.e. cost), expected over the whole of the project’s lifetime. VfM can be measured in absolute cost-benefit terms (Do the benefits exceed the costs?) or in relative terms (Is one form of delivery more cost-effective than the other? – see next section). In many cases, VfM is assessed using a combination of quantitative (such as cost-benefit analysis) and qualitative tools. A majority of surveyed countries conduct both absolute and relative VfM assessments, either for all projects or for those projects above a certain threshold, regardless of whether the projects are delivered via PPPs or traditionally procured. In some countries, such as Slovakia and Austria, VfM assessments are only compulsory for some line ministries (e.g. railways in Austria).

There are several techniques for assessing value for money. Cost-benefit analysis (including the total cost of ownership over the life-cycle) is the most popular approach (89%), followed by net present value (70%) and cash-flow estimates over the project cycle (70%). About half of the countries also use other tools, including the internal rate of return, analysis of users’ willingness to pay, or business case methodology. In many cases, these VfM assessments combine several approaches. Denmark, for example, calculates and reports the socio-economic value and conducts business cases. Norway follows an alternative approach, assessing all PPP projects and all large investment projects (over NOK 750 million) within a general quality assurance scheme.”

Source: OECD (2019[12]), Budgeting and Public Expenditures in OECD Countries 2019, https://dx.doi.org/10.1787/9789264307957-en.
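The quantitative tools cited in the box above, such as net present value (NPV), the internal rate of return (IRR) and absolute cost-benefit comparisons, can be illustrated with a short sketch. The cash flows and the 5% discount rate below are purely hypothetical, and the helper functions are illustrative only, not part of any official OECD appraisal methodology:

```python
# Illustrative value-for-money arithmetic with hypothetical figures.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows, starting at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return via bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive: the IRR lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: 100 upfront cost, net benefits of 30 per year for 5 years.
flows = [-100, 30, 30, 30, 30, 30]
rate = 0.05  # illustrative social discount rate

project_npv = npv(rate, flows)                          # absolute VfM test
benefit_cost_ratio = npv(rate, [0] + flows[1:]) / 100   # discounted benefits / cost

print(f"NPV at 5%: {project_npv:.2f}")       # positive means benefits exceed costs
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
print(f"IRR: {irr(flows):.2%}")
```

A positive NPV (equivalently, a benefit-cost ratio above 1) indicates that the project passes the absolute VfM test; comparing NPVs across delivery options (e.g. PPP versus traditional procurement) corresponds to the relative test described in the box.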

Like the CoG, ministries of finance have a mandate that includes promoting the use of evaluation findings. These findings can be embedded in processes led by ministries of finance, such as ensuring value for money, or can feed into budget-related processes such as spending reviews (see Figure 2.5).

Figure 2.5. Mandate of the Ministry of Finance/Economy/Treasury or equivalent

Note: n=26 (22 OECD member countries). Answers reflect responses to the question, "Please list the duties and responsibilities of this/these institution/s related to policy evaluation across government" for the Ministry of Finance / Ministry of Economy / Ministry of the Treasury or equivalent. Answer option "Other" is not included.

Source: OECD Survey on Policy Evaluation (2018).

Of the 26 countries that noted that the ministry of finance or equivalent has a role in policy evaluation across government, 19 also mentioned the CoG. This may reflect responsibilities for different parts of the evaluation system: for example, the centre of government may lead on evidence-informed policy-making processes and the co-ordination of the regulatory process, while the ministry of finance is more concerned with budgeting and expenditure management. In any case, there is a need for a clear-cut allocation of responsibilities in order to ensure alignment.

Further analysis shows that evaluation units within ministries of finance are mostly headed by civil servants. This is the case in 19 of the 26 countries in which the ministry of finance has a mandate relating to policy evaluation, including 16 OECD countries. In Spain and Brazil, the civil servants responsible for evaluation are usually replaced when the government changes. Some ministries of finance appoint political appointees as heads of their evaluation units, as is the case in six countries, such as Romania. Half of these political appointees are replaced when the government changes (Chile, Hungary and Mexico).

More than half of the evaluation units hosted by ministries of finance are financed from the budget of the ministry itself, as is the case in 16 countries (13 of them OECD members). Only five countries, including four OECD countries (Germany, Finland, the Netherlands and Norway), allocate funds directly from the national budget to the evaluation units within their ministries of finance. In contrast, Lithuania’s Economic Analysis and Evaluation Unit, hosted in the ministry of finance and responsible for the evaluation of EU structural funds, is financed through Technical Assistance allocations.

Ministry of Planning, Development, or equivalent

Only 7 of the 42 countries surveyed for this report, including 4 OECD countries (Chile, the Czech Republic, Poland and Slovakia), have established a dedicated ministry of planning, development or equivalent with competences related to policy evaluation across government. In Latin American countries such as Brazil, Colombia and Costa Rica (see Box 2.13 for details on Brazil and Colombia), the ministry of planning takes an active role in policy evaluation. Latin American countries account for four of the seven countries in which such a ministry has an active role in policy evaluation across government, which can be traced back to the strong role that national development plans have in the region. These strategic plans tend to be evaluated by the ministries of planning or development, giving those ministries a mandate for policy evaluation across government.

Box 2.13. Committee for Monitoring and Evaluation of Federal Public Policies (CMAP) in Brazil and Colombia’s National Planning Department

The Committee for Monitoring and Evaluation of Federal Public Policies (CMAP) was created in 2016 under the co-ordination of the Brazilian ministry of planning. Its objective is to improve the actions, programmes and public policies of the federal executive branch, as well as the allocation of resources and the quality of public spending. The CMAP brings together representatives of the ministries of planning, budget and management, the ministry of finance, the Civil House of the Presidency of the Republic and the Office of the Comptroller General of the Union, with special participation by members of public and private institutions.

Its role is to define the policies, programmes and actions that will be monitored and evaluated, and propose guidelines to improve them by using thematic committees. Moreover, the committee makes recommendations to policy makers on the adoption, adjustments and improvements of policies, under principles of transparency and accountability.

Colombia’s National Planning Department (DNP) is explicitly entrusted national planning responsibilities. The department encompasses the National Public Management Results Evaluation System (SINERGIA) and the Public Policy Monitoring and Evaluation Division (DEPP). Through SINERGIA and the DEPP, DNP has gradually solidified the incorporation of evaluation in government-wide policy implementation. SINERGIA also represents the role international organisations can play in institutionalisation, as the Inter-American Development Bank provided assistance in the development of SINERGIA (Lazaro, 2015[5]).

Source: Diário oficial da Uniao (2016), “Portaria interministerial nº 102”, 7 April 2016.

Figure 2.6. Mandate of the Ministry of Planning, Development, or equivalent

Note: n=7 (4 OECD member countries). Answers reflect responses to the question, "Please list the duties and responsibilities of this/these institution/s related to policy evaluation across government" for the Ministry of Planning, Development, or equivalent. Answer option "Other" is not included.

Source: OECD Survey on Policy Evaluation (2018).

In four countries (Chile, Brazil, Colombia and Costa Rica), the ministry of planning, development or equivalent finances its evaluation activities from its own budget. In the three other countries where this ministry has evaluation-related responsibilities, evaluation units are financed through other means, such as EU funds in the case of Poland.

The role of autonomous agencies

In addition, in some countries autonomous agencies have taken up competences related to policy evaluation across government. One example of such an autonomous agency with a role in evaluation is Mexico’s National Council for the Evaluation of Social Development Policy, CONEVAL (see Box 2.14). Another example of a country with several autonomous agencies contributing to policy evaluation is Italy (Box 2.15).

In Denmark, autonomous agencies can have an ad hoc role across government when the evaluation of a specific policy is requested by parliament. However, no institution within the executive has default competences related to policy evaluation across the Danish government.

Box 2.14. The National Council for the Evaluation of Social Development Policy of Mexico (CONEVAL)

CONEVAL’s mandate covers between 100 and 130 federal programmes from year to year, all of which are required to carry out internal evaluations governed by CONEVAL’s guidelines. The organisation also directly oversees more than a dozen evaluations per year. Results from the evaluations are influential: in 2013-2014, half of the evaluated programmes were substantially refocused and 41% of programmes underwent corrections to activities or operational aspects (Lázaro, 2015[54]).

CONEVAL is the main vehicle for the institutionalisation of policy evaluation within Mexico, through initiatives that clarify, cement and advance M&E processes. The 2007 issuance of the mandatory General Guidelines for federal programme evaluations provided definitions, regulations, principles and requirements for the components of the monitoring and evaluation system. In 2008, a tracking system for evaluations was implemented; subsequent efforts moved that system onto digital platforms and made it accessible to the wider public. Training seminars were also organised for programme managers (Gaarder and Briceño, 2010[55]). These efforts may have the long-term effect of structuring evaluation practices and increasing capacity, even outside CONEVAL-affiliated entities, thereby embedding an evaluation culture (Lázaro, 2015[54]).

Sources: (Lázaro, 2015[54]), (Gaarder and Briceño, 2010[55]).

Box 2.15. The role of autonomous agencies in Italy

Italy offers a good example of the wide distribution of roles for policy evaluation, which can be given to autonomous agencies acting either as knowledge brokers or, as part of their duty as regulators, as assessors of regulatory impacts. Overall, these functions remain sectoral.

A number of autonomous agencies have their own legislative frameworks requiring them to perform policy evaluation, which de facto gives them a knowledge brokerage role. There are no fewer than three agencies in the education area: INDIRE, the National Institute for Documentation, Innovation and Educational Research, the oldest research organisation related to the Italian Ministry of Education; INVALSI, the national institute for the evaluation of the education and training system; and ANVUR, the national institute for the evaluation of universities and research, for higher education. In the area of labour and social inclusion, INAPP, the national institute for public policy analysis, plays a similar role.

In addition, some other agencies have sectoral responsibilities that include monitoring tasks. ANPAL, the agency for active labour market policies, is responsible for the analysis, monitoring and evaluation of active labour market policies, mainly in terms of quantifying indicators on the degree of achievement of the annual objectives for these policies and monitoring the expected results.

In addition, regulators such as CONSOB, Italy’s securities and exchange commission, also practise evaluation when introducing new regulations. The example of the regulation on equity crowdfunding was provided to the OECD when developing the Policy Framework (Impact Assessment Office, 2018[94]). Crowdfunding provides an alternative to bank loans, the supply of which dwindled during the financial crisis. The evaluation included a mapping of burdens and a qualitative cost-benefit analysis. The evaluation contributes to the regulatory impact assessment and makes it possible to check whether the objectives intended by legislators have been achieved.

Source: Italian submissions to the Secretariat.

Returning to the trade-off between political independence and influence, autonomous agencies tend to be more independent than other institutions or ministries. The prevalence of self-determined evaluation agendas contributes to this independence. In Mexico, the United States and Costa Rica, where autonomous agencies have a role in policy evaluation across government, they are financed through the budget allocation of the hosting institution.

Institutions beyond the Executive

A number of actors and institutions outside the executive have a crucial role in policy evaluation and its institutionalisation. Supreme Audit Institutions (SAIs) have competences for policy evaluation at the central/federal level in 27 OECD countries (33 countries in total) (Figure 2.7). Around one-third of OECD countries (11, and 13 countries in total) involve congress or the parliamentary budget office in policy evaluation. In seven OECD countries (eight countries in total), none of the aforementioned institutions has a mandate for policy evaluation. In addition to performing specific evaluations, SAIs can also play a useful role by offering general guidance on evaluating the evaluation system as a whole.3

Figure 2.7. Institutions beyond the executive that have competences for policy evaluation at central/federal level

Note: n=42 (35 OECD member countries). Answers reflect responses to the question, “Which of the following institutions beyond the executive have competences on policy evaluation at central/federal level? (Check all that apply)”.

Source: OECD Survey on Policy Evaluation (2018).

The OECD collaborates closely with SAIs and has assessed their roles in detail. Among other publications, the OECD issued Good Practices in Supporting Supreme Audit Institutions in 2010 (OECD, 2010[95]). In 2016, the OECD launched the report Supreme Audit Institutions and Good Governance: Oversight, Insight and Foresight (OECD, 2016[21]). The report mapped the activities of ten leading SAIs in Brazil, Canada, Chile, France, Korea, the Netherlands, Poland, Portugal, South Africa and the United States (see Box 2.16 for details on Chile). In particular, it examined how these SAIs assess key stages of the policy cycle, and provided examples and case studies of SAIs’ activities supporting the integration of international good practices into policy formulation, implementation and evaluation.

Box 2.16. Chile’s Supreme Audit Institution’s role in strengthening good governance

In 2014, the OECD conducted a Public Governance Review of the SAI of Chile. The Report finds that “Chile's supreme audit institution (Contraloría General de la República de Chile or CGR) is at the forefront of an evolution of Supreme Audit Institutions and has undertaken ambitious initiatives for institutional strengthening, capacity development, transparency and citizen participation. The CGR has introduced strategic planning, restructured its workforce and become an exemplary institution with respect to transparency within the Chilean public sector”.

The CGR has a role to enhance good public governance, and improve accountability and the quality of government decision-making. The CGR can provide objective and credible information that is widely recognised as useful.

Source: (OECD, 2014[96]).

In addition, Supreme Audit Institutions can perform a significant policy evaluation function in a number of countries, even if these activities may overlap with performance audits in some cases. Parliament can request evaluations, or performance audits with an evaluative approach. Good practices exist in several countries, including Switzerland, the United States and France. In Switzerland, the Federal Audit Office has developed a specific competence centre for evaluation,4 which includes professional evaluators and follows the international guidelines of supreme audit institutions in this area (ISSAI 300) as well as the standards of the Swiss Evaluation Society. In the United States, the Government Accountability Office (GAO) is an independent, non-partisan agency that works for Congress, providing Congress and federal agencies with objective, reliable information to help the government save money and work more efficiently. The GAO demonstrates many best practices in evaluation, in particular the accessibility of its reports and the clarity of its lines of inquiry, together with a wealth of analytical results. Finally, in France, the Supreme Audit Institution has received an official mandate for evaluation through the Constitution (Box 2.17).

Box 2.17. The French Supreme Audit Institution’s role in conducting policy evaluation

In 2008, the role of the French Supreme Audit Institution (Cour des Comptes) in evaluating public policies was embedded in the French Constitution. In 2011, this constitutional competence was translated into law, enabling the French SAI to conduct evaluations either at the request of Parliament or of its own accord.

The Cour des Comptes operates in accordance with INTOSAI guidelines adopted in 2016 (mentioned earlier) as well as with the specific professional standards adopted in 2014. To date, it has carried out and published over 20 evaluations on an array of subjects ranging from health to housing to education.

In conducting its evaluations, the Cour des Comptes leverages both quantitative and qualitative data and collaborates with external laboratories to carry out the analysis. It also actively involves relevant stakeholders in the evaluation exercises.

Source: Input from the Cour des Comptes (France).

INTOSAI’s working group on programme and public policy evaluation conducted a survey on the implementation of the INTOSAI GOV 9400 guidelines (see Box 2.18). The results show that 31% of responding SAIs perform public policy and programme evaluations. Over 60% of SAIs indicated that they only conduct performance audits, without an evaluative approach, whereas 6% carried out performance audits with an evaluative approach. Overall, discussions within the SAI community reflect this duality of roles between audit strictly speaking and evaluation, and the different professional cultures, skills and approaches that the two may require.

Box 2.18. A cross-country perspective on Supreme Audit Institutions’ role in policy evaluation

The International Organisation of Supreme Audit Institutions (INTOSAI) has issued the INTOSAI GOV 9400 Guidelines on the evaluation of public policies and monitors their implementation through its working group on evaluation. These guidelines seek to harness SAIs’ potential by providing quality standards to enable them to appropriately select topics to be evaluated, involve stakeholders and experts, plan evaluations, choose tools and methods, and prepare and publish reports. The guidelines reiterate that, given their independent institutional position, grasp of evaluation methodologies and knowledge of public policies, SAIs are naturally suited to evaluating public policies.

Note: The guidelines are available at: www.intosaicommunity.net/wgeppp/wp-content/uploads/2019/08/INTOSAI-GOV-9400_ENG.pdf

Source: (INTOSAI and Cour des Comptes (France), 2019[97]), Input from Cour des Comptes (France)

Moreover, although in a majority of respondent countries the SAI, alongside parliament, can launch policy evaluations, only 40% of respondents indicated that they conduct more than three evaluations per year. These findings can be attributed to resource and time constraints. The most commonly reported difficulties for SAIs in conducting evaluations include the timeframe for carrying out evaluations (22%), the use of methodological tools (16%) and insufficient human resources (13%). Institutional factors, such as the degree of understanding and acceptance of the evaluation process within the SAI, may also create obstacles. Parliamentary budget offices can also play a role (OECD, 2019[12]).

The rapid growth of independent fiscal institutions, including independent parliamentary budget offices and fiscal councils, gives them an important role, as one of their key functions is to “produce, assess and/or endorse macroeconomic or fiscal forecasting, monitoring compliance with fiscal rules, policy costing, long-term fiscal sustainability analysis, and supporting the legislature in budget analysis” (OECD, 2019[12]).

In sum, different actors beyond the executive branch of government play significant roles in policy evaluation. Evaluations internal or external to the executive branch serve different functions and may yield different contributions. External evaluations offer stronger guarantees of transparency and accountability, but less scope for promoting use as an internal management tool, as when evaluation is co-ordinated through the centre of government, a ministry of finance or an equivalent body (budget central authority, planning ministry, presidency or internal control office). The two types of evaluations should probably be seen as complementary to one another.

Non-institutional actors and international organisations

This report focuses on the institutional actors and mechanisms that may exist as part of the public sector’s capacity to promote the use and quality of evaluation. However, two types of actors were not addressed through the survey and may play a significant role in countries.

The first includes non-governmental organisations (NGOs) and citizens. NGOs often play a significant role as suppliers of evaluation, driving and pushing the issues that are part of their core mission. Equally, they are often strong users of evaluation products that serve their purpose. In some countries, particularly those with a less developed public sector, NGOs can help fill gaps in the evaluation ecosystem, ensuring some data collection and providing evaluations that help inform policy decisions. In addition, citizens have a significant role to play, not only in informing and engaging in evaluation processes, but also as a group in promoting the use of evaluations and of decisions that can make a difference in their daily lives. Citizen engagement in evaluation processes and the role of public deliberation form a full topic that goes beyond the scope of the current report, but is worth mentioning: citizens are certainly a significant part of a healthy evaluation ecosystem.

The second type of actor is international organisations (IOs). IOs, including the OECD, play a significant role in promoting evaluation and peer learning at the domestic level. Different international organisations have different mandates and functions, some more in the economic and financial domain and some more geared towards specific topics. OECD work itself often has an evaluative nature, as it takes a cross-country approach to assess policy outcomes and identify best practices. The question is the extent to which international organisations can have an impact on the evaluation ecosystem as a whole at the country level. For example, the OECD recently completed a full review of the Irish Government Economic and Evaluation Service and is conducting country-specific work in several other countries (OECD, 2020[98]).

Co-ordination mechanisms

Co-ordination bodies or mechanisms, such as commissions and integrated services, enable the alignment and sharing of practices across institutions within and beyond government, a necessary condition for fostering a sound evaluation culture. For example, Mexico’s National Council for the Evaluation of Social Development Policy (CONEVAL) aims to improve co-ordination for better evaluation activities across government and across states. In the United States, an interagency council bringing together evaluation officers has been set up to serve as a forum for exchanging information and advising the Office of Management and Budget on issues affecting the evaluation function, such as evaluator competencies, best practices for programme evaluation, and evaluation capacity building.

Another example of an integrated cross-government service building analytical capacity for evaluation to improve policy making is the Irish Government Economic and Evaluation Service (IGEES). IGEES is a horizontal structure co-ordinated by the Department of Public Expenditure and Reform that supports the whole Irish government in delivering evidence-informed policy making. IGEES staff work across all departments (OECD, 2020[98]).

However, 18 countries (including 15 OECD countries) do not have regular consultations on policy evaluation issues between the government (executive) and the SAI. Twelve countries (including 10 OECD countries) do not have formal consultation mechanisms but conduct ad hoc consultations, as is the case, for example, in Austria, the Czech Republic, Denmark, Greece, Finland and Ireland. Only seven countries (including five OECD countries), such as Estonia and Hungary, confirmed the existence of a formal co-ordination mechanism for regular consultation. Lithuania, Latvia and Estonia have a formal co-ordination mechanism to avoid overlaps between planned or ongoing evaluations.

References

[7] 115th Congress (2019), Public Law No: 115-435 (01/14/2019) - Foundations for Evidence-Based Policymaking Act of 2018, https://www.congress.gov/bill/115th-congress/house-bill/4174.

[18] Acquah, D., K. Lisek and S. Jacobzone (2019), “The Role of Evidence Informed Policy Making in Delivering on Performance: Social Investment in New Zealand”, OECD Journal on Budgeting, Vol. 19/1, https://dx.doi.org/10.1787/74fa8447-en.

[131] AEVAL (2015), Practical guide for the design and implementation of public policy evaluations (Guía práctica para el diseño y la realización de evaluaciones de políticas públicas), http://www.aeval.es/export/sites/aeval/comun/pdf/evaluaciones/Guia_Evaluaciones_AEVAL.pdf (accessed on 21 August 2019).

[161] Alkin, M. and S. Taut (2002), “Unbundling evaluation use”, Studies in Educational Evaluation, Vol. 29/1, pp. 1-12, http://dx.doi.org/10.1016/S0191-491X(03)90001-0.

[148] American Evaluation Association (2018), Guiding Principles.

[151] American Evaluation Association (2015), Core Evaluator Competencies, http://www.eval.org.

[117] Barnett, C. and L. Camfield (2016), “Ethics in evaluation”, Journal of Development Effectiveness.

[142] Better evaluation (2019), Review evaluation (do meta-evaluation), https://www.betterevaluation.org/en/rainbow_framework/manage/review_evaluation_do_meta_evaluation (accessed on 19 August 2019).

[67] Bossuyt, J., L. Shaxson and A. Datta (2014), “Study on the uptake of learning from EuropeAid’s strategic evaluations into development policy and practice”, Evaluation Unit of the Directorate General for Development and Cooperation-EuropeAid (European Commission).

[165] Bridgeland, J. and P. Orszag (2013), Can Government Play Moneyball? - The Atlantic, https://www.theatlantic.com/magazine/archive/2013/07/can-government-play-moneyball/309389/ (accessed on 6 December 2018).

[60] Brown, L. and S. Osborne (2013), “Risk and Innovation”, Public Management Review, Vol. 15/2, pp. 186-208, http://dx.doi.org/10.1080/14719037.2012.707681.

[119] Brown, R. and D. Newman (1992), “Ethical Principles and Evaluation Standards: Do They Match?”, Evaluation Review, Vol. 16/6, pp. 650-663, http://dx.doi.org/10.1177/0193841X9201600605.

[152] Bundesministerium für Finanzen and Bundesministerin für Frauen und öffentlichen Dienst (2013), Handbuch Wirkungsorientierte Folgenabschätzung Arbeitsunterlage, http://www.oeffentlicherdienst.gv.at (accessed on 28 August 2019).

[129] Campbell, S. and G. Harper (2012), Quality in policy impact evaluation: understanding the effects of policy from other influences (supplementary Magenta Book guidance), HM Treasury, http://www.nationalarchives.gov.uk/doc/open- (accessed on 9 July 2019).

[8] Canada Treasury Board (2016), Policy on results, https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=31300 (accessed on 20 September 2019).

[133] Heider, C. (2018), The Three Pillars of a Working Evaluation Function: IEG’s Experience, http://ieg.worldbankgroup.org/blog/three-pillars-working-evaluation-function-iegs-experience (accessed on 22 August 2019).

[65] Cinar, E., P. Trott and C. Simms (2018), “A systematic review of barriers to public sector innovation process”, Public Management Review, http://dx.doi.org/10.1080/14719037.2018.1473477.

[31] Commission on Evidence-Based Policymaking (2017), The Promise of Evidence-Based Policymaking: Report of the Commission on Evidence-Based Policymaking, https://www.cep.gov/report/cep-final-report.pdf (accessed on 6 August 2019).

[23] CONEVAL (2007), Lineamientos generales para la evaluación de los Programas Federales de la Administración Pública Federal, https://www.coneval.org.mx/rw/resource/coneval/eval_mon/361.pdf (accessed on 18 June 2019).

[159] Cooksy, L. and M. Mark (2012), “Influences on evaluation quality”, American Journal of Evaluation, Vol. 33/1, pp. 79-84, http://dx.doi.org/10.1177/1098214011426470.

[45] Crowley, D. et al. (2018), “Standards of Evidence for Conducting and Reporting Economic Evaluations in Prevention Science”, Prevention Science, Vol. 19/3, pp. 366-390, http://dx.doi.org/10.1007/s11121-017-0858-1.

[109] Damschroder, L. et al. (2009), “Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science”, Implementation Science, Vol. 4/1, p. 50, http://dx.doi.org/10.1186/1748-5908-4-50.

[25] Departamento Nacional de Planeación (2016), ¿Qué es una Evaluación?, https://sinergia.dnp.gov.co/Paginas/Internas/Evaluaciones/%C2%BFQu%C3%A9-es-Evaluaciones.aspx.

[42] Department of the Prime Minister and Cabinet (2014), Guide to Implementation Planning, Australian government, https://www.pmc.gov.au/sites/default/files/files/guide-to-implementation-planning.pdf (accessed on 12 July 2019).

[85] Department of the Prime Minister and Cabinet (Australia) (2013), Policy Implementation, https://www.pmc.gov.au/government/policy-implementation (accessed on 20 September 2019).

[116] Desautels, G. and S. Jacob (2012), “The ethical sensitivity of evaluators: A qualitative study using a vignette design”, Evaluation, Vol. 18/4, pp. 437-450, http://dx.doi.org/10.1177/1356389012461192.

[176] Dobbins, M. et al. (2009), “A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies”, Implementation Science, Vol. 4/1, p. 61, http://dx.doi.org/10.1186/1748-5908-4-61.

[157] Estonian National Audit Office (2011), The state of affairs with the legislative impact assessment.

[171] European Commission (2017), Better Regulation Guidelines, http://europa.eu/about-eu/basic-information/decision-making/treaties/index_en.htm.

[158] European Court of Auditors (2013), Audit Guidelines on Evaluation, https://www.eca.europa.eu/Lists/ECADocuments/GUIDELINES_EVALUATION/Evaluation-Guideline-EN-Oct2013.pdf (accessed on 23 August 2019).

[27] European Environment Agency (2017), “EEA guidance document-policy evaluation”, https://www.researchgate.net/publication/317594615.

[50] Flay, B. et al. (2005), “Standards of Evidence: Criteria for Efficacy, Effectiveness and Dissemination”, Prevention Science, Vol. 6/3, pp. 151-175, http://dx.doi.org/10.1007/s11121-005-5553-y.

[162] Fleischer, D. and C. Christie (2009), “Evaluation use: Results from a survey of U.S. American evaluation Association members”, American Journal of Evaluation, Vol. 30/2, pp. 158-175, http://dx.doi.org/10.1177/1098214008331009.

[61] Flemig, S., S. Osborne and T. Kinder (2016), “Risky business—reconceptualizing risk and innovation in public services”, Public Money & Management, Vol. 36/6, pp. 425-432, http://dx.doi.org/10.1080/09540962.2016.1206751.

[128] France Stratégie (2016), How to evaluate the impact of public policies: a guide for the use of decision makers and practitioners (Comment évaluer l’impact des politiques publiques : un guide à l’usage des décideurs et des praticiens), https://www.strategie.gouv.fr/sites/strategie.gouv.fr/files/atoms/files/guide_methodologique_20160906web.pdf (accessed on 21 August 2019).

[130] France Stratégie, R. Desplatz and M. Ferracci (2016), How to evaluate the impact of public policies? A guide for decision makers and practitioners (Comment évaluer l’impact des politiques publiques ? Un guide à l’usage des décideurs et praticiens).

[134] France Stratégie, R. Desplatz and M. Ferracci (n.d.), How to evaluate the impact of public policies? A guide for decision makers and practitioners (Comment évaluer l’impact des politiques publiques ? Un guide à l’usage des décideurs et praticiens).

[55] Gaarder, M. and B. Briceño (2010), “Institutionalisation of government evaluation: balancing trade-offs”, Journal of Development Effectiveness, Vol. 2/3, pp. 289-309, http://dx.doi.org/10.1080/19439342.2010.505027.

[28] Gasper, D. (2018), “Policy Evaluation: From Managerialism and Econocracy to a Governance Perspective”, in International Development Governance, Routledge, http://dx.doi.org/10.4324/9781315092577-37.

[30] Gasper, D. (2018), “Policy Evaluation: From Managerialism and Econocracy to a Governance Perspective”, in International Development Governance, Routledge, http://dx.doi.org/10.4324/9781315092577-37.

[168] Gauthier, B. (2015), “Some pointers concerning Evaluation Utilization”.

[51] Goldstein, C. et al. (2018), Ethical issues in pragmatic randomized controlled trials: A review of the recent literature identifies gaps in ethical argumentation, BioMed Central Ltd., http://dx.doi.org/10.1186/s12910-018-0253-x.

[90] Government Policy Analysis Unit (2017), Global Evidence Policy Units: Finland, https://www.ksi-indonesia.org/file_upload/Evidence-Policy-Unit-in-Finland-the-Government-Po-14Jun2017163532.pdf.

[110] Greenhalgh, T. et al. (2004), “Diffusion of Innovations in Service Organizations: Systematic Review and Recommendations”, The Milbank Quarterly, Vol. 82/4, pp. 581-629, http://dx.doi.org/10.1111/j.0887-378X.2004.00325.x.

[170] Haynes, A. et al. (2012), “Identifying Trustworthy Experts: How Do Policymakers Find and Assess Public Health Researchers Worth Consulting or Collaborating With?”, PLoS ONE, Vol. 7/3, p. e32665, http://dx.doi.org/10.1371/journal.pone.0032665.

[173] Haynes, A. et al. (2018), “What can we learn from interventions that aim to increase policy-makers’ capacity to use research? A realist scoping review”, Health Research Policy and Systems, Vol. 16/1, p. 31, http://dx.doi.org/10.1186/s12961-018-0277-1.

[29] Heider, C. (2017), Rethinking Evaluation - Efficiency, Efficiency, Efficiency, https://ieg.worldbankgroup.org/blog/rethinking-evaluation-efficiency.

[41] Hildén, M. (2014), “Evaluation, assessment, and policy innovation: exploring the links in relation to emissions trading”, Environmental Politics, Vol. 23/5, pp. 839-859, http://dx.doi.org/10.1080/09644016.2014.924199.

[9] HM Treasury (2011), The Magenta Book: Guidance for evaluation, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/220542/magenta_book_combined.pdf (accessed on 18 June 2019).

[191] HM Treasury (2011), The Magenta Book: Guidance for evaluation, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/220542/magenta_book_combined.pdf (accessed on 18 June 2019).

[3] Howlett, M. (2009), “Policy analytical capacity and evidence‐based policy‐making: Lessons from Canada”, Canadian Public Administration, https://doi.org/10.1111/j.1754-7121.2009.00070_1.x.

[153] IGEES (2014), Irish Government Economic and Evaluation Service, https://igees.gov.ie/ (accessed on 28 January 2019).

[94] Impact Assessment Office (2018), “The Uncompleted Evaluation of Legislation in Italy: Critical Issues, Prospects and Good Practice”, http://www.senato.it/service/PDF/PDFServer/BGT/01082854.pdf (accessed on 23 September 2019).

[127] Independent Evaluation Office of UNDP (2019), UNDP Evaluation Guidelines.

[82] Innovate UK (2018), “Evaluation Framework, How we assess our impact on business and the economy”, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/681741/17.3253_Innovate_UK_Evaluation_Framework_RatherNiceDesign_V2_FINAL_WEB.pdf (accessed on 20 September 2019).

[20] International Organisation of Supreme Audit Institutions (2019), ISSAI 100: Fundamental Principles of Public-Sector Auditing, http://www.issai.org, http://www.intosai.org (accessed on 8 January 2020).

[155] INTOSAI (2016), Guidelines on the Evaluation of Public Policies, http://www.issai.org (accessed on 22 August 2019).

[122] INTOSAI (2010), Program Evaluation for SAIs, A Primer, https://www.eurosai.org/handle404?exporturi=/export/sites/eurosai/.content/documents/materials/Program-Evaluation-for-SAIs.pdf (accessed on 22 August 2019).

[188] INTOSAI Working Group on Evaluation of Public Policies and Programs (2019), Implementation of the INTOSAI GOV 9400 Guidelines: Survey Results.

[97] INTOSAI and Cour des Comptes (France) (2019), Implementation of the INTOSAI GOV 9400 Guidelines: Survey results.

[118] Jacob, S. and Y. Boisvert (2010), To Be or Not to Be a Profession: Pros, Cons and Challenges for Evaluation, http://dx.doi.org/10.1177/1356389010380001.

[63] Jacob, S., S. Speer and J. Furubo (2015), “The institutionalization of evaluation matters: Updating the International Atlas of Evaluation 10 years later”, Evaluation, Vol. 21/1, pp. 6-31, http://dx.doi.org/10.1177/1356389014564248.

[167] Johnson, K. et al. (2009), “Research on Evaluation Use: A Review of the Empirical Literature From 1986 to 2005”, http://dx.doi.org/10.1177/1098214009341660.

[149] King, J. et al. (2001), Toward a Taxonomy of Essential Evaluator Competencies.

[181] Kothari, A. et al. (2009), “Is research working for you? validating a tool to examine the capacity of health organizations to use research”, Implementation Science, Vol. 4/1, p. 46, http://dx.doi.org/10.1186/1748-5908-4-46.

[17] Kroll, A. and D. Moynihan (2018), “The Design and Practice of Integrating Evidence: Connecting Performance Management with Program Evaluation”, Public Administration Review, Vol. 78/2, pp. 183-194, http://dx.doi.org/10.1111/puar.12865.

[103] Kusters, C. (2011), Making evaluations matter: a practical guide for evaluators, https://www.researchgate.net/publication/254840956.

[125] Kusters, C. et al. (2011), “Making evaluations matter: a practical guide for evaluators”, Centre for Development Innovation, Wageningen University & Research centre., https://www.researchgate.net/publication/254840956.

[175] Langer, L., J. Tripney and D. Gough (2016), The Science of Using Science: Researching the Use of Research Evidence in Decision-Making.

[5] Lazaro, B. (2015), Comparative Study on the Institutionalization of Evaluation in Europe and Latin America, Eurosocial Programme.

[54] Lázaro, B. (2015), Comparative study on the institutionalisation of evaluation in Europe and Latin America, Eurosocial Programme, Madrid, http://sia.eurosocial-ii.eu/files/docs/1456851768-E_15_ENfin.pdf (accessed on 9 July 2019).

[57] Ledermann, S. (2012), “Exploring the Necessary Conditions for Evaluation Use in Program Change”, American Journal of Evaluation, Vol. 33/2, pp. 159-178, http://dx.doi.org/10.1177/1098214011411573.

[163] Ledermann, S. (2012), “Exploring the Necessary Conditions for Evaluation Use in Program Change”, http://dx.doi.org/10.1177/1098214011411573.

[100] Leviton, L. and E. Hughes (1981), Research on the Utilization of Evaluations: A Review and Synthesis.

[135] Wildavsky, A. (1979), Speaking Truth to Power: The Art and Craft of Policy Analysis, Little, Brown.

[68] Liverani, M., B. Hawkins and J. Parkhurst (2013), Political and institutional influences on the use of evidence in public health policy. A systematic review., http://dx.doi.org/10.1371/journal.pone.0077404.

[58] Mackay, K. (2007), How to Build M&E Systems to Support Better Government, The World Bank, http://dx.doi.org/10.1596/978-0-8213-7191-6.

[69] Maeda, A., M. Harrit and S. Mabuchi (2012), Creating Evidence for Better Health Financing Decisions: A Strategic Guide for the Institutionalization of National Health Accounts, The World Bank, Washington, DC, http://dx.doi.org/10.1596/978-0-8213-9469-4.

[140] Malčík, M. and A. Seberová (2010), “Meta-evaluation and Quality Standard of Final Evaluation Report”, The New Educational Review, Vol. 22, pp. 149-164.

[10] McDavid, J., I. Huse and L. Hawthorn (2006), Program Evaluation and Performance Measurement: An Introduction to Practice, https://study.sagepub.com/mcdavid3e (accessed on 28 January 2020).

[146] McGuire, M. and R. Zorzi (2005), “Evaluator Competencies and Performance Development”, The Canadian Journal of Program Evaluation, Vol. 20/2, pp. 73-99.

[37] Mergaert, L. and R. Minto (2015), “Ex Ante and Ex Post Evaluations: Two Sides of the Same Coin? The Case of Gender Mainstreaming in EU Research Policy”, Symposium on Policy Evaluation in the EU, http://dx.doi.org/10.1017/S1867299X0000427X.

[126] Mideplan (2018), Guide for Terms of Reference (Guía de Términos de Referencia), https://documentos.mideplan.go.cr/share/s/DVyxtc0OR3a0T6E2QbfQww (accessed on 21 August 2019).

[180] Mideplan (2018), Guide for the use of evaluations: guidelines for its implementation and follow-up on recommendations, https://documentos.mideplan.go.cr/share/s/DDVZ114kTjCsTAxiihi5Kw (accessed on 3 September 2019).

[83] Mideplan (2018), National Evaluation Policy, https://documentos.mideplan.go.cr/share/s/Ymx1WmMJTOWe9YyjyeCHKQ (accessed on 20 September 2019).

[35] Ministerio de Desarrollo Social y Familia de Chile (2019), Evaluación Social Ex Ante, http://sni.ministeriodesarrollosocial.gob.cl/evaluacion-iniciativas-de-inversion/evaluacion-ex-ante/ (accessed on 5 July 2019).

[24] Ministerio de Planificación Nacional y Política Económica (2018), Manual de Evaluación para Intervenciones Públicas, https://documentos.mideplan.go.cr/share/s/6eepeLCESrKkft6Mf5SToA (accessed on 5 August 2019).

[38] Ministry of Finance (2006), The Norwegian Government Agency for Financial Management (DFØ), https://www.regjeringen.no/en/dep/fin/about-the-ministry/etater-og-virksomheter-under-finansdepartementet/subordinateagencies/the-norwegian-government-agency-for-fina/id270409/ (accessed on 12 July 2019).

[132] Ministry of Finance (Lithuania) (2011), Recommendations on Implementation of Programs Evaluation Methodology, https://finmin.lrv.lt/uploads/finmin/documents/files/LT_ver/Veiklos_sritys/Veiklos_efektyvumo_tobulinimas/PVrekomendacijos2011.pdf (accessed on 27 August 2019).

[6] Ministry of Finance of The Netherlands (2018), Arrangements for periodic evaluation research, https://wetten.overheid.nl/BWBR0040754/2018-03-27 (accessed on 12 July 2019).

[76] Ministry of Internal Affairs and Communications (2017), “Basic Guidelines for Implementing Policy Evaluation (Revised)”, https://www.soumu.go.jp/main_content/000556221.pdf (accessed on 16 September 2019).

[190] Ministry of Internal Affairs and Communications (2017), Basic Guidelines for Implementing Policy Evaluation (Revised), http://www.soumu.go.jp/main_content/000556221.pdf.

[49] Morton, M. (2009), Applicability of Impact Evaluation to Cohesion Policy, Report Working Paper, https://ec.europa.eu/regional_policy/archive/policy/future/pdf/4_morton_final-formatted.pdf (accessed on 8 August 2019).

[154] National Audit Office (GBR) (2013), Evaluation in Government, https://www.nao.org.uk/report/evaluation-government/ (accessed on 22 August 2019).

[177] Neuhoff, A. et al. (2015), The What Works Marketplace: Helping Leaders Use Evidence to Make Smarter Choices, Invest in What Works Policy Series, The Bridgespan Group, http://www.results4america.org.

[164] Newman, J., A. Cherney and B. Head (2017), “Policy capacity and evidence-based policy in the public service”, Public Management Review, Vol. 19/2, pp. 157-174, http://dx.doi.org/10.1080/14719037.2016.1148191.

[111] Newman, K., C. Fisher and L. Shaxson (2012), “Stimulating Demand for Research Evidence: What Role for Capacity-building?”, IDS Bulletin, Vol. 43/5, pp. 17-24, http://dx.doi.org/10.1111/j.1759-5436.2012.00358.x.

[183] Newman, K., C. Fisher and L. Shaxson (2012), “Stimulating Demand for Research Evidence: What Role for Capacity-building?”, IDS Bulletin, Vol. 43/5, pp. 17-24, http://dx.doi.org/10.1111/j.1759-5436.2012.00358.x.

[99] OECD (2020), Building capacity for evidence informed policy making, OECD, Paris.

[98] OECD (2020), The Irish Government Economic and Evaluation Service: Using Evidence-Informed Policy Making to Improve Performance, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/cdda3cb0-en.

[12] OECD (2019), Budgeting and Public Expenditures in OECD Countries 2019, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264307957-en.

[59] OECD (2019), Evaluating Public Sector Innovation: Support or hindrance to innovation?, Observatory of Public Sector Innovation-OPSI, Paris, https://oecd-opsi.org/wp-content/uploads/2019/05/Evaluating-Public-Sector-Innovation-Part-5a-of-Lifecycle-Report.pdf (accessed on 11 September 2019).

[107] OECD (2019), “Open Government Data Report”.

[11] OECD (2019), Open Government in Biscay, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/e4e1a40c-en.

[1] OECD (2018), Building Capacity for Evidence Informed Policy Making: Towards a Baseline Skill Set, http://www.oecd.org/gov/building-capacity-for-evidence-informed-policymaking.pdf (accessed on 3 September 2019).

[93] OECD (2018), Centre Stage 2: The organisation and functions of the centre of government in OECD countries, https://www.oecd.org/gov/centre-stage-2.pdf.

[47] OECD (2018), Cost-Benefit Analysis and the Environment: Further Developments and Policy Use, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264085169-en.

[2] OECD (2018), Draft Policy Framework on Sound Public Governance, http://www.oecd.org/gov/draft-policy-framework-on-sound-public-governance.pdf (accessed on 8 July 2019).

[123] OECD (2018), “OECD Best Practice Principles for Regulatory Policy: Reviewing the Stock of Regulation”.

[192] OECD (2018), OECD Performance Budgeting Survey.

[172] OECD (2018), OECD Regulatory Policy Outlook 2018, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264303072-en.

[112] OECD (2017), Governing Better Through Evidence-Informed Policy Making: Options for an OECD Work Agenda.

[4] OECD (2017), Government at a Glance, OECD, Paris, http://www.oecd.org/gov/govataglance.htm (accessed on 9 July 2019).

[105] OECD (2017), Making policy evaluation work.

[186] OECD (2017), Policy Advisory Systems: Supporting Good Governance and Sound Public Decision Making, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264283664-en.

[53] OECD (2017), Systems Approaches to Public Sector Challenges: Working with Change, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264279865-en.

[184] OECD (2016), Engaging Public Employees for a High-Performing Civil Service, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264267190-en.

[144] OECD (2016), Evaluation Systems in Development Co-operation: 2016 Review, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264262065-en.

[43] OECD (2016), Open Government: The Global Context and the Way Forward, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264268104-en.

[21] OECD (2016), Supreme Audit Institutions and Good Governance: Oversight, Insight and Foresight, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/9789264263871-en.

[81] OECD (2014), “Budget Review: Germany”, OECD Journal on Budgeting, Vol. 2.

[92] OECD (2014), “Centre Stage Driving Better Policies from the Centre of Government”, http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=gov/pgc/mpm(2014)3&doclanguage=en (accessed on 23 September 2019).

[96] OECD (2014), Chile’s Supreme Audit Institution: Enhancing Strategic Agility and Public Trust, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264207561-en.

[174] OECD (2011), “Government at a Glance”.

[13] OECD (2011), “Typology and implementation of spending reviews”, OECD SBO Meeting on Performance and Results, http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=GOV/PGC/SBO(2011)9&doclanguage=en (accessed on 1 August 2019).

[48] OECD (2010), “DAC Guidelines and Reference Series: Quality Standards for Development Evaluation”, https://www.oecd.org/development/evaluation/qualitystandards.pdf (accessed on 9 July 2019).

[95] OECD (2010), Good Practices in Supporting Supreme Audit Institutions, http://www.oecd.org/dac/effectiveness/Final%20SAI%20Good%20Practice%20Note.pdf.

[19] OECD (2008), Performance Budgeting: A Users’ Guide, https://www.oecd.org/gov/budgeting/Performance-Budgeting-Guide.pdf (accessed on 2 August 2019).

[106] OECD (forthcoming), Ensuring the Good Governance of Evidence: Taking stock of standards for policy design, implementation and evaluation, OECD, Paris.

[84] OECD-DAC (2009), “Guidelines for Project and Programme Evaluations”, https://www.entwicklung.at/fileadmin/user_upload/Dokumente/Projektabwicklung/Englisch/Guidelines_for_Project_and_Progamme_Evaluations.PDF (accessed on 20 September 2019).

[26] OECD-DAC (2002), Glossary of Key Terms in Evaluation and Results Based Management, https://www.oecd.org/dac/evaluation/18074294.pdf (accessed on 18 June 2019).

[137] Office fédéral de la justice (2005), Guide for the evaluation of the efficacy of the Confederation (Guide de l’évaluation de l’efficacité à la Confédération), http://www.ofj.admin.ch/ejpd/fr/home/themen/staat_und_buerger/ref_evaluation/ref_umsetzung_art.html (accessed on 27 August 2019).

[86] Office of Management and Budget (2020), M-20-12 Program Evaluation Standards and Practices, https://www.whitehouse.gov/wp-content/uploads/2020/03/M-20-12.pdf.

[136] Office of Management and Budget (2018), Monitoring and Evaluation Guidelines for Federal Departments and Agencies that Administer United States Foreign Assistance, https://www.whitehouse.gov/wp-content/uploads/2017/11/M-18-04-Final.pdf (accessed on 27 August 2019).

[182] Olejniczak, K. and E. Raimondo (2016), “Evaluation units as knowledge brokers: Testing and calibrating an innovative framework”, Evaluation, Vol. 22/2, pp. 168-189, http://dx.doi.org/10.1177/1356389016638752.

[64] Olejniczak, K., E. Raimondo and T. Kupiec (2016), “Evaluation units as knowledge brokers: Testing and calibrating an innovative framework”, Evaluation, Vol. 22/2, pp. 168-189, http://dx.doi.org/10.1177/1356389016638752.

[169] Oliver, K. et al. (2015), “Identifying public health policymakers’ sources of information: comparing survey and network analyses”, The European Journal of Public Health, Vol. 27/suppl_2, p. ckv083, http://dx.doi.org/10.1093/eurpub/ckv083.

[178] OMB (2010), Section 200 - Overview of the Federal Performance Framework.

[156] Operational and Evaluation Audit Division of Costa Rica (2014), Special audit report on the monitoring, evaluation and public accountability processes (Informe de auditoría de carácter especial sobre los procesos de seguimiento, evaluación y rendición de cuentas pública), https://cgrfiles.cgr.go.cr/publico/jaguar/sad_docs/2015/DFOE-SAF-IF-09-2014-Recurrido.pdf (accessed on 22 August 2019).

[32] Parkhurst, J. (2017), The politics of evidence : from evidence-based policy to the good governance of evidence, Routledge, London, http://researchonline.lshtm.ac.uk/3298900/ (accessed on 23 November 2018).

[102] Patton, M. (1978), “Utilization-focused evaluation”.

[56] Picciotto, R. (2013), “Evaluation Independence in Organizations”, Journal of MultiDisciplinary Evaluation, Vol. 9/20, p. 15.

[120] Picciotto, R. (n.d.), The Value of Evaluation Standards: A Comparative Assessment, http://evaluation.wmich.edu/jmde/Articles.

[115] Pleger, L. and S. Hadorn (2018), “The big bad wolf’s view: The evaluation clients’ perspectives on independence of evaluations”, Evaluation, Vol. 24/4, pp. 456-474, http://dx.doi.org/10.1177/1356389018796004.

[138] Pleger, L. and S. Hadorn (2018), “The big bad wolf’s view: The evaluation clients’ perspectives on independence of evaluations”, Evaluation, Vol. 24/4, pp. 456-474, http://dx.doi.org/10.1177/1356389018796004.

[150] Podems, D. (2013), “Evaluator competencies and professionalizing the field: Where are we now?”, Canadian Journal of Program Evaluation, Vol. 28/3, pp. 127-136.

[33] Poder Ejecutivo Nacional (2018), Decreto 292/2018: Evaluación de Políticas y Programas Sociales, 11-04-2018, http://servicios.infoleg.gob.ar/infolegInternet/verNorma.do?id=308653 (accessed on 8 July 2019).

[22] Poder Ejecutivo Nacional de Argentina (2018), Decreto 292/2018: Evaluación de Políticas y Programas Sociales.

[145] Polish Ministry of Infrastructure and Development (2015), Guidelines of cohesion policy evaluation for period 2014-2020, http://www.ewaluacja.gov.pl/media/13209/wytyczne_090915_final.pdf (accessed on 22 August 2019).

[185] Punton, M. et al. (2016), How Can Capacity Development Promote Evidence-Informed Policy Making? Literature Review for the Building Capacity to Use Research Evidence (BCURE) Programme, http://www.itad.com/knowledge-and-resources/bcure (accessed on 6 September 2019).

[166] Results for America (2017), Government Mechanisms to Advance the Use of Data and Evidence in Policymaking: A Landscape Review.

[193] Results for America (2017), Government Mechanisms to Advance the Use of Data and Evidence in Policymaking: A Landscape Review.

[114] Picciotto, R. (2013), “Evaluation Independence in Organizations”, Journal of MultiDisciplinary Evaluation, Vol. 9/20, p. 15.

[187] Robinson, M. (2014), Spending reviews, http://www.pfmresults.com (accessed on 25 June 2019).

[79] Roh, J. (2018), “Improving the government performance management system in South Korea”, Asian Education and Development Studies, Vol. 7/3, pp. 266-278, http://dx.doi.org/10.1108/AEDS-11-2017-0112.

[52] Rutter, J. (2012), Evidence and Evaluation in Policy making, Institute for Government, https://www.instituteforgovernment.org.uk/sites/default/files/publications/evidence%20and%20evaluation%20in%20template_final_0.pdf.

[62] Schillemans, T. and M. Bovens (2011), The Challenge of Multiple Accountability: Does Redundancy Lead to Overload?

[39] Schoenefeld, J. and A. Jordan (2017), “Governing policy evaluation? Towards a new typology”, Evaluation, Vol. 23/3, pp. 274-293, http://dx.doi.org/10.1177/1356389017715366.

[139] Scriven, M. (1969), “An introduction to meta-evaluation”, Educational Product Report, Vol. 2.

[91] Secretaría de Desarrollo Social (2015), Decree for which the Council of Social Development Policy Evaluation is regulated.

[108] Shaxson, L. (2019), “Uncovering the practices of evidence-informed policy-making”, Public Money & Management, Vol. 39/1, pp. 46-55, http://dx.doi.org/10.1080/09540962.2019.1537705.

[113] Sinatra, G., D. Kienhues and B. Hofer (2014), “Addressing Challenges to Public Understanding of Science: Epistemic Cognition, Motivated Reasoning, and Conceptual Change”, Educational Psychologist, http://dx.doi.org/10.1080/00461520.2014.916216.

[14] Smismans, S. (2015), “Policy Evaluation in the EU: The Challenges of Linking Ex Ante and Ex Post Appraisal”, Symposium on Policy Evaluation in the EU, http://dx.doi.org/10.1017/S1867299X00004244.

[34] Smismans, S. (2015), “Policy Evaluation in the EU: The Challenges of Linking Ex Ante and Ex Post Appraisal”, Symposium on Policy Evaluation in the EU, http://dx.doi.org/10.1017/S1867299X00004244.

[101] Stern, E., M. Saunders and N. Stame (2015), “Standing back and looking forward: Editors’ reflections on the 20th Anniversary of Evaluation”, Evaluation, Vol. 21/4, pp. 380-390, http://dx.doi.org/10.1177/1356389015608757.

[46] Steuerle, E. and L. Jackson (2016), Advancing the power of economic evidence to inform investments in children, youth, and families, National Academies Press, http://dx.doi.org/10.17226/23481.

[147] Stevahn, L. et al. (2005), “Establishing Essential Competencies for Program Evaluators”, American Journal of Evaluation, http://dx.doi.org/10.1177/1098214004273180.

[143] Stufflebeam, D. (2001), “Evaluation Checklists: Practical Tools for Guiding and Judging Evaluations”, http://www.wmich.edu/evalctr/checklists/.

[141] Stufflebeam, D. (1978), “Meta evaluation: an overview”, Evaluation and The Health Professions, Vol. 1/1, https://journals.sagepub.com/doi/pdf/10.1177/016327877800100102 (accessed on 19 August 2019).

[179] Superu (2018), Making sense of evidence: A guide to using evidence in policy, https://thehub.sia.govt.nz/assets/Uploads/Making-Sense-of-Evidence-handbook-FINAL.pdf (accessed on 3 September 2019).

[189] Swiss Federal Audit Office (2019), Involving Stakeholders in Evaluation at the Swiss Federal Audit Office, http://www.program-evaluation.ccomptes.fr/images/stories/evenements/Vilnius_2019/Presentation_Switzerland_Stakeholder_Involvement_WGEPPP_2019.pdf (accessed on 6 September 2019).

[74] The Cabinet Secretariat (2019), The status-quo about the promotion of statistics reform, http://www.kantei.go.jp/jp/singi/toukeikaikaku/dai5/siryou1.pdf (accessed on 2 September 2019).

[73] The Committee on Promoting EBPM (2017), Guidelines on securing and developing human resources for the promotion of EBPM, https://www.gyoukaku.go.jp/ebpm/img/guideline1.pdf (accessed on 2 September 2019).

[36] The European Network for Rural Development (2014), The Ex Ante Evaluation 2014-2020 RDPs, https://enrd.ec.europa.eu/evaluation/publications/guidelines-ex-ante-evaluation-2014-2020-rdps_en (accessed on 29 January 2020).

[78] The Ministry of Internal Affairs and Communication (2005), Policy Evaluation Implementation Guidelines, https://www.soumu.go.jp/main_content/000556222.pdf.

[77] The Ministry of Internal Affairs and Communications (2010), “Guidelines for Publication of Information on Policy Evaluation”, http://www.soumu.go.jp/main_content/000556224.pdf (accessed on 16 September 2019).

[72] The Statistical Reform Promotion Council (2017), The final report of the Statistical Reform Promotion Council. (In Japanese), http://www.kantei.go.jp/jp/singi/toukeikaikaku/pdf/saishu_honbun.pdf (accessed on 2 September 2019).

[15] The World Bank (2018), Spending Review Manual: Bulgaria.

[87] Treasury Board Secretariat (2019), Evaluation in the Government of Canada - Canada.ca, https://www.canada.ca/en/treasury-board-secretariat/services/audit-evaluation/evaluation-government-canada.html (accessed on 20 September 2019).

[88] Treasury Board Secretariat (2013), Assessing program resource utilization when evaluating federal programs, https://www.canada.ca/en/treasury-board-secretariat/services/audit-evaluation/centre-excellence-evaluation/assessing-program-resource-utilization-evaluating-federal-programs.html (accessed on 2 August 2019).

[89] Treasury Board Secretariat (2010), Supporting Effective Evaluations: A Guide to Developing Performance Measurement Strategies, https://www.canada.ca/en/treasury-board-secretariat/services/audit-evaluation/centre-excellence-evaluation/guide-developing-performance-measurement-strategies.html (accessed on 2 August 2019).

[121] United Nations Evaluation Group (2016), Norms and Standards for Evaluation.

[75] United States Office of Management and Budget (2019), Phase 1 Implementation of the Foundations for Evidence-Based Policymaking Act of 2018: Learning Agendas, Personnel, and Planning Guidance, https://www.whitehouse.gov/wp-content/uploads/2019/07/M-19-23.pdf.

[104] Vaessen, J. (2018), Five ways to think about quality in evaluation, https://www.linkedin.com/pulse/new-blogpost-five-ways-think-quality-evaluation-jos-vaessen (accessed on 21 June 2019).

[44] Vammalle, C. and A. Ruiz Rivadeneira (2017), “Budgeting in Chile”, OECD Journal on Budgeting, Vol. 16/3, https://dx.doi.org/10.1787/budget-16-5jfw22b3c0r3.

[71] van Ooijen, C., B. Ubaldi and B. Welby (2019), “A data-driven public sector: Enabling the strategic use of data for productive, inclusive and trustworthy governance”, OECD Working Papers on Public Governance, No. 33, OECD Publishing, Paris, https://dx.doi.org/10.1787/09ab162c-en.

[66] Viñuela, L., D. Ortega and F. Gomes (2015), Technical Note - Mechanisms and incentives for the adoption of evaluation of Policies and Programs to improve the Efficiency of Public Expenditure.

[16] Walker, K. and K. Moore (2011), Performance Management and Evaluation: What’s the difference?, https://www.childtrends.org/wp-content/uploads/2013/06/2011-02PerformMgmt.pdf (accessed on 18 July 2019).

[40] Weiss, C. (1993), “Where politics and evaluation research meet”, Evaluation Practice, Vol. 14/1, pp. 93-106, http://dx.doi.org/10.1016/0886-1633(93)90046-R.

[160] Weiss, C. (1998), “Have We Learned Anything New About the Use of Evaluation?”, American Journal of Evaluation, Vol. 19/1, pp. 21-33.

[124] World Bank et al. (2019), World Bank Group Evaluation Principles, http://www.worldbank.org.

[80] Yang, S. and A. Torneo (2016), “Government Performance Management and Evaluation in South Korea: History and Current Practices”, Public Performance & Management Review, Vol. 39/2, pp. 279-296, http://dx.doi.org/10.1080/15309576.2015.1108767.

[70] Zida, A. et al. (2017), “Evaluating the Process and Extent of Institutionalization: A Case Study of a Rapid Response Unit for Health Policy in Burkina Faso.”, International journal of health policy and management, Vol. 7/1, pp. 15-26, http://dx.doi.org/10.15171/ijhpm.2017.39.

Annex 2.A. Guidelines and methods
Table 2.4. Guidelines and methods for policy evaluation

| Country | Year | Author | Title |
| --- | --- | --- | --- |
| Australia | 2014 | Department of Finance | Resource Management Guidance for the Public Governance, Performance and Accountability Act 2013, RMG 131: Developing Good Performance Information |
| | 2013 | Department of the Prime Minister and Cabinet | Monitoring, review and evaluation (Cabinet Implementation Unit Toolkit) |
| Austria | 2013 | Federal Chancellery | Handbook for Performance Management |
| Canada | 2017 | Treasury Board Secretariat | Guide to Rapid Impact Evaluation |
| | 2013 | Treasury Board Secretariat | Assessing Program Resource Utilization When Evaluating Federal Programs |
| | 2012 | Treasury Board Secretariat | Theory-Based Approaches to Evaluation: Concepts and Practices |
| | 2010 | Treasury Board Secretariat | Supporting Effective Evaluations: A Guide to Developing Performance Measurement Strategies |
| | 2019 | Treasury Board Secretariat | Integrating Gender-Based Analysis Plus into Evaluation: A Primer |
| Czech Republic | 2016 | Ministry of Regional Development | Methodological guidance for evaluations in the 2014-2020 programming period |
| Estonia | 2012 | Government of Estonia | Methodology of Impact Assessment |
| | 2012 | Estonian Evaluation Association | Good Public Evaluation Code of Practice |
| | 2011 | Government of Estonia | Good Public Engagement Code of Practice |
| Finland | Annually | Council of State/PMO | Government's Annual Plan for research, foresight and evaluation |
| France | 2017 | France Stratégie | Guide de l’évaluation socio-économique des investissements publics |
| | 2016 | France Stratégie | Comment évaluer l’impact des politiques publiques : un guide à l’usage des décideurs et des praticiens |
| | 2010 | INSEE | Méthodes économétriques pour l’évaluation des politiques publiques |
| | | Youth Experimentation Fund (FEJ) | Guide méthodologique relatif aux évaluations du FEJ |
| Germany | 2016 | Federal Ministry of the Interior | Handbuch für Organisationsuntersuchungen und Personalbedarfsermittlung |
| | 2011 | Federal Ministry of Finance | Arbeitsanleitung Einführung in Wirtschaftlichkeitsuntersuchungen |
| | 2007 | Federal Ministry of the Interior | Empfehlungen für interne Revisionen in der Bundesverwaltung |
| | 2000 | Federal Ministry for Family Affairs | Zielgeführte Evaluation von Programmen |
| Great Britain | 2018 | HM Treasury | Guide to developing the project business case |
| | 2018 | HM Treasury | Guide to developing the programme business case |
| | 2018 | Better Regulation Executive | Better regulation framework |
| | 2015 | Government Social Research Service (HM Treasury) | Government Social Research Publication Protocol |
| Greece | 2018 | Secretariat General of the Government | Manual of Inter-Ministerial Coordination |
| | 2015 | European Commission | Better Regulation Toolbox |
| | 2015 | European Commission | Better Regulation Guidelines |
| Ireland | 2018 | Department of Public Expenditure and Reform | Public Spending Code |
| Italy | 2018 | PCM | RIA Guidelines |
| | 2017 | NUVAP | Guidelines for ex post and ongoing evaluations: requesting and using evaluations |
| | 2017 | NUVAP | GL ex post & ongoing evaluations |
| | 2017 | Decree of the President of the Council of Ministers | Guidelines for ex-ante and ex-post impact analysis of regulatory acts |
| | 2015 | NUVAP | GL Evaluation Plans |
| | 2015 | NUVAP | Evaluation Plans 2014-2020: general orientation and a short guide on available guidance |
| Japan | 2017 | Ministry of Internal Affairs and Communications | Basic Guidelines for Implementing Policy Evaluation (Revised) |
| | 2013 | Ministry of Internal Affairs and Communications | Target Management-based Policy Evaluation Implementation Guidelines |
| | 2010 | Ministry of Internal Affairs and Communications | Guidelines for Publication of Information on Policy Evaluation |
| | 2010 | Ministry of Internal Affairs and Communications | Implementation Guidelines for Policy Evaluation Pertaining to Special Taxation Measures |
| | 2007 | Ministry of Internal Affairs and Communications | Implementation Guidelines for Policy Evaluation of Regulations |
| | 2005 | Ministry of Internal Affairs and Communications | Policy Evaluation Implementation Guidelines |
| Korea | 2017 | Office for Government Policy Coordination | Government Performance Evaluation Manual |
| Lithuania | 2011 | Ministry of Finance | Recommendations on Implementation of Programs Evaluation Methodology |
| | 2010 | Ministry of Finance | Evaluation of EU structural assistance: Methodological guidance |
| Latvia | 2018 | Ministry of Finance | Instruction on Analysis of the Execution of State Budget |
| | 2016 | Cross-sectoral Coordination Centre | Manual on Policy Making |
| Mexico | 2007 | Ministry of Finance, Ministry of Public Administration, National Council for the Evaluation of Social Development Policy | General Guidelines for the Evaluation of Federal Programs |
| Norway | 2018 | DFO | Strategic and systematic use of evaluation in management/governance |
| | 2009 | Ministry of Justice | Evaluation of laws |
| | 2007 | DFO | Evaluation of central governmental grants |
| | 2005 | Ministry of Finance | Guidelines to carry out evaluations |
| New Zealand | 2018 | Superu | Making sense of evidence: A guide to using evidence in policy |
| | 2015 | Superu | Evaluation Standards for People Commissioning, Using, Participating in, or Conducting Evaluations |
| Poland | 2018 | Ministry of Economic Development | Guidelines for the evaluation of cohesion policy (updated) |
| | 2015 | Ministry of Economic Development | Guidelines for the evaluation of cohesion policy |
| Portugal | 2018 | Juris App | Manual |
| Spain | 2015 | AEVAL | Practical guide for the design and implementation of public policy evaluations |
| | 2007 | AECID (Spanish Agency for International Cooperation for Development) | Spanish Cooperation Evaluation Management Manual |
| Slovakia | 2016 | Ministry of Finance | Value for money |
| | Forthcoming | Ministry of Economy | RIA 2020 |
| Switzerland | 2015 | Federal Office of Justice | Planifier une évaluation, en assurer le suivi et en valoriser les résultats |
| | 2013 | State Secretariat for Economic Affairs | Analyse d’impact de la réglementation - Manuel |
| | 2012 | Federal Office of Justice | Recommandations de l’Office fédéral de la justice pour la formulation des clauses d’évaluation |
| | 2005 | Federal Office of Justice | Guide de l’évaluation de l’efficacité à la Confédération |
| United States | 2018 | OMB | A-11, Section 200: An overview of the Federal Performance Framework |
| | 2018 | OMB | M-18-04: Monitoring and Evaluation guidelines for agencies that administer foreign assistance |
| | 2019 | OMB | M-19-23: Guidelines for the implementation of the Foundations for Evidence-Based Policymaking Act of 2018 |
| | 2020 | OMB | M-20-12: Phase 4 Implementation of the Foundations for Evidence-Based Policymaking Act of 2018: Program Evaluation Standards and Practices |
| Argentina | 2018 | National Council for the Coordination of Social Policies | Resolución No. 310, Lineamientos de MyE |
| | 2018 | Cabinet Office | Resolución No. 212/18, Plan Anual de MyE |
| Brazil | 2018 | Civil House, Ministry of Finance, Ministry of Planning, Ministry of Transparency and Comptroller General | Public Policies Evaluation: practical guide for ex ante analysis |
| | 2017 | Ministry of Social Development | How to promote impact evaluation in social programs |
| | 2015 | Ministry of Transparency and Comptroller General | Methodology Manual for Evaluating Government Programs Execution |
| | 2014 | Federal Court of Auditors | Referential for Governance Evaluation in Public Policies |
| Colombia | 2018 | Departamento Administrativo de Planeación Nacional | Guide for the evaluation of public policies |
| Costa Rica | 2018 | Ministry of National Planning and Economic Policy | Guide for the use of evaluations: guidelines for implementation and follow-up on recommendations |
| | 2017 | Ministry of National Planning and Economic Policy | Manual of evaluation for public interventions |
| | 2017 | Ministry of National Planning and Economic Policy | Guide on the approach of gender equality and human rights in evaluation: guidelines for its incorporation into the evaluation process |
| | 2017 | Ministry of National Planning and Economic Policy | Guide of evaluability: methodological guidelines for the evaluability of public interventions |
| Kazakhstan | 2017 | Ministry of National Economy | State planning system |

Source: OECD Survey on Policy Evaluation (2018).

Notes

1. The centre of government is defined as the administrative structure that serves the Executive (President or Prime Minister, and the Cabinet collectively). For further information about the CoG, see the sub-section Institutions within the Executive.

2. Inspektionen för socialförsäkringen (The Swedish Social Insurance Inspectorate), Kulturanalys (The Swedish Agency for Cultural Policy Analysis), Tillväxtanalys (The Swedish Agency for Growth Policy Analysis), Trafikanalys (Transport Analysis), Vårdanalys (The Swedish Agency for Health and Care Services Analysis), Brå (The Swedish National Council for Crime Prevention) and IFAU (Institute for Evaluation of Labour Market and Education Policy).

3. See the Federal Auditor's report on the capacity of federal public services to evaluate public policies: www.ccrek.be/Docs/2018_09_CapaciteServicesPublicsFederauxAEvaluerLesPolitiquesPubliques.pdf

4. https://www.efk.admin.ch/fr/ueber-uns/organisation/centres-de-competence/1262-fb6-f.html

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

https://doi.org/10.1787/89b1577d-en

© OECD 2020

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.