Chapter 5. Building a monitoring and evaluation framework for open government in Argentina

This chapter assesses Argentina’s efforts to monitor and evaluate open government strategies and initiatives. It argues that Argentina has made substantial progress since 2016 in promoting monitoring and evaluation (M&E) practices across government. Nevertheless, it notes that the respective M&E and open government agendas could be further aligned to improve the M&E of open government efforts. Therefore, the chapter provides recommendations on how to foster M&E of open government, considering areas such as the institutional set-up for sharing data, the design of indicators, the development of M&E capacities among public officials and the evaluation of open government initiatives. The chapter also discusses Argentina’s promotion of M&E across levels of government, with a focus on the provinces.


Introduction

The Government of Argentina’s commitment to open government calls for sound monitoring and evaluation of open government strategies and initiatives

The Government of Argentina’s (GoA) commitment to implement and mainstream open government calls for solid monitoring and evaluation (M&E) tools to support and promote operational and strategic decision-making, performance, accountability and learning. Despite Argentina’s limited policy monitoring and evaluation culture across government, a number of important ongoing initiatives and existing tools in the area of M&E are relevant for the open government agenda. The strategic use of M&E has the potential to foster the implementation, visibility and impact of the country’s open government agenda and to inspire other areas of the public sector on how to use M&E strategically.

Building upon provision 5 of the OECD Recommendation of the Council on Open Government (Box 5.1, hereafter the OECD Recommendation), this chapter assesses the GoA’s efforts to monitor and evaluate open government strategies and initiatives, and provides actionable recommendations for further improvement. More specifically, it analyses Argentina’s efforts to build an institutional framework for M&E, to promote the development of relevant indicators and to foster an M&E culture among public officials in charge of open government. In addition, it explores Argentina’s capacities to evaluate open government initiatives and to promote M&E of open government initiatives across levels of government. Given that the evaluation of open government is an embryonic area of work for the GoA – as it is for many other governments – the assessment presented here focuses primarily on monitoring, while also exploring opportunities for strengthening evaluation efforts in the future.

Box ‎5.1. Provision 5 of the OECD Recommendation of the Council on Open Government

“Develop and implement monitoring, evaluation and learning mechanisms for open government strategies and initiatives by:

  1. Identifying institutional actors to be in charge of collecting and disseminating up-to-date and reliable information and data in an open format

  2. Developing comparable indicators to measure processes, outputs, outcomes, and impact in collaboration with stakeholders

  3. Fostering a culture of monitoring, evaluation and learning among public officials by increasing their capacity to regularly conduct exercises for these purposes in collaboration with relevant stakeholders”.

Source: OECD (2017c), Recommendation of the Council on Open Government, OECD, Paris, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0438 (accessed 30 November 2018).

The benefits of monitoring and evaluating open government strategies and initiatives

Monitoring and evaluation are two different but complementary practices that contribute to better decision-making and service delivery

One of the great challenges for OECD countries in the area of open government is to move the focus of open government strategies and initiatives from process to outcomes and impact. This would allow governments to link open government with the effectiveness and quality of public governance and the delivery of public policies and services.

M&E systems are crucial to understanding the output, outcome and impact of open government reforms. Solid M&E mechanisms can help to ensure that policies are achieving the intended goals, contribute to the identification of policy design and implementation barriers, and orient policy choices by building on past experiences. M&E is instrumental to initiating changes and communicating policy results in a timely and accessible manner. M&E data can moreover serve to highlight the relevance of open government initiatives, thereby creating incentives to ensure that all public policies are designed with an open government perspective. Last but not least, by feeding into further policy design, M&E results can improve policy effectiveness and value for money (OECD, 2016). In its consideration of the overall relevance of M&E, the OECD Recommendation accords substantial importance to the monitoring and evaluation of open government strategies and initiatives.

Notwithstanding their complementarity, monitoring and evaluation are two different practices, with different dynamics and goals. Policy monitoring refers to a continuous function that uses systematic data collection on specific indicators to provide policy makers and stakeholders with information regarding the progress and achievements of an ongoing public policy initiative and/or the use of allocated funds (OECD, 2018; 2016; 2009). Monitoring contributes to planning and operational decision-making, as it provides evidence to measure performance and can help to raise specific questions in order to identify implementation delays or bottlenecks. It can also strengthen accountability related to the use of resources, the efficiency of internal management processes or the outputs of a given policy initiative (OECD, 2017).

Policy evaluation refers to the structured and objective assessment of the design, implementation and/or results of a future, ongoing or completed policy initiative. The aim is to determine the relevance and fulfilment of policy objectives, as well as to assess dimensions such as public policies’ efficiency, effectiveness, impact or sustainability. As such, policy evaluation refers to the process of determining the worth or significance of a policy (OECD, 2018; 2016; 2009). It serves three main purposes. It fosters learning by helping policy makers to understand why and how a policy was successful or not. Consequently, it contributes to strategic decision-making, by providing insights into how to improve the links between policy decisions and outcomes. Lastly, policy evaluation promotes accountability, as it provides citizens and a broad range of stakeholders – such as journalists and academics – with information on whether efforts carried out by the government, including the financial resources mobilised for them, are producing the expected results (OECD, 2017).

Therefore, while policy monitoring is descriptive and an important (but not exclusive) source of information that can be used within the context of an evaluation, policy evaluation is a different activity that seeks to analyse and understand cause-effect links between a policy intervention and its results. Table 5.1 highlights the main distinguishing traits of both functions.

Table ‎5.1. Comparing policy monitoring and policy evaluation

| Policy monitoring | Policy evaluation |
|---|---|
| Ongoing (leading to operational decision-making) | Episodic (leading to strategic decision-making) |
| Monitoring systems are generally suitable for broad issues/questions that were anticipated in the policy design | Issue-specific |
| Measures are developed and data are usually gathered through routinised processes | Measures are usually customised for each policy evaluation |
| Attribution is generally assumed | Attribution of observed outcomes is usually a key question |
| Because monitoring is ongoing, resources usually form part of the programme or organisational infrastructure | Targeted resources are needed for each policy evaluation |
| Use of the information can evolve over time to reflect changing information needs and priorities | The intended purposes of a policy evaluation are usually negotiated upfront |

Source: Adapted from McDavid, J.C. and L.R.L. Hawthorn, (2006), Program Evaluation and Performance Measurement, an Introduction to Practice, Thousand Oaks, CA, Sage.

Challenges and enablers for the monitoring and evaluation of open government strategies

Given their multidimensional and cross-cutting nature, national open government strategies and related initiatives are difficult to monitor and evaluate. Implementing open government strategies usually involves initiatives in a variety of areas, such as stakeholder participation, integrity, transparency and digital government, among others, and requires the involvement of multiple stakeholders, such as ministries, agencies and civil society organisations (CSOs) and, in some cases, different levels of government (e.g. provinces and municipalities), as in Argentina.

Notwithstanding the complexity of implementing open government strategies, the necessity of having a more concrete understanding of the dynamics and effects of open government has made M&E particularly relevant. As outlined by the OECD Recommendation (OECD, 2017c), potential enablers promoting systematic M&E include the existence of mandated institutional actors, comparable indicators and an M&E culture among public officials (including capacity and skills). Moreover, the features of the broader M&E ecosystem across the public sector can be expected to impact the capacity of a government to monitor and evaluate open government strategies and initiatives (OECD, 2017). The assessment presented in this chapter addresses the key features of provision 5 of the OECD Recommendation, while also linking it to the broader M&E ecosystem in Argentina.

The institutional framework: Identifying institutional actors to be responsible for collecting and disseminating up-to-date and reliable information and data in an open format

The SGM’s key projects on open government are monitored by the Office of the Chief of Cabinet of Ministers as part of the 100 government policy priorities.

While formal requirements for planning, monitoring and evaluating public policies in Argentina were largely absent until recently (CIPPEC, 2017), the country initiated a move towards the institutionalisation of an M&E culture in 2016 with the introduction of the State Modernisation Plan (see Chapter 2 on the Policy Framework). The roll-out of this plan is characterised by a gradual approach, starting with building planning capacities at the line ministry level, moving to monitoring policy priorities and, eventually, evaluating results.

This approach is spelled out in the 2016 State Modernisation Plan. Axis 3 of the Plan focuses on the development of a culture of management for results and public commitments, with a special emphasis on the 100 government policy priorities and 8 government priorities. The axis establishes five main objectives:

  1. Development and strengthening of planning systems, with the aim of optimising management capacities and resource allocation, according to the government’s priority goals, and under the guidance and co-ordination of the Office of the Chief of Cabinet of Ministers (JGM).

  2. Re-engineering of organisational structures, administrative processes and controls, with the aim of optimising the organisational structure and making administrative and control circuits more efficient in achieving government objectives.

  3. Strengthening of the policy goal monitoring system, with the objective of improving the quality of public services by controlling the delivery of commitments across the administration, in accordance with the JGM’s directives. This includes the development of a control dashboard, a system of measurement indicators and the development of training activities.

  4. Development of citizen and performance commitments to improve the services provided by public bodies.

  5. Quality management, consisting of promoting the development and dissemination of quality assurance and control processes and systems, to meet the needs and expectations of society.

Under the leadership of the Office of the Chief of Cabinet of Ministers (JGM), the Government Secretariat of Modernisation (SGM), besides being the body in charge of the open government agenda, is also responsible for working with line ministries to standardise planning, monitoring and evaluation. For this purpose, the government designed an integral monitoring and evaluation management system, based on a results-based management methodology, which emphasises public management planning, monitoring and control. To facilitate the implementation of the methodology, different dashboards were created that allow key data to be processed in order to keep track of government commitments. Examples include the Results Management Dashboard (Tablero de Gestión por Resultados, Gpr), the Integral Management Dashboard (Tablero de Gestión Integral) and the Strategic Monthly Report (Informe Mensual Estratégico). As part of this system, the SGM asks each ministry to develop strategic and operational plans for all priority projects, including main goals and success indicators.

Following its relocation to the JGM, the SGM reports directly to the President of the Nation on open government priorities, including open data processes and Open Government Partnership (OGP) commitments, along with two other key axes for state modernisation: public employment and transversal systems (e.g. document management systems, public procurement, etc.).

The current system monitors open government key projects (outputs) but not open government’s strategic medium and long-term goals (outcomes and impact objectives).

Key projects related to open government include the organisation of the conference Open Argentina 2018 (Argentina Abierta 2018), follow-up of the third OGP Action Plan, the establishment of the Open Government Roundtable, the creation of the fourth OGP Action Plan and the management of the Open Government Commission of the Modernisation Federal Council (COFEMOD), as well as Argentina’s recent membership of the OGP Steering Committee. The system also includes sub-activities and success indicators, which mainly measure processes. For example, in the case of the third OGP Action Plan, sub-activities mainly consist of follow-up processes, including: sending follow-up emails to public officials responsible for each OGP commitment, organising a webinar to report on progress in implementing the OGP Plan and organising a federal follow-up meeting. The success indicator for this project is the reporting of the 97 milestones of the OGP Action Plan.

Whereas this high-level monitoring mechanism is a useful tool for keeping track of the implementation of the SGM’s key projects on open government, it works mainly as an input for the JGM, which fulfils the role of a Delivery Unit primarily focused on improving project implementation and achieving the government’s main policy goals.

For the monitoring of its OGP Action Plan, the GoA also relies on the OGP’s own monitoring mechanisms: the independent reporting mechanism (IRM) and the self-assessment report (see Box 5.2).

Box ‎5.2. OGP Country self-assessment and independent reporting

Self-assessment report: During the two-year National Action Plan (NAP) cycle, governments will produce yearly self-assessment reports. In order to minimise the administrative burden, the two self-assessment reports will have similar content to one another, differing primarily in terms of the time period covered. The mid-term self-assessment should focus on the development of the NAP, the consultation process, the relevance and ambitiousness of the commitments, and progress to date. The end-of-term self-assessment should focus on the results of the reforms completed in the NAP, consultation during implementation and lessons learned. The development of the self-assessment reports must include a two-week public consultation period, as stipulated in the OGP Guidelines.

Independent reporting mechanism: The Independent Reporting Mechanism (IRM) is a key means by which all stakeholders can track OGP progress in participating countries. The IRM produces annual independent progress reports for each country participating in the Open Government Partnership. The reports assess governments on the development and implementation of OGP Action Plans, track their progress in fulfilling open government principles, and make technical recommendations for improvements. These reports are intended to stimulate dialogue and promote accountability between member governments and citizens.

Source: Open Government Partnership (n.d.a), “Self-Assessment Process”, OGP, www.opengovpartnership.org/how-it-works/self-assessment-process (accessed January 2019); OGP (n.d.b), “IRM Reports”, OGP, www.opengovpartnership.org/irm/irm-reports (accessed January 2019).

In this regard, as explained in Chapter 2 on the Policy Framework, the GoA has not yet established open government medium-term to long-term goals (outcome and impact objectives) to strategically link high-level political commitments (e.g. “open government” as part of the 100 government priorities) to short-term activities (outputs such as line ministries’ open government initiatives). Defining these strategic goals – as recommended in Chapter 2 – would also allow for more robust and efficient monitoring – and eventually evaluation – of the GoA’s open government agenda, as explained in this section.

Monitoring of open government initiatives at sector level is done primarily through OGP mechanisms.

In Argentina, the SGM’s Undersecretariat of Public Innovation and Open Government (UOG) follows up on the different ongoing open government initiatives at sector level. These initiatives focus primarily on the 44 commitments of the third OGP Action Plan (2017-2019). During implementation of the Action Plan, the government institutions responsible for each commitment must report any progress in the execution of the milestone activities. This reporting is done using Trello, an online project management tool (Figure 5.1).

Figure ‎5.1. Trello System for monitoring the open government commitments

Source: Trello (n.d.) Compromisos Transparencia, https://trello.com/b/BqqCfLNS/compromisos-transparencia (accessed 11 January 2019).

The information provided through Trello is managed internally by the SGM through a dedicated dashboard, which differs from the Results Management Dashboard (Tablero de Gestión por Resultados, Gpr). The SGM tracks progress on each OGP commitment and reports on a weekly basis to the Government Secretary of Modernisation, who is also Deputy Chief of the Cabinet of Ministers. The information is displayed in percentages, which represent the average progress made in implementing each commitment’s milestones. For example, Commitment 25 of Argentina’s OGP Action Plan aims to open the debate and build capacities on the electoral process in Argentina, and includes the following four milestones monitored through Trello:

  1. Organise one co-ordination meeting with civil society and universities to define priority and strategic issues to be addressed in a debate cycle.

  2. Develop electoral training material for young people.

  3. Organise at least six meetings to discuss electoral processes within the framework of a cycle of debates.

  4. Conduct meetings with civil society for the presentation and evaluation of electoral training material, and electoral capacity-building activities aimed at young people during 2017 (secondary schools and universities).

While the dashboard seems to be instrumental for the SGM in overseeing the implementation of the OGP Action Plan, in most cases the information collected for each milestone only allows users to ascertain whether or not it was finalised (e.g. the holding of a co-ordination meeting or the preparation of training material). Thus, for most milestones, the values used are either 0% or 100%. The dashboard also includes a brief assessment of the quality of the milestone reporting (good, regular or poor), based on the reporting guidelines provided by the SGM.

Another tool used by the SGM is the Citizen’s Dashboard, which provides information on the state modernisation projects implemented by the SGM. This tool was recently launched by the government as part of a commitment of the third OGP Action Plan, in order to inform citizens about implementation progress for these priority projects. The SGM’s goal is to replicate this tool in every ministry by 2023, monitoring at least five priority projects in each.

Box ‎5.3. Monitoring for accountability: the Citizen’s Dashboard

The citizen’s dashboard was launched in 2018 as an output of the third OGP Action Plan (2017-2019). Its goal is to improve public officials’ accountability and active transparency through the development of a tool that allows citizens to consult and analyse the SGM’s degree of progress in implementing its priority projects. The dashboard is organised around the State Modernisation Plan and includes information on five key areas: open government, public employment, digital government, digital inclusion and connectivity.

The dashboard summarises information on 20 projects including public sector training, the open data plan, the third OGP Plan and the development of a public procurement electronic system. The dashboard includes a description of each project, its expected impact, its starting and expected end dates as well as the degree of progress in implementation through process and output indicators (e.g. the number of public officials trained in comparison with an annual target).

Source: Government Secretariat of Modernisation (n.d.), Tablero Ciudadano, Buenos Aires, www.argentina.gob.ar/tablero-ciudadano (accessed 11 January 2019).

Based on the information provided by Trello and the SGM dashboard:

  • The Undersecretariat of Public Innovation and Open Government (UOG)’s team monitors the information provided by ministries and, where applicable, sends them weekly reminders about upcoming deadlines, delays or reports that need to be improved.

  • The UOG holds closed meetings with the institutions responsible for each commitment. These meetings take place every two to six months, depending on the commitment. In the later stages of the Plan, meetings can occur on a monthly basis.

  • The government also holds open meetings within the framework of the National Open Government Roundtable with representatives of CSOs. Each institution reports (via video streaming) on the progress of their commitment. In 2018, these meetings were held on a monthly basis and 19 institutions and 2 provinces publicly reported their progress.

  • The CSOs that take part in the National Open Government Roundtable also send the Roundtable quarterly reports following up on their commitments.

Finally, as mentioned above, the Government Secretary of Modernisation reports to the President on the degree of progress of a number of major commitments. Reporting takes place every two weeks on average, according to interviews conducted by the OECD.

The monitoring of line ministries’ open government initiatives is not aligned to any strategic outcome or impact objective on open government. As mentioned throughout this Review (see in particular Chapter 4 on Implementation), Argentina has made strategic use of the OGP Action Plan to achieve substantial progress in spreading awareness and building open government networks across government. Monitoring of implementation of the 44 OGP commitments (which includes almost all ministries and several decentralised institutions) has been a key factor in the identification of institutional counterparts across the administration. For instance, as explained in Chapter 4, most national line ministries now have either an office or a person in charge of open government. This collaborative process also led to the creation of the National Open Government Roundtable in 2017. The Roundtable is pivotal for the design and co-ordination of open government strategies, as well as for developing and collecting data.

In terms of M&E efforts related to open government, Argentina tends to rely primarily upon monitoring the implementation of OGP commitments, despite the fact that line ministries are implementing a variety of open government initiatives that go beyond the OGP process, as explained in Chapter 2 on the Policy Framework. The existing monitoring mechanisms developed by the GoA (outlined in Table 5.2) are able to verify whether an activity was carried out or not; however, they do not involve systematic data collection to assess performance (e.g. by tracking the resources used to implement an activity or its results).

Table ‎5.2. Government monitoring mechanisms for open government strategies and initiatives

| Mechanism | Focus | Responsible party | Type of tool | Frequency of monitoring |
|---|---|---|---|---|
| Monitoring and Evaluation Management System | Open government main strategic projects (e.g. OGP Action Plan) as part of the 100 government policy priorities | Chief of Cabinet of Ministers (JGM) | Internal management tool | Monthly |
| SGM dashboard | OGP commitments (primary) | Government Secretary of Modernisation | Internal management tool | Weekly follow-up meetings and evaluation meetings every four months |
| Trello system | OGP commitments | Government Secretary of Modernisation | Public management tool | |
| Citizen’s dashboard | SGM’s priority projects | Government Secretary of Modernisation | Public dashboard | Depends on the project |

Source: Author’s own elaboration.

The practice of systematically monitoring open government initiatives has not yet been fully expanded to the sector level. Among the ministries surveyed, 67% reported monitoring their open government initiatives; however, half of these institutions do so only through the SGM’s public mechanism (the Trello application) for following up on OGP commitments (Table 5.3). Moreover, several ministries and agencies, such as the Ministry of Transport, use their own monitoring systems, adding another layer of complexity to information sharing.

The SGM’s public follow-up mechanism (Trello) only contributes to one of the Results Management Dashboard’s key priority projects – follow-up of the third OGP Action Plan. This limited connection to the JGM’s high-level monitoring system, together with the absence of medium- and long-term whole-of-government strategic goals on open government, might weaken line ministries’ incentives to co-operate in a systematic manner. In this regard, as explained in Chapter 2 on the Policy Framework, setting high-level strategic objectives (outcome and impact goals) can align open government initiatives, thereby helping to articulate short, medium and long-term priorities and steer their implementation (OECD, 2018c).

Table ‎5.3. Monitoring mechanisms for open government initiatives

| Institution | The Ministry of Modernisation’s Control Panel for OGP commitments (Trello)* | A single office/person in charge of monitoring all the institution’s open government initiatives | An institution’s ad hoc monitoring mechanism | The usual monitoring activities of the institution | Other |
|---|---|---|---|---|---|
| INSSJP-PAMI | X | | X | | |
| Ministry of Culture | X | | | | |
| Ministry of Defence | X | | X | | |
| Ministry of Finance | X | | X | | |
| Ministry of Justice and Human Rights | X | | X | | |
| Ministry of Production | X | | | | |
| Ministry of Health | X | | | | |
| Ministry of Labour, Employment and Social Security | X | | | | |
| Ministry of Transport | | X | | | |
| Ministry of Interior, Public Works and Housing | X | | | X | |
| Secretary of Mining Policy Co-ordination | X | X | | | |
| Accounting Office of the State (SIGEN) | | X | | | |
| Government Secretariat for Environment and Sustainable Development | X | | | | |
| National Institute of Women | X | | | | |
| Ministry of Health and Social Development | X | X | | | |
| Ministry of Modernisation | X | X | X | X | |

Note: The data cover ministries that were involved in the 2nd and 3rd OGP National Action Plans.

Source: Responses to OECD (2018a), OECD Surveys on Open Government in Argentina, OECD, Paris.

Framing monitoring and evaluation provisions within a National Open Government Strategy would foster collaboration, decision-making and accountability across government.

In order to advance policy monitoring as a tool to inform planning, decision-making and accountability, the government could consider establishing specific provisions for systematic monitoring – and eventually evaluation – of its open government efforts in an integrated way, as part of a high-level strategic document on open government (see the proposed National Open Government Strategy in Chapter 2 on the Policy Framework). These provisions should include the OGP commitments, but could also go further by strategically linking the monitoring of government-wide open government outcome and impact goals – as recommended in Chapter 2 – with the different initiatives taking place at sector level. The inclusion of M&E provisions in strategic plans is a recurrent practice across OECD countries. In this regard, the “Resources and waste strategy for England”, published in 2018, could be of particular interest to the Government of Argentina (Box 5.4).

Box ‎5.4. The resources and waste strategy for England

Launched in 2018, the “Resources and waste strategy for England” aims to define how the country “will preserve our stock of material resources by minimising waste, promoting resource efficiency and moving towards a circular economy”. The strategy combines short-term commitments with long-term policy directions in line with the “UK 25 Year Environment Plan”.

Chapter 8 of the Strategy focuses on “Measuring progress: data, monitoring and evaluation”. Stating that “high-quality data, information and insights are essential for effective policy making”, the chapter sets out the government approach towards:

  • transforming gathering and reporting of data

  • monitoring progress

  • evaluating the success of policy interventions and feeding back learning into future policy development.

The strategy proposes, among other elements, an indicator framework, key strategic indicators and metrics for adoption. It also includes a draft evaluation plan, which outlines the policies to be evaluated and the likely approach to be used (theory-based, trial-based, etc.). This draft evaluation plan will form the basis of a Resources and Waste Strategy Evaluation Plan to be published in the first quarter of 2019.

Source: Government of the United Kingdom (2018), Resources and Waste Strategy for England, London, www.gov.uk/government/publications/resources-and-waste-strategy-for-england (accessed 11 January 2019).

A sound strategy should specify who is responsible for M&E. In the case of Argentina, such a strategy could provide a specific mandate to the JGM/SGM to develop an annual M&E plan for the National Open Government Strategy. The National Open Government Steering Committee, recommended in Chapter 4 on Implementation, could serve as an institutional platform to follow up and discuss progress on the strategic goals – and the different objectives – in a systematic manner. Meanwhile, the Undersecretariat of Public Innovation and Open Government (UOG) could be responsible for ensuring the monitoring of the strategy.

The government could also consider the development of specific operating principles to monitor open government initiatives, such as:

  • Standards for developing open government outcome and impact objectives, together with specific provisions and guidelines for building indicators.

  • Standards and templates for monitoring reports, including provisions on what can be published for a larger audience and what information will constitute the basis for internal discussion.

  • Decisions regarding frequency of monitoring – for instance, the National Open Government Steering Committee (recommended in Chapter 4) could discuss progress on the objectives on a quarterly basis, while the open government team could interact with the relevant stakeholders on a monthly basis.

  • Provisions for stakeholder engagement, to ensure the presence of civil society and other stakeholders in discussions on the advancement of open government projects.

These operating principles could also include similar provisions for undertaking evaluations, covering inter alia standards, templates, frequency, stakeholder engagement, evaluator profiles and the budget for evaluations.
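Purely as an illustration, operating principles of this kind could be captured in machine-readable form so that a monitoring platform can apply them consistently. The sketch below is hypothetical: every field name and value is invented for illustration and is not drawn from the Argentine framework.

```python
# Hypothetical sketch: monitoring operating principles expressed as a
# machine-readable configuration. All names and values are illustrative.
MONITORING_RULES = {
    "indicator_standards": "guidelines for outcome and impact indicators",
    "report_template": {
        "public_fields": ["objective", "progress", "next_steps"],
        "internal_fields": ["risks", "budget_execution"],
    },
    "review_frequency": {
        "steering_committee": "quarterly",
        "open_government_team": "monthly",
    },
    "stakeholder_engagement": ["civil_society", "academia", "line_ministries"],
}

def next_reviews(rules):
    """Return which actor reviews progress at which frequency."""
    return sorted(rules["review_frequency"].items())

print(next_reviews(MONITORING_RULES))
```

Encoding such rules explicitly, rather than leaving them implicit in practice, is one way a co-ordinating body could make review frequencies and reporting templates auditable.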

Furthermore, the work of the National Open Government Steering Committee in monitoring performance and results could include discussion of the results of the OGP self-assessment report and of any other relevant evaluation carried out in the area of open government.

Developing comparable indicators to measure processes, outputs, outcomes and impact in collaboration with stakeholders

The government relies solely on process and output indicators to measure its open government strategy and initiatives.

The GoA’s commitment to take important steps to implement and mainstream open government principles across government also requires the development of indicators to monitor progress. Indicators are a key input for analytical work that informs policy recommendations and policy making (OECD, 2011). However, no indicator captures the totality of any reform. A variety of indicators are employed, ranging from context indicators to impact indicators, each of which serves a different purpose (Box 5.5). In the area of public governance, input, process and output indicators usually measure activities that the public sector can control (e.g. the design and implementation of a policy), while outcome and impact indicators measure the short and long-term effects of these activities (e.g. their economic, social and political effects) (Lafortune, Gonzalez and Lonti, 2017).

Box ‎5.5. Typology of indicators

A classic typology of indicators distinguishes between the following types:

  • Context indicators, when considering the public sector as an open system, can monitor external factors such as socio-economic trends, but can also include policy measures by other governments or supranational organisations (Van Dooren, Bouckaert and Halligan, 2015). Ideally, a comprehensive M&E system should include indicators to monitor the existence and development of environmental/context factors that can influence the governance of open government strategies and initiatives.

  • Input indicators measure resources in the broad sense (i.e. human and financial resources, logistics) devoted to a particular open government strategy or initiative. In the context of the governance of open government, input indicators could include the number of staff working in the office in charge of open government or the budget allocated for a given open government initiative.

  • Process indicators refer to the link between input and output (i.e. activities that use resources and lead to an output). In the context of the governance of open government strategies and initiatives, these indicators could include the duration of the process to create an office responsible for the co-ordination of open government strategies and initiatives or the time allocated to their design.

  • Output indicators refer to the quantity, type and quality of outputs that result from the inputs allocated, and encompass operational goals or objectives. For instance, in the context of this policy area, output indicators can refer to the existence of a law on access to information or the existence of training courses for public officials on the implementation of open government principles.

  • Outcome/impact indicators refer to the (strategic) objectives of a policy intervention. In a public policy context, intended effects often relate to a target group or region, but can also relate to the internal functioning of an administration. Effects can occur or be expected with varying time gaps following the policy intervention. Regarding the difference between outcome and impact, the term “outcome” usually refers to shorter-term effects, while “impact” refers to longer-term effects. Examples in this field could include the share of public servants aware of an open government strategy or the number of citizens’ complaints against public policy decisions.

Source: OECD (2017a), “Towards Open Government Indicators: Framework for the Governance of Open Government (GOOG) Index and the Checklist for Open Government Impact Indicators” (concept note), OECD, Paris; Van Dooren, W., G. Bouckaert and J. Halligan (2015), Performance Management in the Public Sector, Routledge, London.
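To make the typology concrete, the categories in Box 5.5 can be sketched as a small data model that an indicator registry might use. This is a hypothetical illustration, not an existing government system; the example indicators are taken from the box above.

```python
# Illustrative sketch only: the typology in Box 5.5 as a simple data model,
# so that a registry of indicators could be filtered by type.
from dataclasses import dataclass
from enum import Enum

class IndicatorType(Enum):
    CONTEXT = "context"
    INPUT = "input"
    PROCESS = "process"
    OUTPUT = "output"
    OUTCOME = "outcome"
    IMPACT = "impact"

@dataclass
class Indicator:
    name: str
    type: IndicatorType

registry = [
    Indicator("Budget allocated to the open government office", IndicatorType.INPUT),
    Indicator("Existence of an access to information law", IndicatorType.OUTPUT),
    Indicator("Share of public servants aware of the strategy", IndicatorType.OUTCOME),
]

# Outcome and impact indicators measure effects rather than activities.
effects = [i.name for i in registry if i.type in (IndicatorType.OUTCOME, IndicatorType.IMPACT)]
print(effects)
```

Tagging each indicator with its type would also let a monitoring team check at a glance whether a portfolio is dominated by process and output measures, the imbalance described in the next section.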

In the case of Argentina, government-wide open government priority goals and the efforts of line ministries are monitored mainly through the application of process and output indicators. These measure, inter alia, whether a planned meeting was carried out, whether a specific regulation was issued or whether a specific platform was put in place.

Out of the 23 institutions that the OECD surveyed, 13 monitor processes, 10 monitor outputs, 8 monitor outcomes and only 1 institution confirmed that it monitors impact (Figure 5.2). The affirmative response came from the Comprehensive Medical Attention Programme (Programa de Atención Médica Integral, PAMI), a public health insurance agency dependent on the Ministry of Health and Social Development. Although almost one-third of the institutions claimed to monitor outcomes, the examples of indicators they provided relate to processes and outputs (e.g. the number of people visiting an agency webpage, the number of roundtables held as part of certain commitments). This might indicate that the distinction between process, output and outcome indicators is not always clear among practitioners.

Figure ‎5.2. Different indicators used to monitor open government initiatives by line ministry in Argentina

Source: Responses to OECD (2018a), OECD Surveys on Open Government in Argentina, OECD, Paris.
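For reference, the survey counts above translate into the following shares of the 23 surveyed institutions. This is a simple tabulation of the figures quoted in the text, nothing more:

```python
# The survey figures quoted above, expressed as shares of the 23 institutions.
surveyed = 23
using = {"process": 13, "output": 10, "outcome": 8, "impact": 1}
shares = {k: round(100 * v / surveyed, 1) for k, v in using.items()}
print(shares)  # process 56.5%, output 43.5%, outcome 34.8%, impact 4.3%
```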

While process and output indicators can be useful to measure activity progress, they cannot assess whether a policy initiative is delivering the expected results. Moreover, these indicators are useful primarily for internal management purposes, but do not offer much added value to external stakeholders such as citizens, who are interested mainly in the quality of policies and services (Lafortune et al., 2017; OECD, 2017).

Argentina could adopt a theory of change approach for the development of open government initiatives, to ensure that each initiative pursues a specific objective and includes output, outcome and impact indicators.

While acknowledging that the development of robust and relevant output, outcome and impact indicators is a complex endeavour, the GoA could implement specific initiatives to gradually work towards this goal. One such initiative is the adoption of a theory of change approach to the design of open government strategies and initiatives. A theory of change is a “description of the cascade of cause and effect leading from an intervention to its desired effects” (OECD, 2014, p. 2). As opposed to a logic model (see Figure 5.3), a theory of change not only shows the relationship between resources, activities, outputs and outcomes; it also takes into consideration environmental complexity (things that the intervention cannot control), works to highlight the different paths that might lead to change, and describes how and why a change is expected to happen. In this regard, it is used mainly to design and evaluate programmes (Bisits Bullen, 2013).

Figure ‎5.3. Example of indicators associated with an open government initiative

Source: Authors’ own elaboration.

This approach can support critical thinking regarding the design, implementation and evaluation of a programme (OECD, 2012). It is based on theoretical assumptions about why and how a desired change is expected to happen. Theories of change should also incorporate the input of practitioners and stakeholders. They can be drawn from experience, or be rooted in research/evidence obtained, for instance, from policy evaluations. Despite some limitations (e.g. they tend to omit unexpected results and/or to overestimate the effect of certain interventions; OECD, 2014), adopting such an approach can be instrumental in ensuring that each open government initiative pursues a specific objective (outcome and impact) related to the improvement of public governance and/or policy making and service delivery. Furthermore, this methodology would help Argentina promote stronger alignment between open government initiatives and broader strategic objectives, in line with the recommendation provided in section 3.4. In this regard, Canada’s initiative to create a logic model and a performance management framework for open government, despite its limitations vis-à-vis a theory of change, represents an interesting example of the efforts currently being carried out in this area (Box 5.6).

Box ‎5.6. Canada’s draft performance management framework and logic model for open government

Canada has undertaken substantial efforts to develop an open government performance management framework. Commitment 5 of Canada’s third OGP Action Plan obligates the government to “integrate performance indicators for openness and transparency into a Performance Management Framework for Open Government” (Government of Canada, 2018). To this end, Canada’s Treasury Board Secretariat worked with a risk consultancy firm (SecDev) to develop a draft logic model for open government and a proposed performance management framework with related indicators. The draft logic model was published in 2017. As can be observed below, the model distinguishes between activities, outputs and immediate, intermediate and long-term outcomes.

Figure ‎5.4. Draft logic model of Canada’s Treasury Board Secretariat

Despite the fact that the model currently lacks “a robust result chain and a coherent theory of change to explain how the gap between outputs and outcomes will be bridged” (SecDev, 2018, p. 19), it represents an important step forward in understanding the underlying theory motivating open government actions.

Source: Government of Canada (2018), End-of-Term Self-Assessment Report on Canada’s Third Biennial Plan to the Open Government Partnership 2016-2018, Ottawa, https://open.canada.ca/en/content/end-term-self-assessment-report-canadas-third-biennial-plan-open-government-partnership; SecDev (2018) Open Government Performance: Measuring Impact, Treasury Board of Canada Secretariat, Ottawa, https://open.canada.ca/ckan/en/dataset/f637580f-e0f7-5939-bf3f-ded35ce72d2a.
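The distinction drawn above between a logic model and a theory of change can be sketched in code: a theory of change makes each causal link, and the assumption on which it rests, explicit and therefore testable. The chain and assumptions below are hypothetical examples, not drawn from any country’s framework.

```python
# Hypothetical sketch of a results chain for a single open government
# initiative. The "assumption" fields illustrate what a theory of change adds
# over a plain logic model: each causal link becomes an explicit hypothesis.
results_chain = [
    {"from": "input",    "to": "activity", "assumption": "budget and staff are available on time"},
    {"from": "activity", "to": "output",   "assumption": "consultations reach affected stakeholders"},
    {"from": "output",   "to": "outcome",  "assumption": "feedback is actually incorporated into the policy"},
    {"from": "outcome",  "to": "impact",   "assumption": "improved policy quality raises trust in government"},
]

# Listing the assumptions gives evaluators explicit hypotheses to test.
hypotheses = [link["assumption"] for link in results_chain]
print(len(hypotheses))
```

An evaluation can then focus on the weakest assumption in the chain, which is typically the bridge between outputs and outcomes noted in the Canadian example.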

The GoA could create a platform to support the co-creation of robust indicators.

Ensuring the robustness and quality of indicators is a prerequisite to measuring and managing performance. Using the right indicators helps policy makers to benchmark, monitor, and evaluate progress and policies, as well as to identify bottlenecks. In order to effectively support public sector reforms, indicators should generally measure actual and observable facts, practices and implementation progress. To the extent possible, indicators should also be connected to a clear and valuable outcome and impact, which can be related to better government performance or improved quality of public services (Lafortune et al., 2017).

In line with the previous recommendation, Argentina could consider developing a platform to co-create robust open government indicators. This could be done, for instance, within the context of the implementation of the Open Government National Strategy recommended in previous chapters. This platform could include key stakeholders such as COFEMOD, provincial and municipal governments, civil society organisations, academia and key line ministries. In addition, the SGM – and in particular UOG – could help to ensure that the proposed indicators undergo a quality assurance process, by discussing them with experts in the field, such as the National Statistics Office (Instituto Nacional de Estadística y Censos, INDEC), the System of Information, Evaluation and Monitoring of Social Programmes (SIEMPRO), and specialists from civil society (CIPPEC) and academia. Box 5.7 presents some criteria to evaluate the relevance and robustness of public governance indicators.

Box ‎5.7. Toward a framework for assessing the relevance and robustness of public governance indicators

Based on the work carried out by the OECD on public governance indicators, Lafortune, Gonzalez and Lonti (2017) propose a set of criteria to evaluate the relevance and robustness of public governance indicators.

Relevance corresponds to the degree to which indicators serve a clear purpose and provide useful information that can guide public sector reforms. To be relevant, indicator sets must be:

  • Action worthy: an indicator should measure something that is important and meaningful for policy makers and society.

  • Actionable: governments should know what actions they need to take in order to improve their performance. Indicators should provide useful and informative insights on the type of reform in which countries should engage.

  • Behavioural: while measuring the existence of directives, laws and other institutional documents (e.g. an access to information law) provides some information on the legal framework in place, what matters most is whether these documents are actually implemented (output) and what outcomes/impacts they produce. The existence of an access to information law does not in itself imply better access to information for citizens or journalists.

Robustness corresponds to the statistical soundness of indicators. In this regard, the authors outline two main characteristics:

  • Validity: a valid indicator measures precisely the concept it is intended to measure.

  • Reliability: the measure should produce consistent results when repeated across populations, settings and events, and when assessed by different people at different times.

Source: Lafortune, G., S. Gonzalez and Z. Lonti (2017), “Government at a glance: A dashboard approach to indicators”, in D. Malito, G. Umbach and N. Bhuta (eds.), The Palgrave Handbook of Indicators of Global Governance, Palgrave Macmillan, Basingstoke, UK.
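As a hypothetical illustration, the relevance and robustness criteria in Box 5.7 could be applied as a simple checklist that flags weak indicators before they enter an M&E plan. The criterion keys and the example indicator below are invented for this sketch.

```python
# Illustrative only: the Lafortune, Gonzalez and Lonti criteria as a simple
# checklist that flags weak indicators before they enter an M&E plan.
CRITERIA = ["action_worthy", "actionable", "behavioural", "valid", "reliable"]

def assess(indicator_checks):
    """Return the criteria an indicator fails to meet."""
    return [c for c in CRITERIA if not indicator_checks.get(c, False)]

# A purely de jure indicator: the law exists, but implementation is unmeasured,
# so it fails the behavioural criterion described in Box 5.7.
law_on_books = {"action_worthy": True, "actionable": True,
                "behavioural": False, "valid": True, "reliable": True}
print(assess(law_on_books))  # → ['behavioural']
```

A quality assurance body such as the one recommended above could run this kind of checklist, however implemented, over every indicator proposed by line ministries.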

Fostering a culture of monitoring, evaluation and learning among public officials by increasing their capacity to conduct regular exercises in collaboration with relevant stakeholders

Argentina’s efforts to promote skills development for M&E are not connected with existing training and capacity-building activities related to open government.

Sound institutional frameworks and guidelines for monitoring and evaluating open government initiatives will not have the desired impact if public officials lack the right skills and incentives to carry out M&E activities successfully. In this regard, Argentina is making substantial progress in building public service capability on open government issues and results-oriented management.

There are two relevant actors in the GoA when it comes to building M&E skills within the public administration. First, the National Institute of Public Administration (INAP), under the SGM, conducts training sessions, designed by the UOG, on open government and results-oriented management. INAP’s mission is to carry out training sessions for all public servants with the objective of consolidating a citizen-oriented state (INAP, n.d.). In the area of open government, as explained in Chapter 4 on Implementation, INAP provides training mainly on issues related to citizen participation and service delivery.

INAP also develops training sessions on results-based management (RBM), with the SGM’s National Direction of Results-Based Management (Dirección Nacional de Gestión por Resultados, DNGpR), the body in charge of promoting RBM across the administration. The training introduces the M&E guidelines defined by the JGM, describes their components, and seeks to facilitate implementation at the central and ministerial levels. The training is organised into five components: 1) ministerial strategic planning, 2) ministerial follow-up, 3) linkages between planning and budget, 4) management tools and 5) preparation of a planning matrix. This training course is currently being re-designed by the DNGpR and INAP to integrate the training related to RBM. In addition, the DNGpR and INAP are also designing a self-organised training course (curso autogestionado) which will include an M&E component.

As discussed in other chapters of this Review, the SGM has created a Design Academy of Public Policy (Academia de Diseño de Políticas Públicas), which focuses on providing public servants (from senior management to administrative staff) with innovative tools to design and implement public policies, including the use of a theory of change. Using the OECD’s Core Skills for Public Sector Innovation (OECD, 2017b) as a starting point, the Academy focuses its technical assistance on the following areas: 1) orientation to results, 2) data literacy, 3) user centricity, 4) iteration, 5) insurgency, 6) digital, 7) curiosity and 8) storytelling. In this regard, the first four areas are particularly relevant to building an M&E culture:

  • Orientation to results. This area consists of skills for project planning and monitoring, which are oriented towards results, value for money and stronger impact.

  • Data literacy. This area comprises skills to leverage data in order to inform innovation projects at every stage of their cycle. It includes competencies to analyse and link existing datasets to bring new insights, collect new data and translate the evidence into actionable innovation.

  • User centricity. This area refers to skills that bring public employees closer to citizens and serve to ensure that services focus on responding to users’ needs. They might include ethnographic observation, outreach and communication, as well as facilitation and networking skills with user groups and stakeholders.

  • Iteration. This consists of skills to incrementally develop public policies and services. They include competencies related to experimental policy design and capacities to bring policy, implementation and evaluation skills together under a more agile approach to project management.

In the light of the government’s need to enhance the capacity of public officials to design, monitor and evaluate open government initiatives, these training sessions – despite their relevance – appear somewhat disconnected and are addressed to different audiences. Training on open government lacks an M&E component, while training on M&E is targeted to public officials in charge of reporting to the JGM on the monitoring and evaluation of government priorities (as set out in the Guía del Sistema de Gestión por Resultados).

Another instrument to support the development of capacities in the public sector is the development of guidelines and toolkits. The GoA has developed guidelines including an “Open Government Toolkit”, which focuses mostly on explaining the benefits of open government; an “Evaluation Toolkit”, which offers insights into planning, policy design and theory of change approaches; and several materials provided within the framework of the Design Academy of Public Policy. Nevertheless, similar to the training sessions, and according to the information gathered during the fact-finding mission, the available guidelines on M&E are generally not used for the design, monitoring and evaluation of open government initiatives.

The government could develop capacity-building activities on M&E of open government strategies and initiatives, in collaboration with relevant stakeholders.

Argentina could build on ongoing efforts by exploring synergies between existing (but separate) training courses on open government and M&E, with a view to developing dedicated training modules on the design, monitoring and evaluation of open government initiatives. These modules could be addressed primarily to the main open government interlocutors in line ministries and provinces, and could include the following elements:

  • design of open government initiatives using the theory of change approach recommended in the previous section

  • training for internal or external evaluators with a special focus on open government initiatives.

Argentina could also consider developing a dedicated set of guidelines for elaborating open government initiatives, building on existing tools (Open Government Toolkit, Evaluation Toolkit). The guidelines could include specific tools and provide guidance for the development of indicators for each phase of the policy cycle, as well as concrete examples. This approach would facilitate the development of process, output, outcome and impact indicators, in line with the recommendation of section 5.4. In addition, the development of outcome and impact indicators would promote ex ante analysis of the relevance of each activity.

The government could also consider specific capacity-building strategies to ensure adoption of the theory of change approach recommended in section 5.4. Experience shows that, while guidelines are relatively easy to elaborate, the main challenge is to ensure their use by policy makers. In this regard, the proposed National Open Government Strategy could mandate the open government co-ordination team to train and assist the different institutions in using the theory of change approach to develop their sector initiatives. A feasible starting point could be piloting projects with specific institutions.

In some OECD countries, civil society organisations – such as evaluation societies – and academia have played a pivotal role in promoting and supporting M&E practices (Jacob, Speer and Furubo, 2015). As observed during the OECD fact-finding mission in Argentina, demand for the development of policy monitoring and evaluation capacities (often targeted to government) coexists with an increasing supply from think tanks and academia. For example:

  • The Centre for the Implementation of Public Policies Promoting Equity and Growth (CIPPEC), which operates as a think tank, has promoted the development of policy monitoring and evaluation in Argentina, and works proactively with subnational governments. CIPPEC is currently collaborating with the city of Santa Fe on the institutionalisation of their evaluation system. The think tank also works with national government entities, such as the Presidency’s National Council for the Co-ordination of Social Policies, where it supports the development of the Annual Plan of Monitoring and Evaluation of Social Policies (described in the next section).

  • In the academic field, the University Torcuato Di Tella recently created the Centre for the Evaluation of Evidence-Based Policies (CEPE), which aims to improve the quality of policies through the provision – and evaluation – of evidence. To this end, the Centre conducts impact evaluations as well as training in policy evaluation and other practices related to public management.

The government could promote synergies between these actors, by incorporating them into capacity-building activities and the development of indicators.

The way forward: Toward the evaluation of open government initiatives

Argentina’s capacities for evaluating open government initiatives are limited due to the absence of a broader institutional framework for policy evaluation.

Assessing the outcomes and impact of policies related to open government is a relatively new area of interest among policy makers and researchers, and therefore a shared challenge across OECD countries. In Argentina, the lack of an evaluation culture across government and the absence of a broader institutional framework for policy evaluation, as the starting point for the development of a policy evaluation system (Box 5.8), have limited the government’s capacity to evaluate open government initiatives. Among OECD countries, 56% of respondents affirmed that they evaluate their open government initiatives (OECD, 2016). In Argentina, only PAMI (the Comprehensive Medical Attention Programme, a public health insurance agency) responded affirmatively out of the 20 institutions surveyed. None of the surveyed ministries stated that they follow a broader evaluation policy or have in place government-wide policy evaluation guidelines to assess their open government initiatives.

Box ‎5.8. What is a policy evaluation system? The OECD’s governance perspective on policy evaluation

A sound policy evaluation system implies that policy evaluation is part and parcel of the policy cycle, that policy evaluation is carried out rigorously and systematically, that its results are used by decision makers, and that information is readily available to the public (see Lazaro, 2015).

The OECD’s ongoing work on “the governance of policy evaluation” focuses on the institutionalisation of policy evaluation, along with measures in place to promote quality and use of policy evaluations. More specifically, internationally comparative data are analysed to assess the existence and nature of:

  • an institutional framework for policy evaluation that provides (a) the legal basis to undertake policy evaluations, (b) macro-level guidance on when and how to carry out policy evaluations, and (c) clearly mandated institutional actors with allocated resources to oversee or carry out policy evaluations

  • a policy evaluation culture, including – among others – the promotion of the quality and use of policy evaluations across government, through a skilled public service and appropriate stakeholder engagement mechanisms.

Source: OECD (2018b) OECD Survey on Policy Evaluation, unpublished; OECD (forthcoming) Policy Evaluation Report, OECD, Paris.

There is no one-size-fits-all model for setting up an institutional framework for policy evaluation. As the rationale for evaluation differs among countries, so does the nature of institutionalisation. Some countries, such as France and Switzerland, have embedded the use of evaluations in their constitutions, while others have framed evaluation as part of larger public management reforms adopted by legislation, as in the United States (the GPRA Modernization Act of 2010). Several countries have adopted policies specifically devoted to government-wide evaluation, as is the case for Korea (Framework Act on Government Performance Evaluation, 2006).

Contrary to the Latin American trend of creating centralised policy evaluation systems, Argentina never formalised a government-wide policy or legal framework on policy evaluation (CIPPEC, 2017). The current draft of the State Modernisation Law, for instance, does not include specific articles on policy evaluation. Nevertheless, the country has laws in place that outline partial aspects of a policy evaluation system, although they are disjointed and focus mainly on expenditure control (CIPPEC, 2017; Aquilino et al., 2015). As part of this set of norms and policies, the government recently launched the Monitoring and Evaluation Plan for Social Policies and Programmes (Plan Anual de Monitoreo y Evaluación de Políticas y Programas Sociales) (April 2018), which is explained in more detail in this section. This policy has the potential to become a milestone for the development of a broader policy evaluation system at the national level, similarly to Mexico’s National Council for the Evaluation of Social Development Policy (CONEVAL), which progressively expanded its role toward providing guidance for, and co-ordination of, policy evaluation across government (Box 5.9).

Regarding the institutional actors that carry out policy evaluations, the landscape in OECD countries is also quite diverse. Within the executive branch, one way to organise policy evaluation is through the creation of evaluation departments or units with competencies across government. For example, some countries have created departments or offices under the Presidency or the Prime Minister Office, as in the case of Korea, with its Government Performance Evaluation Office. In several countries, bodies under the Ministry of Finance play an important role in cross-government evaluation. This is the case for Chile and Norway. As such, these evaluation units or departments come in different shapes, depending on their mandate, financial resources and capacity. Some of them have technical, managerial and/or budgeting autonomy, while others do not.

Box ‎5.9. Examples of institutions responsible for M&E in OECD countries

Centre of Government: Finland

The Centre of Government of Finland, which consists of the Ministry of Finance, the Ministry of Justice and the Prime Minister’s Office, exercises the competences related to policy evaluation. In order to enhance the use of evidence, the government established in 2014 the Policy Analysis Unit under the Prime Minister’s Office. The unit has the mandate to commission research projects and present evidence to support the government’s decisions on future strategic and economic policy.

Autonomous Agency: Mexico

The National Council for the Evaluation of Social Development Policy (Consejo Nacional de Evaluación de la Política de Desarrollo Social, CONEVAL) was created in 2004 as a decentralised body with budgetary, technical and management autonomy. It has the mandate (embedded in the Constitution in 2014) to set standards for, and co-ordinate, the evaluation of the National Social Development Policy and its subsidiary actions, and to provide the guidelines to define, identify and measure poverty. The agency carries out or commissions evaluations of the social policies developed by the Mexican government.

Ministry of Finance: Chile

The Budgets Directorate (Dirección de Presupuestos), as a dependent body of the Ministry of Finance (Ministerio de Hacienda), is the technical body in charge of ensuring the efficient allocation and use of public funds. In order to do so, the Directorate carries out ex ante, impact and value-for-money evaluations of different governmental policies and programmes. Moreover, it monitors the implementation of government programmes to collect performance information, which is then introduced into the budgetary process and communicated to stakeholders.

Source: Knowledge Sector Initiative (2017), Global Evidence Units – Finland, Government Policy Analysis Unit, Helsinki, www.ksi-indonesia.org/file_upload/Evidence-Policy-Unit-in-Finland-the-Government-Po-14Jun2017163532.pdf; Secretaría de Desarrollo Social (2005), Decree regulating the Council of Social Development Policy Evaluation [Decree 24/08/2005], DOF, www.coneval.org.mx/quienessomos/Conocenos/Paginas/Funciones.aspx; www.dipres.cl/598/w3-channel.html.

Institutional anchorage and sources of funding, as well as accountability and reporting mechanisms, can all affect the degree of independence and influence of the body in charge of leading the promotion and use of policy evaluation (Gaarder and Briceño, 2010). In Argentina, the JGM hosts three bodies with responsibilities for policy evaluation across government:

  • The SGM, as explained at the beginning of this chapter, is responsible for developing M&E for government priorities, as well as for hosting the Policy Evaluations Bank (a public website containing evaluation reports, although these have not been updated since 2015).

  • The Undersecretariat of Budgetary Evaluation and Public Investment works with the Ministry of Finance’s Budget Office to assess budgetary performance across government.

  • The JGM’s National Council for Co-ordination of Social Policies (NCCSP) is in charge of co-ordinating the national government bodies that implement social policies.

This broader institutional framework, characterised by a limited evaluation culture across the administration, has constrained the capacity to evaluate open government strategies and initiatives on a recurring basis. However, as mentioned previously, the GoA launched an Annual Monitoring and Evaluation Plan for Social Policies and Programmes in April 2018, which has been prepared – and is being executed – by the NCCSP. The Plan is mandatory for all public sector bodies at the national level that carry out social policies, programmes, plans and projects financed with funds from the National Treasury and international organisations. In this regard, despite the lack of a government-wide evaluation policy, the Annual Monitoring and Evaluation Plan for Social Policies and Programmes can be used by the GoA to evaluate how open policy making can lead to better governance and services.

Pilot evaluations of the openness of social policies can serve to assess how open government improves policy outcomes and impacts.

Adherents to the OECD Recommendation, including Argentina, recognise that open government “is critical to building citizen trust and is a key contributor to achieving different policy outcomes”, such as public sector integrity, public sector modernisation and civic freedom, among others (OECD, 2017c, p. 1). Open government, and more specifically stakeholder participation, is also pursued under the rationale that it improves “government accountability, broadens citizens’ empowerment and influence on decisions, builds civic capacity, improves the evidence base for policy making, reduces implementation costs, and taps wider networks for innovation in policy making and service delivery” (ibid.). In this sense, one way to assess how open government contributes to better policy making and service delivery is to evaluate the openness of specific sectoral policies. For instance, an evaluation can assess if – and how – a stakeholder consultation process has affected the outcomes and impact of a policy.

As mentioned above, the GoA has recently developed an Annual Monitoring and Evaluation Plan for Social Policies and Programmes. The NCCSP, the body in charge of designing and implementing the Plan, proposes the projects to be evaluated and the JGM approves them. The Plan’s objective is to evaluate ten policies per year and send the evaluation results to the National Congress. The Council plans to evaluate both policy design and processes (in which “openness” in policy design and implementation could potentially be analysed), as well as their impact, focusing on the beneficiaries’ perspective.

This Plan gives Argentina an opportunity to explore the causal chain by which open government can lead to better policies and services. The government could, for instance, consider specifically evaluating the transparency or stakeholder participation dimensions of initiatives where interaction and consultation with stakeholders would be key to improving outcomes. An example of this is the Government’s Early Childhood Plan (Plan de Primera Infancia) and/or specific initiatives to help parents with children with disabilities. The Transparency for Development (T4D) project, which was developed by the Harvard Kennedy School in partnership with Results for Development (a global non-profit development organisation), constitutes an interesting example of a specific evaluation exploring whether well-designed transparency and accountability interventions improve health outcomes (see Box 5.10).

Box 5.10. Transparency for Development project

The research project Transparency for Development (T4D), launched by the Harvard Kennedy School in partnership with Results for Development, seeks to disentangle whether, why and in what contexts community-led transparency and accountability activities improve the outcomes of social development programmes. Working with local civil society, the project carried out an intervention in Indonesia and Tanzania. The researchers first conducted a set of surveys to collect information on health infrastructure and on newborn children and mothers. The community was then asked to discuss the information collected to identify the barriers preventing improvement in the provision of public services for mothers and newborn children, and to come up with an action plan to overcome these barriers. Following implementation of the action plan, T4D will carry out an evaluation of the impact of transparency and accountability on the intervention’s results using a randomised controlled trial (RCT) methodology. Finally, after analysing the results of the evaluation, the project will look to replicate the intervention in other areas, in order to build up a comprehensive view of different contexts.

Source: https://epod.cid.harvard.edu/project/transparency-development-t4d.

Moreover, strengthening the link between the SGM and the NCCSP – both located within the JGM – could offer an opportunity to foster the openness of the Annual Monitoring and Evaluation Plan. Incorporating open government practices into this M&E Annual Plan could be instrumental to increasing its robustness, in particular due to the important role that stakeholder engagement and reporting play in promoting the quality of M&E and the use of its results in policy making. 

Monitoring and evaluation of multi-level open government initiatives

Efforts are ongoing to strategically use M&E to improve the multi-level governance and capacities of open government at the provincial and municipal level.

Argentina is carrying out intensive efforts to spread open government to all levels of government and branches of power. This includes active engagement with provinces and municipalities within the framework of the Federal Commitment for the Modernisation of the State (see Chapter 2), the Federal Council for Modernisation and Innovation in Public Management (COFEMOD) and through the forum Argentina Abierta (see Chapter 7 on Open State). COFEMOD is the representative federal organ for matters of state modernisation. It has six technical commissions which mainly reflect the priorities of the Federal Commitment for the Modernisation of the State – training and public employment, results and quality-oriented management, open government and innovation, equality of opportunities and responsible practices, technological infrastructure and cybersecurity, and administrative modernisation.

Many interlocutors at the provincial and municipal levels have stressed the need for guidance in developing their own open government strategies. At present, only 6 of the 15 surveyed provinces monitor their open government strategies and initiatives. Moreover, the City of Buenos Aires evaluates its open government initiatives, but only through the OGP’s Independent Reporting Mechanism. Hence, in order to strengthen multi-level governance and enhance provincial capacities for the monitoring and evaluation of open government strategies and initiatives, COFEMOD agreed in 2018 on common criteria to measure progress on the Federal Modernisation Commitment. The result is a dashboard with a set of baseline indicators that enable calculation of the degree of fulfilment of commitments based on the goals that the provinces agreed in the Council. This will enable provinces to measure and compare their own performance with that of other provinces over the years. In the area of open government, the indicators are structured as follows:

  • Whether or not provinces have a data portal

  • The quality of the data (i.e. the type of datasets they publish and the publication format)

  • Whether or not they have laws or regulations on access to public information.

This type of peer benchmarking can serve as an incentive for the development of sound open government strategies and initiatives. Strategic guidance, including some indicators to measure the implementation of open government strategies, can serve as a tool to:

  • harmonise the structure and language of the different open government strategies at the provincial level, taking into consideration the provinces’ autonomy

  • promote an M&E culture at the provincial level (the existence of high-level objectives and indicators will push provinces to plan actions to achieve these objectives).

The GoA and COFEMOD should continue ongoing efforts to develop these baseline indicators. Mexico’s experience in developing an open government metric (Box 5.11) provides an interesting example of composite indicators, from the perspective of government and citizens, drawn from a single definition of open government, as recommended in Chapter 2 on the Policy Framework.

Box 5.11. Mexico’s baseline indicators on open government

Mexico’s Open Government Metrics were developed by the Centre for Research and Teaching in Economics (Centro de Investigación y Docencia Económicas, CIDE) at the initiative of the National Institute for Transparency, Access to Information and Personal Data Protection (INAI).

The metrics are designed as a baseline to measure the current state of the National System of Transparency, Access to Information and Protection of Personal Data (SNT) and its open government and transparency policies. Conceived as an “x-ray of the starting point of the open government policy of the Mexican State” at the national and subnational levels, the metrics go beyond measuring compliance with regulations and aim to capture performance information on the outcomes of open government and transparency policies from the perspective of both government and citizens.

The metrics start with an operational definition of open government structured around two dimensions: transparency and public participation. Each dimension is approached from two perspectives: government and citizens.

 

  • Government perspective

    – Transparency dimension: Does the government make public information about its decisions and actions? To what extent is this done? What is the quality of this information?

    – Public participation dimension: In what ways can citizens have an impact on public decisions?

  • Citizen perspective

    – Transparency dimension: How feasible is it for a citizen to obtain timely and relevant information in order to make decisions?

    – Public participation dimension: Can citizens activate a mechanism that allows them to influence public decisions?

The CIDE team developed an Open Government Index, consisting of measurements of transparency and participation from the perspective of both government and citizens. The construction of these indexes involved the analysis of existing regulations, a review of government websites, and user simulations, including information requests.

The Metrics survey included a sample of 908 governmental bodies at the national and subnational level; 754 portals were reviewed and 3 635 requests for information were sent. The resulting Open Government Index of Mexico was 0.39 (on a scale of 0 to 1). The index showed that the transparency dimension has a much higher value (0.50) than the participation dimension (0.28).
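The exact aggregation formula is defined in the INAI/CIDE methodological report and is not reproduced in this chapter; as a hedged illustration only, the published figures are consistent with a simple unweighted average of the two dimension scores (the function below is an illustrative sketch, not INAI’s actual methodology):

```python
def composite_index(dimension_scores):
    """Aggregate per-dimension scores (0-1 scale) into a composite index.

    Illustrative assumption: a simple unweighted mean, which reproduces
    the published figures (transparency 0.50, participation 0.28 -> 0.39).
    """
    return round(sum(dimension_scores.values()) / len(dimension_scores), 2)


mexico_2017 = {"transparency": 0.50, "participation": 0.28}
print(composite_index(mexico_2017))  # 0.39
```

Under this assumption, a province or country scoring high on transparency but low on participation is pulled toward the middle, which is exactly the pattern the 2017 Mexican results show.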

Source: INAI (2017), Resultados Edición 2017, http://eventos.inai.org.mx/metricasga/index.php/descargables (accessed 11 January 2019).

COFEMOD is also taking active measures to build M&E capacities at the provincial level. For instance, in 2018, planning and M&E training sessions were carried out by COFEMOD’s Results-Based and Quality Management Commission (Comisión de Gestión por Resultados y Calidad), with the participation of more than 50 officials from provincial governments.

However, despite the progress made in fostering co-operation with several provinces, to date COFEMOD and its Open Government Commission still lack the necessary tools to monitor the agreed commitments. According to information collected during the fact-finding mission, the Federal Council works mainly as a forum to reach political agreements on high-level issues, but still faces human resources and financial challenges to promote multi-level governance and horizontal co-operation from a technical point of view.

Argentina could continue ongoing efforts to strengthen the technical capacities of COFEMOD to promote capacity building and horizontal co-operation on M&E

The GoA could continue ongoing efforts to strengthen the technical capacities of COFEMOD, promoting its capability to provide advice to provinces and municipalities regarding the development of M&E capacities and indicators for open government. This could be operationalised through:

  • The establishment of a small technical team providing short-term assistance to provinces for the development of their M&E systems and indicators.

  • The promotion of COFEMOD as a space to promote horizontal co-operation in a systematic manner. For instance, the City of Buenos Aires is a frontrunner in monitoring strategic priorities and its experience and “know-how” could be shared with other provinces and municipalities through COFEMOD.

  • Strengthening co-operation and co-ordination between the Open Government and Innovation Commission and the Results and Quality-Oriented Management Commission, which is currently conducting capacity-building activities on M&E. Such activities can be useful for officials in charge of open government policies at the provincial level.

Furthermore, universities and CSOs could play a key role in providing technical capacities for monitoring and evaluating open government initiatives. In the case of the Province of Mendoza, for instance, the CSO Nuestra Mendoza is promoting the development of government plans and performance indicators, and monitors both. Meanwhile, the Public Policy Observatory of the University of Cuyo has developed governance performance indicators in collaboration with the Provincial Government and is planning to develop a public policy evaluators’ network.

Recommendations

Identifying institutional actors to be responsible for collecting and disseminating up-to-date and reliable information and data in an open format.

  • Consider framing monitoring and evaluation provisions within a National Open Government Strategy. Depending on its legal nature, this could provide a specific mandate to the JGM to develop an annual M&E plan for the National Open Government Strategy.

  • Link the monitoring of government-wide open government goals with the different initiatives taking place on the ground at the sector level, including the OGP commitments.

  • Use the recommended National Open Government Steering Committee as an institutional platform to follow up and discuss progress on the strategic goals – and the different objectives – in a systematic manner.

  • Develop specific operating principles to monitor open government initiatives.

Developing comparable indicators to measure processes, outputs, outcomes, and impact in collaboration with stakeholders

  • Consider adopting a theory of change approach for the development of open government initiatives.

  • Create a platform to support the co-creation of robust indicators, with the participation of key stakeholders, such as CSOs, universities and think tanks.

Fostering a culture of monitoring, evaluation and learning among public officials by increasing their capacity to conduct regular exercises in collaboration with relevant stakeholders.

  • Develop capacity-building activities on M&E of open government strategies and initiatives, in collaboration with relevant stakeholders.

  • Consider developing a dedicated set of guidelines for the development of open government initiatives, in order to facilitate the inclusion of process, output, outcome and impact indicators.

  • Consider mandating the team of the UOG in the SGM to train and assist the different institutions in using a theory of change approach in the development of sectoral initiatives. Piloting projects with specific institutions could be a feasible starting point.

  • Involve think tanks and academia with M&E expertise in the development of capacity-building activities.

The way forward: Toward the evaluation of open government initiatives

  • Consider the development of pilot evaluations on the openness of social policies to assess how open government improves policy outcomes and impacts.

Monitoring and evaluation of multi-level open government initiatives

  • Continue ongoing efforts to strengthen the technical capacities of COFEMOD, in order to promote capacity building and horizontal co-operation on the M&E of open government strategies and initiatives.

References

Aquilino, N. et al. (2015), “Hacia un análisis de evaluabilidad de planes y programas sociales: un estudio sobre 33 iniciativas implementadas en Argentina” [Towards an evaluability analysis of social plans and programmes: a study on 33 initiatives implemented in Argentina], Studia Politicae Review, 34, Universidad Católica de Córdoba.

Bisits Bullen, P. (3 April 2013), “Theory of Change vs Logical Framework – what’s the difference?”, tools4dev blog, www.tools4dev.org/resources/theory-of-change-vs-logical-framework-whats-the-difference-in-practice.

CIPPEC (2017), Do All Paths Lead to Rome? Comparative Analysis in the Institutionalization of Evaluation, Work Document, No. 159, Center for the Implementation of Public Policies for Equity and Growth, Buenos Aires, www.academia.edu/38120648/Do_All_Paths_Lead_to_Rome_Comparative_Analysis_in_the_Institutionalization_of_Evaluation.

Gaarder, M.M. and B. Briceño (2010), “Institutionalisation of government evaluation: Balancing trade-offs”, Journal of Development Effectiveness, Vol. 2/3, pp. 289-309.

Government of Canada (2018), End-of-Term Self-Assessment Report on Canada’s Third Biennial Plan to the Open Government Partnership 2016-2018, Ottawa, https://open.canada.ca/en/content/end-term-self-assessment-report-canadas-third-biennial-plan-open-government-partnership.

Government of the United Kingdom (2018), Resources and Waste Strategy for England, London, www.gov.uk/government/publications/resources-and-waste-strategy-for-england (accessed 11 January 2019).

Government Secretariat of Modernisation (n.d.), Tablero Ciudadano [Citizen Board], Buenos Aires, www.argentina.gob.ar/tablero-ciudadano (accessed 11 January 2019).

INAI (2017), Resultados Edición 2017 (database), http://eventos.inai.org.mx/metricasga/index.php/descargables (accessed 11 January 2019).

INAP (2018), Instituto Nacional de la Administración Pública [National Institute of Public Administration] (website), www.argentina.gob.ar/inap (accessed 11 January 2019).

Jacob, S., S. Speer and J-E. Furubo (2015), “The institutionalization of evaluation matters: Updating the International Atlas of Evaluation 10 years later”, Evaluation, Vol. 21/1, pp. 6-31, http://evi.sagepub.com/content/21/1/6.abstract.

Knowledge Sector Initiative (2017), Global Evidence Units – Finland, Government Policy Analysis Unit, Helsinki, www.ksi-indonesia.org/file_upload/Evidence-Policy-Unit-in-Finland-the-Government-Po-14Jun2017163532.pdf.

Lafortune, G., S. Gonzalez and Z. Lonti (2017), “Government at a glance: A dashboard approach to indicators”, in D. Malito, G. Umbach and N. Bhuta (eds.), The Palgrave Handbook of Indicators of Global Governance, Palgrave Macmillan, Basingstoke, UK.

Lazaro, B. (2015), Comparative Study on the Institutionalization of Evaluation in Europe and Latin America, Studies, No. 15, Eurosocial, Madrid, http://sia.eurosocial-ii.eu/files/docs/1456851768-E_15_ENfin.pdf.

McDavid, J.C. and L.R.L. Hawthorn (2006), Program Evaluation and Performance Measurement: An Introduction to Practice, Sage, Thousand Oaks, CA.

OECD (forthcoming), Policy Evaluation Report, OECD, Paris.

OECD (2018a), OECD Surveys on Open Government in Argentina, OECD, Paris.

OECD (2018b), OECD Survey on Policy Evaluation, OECD, Paris.

OECD (2018c), OECD Public Governance Reviews: Paraguay: Pursuing National Development through Integrated Public Governance, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264301856-en.

OECD (2017a), “Towards Open Government Indicators: Framework for the Governance of Open Government (GOOG) Index and the Checklist for Open Government Impact Indicators” (concept note), OECD, Paris.

OECD (2017b), Core Skills for Public Sector Innovation, OECD, Paris, www.oecd.org/media/oecdorg/satellitesites/opsi/contents/files/OECD_OPSI-core_skills_for_public_sector_innovation-201704.pdf.

OECD (2017c), Recommendation of the Council on Open Government, OECD, Paris, www.oecd.org/gov/Recommendation-Open-Government-Approved-Council-141217.pdf.

OECD (2016), “The monitoring and evaluation of open government strategies and practices”, in Open Government: The Global Context and the Way Forward, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264268104-en.

OECD (2014), What is Impact Assessment?, OECD, Paris, www.oecd.org/sti/inno/What-is-impact-assessment-OECDImpact.pdf.

OECD (2012), “Understanding and evaluating theories of change”, in Evaluating Peacebuilding Activities in Settings of Conflict and Fragility: Improving Learning for Results, OECD Publishing, Paris.

OECD (2011), Quality Framework and Guidelines for OECD Statistical Activities, OECD, Paris, www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=std/qfs(2011)1&doclanguage=en.

OECD (2009), “OECD DAC Glossary” in Guidelines for Project and Programme Evaluations, OECD, Paris, www.oecd.org/dac/dac-glossary.htm.

Results for Development (n.d.), “Our History” (webpage), www.r4d.org/about/our-history (accessed 11 January 2019).

SecDev (2018), Open Government Performance: Measuring Impact, Treasury Board of Canada Secretariat, Ottawa, https://open.canada.ca/ckan/en/dataset/f637580f-e0f7-5939-bf3f-ded35ce72d2a.

Secretaría de Desarrollo Social (2005), Decree regulating the National Council for the Evaluation of Social Development Policy [Decree of 24/08/2005], DOF, www.coneval.org.mx/quienessomos/Conocenos/Paginas/Funciones.aspx.

Trello (n.d.) Compromisos Transparencia [Transparency Commitments] (website), https://trello.com/b/BqqCfLNS/compromisos-transparencia (accessed 11 January 2019).

World Bank Group (2016), Open Government Impact & Outcomes: Mapping the Landscape of Ongoing Research, World Bank Open Government Global Solutions Group, Washington, DC, http://opengovimpact.org/img/og-impact-full-report.pdf.
