3. Monitoring the Results of the Strategic Plan of Nuevo León

Sound monitoring means that monitoring is part and parcel of the policy cycle; that it is carried out systematically and rigorously; that decision makers use its results; and that information is readily available to the public (Lázaro, 2015[1]). It offers policy makers the tools and evidence to detect policy challenges, to adapt or adjust policy implementation, as well as to communicate policy results in a timely and accessible manner.

In Nuevo León, sound monitoring can facilitate planning and operational decision-making by providing evidence to measure performance and help raise specific questions to identify implementation delays and bottlenecks. It can also strengthen accountability and public information regarding the implementation of the Plan, as information on the use of resources is measured and made public. Yet, the monitoring set-up in Nuevo León, whether for the Strategic Plan 2015-2030 or the State Development Plan, lacks clarity for its actors and legibility for citizens.

This chapter discusses the respective roles of the Council of Nuevo León and the state public administration in regard to monitoring the results of the Strategic Plan. It includes a detailed description of the current institutional set-up for monitoring the Plan, from the data collection process all the way to communication of results. It draws on comparative approaches to recommend a clarification of this set-up and a reinforcement of the state public administration’s mandate in this regard.

The chapter analyses this institutional set-up and suggests diversifying the Plan’s indicators and strengthening their robustness, in order to improve the overall quality of the monitoring exercise. The chapter adopts a forward-looking approach that recommends the development of quality assurance and quality control mechanisms to strengthen the overall credibility of the monitoring set-up.

Finally, the chapter examines how these monitoring results can provide the appropriate users with timely performance feedback, in order to improve decision-making, accountability and information.

In Nuevo León, article 18 of the Strategic Planning law defines monitoring, as well as evaluation, as “the measurement of the effectiveness and efficiency of the planning instruments and their execution” (State of Nuevo León, 2014[2]). This definition, therefore, does not provide information on the objectives of monitoring or the manner in which it should be conducted, in terms of methodology and quality attributes. As mentioned in chapter 1, this definition does not distinguish between monitoring and evaluation. This confusion is also present in article 37 of the “Reglamento de la Ley de Planeación”.

Monitoring differs from evaluation in substantive ways. The objectives of monitoring are to facilitate planning and operational decision-making by providing evidence to measure performance and help raise specific questions to identify implementation delays and bottlenecks. It can also strengthen accountability and public information, as information regarding the use of resources, efficiency of internal management processes and outputs of policy initiatives is measured and publicised. Unlike evaluation, monitoring is driven by routines and ongoing processes of data collection. Thus, it requires resources to be integrated into an organisational infrastructure. Whereas policy evaluation studies the extent to which the observed outcome can be attributed to the policy intervention, monitoring provides descriptive information and does not offer evidence to analyse and understand cause-effect links between a policy initiative and its results (OECD, 2019[3]).

A clear and comprehensive definition of monitoring would contribute to a shared understanding of its objectives and modalities among the main actors in Nuevo León. This would facilitate greater cooperation between relevant actors by not only eliminating confusion regarding the role of monitoring vis-à-vis other tools to measure government performance, but also by making stakeholders more aware of the mutual benefit of monitoring exercises (e.g. more informed decision-making processes, ongoing supply of performance indicators, etc.). Such a definition could be included in the regulatory framework.

A robust monitoring system first and foremost implies the presence of an institutional framework for monitoring that provides: (a) the legal basis to undertake monitoring; (b) macro-level guidance on when and how to carry out monitoring; and (c) clearly mandated institutional actors with allocated resources to oversee or carry out monitoring (OECD, 2019[3]).

There is a solid legal basis for monitoring the Strategic Plan 2015-2030 (SP) and the State Development Plan (SDP) in Nuevo León. Article 18 of the Strategic Planning Law mandates monitoring and evaluation to be carried out for both planning instruments (State of Nuevo León, 2014[2]). Moreover, monitoring of the SDP is also embedded in guidelines. The “General Guidelines of the Executive Power of the state of Nuevo León for the Consolidation of the Results-based Budget and Performance Assessment System” stipulate that “the agencies, entities and administrative tribunals shall send to the Secretariat, in accordance with the provisions issued by the latter, quarterly reports on the performance of the budgetary exercise and programmatic progress” (State of Nuevo Leon, 2017[4]), which have to be aligned with the objectives of the SDP (article 9 of the guidelines). However, the quality of these quarterly reports is not always high enough to help dependencies track spending decisions along budgetary programmes.

Some OECD countries have similarly adopted clear legal frameworks for performance monitoring. In the United States, for example, the Government Performance and Results Modernization Act of 2010 mandates the government to define government-wide performance goals, as well as each agency to define sectoral goals. In Canada, the Management Accountability Framework was implemented in 2003 to hold heads of departments and agencies accountable for performance management, and to continuously improve performance management (see Box 3.1).

Having a clear legal framework, in the form of a primary law accompanied by guidelines, would be useful in order to underline the importance that the state of Nuevo León attributes to this practice, within the council and across government. Nevertheless, the presence of a legal basis for monitoring alone is not enough. A robust monitoring system needs to specify the actors involved, their mandates, the timeline, and the methodology and tools for monitoring. In the case of Nuevo León, such a monitoring system would need to clarify the articulation of the monitoring of the SP and the SDP, as was done in the United States for the 2010 Government Performance and Results Modernization Act for instance (see Box 3.2).

The Strategic Planning Law assigns responsibility for monitoring the Strategic Plan and the State Development Plan to the council, with the collaboration of “the dependencies and entities of the state public administration” (State of Nuevo León, 2014[2]). However, a careful analysis of the Law and its regulations suggests that the role of the various actors and units within the centre of government is not clear in terms of their responsibilities for the monitoring of the SP and the SDP. Regarding the Strategic Plan, the council is responsible for the coordination and promotion of its monitoring, as laid out in article 18 of the Strategic Planning Law. The council is also mandated to establish criteria for the identification of indicators to monitor the SP, to monitor their progress by collecting the data and documents necessary, and to communicate the monitoring results to citizens (State of Nuevo León, 2014[2]).

However, it is unclear how the council is intended to collect this data, as it does not actually implement the policies outlined by the Strategic Plan. In practice, the centre of government (CoG) has played a facilitating role between the council and the line ministries in order to gather the aforementioned data. However, neither the CoG nor the line ministries have an explicit mandate (i.e. one included in their legal regulations) to cooperate with the council in this regard, and they do not have specific human or financial resources allocated to this task. Adapting the internal regulations of the line ministries to include data collection and analysis for the SP would greatly facilitate cooperation between these actors.

Regarding the monitoring of the State Development Plan, on the other hand, the role of the state public administration and its dependencies is explicitly laid out. The centre of government is in charge of coordinating and promoting the monitoring of the SDP (article 14.I and 14.IV of the Strategic Planning Law regulation), while the line ministries monitor the budget and programmes of their own entities and report to the CoG (article 14.V.e of the Strategic Planning Law regulation).

This practice of attributing the monitoring of strategic priorities to the CoG can also be found in different forms in many OECD countries (see Box 3.3). This monitoring can either be done by the CoG itself, as in Finland, or by units within the CoG with a special mandate to do so, as is the case for the Results and Delivery Unit in Canada.

This division of labour is also supported by the fact that certain line ministries are also explicitly mandated, through their organisational decree, to report on the implementation of the SDP. For example, the Secretary of Economy is responsible for preparing monitoring reports for the implementation of the State Development Plan (article 14.VIII of the regulations for the Ministry of Economy and Labour). There is no such provision regarding the Strategic Plan for any of the ministries.

By contrast, the role of the council regarding the monitoring of the SDP is ambiguous. Indeed, the council is also mandated (article 18 of the Strategic Planning Law) to monitor the SDP, seemingly creating a duplication of this function between the council and the CoG (see Table 3.2).

The overlaps and potential gaps in the mandates of the main actors of the monitoring system result in unnecessary complexity and a lack of incentives to collaborate. Monitoring and the resulting data collection processes often involve complex chains of command shared across stakeholders who do not always see the immediate benefits of the monitoring exercise. Clarifying mandates through formal mechanisms, such as laws or regulations, can therefore be an important tool for governments to create incentives for these stakeholders to participate in such an exercise.

In the case of Nuevo León, neither the CoG nor the line ministries have an explicit mandate to report to the council on the Strategic Plan’s programmes nor do they have the allocated resources to do so. As such, without an official delineation of the roles and the scope of the monitoring system (i.e. SP and SDP) in Nuevo León, the monitoring exercise becomes complex, potentially less effective and can generate unnecessary tensions.

It may therefore be useful to clarify the different roles of each actor by updating the regulatory framework, giving an explicit legal mandate to each of the actors and specifying their respective responsibilities (collecting data, analysing data, using data, coordinating and promoting monitoring, designing guidelines for monitoring, etc.). The internal regulations of the secretariats and their dependencies could also be updated to clarify their role in monitoring the plans. In doing so, it is important to clearly separate the coordination and promotion role of the CoG from that of data collection, analysis and use performed by line ministries, in order to avoid blurring incentives between actors.

Moreover, Nuevo León does not have macro-level guidance (a document or policy) in place on when and how to carry out monitoring. Article 19 of the law stipulates that the monitoring and evaluation report should be produced annually, but it does not elaborate on how this report ought to be prepared in practice. Creating clear obligations without proper guidance blurs the institutional framework.

More importantly, the law and its supporting regulation do not outline the existence of a monitoring set-up beyond the production of an annual report. In particular, the following elements are missing:

  • the expected frequency of data collection for each level of indicator (SP impact/outcome indicators or SDP outcome/output indicators);

  • the governance and calendar of the performance dialogue between line ministries, the CoG and the council, that is what actors meet to discuss what issues at what frequency and with what intent;

  • the criteria according to which certain issues need to be escalated to a higher level of decision-making (for instance, from line ministry level to CoG level), as well as how to follow up on decisions.

Despite the lack of a legally defined monitoring process, an informal process has been developed, both for the preparation of the annual report and for the infra-annual performance dialogue between the council and the state public administration. This process has been carried out in full once since the adoption of the Strategic Plan.

The council spearheads the monitoring report process; the commissions receive and review the relevant information from relevant ministries and generate recommendations accordingly. The CoG coordinates the relationship between different ministries and the council (and its commissions) by requesting and collecting information from the ministries on behalf of the council. The ministries respond to the council’s recommendations in an official document delivered to the legislative branch.

Figure 3.2 outlines a schematic view of this process.

Before 2018, there was also no standard procedure for preparing the monitoring report across commissions; it was only during the second evaluation that the council attempted to harmonise procedures by standardising the questionnaires sent to ministries and the cross-checking procedures.

Likewise, an informal set-up has been developed whereby the state public administration reports to the council – and to each thematic commission in particular – on an infra-annual basis. In this exercise, working methods across commissions have been very heterogeneous, with each commission deciding, more or less collaboratively with the state public administration, on the number of meetings per year, the level and type of information they seek to receive and the potential outputs of these meetings. For example, the Education sub-commission did not have any meetings in 2018.

As a result of this informal set-up, the quality of exchanges between the state public administration and the council has often rested on the existence of trusting personal relationships between the Secretaries and the members of the commissions, rather than on sound and clear working methods and mandates. Moreover, the lack of standardised tools for data collection and analysis has meant that the intended output of the monitoring sessions between the commissions and the secretaries is not only heterogeneous, but also sometimes unclear. This may lead to a lack of incentive to collaborate, especially from the line ministries who do not have an explicit mandate and allocated resources to do so.

Monitoring evidence can be used to pursue three main objectives (OECD, 2019[3]):

  • it contributes to operational decision-making, by providing evidence to measure performance and raising specific questions in order to identify implementation delays or bottlenecks;

  • it can also strengthen accountability related to the use of resources, the efficiency of internal management processes, or the outputs of a given policy initiative;

  • it contributes to transparency, providing citizens and stakeholders with information on whether the efforts carried out by the government are producing the expected results.

Each of these goals implies using monitoring data in a distinct manner. Firstly, for monitoring evidence to serve as a management tool, it must be embedded in a performance dialogue that is conducted regularly and frequently enough to allow practitioners and decision-makers to identify implementation issues, determine resource constraints and adapt their efforts and resources in order to solve them. Such an exercise is closely tied to policy implementation and management. In the case of Nuevo León, this exercise should be conducted within the state public administration, ideally between the highest levels of the centre of government and the operational levels of the dependencies.

Nuevo León has both a Strategic Plan and a State Development Plan. Given the plurality of strategic frameworks, it is important to ensure that the monitoring processes are streamlined and aligned to minimise the additional burden created by each strategy (Vági and Rimkute, 2018[11]). As a result, it is important to clarify that the state public administration should conduct this performance dialogue regarding both the SP and the SDP simultaneously. Given the more strategic nature of the Strategic Plan, its performance dialogue could be held on a biennial basis, depending on the theory of change of each strategic objective and on information availability. Two main functions can be identified in the context of this performance dialogue: a coordination and promotion function, which can naturally be conducted by the centre of government, and a data collection, analysis and use function, which can be the responsibility of the line ministries.

This performance dialogue could be centred around the action plans – as recommended in chapter 2 – to monitor the lines of action (processes) and strategies (outputs) which are common to the SP and the SDP, at the level of line ministries, on a quarterly basis1. This dialogue would create an incentive for line ministries to resolve implementation issues at the technical or sectorial level through a gradual escalation process. If the problem is still unresolved after two quarters, it could be referred to the CoG twice a year for decision. Finally, any implementation issues that would require cross-ministerial coordination and mobilisation of additional resources may be referred to cabinet meetings on a yearly basis (see Figure 3.3).
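The escalation rhythm described above can be made concrete with a minimal sketch, written in Python purely for illustration. The data structure, function name and thresholds below are assumptions used to express the quarterly, semi-annual and annual decision levels; they are not elements of the Plan’s regulations.

```python
from dataclasses import dataclass

@dataclass
class ImplementationIssue:
    description: str
    quarters_unresolved: int                     # consecutive quarters without resolution
    needs_cross_ministerial_action: bool = False
    needs_additional_resources: bool = False

def escalation_level(issue: ImplementationIssue) -> str:
    """Return the decision-making level at which the issue should be raised."""
    if issue.needs_cross_ministerial_action or issue.needs_additional_resources:
        return "cabinet meeting (annual)"
    if issue.quarters_unresolved >= 2:
        return "centre of government (semi-annual)"
    return "line ministry performance dialogue (quarterly)"

# Example: an issue unresolved for three quarters but confined to a single ministry
issue = ImplementationIssue("Delayed procurement of school equipment", quarters_unresolved=3)
print(escalation_level(issue))  # -> centre of government (semi-annual)
```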

The annual cabinet meetings may also be an opportunity to update and analyse the outcome/impact indicators included in the SP and the SDP. Indeed, it is important to link process and output indicators to the progress of outcome and impact indicators, in order to track results and not just activities (The World Bank, 2017[12]). Figure 3.4 illustrates this process over a one-year period. The next section of this chapter will focus on the relevance and quality of indicators.

As previously mentioned, monitoring evidence can also serve as a tool for remaining accountable to stakeholders in the implementation of the Strategic Plan. To this end, in preparation for the annual cabinet meeting on the SDP and the SP, the state public administration and the council could conduct a collaborative review of the plans’ indicators, which would be presented to the cabinet. This collaborative review, which could build on the four performance dialogue sessions held every year, could be conducted jointly by the state secretariats and the council’s commissions to review the data to be presented to cabinet and to give the council the opportunity to serve as a sounding board for the administration. Indeed, while attempting to interpret the data and understand implementation gaps, the administration could seek the advice of the council. This review would be an opportunity for the council to offer the state public administration different insights on implementation gaps identified during the performance dialogue. Finally, given the council’s multi-stakeholder composition, it could also offer implementation support to the cabinet where relevant and possible.

In order to carry out the performance dialogue and contribute to better accountability, the state public administration could consider uploading and updating the indicators of the SDP and the SP in a dashboard. This dashboard would include information concerning both the implementation of the action plan recommended in chapter 2 and the SP and SDP outcome/impact indicators; only the action plan indicators would be updated quarterly, while the indicators from the SDP and the SP may be updated annually or when new information is available. Monitoring dashboards are intended as management and decision-making tools, and therefore offer a performance narrative that allows for the identification and analysis of implementation gaps and potential solutions. To develop these dashboards, the state public administration could draw on a range of good practices. For instance, the current citizen councillors who belong to the private sector could share their best practices with the public servants in regard to preparing dashboards and monitoring strategies.

Moreover, the commissions may want to clarify their working methods for conducting this annual review. In particular, the council could create a guide for commissioners and for their technical secretariats in order to define the agenda for review sessions, the role of the commissions, the tools they may use, and how decisions should be taken in regard to this annual review (see chapters 4 and 5 for a more general discussion of commissions’ working methods).

Finally, monitoring evidence serves to foster transparency, in particular to communicate with citizens. The centre of government could choose to produce a communication report on the SP. This communication report, which could be updated every three years, would replace the current “hybrid” monitoring/evaluation report conducted by the council on an annual basis. It would be designed primarily as a communication tool (see the section on the use of monitoring evidence for a further discussion of this report). Moreover, it is important to communicate more regularly with citizens, notably through an up-to-date website showcasing the monitoring information of the SDP and the SP (see the section on promoting the use of monitoring for a more detailed discussion of this recommendation).

Access to robust and credible data is an essential component of building a results-based monitoring system. Data should not only be technically robust, valid, verifiable and policy-actionable, but they should also be transparent and free from undue influence (Zall, Ray and Rist, 2004[13]).

Developing performance indicators, their baseline and targets is an important stage in the strategy development process. Although article 9 of the Strategic Planning Law sets among the council’s responsibilities the establishment of criteria for the monitoring and evaluation indicators, there does not seem to be an explicit and systematic framework for their design.

Firstly, there is a lack of systematic linkage between each layer of the Plan and the indicators, making it difficult for stakeholders to monitor progress in terms of aspirations/opportunity areas/strategic lines (see Table 3.3). While most commissions have identified a set of indicators, they are not presented in a way that clearly indicates their connection with elements of the Plan (opportunity areas, strategic lines or initiatives). All indicators for a given commission or sub-commission are grouped together in a table at the end of the section related to that thematic area. Some sub-commissions2 have introduced indicators that cannot be directly or indirectly linked to either an opportunity area or a strategic line/initiative.

Moreover, not all commissions have defined indicators. For instance, four sub-commissions3 of the Human Development Commission have not defined any indicators, suggesting that there is no way for their members to monitor and evaluate the achievement of the policy goals set out in that part of the Strategic Plan. Furthermore, the Economic Development Commission is the only one that has defined indicators for both priority opportunity areas and strategic lines; 40% of opportunity areas have no indicators relevant to them.

Therefore, explicitly linking each indicator to an opportunity area (strategic objective as recommended in chapter 2), first and foremost visually in the strategy document, will be essential to clarify the monitoring structure of the Plan. This exercise should be undertaken by the council.

Such an analysis also suggests that some output indicators may have been linked to outcome-level objectives. It would be preferable to maintain, to a feasible extent, a focus on outcomes in a performance-oriented framework (see example of Scotland in Box 3.4).

Monitoring a policy, programme or project – such as the Strategic Plan 2015-2030 or the action plans – implies identifying indicators that are:

  • sound, meaning that they are methodologically robust;

  • policy actionable, meaning that they correspond to an observable variable that captures the intended direction of change and is under the remit of the actor in charge of implementation.

Firstly, the indicators in the Strategic Plan 2015-2030 are not always sound. In order for indicators to provide decision makers with information on what course of action to take in order to achieve the intended policy objectives, they should be accompanied by information that allows for their appropriate interpretation. That is why, regardless of their typology, all indicators should be presented with sufficient supporting information (a minimal illustrative sketch of such an indicator sheet follows the list below):

  • Description of the indicator: name, unit of measurement, data source and formula;

  • Responsibility for the indicator: institution, department or authority responsible for gathering the data;

  • Frequency of data collection and update of the indicator;

  • Baseline that serves as a starting point to measure progress;

  • Target or expected result.
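As a purely illustrative complement to the list above, the following sketch (in Python) shows how this background information could be organised into a single record per indicator; the field names and the example values are assumptions for illustration, not indicators taken from the Plan.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndicatorSheet:
    name: str                      # description of the indicator
    unit: str                      # unit of measurement
    data_source: str               # statistical or administrative source
    formula: str                   # how the value is computed
    responsible_entity: str        # institution or department gathering the data
    collection_frequency: str      # e.g. "quarterly", "annual", "every 5 years"
    baseline_year: int             # starting point to measure progress
    baseline_value: float
    target_year: int               # expected result
    target_value: float
    latest_value: Optional[float] = None  # most recent observation, if available

# Hypothetical, fully specified indicator sheet
example = IndicatorSheet(
    name="Share of households with access to improved water services",
    unit="% of households",
    data_source="State administrative records (hypothetical)",
    formula="households with improved access / total households * 100",
    responsible_entity="Line ministry reporting unit (hypothetical)",
    collection_frequency="annual",
    baseline_year=2015, baseline_value=88.0,
    target_year=2030, target_value=98.0,
)
```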

For the most part, the indicators in the Strategic Plan are explicitly stated, as reflected by the fact that the commissions have chosen to present the indicators in a table, often including information on the data source, the baseline and the target values. Nevertheless, some indicators do not have a baseline or an identified data source. The indicators also do not make explicit the institution or person responsible for collecting the data and updating the indicator.

Moreover, the indicators chosen for the Strategic Plan do not all fit the RACER approach (see Box 3.5 for an explanation of these criteria).

For instance, not all indicators are from public or reliable sources. The Strategic Plan also contains indicators that are largely based on international and national statistical data (see Box 3.6 for a detailed description of types of monitoring data). Yet, this type of data is best suited to long-term impact and context indicators, for which the underlying changes are less certain than for outcome indicators (DG NEAR, 2016[15]). Administrative data and data collected as part of an intervention’s implementation are usually better suited for process, output and intermediate outcome indicators, such as those found in the State Development Plan.

Likewise, the frequency at which some indicators can be collected is not compatible with the yearly monitoring mandate of the council. This is the case for the sub-commission on Social Development, which proposes an indicator for “Mujeres en situación de violencia de pareja” using data from the “Encuesta Nacional sobre la Dinámica de las Relaciones en los Hogares” (ENDIREH, INEGI), which is only collected every five years.4 As a result, 23% of indicators could not be collected in 2018.

Moreover, the current indicators identified for the Strategic Plan are not sufficiently policy actionable, meaning that they do not provide relevant and timely information for decision-making related to the implementation of the Plan. Some indicators in the Plan measure actions that fall outside the responsibilities and control of the executive, meaning that they are not actionable or relevant. Indeed, if the Strategic Plan is to be implemented by the executive, its specific actions should be under the remit of this branch of government. Yet, the commission on Security and Justice included opportunity areas and indicators (e.g. “Duración de casos no judicializados”) that fall under the responsibility of the judicial branch.

Similarly, some indicators proposed in the Plan do not have a state-level scope. The sub-commission on health, for example, uses OECD health data for its indicators, which are collected at a national level and therefore cannot directly reflect Nuevo León’s efforts in this area. Nevertheless, in some cases, proxy indicators may be of use when state-level indicators do not exist. This is the case, for instance, for metropolitan area indicators, which cover 90% of Nuevo León’s population.

The Council of Nuevo León could consider identifying indicators for the Strategic Plan that are a mix of impact and context indicators that are available at state level and calculated on the basis of international and national statistical data – for instance in collaboration with INEGI – as well as outcome indicators calculated on the basis of administrative data or even ad hoc perception survey data. For example, these indicators could be taken from the national survey on government impact and quality prepared by INEGI every other year (INEGI, 2019[19]) (see Box 3.7). Indeed, while output indicators should not be used to measure outcomes and impacts, context and impact indicators say little about the policy levers to achieve the long-term goals associated with the strategic objectives. Instead, identifying intermediate outcomes, which respond fairly rapidly to changes in policy systems but can be linked to longer-term objectives, may be beneficial in monitoring the Plan (Acquah, Lisek and Jacobzone, 2019[20]).

The council could consider applying the RACER approach in order to assess the extent to which each indicator can support decision-making5. This assessment should involve representatives of the different stakeholders who, during implementation, are responsible for collecting, analysing and reporting on data (DG NEAR, 2016[15]). Each indicator, qualitative as well as quantitative, must correspond to an existing source, be it a statistical or an administrative one.
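As an illustration of how such an assessment could be made systematic, the sketch below screens an indicator against the RACER criteria, commonly expanded as Relevant, Accepted, Credible, Easy to monitor and Robust (Box 3.5 gives the definition used in this report). The checklist wording and the example answers are assumptions, not an official screening tool.

```python
# The criteria wording below paraphrases the common RACER definition and is
# not quoted from Box 3.5.
RACER_CRITERIA = {
    "Relevant": "closely linked to the objective it is meant to track",
    "Accepted": "by the staff and stakeholders who collect and use it",
    "Credible": "unambiguous and easy to interpret for non-experts",
    "Easy": "data can be collected at reasonable cost and frequency",
    "Robust": "methodologically sound and not easily manipulated",
}

def racer_screen(assessment: dict) -> list:
    """Return the list of RACER criteria that an indicator fails to meet."""
    return [name for name in RACER_CRITERIA if not assessment.get(name, False)]

# Hypothetical assessment: an indicator collected only every five years
# fails the "Easy" criterion for an annual monitoring cycle.
assessment = {"Relevant": True, "Accepted": True, "Credible": True,
              "Easy": False, "Robust": True}
print(racer_screen(assessment))  # -> ['Easy']: replace or complement the indicator
```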

Furthermore, in order to monitor the action plans, the state public administration could consider identifying process indicators drawn mostly from administrative sources. Indeed, for this type of monitoring, authorities should aim for regular, ideally automated, flows of information at minimal administrative cost. The state public administration could consider submitting these indicators for review and comment by the Council of Nuevo León, in order to benefit from the opinion and expertise of a wider range of stakeholders.

Although crucial to monitoring, collecting data is not sufficient in and of itself; in order to support decision-making and serve as a communication tool to the public, data needs to be analysed (Zall, Ray and Rist, 2004[13]). Indeed, the analysis of the data which serves as the backbone of the monitoring exercise needs to be tailored to the user, focused and relevant (OECD, 2018[22]).

In fact, it is important to identify the types of use for the monitoring information (demonstrate, convince, show accountability, involve stakeholders, etc.). The analysis of data cannot be targeted efficiently without a clear definition of its audience(s) and the types of questions that audience is likely to raise about findings. There are typically two main types of users of monitoring and reporting information:

  • External users: citizens, media, NGOs, professional bodies, practitioners, academia, financial donors, etc. They usually seek user-friendly information, provided using a simple structure and language, concise text and as much visual presentation as possible.

  • Internal users: congress, government, ministers, managers and operational staff, and in Nuevo León, the council. They are more interested in strategic information related to overall progress against objectives and targets, will seek to understand the challenges encountered in implementation, and will require indications of the action needed in response to data findings. Furthermore, the higher the level of decision-making, the more aggregated and outcome-oriented the information should be (OECD, 2018[22]).

Currently, these types of users are not clearly differentiated in the annual monitoring report, as the report is meant to serve both as a transparency tool to communicate with external users (it is published on the website) and as an accountability and decision-making tool for internal users (for instance, the report is to be sent to Congress).

Clearly differentiating between the three monitoring exercises (performance dialogue, the joint review, and the annual report on the SDP) should allow for more fit-for-purpose analyses in each situation. The following section on promoting the use of monitoring evidence will discuss in further detail how to best communicate monitoring information to external users.

While the performance dialogue and the joint review are exercises aimed at an internal audience, they still need to present monitoring evidence in a manner that supports decision-making (i.e. by focusing on strategic implementation challenges, explaining them through analysis and proposing solutions). A robust monitoring dashboard could include the department or agency in charge of updating it, the frequency at which the relevant indicator(s) should be updated, as well as the analysis and action plan (or proposed solutions) where implementation gaps are identified. Finally, the dashboard could identify whether or not the issue must be brought to a higher level of decision-making (see Figure 3.3 for the proposed performance dialogue process). Figure 3.5 showcases an example of such a dashboard.
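To make the structure of such a dashboard entry concrete, the sketch below (in Python, purely illustrative) models one row with the elements mentioned above: the owner responsible for updates, the update frequency, the analysis, proposed actions and an escalation flag. The field names and example content are assumptions rather than a prescribed template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DashboardEntry:
    indicator: str
    owner: str                     # department or agency in charge of updating the entry
    update_frequency: str          # e.g. "quarterly" for action plan indicators
    latest_value: float
    target: float
    analysis: str                  # short narrative explaining any implementation gap
    proposed_actions: List[str] = field(default_factory=list)
    escalate: bool = False         # whether the issue should go to a higher decision level

# Hypothetical entry flagged for discussion at the centre-of-government level
entry = DashboardEntry(
    indicator="Secondary school completion rate (%)",
    owner="Secretariat of Education reporting unit (hypothetical)",
    update_frequency="quarterly",
    latest_value=78.0,
    target=85.0,
    analysis="Progress has stalled for two consecutive quarters.",
    proposed_actions=["Reallocate tutoring budget", "Review dropout-prevention programme"],
    escalate=True,
)
print(f"Gap to target: {entry.target - entry.latest_value:.1f} percentage points")
```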

More generally, the information included in the dashboards should be analysed and presented in a way that is (OECD, 2018[22]):

  • User-friendly, in that the information is provided using a simple structure and language, with visual information;

  • Focused, in that the performance information provided is limited to the most important aspects, such as objectives, targets and activities, and is linked to the achievement of results (outcomes/impacts). It is therefore important that reports on the implementation of the action plan include both strategic (outcome/impact) and operational (process/output) information, even if the main object of the meeting is the monitoring of the action plan.

  • Relevant, in that only the points requiring decisions or encountering implementation bottlenecks should be discussed, in order to avoid overloading the reader with non-critical information. Only information that helps decision-makers manage the implementation of the strategy and key policy decisions should be provided. Where possible, the reports should include explanations about poor outcomes and identify steps taken or planned to resolve problems.

Quality means credible, timely and relevant data and analysis. There are three main ways to ensure that data and analysis are of quality: ensuring capacity, developing quality assurance and implementing quality control.

Establishing a monitoring set-up that can produce credible, timely and relevant data and analysis requires skills and capacities, which can be defined as “the totality of the strengths and resources available within the machinery of government. It refers to the organisational, structural and technical systems, as well as individual competencies that create and implement policies in response to the needs of the public, consistent with political direction” (OECD, 2008[23]). In particular, two types of skills are crucial:

  • Analytical skills to identify indicators, review and analyse data and formulate judgements or conclusions (Zall, Ray and Rist, 2004[13]).

  • Communication skills to structure monitoring reports and dashboards in a way that is attractive to the end user (OECD, 2018[22]).

In order to have the necessary skills and capacities, monitoring set-ups require a critical mass of technically trained staff and managers, such as staff trained in data analytics, data science, communication and at least basic information technology (Zall, Ray and Rist, 2004[13]). All of these elements rest on the availability of dedicated resources (human and financial) for the monitoring function. Such dedicated resources have, in a number of relevant cases, been organised as “delivery units” or “policy implementation units”, often located at the centre of government. Yet, the state public administration does not currently have such resources for monitoring the Strategic Plan.

A first step in strengthening the capacities of the state public administration may be the creation of dedicated reporting units in each of the line ministries, which would be in charge of collecting and analysing the data in the context of the performance dialogue. Moreover, the centre of government may wish to strengthen its analysis and communication skills with a dedicated delivery unit, in order to provide methodological support and direction to the line ministries in preparing impactful monitoring reports.

Currently, the data collected for the monitoring report are subject to an audit by a private accounting organisation. This form of quality control (i.e. looking at the end product) serves to provide assurance that the data used for monitoring are valid and reliable, that is to say that the data collection system is consistent across time and space (reliability) and that it measures actual and intended performance levels (validity).

However, quality assurance (looking at the process) is also important. Some countries have developed mechanisms to ensure that monitoring is properly conducted, that is to say that the process of collecting and analysing respects certain quality criteria. In order to do so, countries have developed guidelines, which serve to impose a certain uniformity in this process (Zall, Ray and Rist, 2004[13]). Box 3.8 explores in more detail the difference between quality assurance and quality control.

In Nuevo León, two types of guidelines may be of use. The state public administration could issue guidelines in order to clarify the working methods and tools that will support the performance dialogue to be conducted. In particular, these guidelines could specify quality assurance processes in order to strengthen the quality of the data that are collected in the context of this monitoring exercise, to be applied by every line ministry. Similarly, these guidelines may seek to clarify the criteria for escalating issues from the line ministry level to the CoG level, in order to harmonise this process across government.

Using a system to measure results in terms of performance and delivery is the main purpose of building a monitoring set-up. In other words, producing monitoring results serves no purpose if this information does not reach the appropriate users in a timely fashion, so that the performance feedback can be used to improve public decision-making, accountability and citizen information.

Greater publicity of monitoring results can increase the pressure on decision-makers for implementation and for a more systematic follow-up of recommendations, while providing accountability to citizens concerning the impact of public policies and the use of public funds.

The monitoring report for the Strategic Plan is made public every year on the council’s online platform (www.conl.mx/evaluacion). Similarly, the government report on the State Development Plan can be found on the government of Nuevo León website, which constitutes a first step in providing information to the public. However, the current monitoring report is not user-friendly due to its length. The report includes an executive summary, which is a useful tool for diffusing key messages to citizens. Nevertheless, the executive summary is over ten pages long, separates the main findings from their associated recommendations and does not describe the major issues encountered in the implementation of the Plan. A shorter and punchier executive summary may be more impactful in reaching stakeholders and the general public alike. Moreover, unlike the annual “Informe de gobierno”, the monitoring report is not very visual.

Overall, the format and timeline of the communication report on the advancement of the Strategic Plan could be revised to better fit its primary intended use: to reach a wider audience and inform a variety of stakeholders about the progress of the implementation of the Plan, based on tangible and possibly outcome indicators. To that end, the centre of government could choose to produce a communication report on the Strategic Plan, which could be updated at mid-term and at the end of every gubernatorial mandate, i.e. every three years. This report would replace the current monitoring/evaluation report conducted by the council and would be designed primarily as a communication tool, so it should have a clear summary and be accompanied by clear messages for the press.

The council has also created an online platform, Avanza NL, which is currently under construction and seeks to provide up-to-date information on the evaluation of the Strategic Plan’s indicators. In order to ensure timely updates and minimal use of extra resources, the platform could be directly updated by the state public administration. The platform could focus on the indicators from the Strategic Plan and the State Development Plan in order to ensure more focused communication of the objectives that concern and affect citizens. Importantly, the indicators defined for the State Development Plan should be aligned with those of the Strategic Plan and reflect the underlying theory of change. This could be displayed on platforms similar to those created by other Mexican states, such as MIDE Jalisco (see Box 3.9). Avanza NL should thereby seek to remain as simple to use and up to date as possible.

Besides the communication report and the Avanza NL platform, the council and the state public administration could develop other communication tools in order to reach a wide audience. These could include social media strategies, newsletters focusing on council actions and editorials in local newspapers (The World Bank, 2017[12]), such as the newsletter, social media outlets and editorial column that the council already has on the local media outlet Verificado. To this end, it could be useful for the council to allocate a portion of its yearly budget to communication.

Another way to increase public engagement in the Plan and its monitoring would be to collect periodic feedback on the Strategic Plan and its implementation through a survey of the general population, business leaders and civil society organisations. This survey could be conducted in collaboration with INEGI, which has experience in conducting opinion surveys on similar topics through the National Survey on Government Quality and Impact (INEGI, 2019[19]). It may also be useful to provide incentives to some NGOs to ensure that the results are published.

While preparing fit-for-purpose monitoring analysis is important, as it gives users quick and easy access to clear monitoring results, such analyses do not systematically translate into better uptake of findings in decision-making. In fact, it is pivotal that monitoring evidence be presented in a way that is compelling to its audience. Monitoring dashboards should include a narrative on performance, interpreting and using the results (The World Bank, 2017[12]) to understand implementation gaps and propose corrective policy action in a way that creates a coherent and impactful narrative. In order to create such a narrative, it may be necessary to filter the relevant data and to focus the information presented on the most pressing bottlenecks or the reforms with the biggest potential impact. Key messages, takeaways and suggested courses of action should accompany any raw data (i.e. indicators) (Zall, Ray and Rist, 2004[13]).

Moreover, involving stakeholders in the design of the monitoring set-up can increase the legitimacy of the resulting evidence and ultimately lead to greater impact in decision-making. In fact, cross-country evidence shows that it is important to create ownership and manage change in both the public and private sectors when seeking to implement such an ambitious transformation programme (The World Bank, 2017[12]). Involving internal and external stakeholders in the definition of indicators may improve their quality, as stakeholders may sometimes be better placed to identify which dimensions of change should garner the most attention (DG NEAR, 2016[15]), in other words, which output indicators best measure the progression of the causal chain (outcome/impact).

The council itself is meant to involve stakeholders. Yet, the oversight of the council in the definition of the Plan’s indicators could have been stronger6. Revising the Plan could be an opportunity for the council to revise the indicators, in order to create buy-in among the implementers, within and outside the administration (through the composition of the council itself). Similarly, the state public administration (each ministry or entity) could consider asking the council for advice when selecting the indicators for the implementation action plan, in order to ensure that there is a common vision for how to operationalise the Strategic Plan and the State Development Plan.

Formal organisations and institutional mechanisms constitute a sound foundation for use of evidence in policy and decision-making (Results for America, 2017[18]). Mechanisms that enable the creation of feedback loops between monitoring and implementation of policies can be incorporated either:

  • in the monitoring process itself, such as through the performance cycle (whereby performance evidence is discussed either at the level of the individual line ministry or at the CoG);

  • through the incorporation of performance findings into other processes, for instance the policy-making cycle, the annual performance assessment of senior public sector executives, the budget cycle or discussions in congress.

As discussed in the section clarifying the monitoring set-up, the creation of a performance dialogue could allow practitioners and decision-makers to use monitoring evidence to identify implementation issues and constraints, and adapt their efforts and resources in order to solve them. In particular, linking the strategic objectives with individual performance objectives is key to creating incentives for results in the state public administration, particularly at the level of senior public sector executives and leadership. There is a need to ensure the participation of government officials, such as heads of agencies or departments, and to ensure that their organisations contribute to the achievement of high-priority cross-government outcomes such as the Strategic Plan. In Chile, for example, both collective and individual incentives have been used in order to promote public sector performance in line with strategic objectives (see Box 3.10 for a more detailed explanation of this system).

Other uses of monitoring results also include using the data for evaluations, supporting strategic and long-term planning efforts by providing baseline information, and communicating to the public (see discussion above). Moreover, monitoring information produced through the performance dialogue could be used to feed into the budget cycle. Given that the performance dialogue should be linked to the monitoring of budget programmes (see the previous section on clarifying the monitoring set-up and how these are linked through the action plan), the evidence produced through this exercise could provide useful information to the congress about the efficiency and effectiveness of budgetary spending through spending reviews. According to data from the budgeting and public expenditures survey (OECD, 2007[30]), spending reviews are a widely used tool in OECD countries as part of the budget cycle, which can be informed by monitoring data (Robinson, 2014[31]; Smismans, 2015[32]; The World Bank, 2017[12]). Finally, monitoring evidence may be used in the budgetary cycle through performance budgeting practices, as can be seen in Box 3.3.

  • Adopt a comprehensive definition of monitoring to establish a shared understanding of its objectives and modalities within the public sector. Such a definition could read as follows:

    • Monitoring is a routine process of data collection and analysis of the data to identify gaps in the implementation of a public intervention. The objectives of monitoring are to facilitate planning and operational decision-making by providing evidence to measure performance and help raise specific questions to identify implementation delays and bottlenecks. It can also strengthen accountability and public information, as information regarding the use of resources, efficiency of internal management processes and outputs of policy initiatives is measured and publicised.

  • Clarify the roles of the key actors, including the council and the state public administration.

    • This requires updating the Strategic Planning Law and its regulation, and giving an explicit legal mandate to each of the actors, to clarify their respective responsibilities: collecting data, analysing data, using data, coordinating and promoting monitoring, designing guidelines for monitoring, etc.

    • Update the internal regulations of the secretariats and their entities in order to clarify their role in monitoring the Plans.

  • Clarify, for example through guidelines, the different monitoring set-ups – including the actors involved, the timeline, and the methodology and tools for monitoring – for each of the main objectives pursued by monitoring:

    • (i) operational decision-making;

    • (ii) accountability related to the use of resources, the efficiency of internal management processes, or the outputs of a given policy initiative;

    • (iii) transparency, through information provided to citizens and stakeholders on whether the efforts carried out by the government are producing the expected results.

  • Set up a performance dialogue within the state public administration in order to improve operational decision-making at the level of line ministries and of the centre of government (CoG), regarding both the SP and the SDP simultaneously.

    • Centre this performance dialogue around the action plan to monitor the lines of action (processes) and strategies (outputs) which are common to the SP and the SDP, at the level of line ministries, on a quarterly basis (thus at the same time as the line ministries monitor their budgetary programmes).

    • Identify common criteria for escalating an issue to the CoG (executive office of the governor) if identified implementation problems have not been resolved after two quarters.

    • Raise implementation issues that would require cross-ministerial coordination and mobilisation of additional resources at the cabinet meetings on an annual basis.

  • Conduct an annual joint review of the SDP and SP objectives between the council’s thematic commissions and the secretariats. Have each secretary present a preliminary version of the data and analysis to be presented to the cabinet meetings for this review, while the commissioners provide potential insights on why some indicators have progressed or not, and offer support in implementation where relevant and possible.

  • Produce a communication leaflet on the SP. This communication leaflet, which could be updated every three years by the state public administration, would replace the current monitoring/ evaluation report conducted by the council and could be designed first and foremost as a communication tool.

  • Harmonise the tools and working methods used by the state public administration and the council in order to monitor the SP and the SDP:

    • Design a dashboard for the performance dialogue, which includes information concerning both the implementation of the action plan and the SP and SDP outcome/impact indicators (see Figure 3.5 for an example of a dashboard). To develop these dashboards, the state public administration could draw on a range of good practices. For instance, the current citizen councillors who belong to the private sector could share their best practices with the public servants in regard to preparing dashboards and monitoring strategies.

    • Elaborate guidelines for the commissions and their technical secretariats on how to conduct the annual performance review, in order to clarify the agenda of review sessions, the role of the commissions, the tools they may use, and how decisions should be taken.

  • Clarify the coherence between the indicators and the Strategic Plan’s layers.

    • Explicitly link each prioritised opportunity area to at least one outcome/impact-level indicator. A single opportunity area could be associated with several indicators as long as the latter are linked to each other following a causal logic.

    • Diversify the type of indicators used to measure the achievement of priority/opportunity areas, in order to capture all core aspects of each policy objective.

    • Consider including, beside each prioritised opportunity area, the relevant output-level indicators (and their long-term targets) from the State Development Plan to clarify the causal logic of the indicators.

  • Strengthen the robustness of the Plan’s indicators.

    • Consider identifying policy-actionable indicators, and impact and context indicators calculated on the basis of international and national statistical data.

    • For each indicator include key background information in order to facilitate its monitoring and evaluation, including baselines and targets.

    • Assess the soundness of indicators against the RACER model and replace the indicators that do not meet these criteria.

  • Provide fit-for-purpose and user-friendly analysis in the dashboards.

  • Increase the state public administration’s capacities to monitor the Strategic Plan.

    • Develop skills in the state public administration to include analytical skills as well as communication skills.

    • Dedicate specific resources to monitoring in the state public administration, in order to have a critical mass of technically trained staff and managers, and the appropriate IT tools.

    • Create dedicated reporting units in each line ministry.

  • Develop quality assurance mechanisms in addition to the current quality control mechanisms in place.

    • Design guidelines (state public administration) in order to strengthen the quality of the data collection process, to be applied by every line ministry. Clarify in these guidelines the criteria for escalating issues from the line ministry level to the CoG level.

    • Design guidelines (council) in order to clarify the commissions’ working methods for the yearly monitoring review exercise.

  • Produce a communication leaflet (state public administration), every three years, in order to inform the general public.

  • Update the Avanza Nuevo León platform (state public administration) with indicators from the Plan, at regular intervals (e.g. indicators for three opportunity areas every year).

  • Promote the development of a performance narrative in the monitoring dashboards in order to promote the uptake of performance information by decision-makers.

  • Feed monitoring evidence produced through the performance dialogue into the budget-cycle.

References

[20] Acquah, D., K. Lisek and S. Jacobzone (2019), “The Role of Evidence Informed Policy Making in Delivering on Performance: Social Investment in New Zealand”, OECD Journal on Budgeting, Vol. 19/1, https://dx.doi.org/10.1787/74fa8447-en.

[15] DG NEAR (2016), Guidelines on linking planning/programming, monitoring and evaluation.

[27] Elmqvist, T. et al. (2018), Urban Planet: Knowledge Towards Sustainable Cities, Cambridge University Press.

[5] Government of Canada (2020), Treasury Board of Canada Secretariat, https://www.canada.ca/en/treasury-board-secretariat.html (accessed on 25 November 2020).

[14] Government of Scotland (2020), National Performance Framework, https://nationalperformance.gov.scot/ (accessed on 6 March 2020).

[19] INEGI (2019), “Encuesta Nacional de Calidad e Impacto Gubernamental”, https://www.inegi.org.mx/programas/encig/2019/.

[17] Innovations for Poverty Action (2016), Using Administrative Data for Monitoring and Evaluation.

[29] Irarrazaval, I. and B. Ríos (2014), “Monitoreo y Evaluación de políticas Públicas”, https://www.researchgate.net/publication/267694798_Monitoreo_y_Evaluacion_de_politicas_Publicas (accessed on 25 November 2020).

[10] La Coordinación Ejecutiva De La Administración Pública Del Estado (2018), “Reglamento Interior De La Coordinación Ejecutiva De La Administración Pública Del Estado”.

[8] La Secretaría de Economía y Trabajo (2016), “Reglamento Interior De La Secretaría De Economía Y Trabajo”.

[9] La Secretaría de Educación (2017), Reglamento Interior De La Secretaría De Educación.

[1] Lázaro (2015), Comparative study on the institutionalisation of evaluation in Europe and Latin America.

[21] OECD (2020), Improving Governance with Policy Evaluation: Lessons From Country Experiences, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/89b1577d-en.

[3] OECD (2019), Open Government in Biscay, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/e4e1a40c-en.

[33] OECD (2018), OECD Best Practices for Performance Budgeting.

[22] OECD (2018), Toolkit for the preparation, implementation, monitoring, reporting and evaluation of public administration reform and sector strategies: guidance for SIGMA partners, http://www.oecd.org/termsandconditions. (accessed on 18 June 2019).

[6] OECD (2014), Kazakhstan: Review of the Central Administration, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264224605-en.

[23] OECD (2008), Ireland: Towards an Integrated Public Service.

[30] OECD (2007), Performance budgeting in OECD countries., OECD, http://www.oecd.org/gov/budgeting/performancebudgetinginoecdcountries.htm (accessed on 28 February 2018).

[28] OECD/CAF/ECLAC (2018), Latin American Economic Outlook 2018: Rethinking Institutions for Development, OECD Publishing, Paris, https://dx.doi.org/10.1787/leo-2018-en.

[24] Picciotto, S. (2007), “Constructing compliance: Game playing, tax law, and the regulatory state”, Law and Policy, Vol. 29/1, pp. 11-30, https://doi.org/10.1111/j.1467-9930.2007.00243.x.

[18] Results for America (2017), 100+ Government Mechanisms to Advance the Use of Data and Evidence in Policymaking: A Landscape Review.

[31] Robinson, M. (2014), Connecting Evaluation and Budgeting, https://ideas.repec.org/b/wbk/wbpubs/18997.html (accessed on 12 March 2018).

[32] Smismans, S. (2015), “Policy Evaluation in the EU: The Challenges of Linking Ex Ante and Ex Post Appraisal”, Symposium on Policy Evaluation in the EU, https://doi.org/10.1017/S1867299X00004244.

[26] State of Jalisco (2020), MIDE Jalisco, https://seplan.app.jalisco.gob.mx/mide/panelCiudadano/inicio (accessed on 27 February 2020).

[4] State of Nuevo Leon (2017), General guidelines of the Executive Branch of the State of Nuevo León for the consolidation of Results-Based Budget and the Performance Evaluation System, http://sgi.nl.gob.mx/Transparencia_2015/Archivos/AC_0001_0007_00161230_000001.pdf (accessed on 6 November 2019).

[2] State of Nuevo León (2014), Ley De Planeación Estratégica Del Estado.

[7] State of Nuevo León (2014), Reglamento De La Ley De Planeación Estratégica Del Estado De Nuevo León, http://pbr-sed.nl.gob.mx/sites/default/files/reglamento_de_la_ley_de_planeacion.pdf.

[12] The World Bank (2017), Driving Performance from the Center, Malaysia’s Experience with PEMANDU.

[16] UN Global Pulse (2016), Integrating Big Data into the Monitoring and Evaluation of Development Programmes.

[11] Vági, P. and E. Rimkute (2018), “Toolkit for the preparation, implementation, monitoring, reporting and evaluation of public administration reform and sector strategies: Guidance for SIGMA partners”, SIGMA Papers, No. 57, OECD Publishing, Paris, https://dx.doi.org/10.1787/37e212e6-en.

[25] van Ooijen, C., B. Ubaldi and B. Welby (2019), “A data-driven public sector: Enabling the strategic use of data for productive, inclusive and trustworthy governance”, OECD Working Papers on Public Governance, No. 33, OECD Publishing, Paris, https://dx.doi.org/10.1787/09ab162c-en.

[13] Zall, J., K. Ray and C. Rist (2004), Ten Steps to a Results-Based Monitoring and Evaluation System, The World Bank, https://openknowledge.worldbank.org/bitstream/handle/10986/14926/296720PAPER0100steps.pdf?sequence=1 (accessed on 22 August 2019).

Notes

← 1. Thus at the same time as the line ministries monitor their budgetary programmes.

← 2. Education, Health, Social Development, Security and Justice

← 3. Art and Culture, Sports, Values and Citizen’s participation

← 4. https://www.inegi.org.mx/programas/endireh/2016/

← 5. This could also apply to the State Development Plan 2016-2021 and its implementation programmes.

← 6. They were defined by the consultancy that supported the council in defining the Plan.
