Chapter 6. Norway’s results, evaluation and learning

This chapter considers how Norway plans and manages for results in line with the Sustainable Development Goals (SDGs), builds evidence of what works, and uses this evidence to learn and adapt. Norway would benefit from a strengthened definition of results-based management and an expanded approach to results measurement that includes the strategic and portfolio levels. While Norway’s approach to strategic evaluations is strong, there remains room to improve the quality, methodology and analysis of decentralised evaluations. Developing a formal knowledge management system would help Norway to use results for direction and learning.


Management for development results

Peer review indicator: A results-based management system is being applied

The Norwegian government promotes results-based management as a core strategy for the management of public funds. In its development co-operation, however, Norway lacks a clear definition of results-based management. Although its approach to results may lead to greater domestic accountability and communication, it does not appear to be contributing effectively to direction and learning. A lack of systemisation, and a focus largely on the project or activity rather than the strategic or portfolio level, also limits the ability for results to drive policy formulation and strategic decision-making, and to contribute to better development outcomes by partners.

Developing a shared understanding of results-based management

As outlined in the white paper “Common Responsibility for Common Future” (Government of Norway, 2017), Norway seeks to align its development co-operation activities and objectives with the Sustainable Development Goals (SDGs). Although Norway states that it pursues results-based management as a core strategy, it has struggled to develop a shared understanding of results-based management across its aid administration, including how it might add value to delivery (Norwegian Agency for Development Co-operation [Norad], 2018a). To apply a results-based management approach, Norway would benefit from a clearly stated purpose for its results system as it relates to the strategic objectives of Norwegian development co-operation, and a strengthened commitment to learning from results (OECD, 2016a). Identifying measurable results, and the indicators against which progress will be assessed, more clearly and systematically at the portfolio and programme level, rather than only at the project level, may also help Norway to apply the results from its development co-operation more effectively to support learning and direction.

Recent revisions to the Grant Management Manual provided additional guidance for staff, clarifying the key concepts and methodologies associated with results and results management at the project level (Ministry of Foreign Affairs, 2017a). This includes the requirement since 2016 to incorporate a results framework into all projects, although this is not fully enforced in practice. Norway’s approach also gives significant responsibility for defining and reporting results to grant recipients. This may limit projects’ alignment with Norway’s national policy objectives.1 The focus on collecting data at the project level limits the capacity to aggregate data from several projects to give an indication of whether overall goals beyond individual projects are being achieved. Further, while detailed results frameworks do exist for a growing number of larger programmes and portfolios,2 such an approach is not systematically applied across programmes. The extent to which the results from these larger programmes are aggregated at the global level for learning across the whole system also varies, limiting the use of results information for learning and strategic decision-making.

Norad presents results from Norwegian aid on an annual basis and these are published online (Norad, 2017a). The focus to date on project-level results has limited Norway’s ability to explain how actions contribute to change within Norway’s partner countries, globally across its programmes and, ultimately, to the achievement of the SDGs (Zwart, 2017). The proposed creation of a new Results Portal is expected to include a combination of progress assessments and key results summaries for each project, narrative reports from specific programmes, and thematic results reports at the portfolio level. It will be important to ensure that the planned Results Portal not only enhances domestic accountability and transparency, but also helps Norway use results data to drive improvements or changes in policy decisions and, in turn, the development outcomes of Norwegian development co-operation.

Strengthening results measurement at the individual grant and project level

Greater ongoing monitoring and progress reporting against results frameworks would help strengthen results measurement. At the individual grant level, performance is assessed annually. Norad’s quality assurance department, AMOR, has programmes to develop staff capacity and review grant management within the Ministry, embassies and Norad. AMOR also provides advice to staff on results-based management issues, largely from a quality perspective and mostly at the grant agreement stage, and undertakes grant management reviews at the embassy level, as requested by the Ministry. To date these reviews have largely focused on compliance. Expanding the focus of the reviews to address the quality of the grant application itself, as signalled by a pilot in 2019, might also strengthen results measurement.

A strengthened results culture will also improve staff understanding of and commitment to results measurement, and contribute to a more systematic approach. Prioritising efforts to develop a results culture within the administration – including by clearly articulating the purpose of results systems, developing a shared understanding of results-based management and adopting a more systematic approach to results measurement at the portfolio level, particularly in terms of programme logic – would facilitate aggregation and help Norway ensure that results are used to inform organisational learning and decision-making.

Using country-led results frameworks and statistical capacity-building

Reflecting its strong commitment to technical support and capacity-building, Norway is a leader in supporting statistical capacity in developing countries. In 2014-16, it ranked among the top ten bilateral and multilateral donors for statistical capacity building (Secretariat of the Partnership in Statistics for Development in the 21st Century, 2018), and among the top five donors in providing ODA towards statistics capacity in fragile countries.3 According to the Global Partnership for Effective Development Co-operation (GPEDC) 2016 monitoring round, Norway’s use of country-led results frameworks (61.6%) was below the OECD Development Assistance Committee (DAC) average (65.1%) (GPEDC, 2016).

By backing up its strong support for capacity-building with funding, Norway could increase its use of country-led results frameworks, data and statistical systems. Given that it is scaling back budget support and that a significant proportion of project funding is channelled through civil society organisations (CSOs) and other implementing partners, Norway might also encourage its implementing partners to use country-led results frameworks. This will likely become increasingly important as Norway shifts its approach towards greater use of global instruments to deliver its bilateral aid.

Evaluation system

Peer review indicator: The evaluation system is in line with the DAC evaluation principles

Norway’s evaluation system, in line with the DAC evaluation principles, supports learning and decision-making. Maintaining the independence of the Evaluation Department will be important in the context of ongoing organisational reforms, and there remains room for Norway to improve the quality of decentralised evaluations. Norway is encouraged to increase its co-operation with partner countries to support capacity building, including through joint evaluations.

Norway’s evaluation system is in line with DAC evaluation principles

The Evaluation Department, currently situated within Norad, is responsible for initiating and implementing independent evaluations of Norwegian development co-operation, and communicating the results to the public and policy-makers. The key objective of the evaluations is to identify lessons learned for systematic use in policy development. The evaluations carried out by the Evaluation Department are guided by the DAC criteria (Norad, 2017b; OECD, 2016b) and are aligned with DAC evaluation principles.4

In accordance with instructions issued in 2015, the Evaluation Department is responsible for initiating and planning evaluations and studies covering all aspects of development co-operation, irrespective of the partner or whether Norway manages the funds (OECD, 2016b). Even though it does not have a separate budget, the Evaluation Department is governed under a separate mandate. It sets the programme of strategic evaluations on a three-year rolling basis, developing the programme in consultation with actors from within and outside the aid administration. Evaluation projects are selected on the basis of an assessment of significance, uniqueness and risk in Norwegian development co-operation, also considering the issues anticipated to be relevant during the programme period. The programme may be adjusted according to changes in needs and preconditions. The programme and the status of planned and ongoing evaluations are published online.

For all strategic evaluations, the Evaluation Department prepares terms of reference in consultation with stakeholders including the departments and embassies; external consultants typically carry out the evaluations. Quality assurance, recommendations and communication of findings are the responsibility of the Evaluation Department, which also supports the embassies and the Ministry of Foreign Affairs in strengthening evaluation methodology (OECD, 2016b).

The 2013 review encouraged Norway to ensure consistent quality across all evaluations, including those that are decentralised, to improve the aid administration’s evidence base (OECD, 2013). While Norway’s approach to strategic evaluations is strong, there remains significant room to improve the quality, methodology and analysis of decentralised evaluations. Although decentralised evaluations are widely used within the administration, their quality varies significantly, and many reviews and decentralised evaluations are found to be methodologically weak (Norad, 2017c). Strengthening the terms of reference and methods applied for these decentralised reviews will be important in order to align the quality of decentralised evaluations with that of the strategic evaluations managed by the Evaluation Department. New guidelines for developing terms of reference for decentralised evaluations from 2018, and the inclusion of an advisory function with regard to decentralised evaluations in AMOR’s mandate, signal a step in the right direction.

Administrative and budgetary independence

The Evaluation Department is currently led by the Evaluation Director, who reports to the secretaries-general of the Ministry of Foreign Affairs and the Ministry of Climate and Environment, and to the Head of Norad on administrative matters.5 There is no Evaluation Advisory Board or Committee (OECD, 2016b). As noted during the headquarters visit, the Minister of International Development is seeking to be more involved in evaluations, and has requested a summary of each evaluation, in addition to the existing annual report. In the context of ongoing organisational reforms and in line with the DAC evaluation principles, the future evaluation department will need to maintain the required technical expertise, as well as its administrative and financial independence, e.g. through an independent budget. Strengthening the requirements relating to evaluation follow-up may also enhance the impact of evaluations.

Evaluation partnerships and strengthening capacity

As Chair of the OECD/DAC Evaluation Network, Norway shows willingness to engage in international fora to strengthen the development-evaluation field; this includes the process initiated at the DAC High Level Meeting to adapt the evaluation criteria, and peer reviews of evaluation functions and practices. Since 2013 the Evaluation Department has carried out joint evaluations and studies with Sweden, Denmark, the African Development Bank, the World Bank, UNDP and the OECD/DAC Evaluation Network. It has also entered into partnerships with United Nations organisations to help build national evaluation capacity, including by developing a national tool for governments to assess and identify gaps in their evaluation capacity, providing guidance on gender-responsive evaluations, and sharing lessons learned. Norway is encouraged to increase its co-operation with partner countries to support capacity-building, including by undertaking further joint evaluations.

Institutional learning

Peer review indicator: Evaluations and appropriate knowledge management systems are used as management tools

Norway lacks a clear approach to knowledge management within and across its development co-operation system. While the appetite for learning demonstrated by staff of both the Ministry of Foreign Affairs and Norad is supported by several new policy tools and guidance, a formal approach to knowledge management is lacking. Norway may benefit from more systematic follow-up of existing instruments (including evaluation findings), and from improving its approach to managing for development results.

Room to improve learning from and using evaluation findings

Follow-up of evaluations undertaken by the Evaluation Department (strategic evaluations) is the responsibility of the department, mission or agency responsible for the programme or activity that has been evaluated. Norway could improve its use of evidence to inform policy direction and decision-making including through the more formal uptake of evaluation findings. The 2013 peer review recommended that Norway improve its learning system by implementing a formal management-response system, including clarifying reporting lines and follow-up responsibilities on evaluation recommendations (OECD, 2013). Revised instructions issued in 2015 stipulate that follow-up plans are to be approved by the Secretary General of the Ministry concerned within six weeks of the evaluation’s publication, and within the course of a year, the responsible unit is to report on relevant measures implemented. To make further progress in this regard, Norway is encouraged to ensure that evaluation follow-up plans in response to decentralised evaluations are similarly systematic, and that follow-up is monitored and disseminated accordingly. In line with good practice, this might also include the requirement for a formal management-response plan. To further improve opportunities for learning from evaluation findings, Norway should continue to ensure that all strategic evaluations – and where possible decentralised evaluations – are also published online in a timely manner.

Norway lacks a strategy for knowledge management

Several recent white papers and strategies signal that Norway is committed to and investing in becoming more knowledge-oriented (Chapter 4), including by strengthening its approach to knowledge management (Ministry of Foreign Affairs, 2017b; Norad, 2016). The Research Strategy for the Foreign Service and Norad, 2017-2024 (Ministry of Foreign Affairs, 2017b), for example, aims both to promote research and knowledge-based policy and decision-making, and to support global knowledge production. Norway's Ministry of Foreign Affairs adopted a new human resources strategy in October 2018, which also signalled a strengthened focus on knowledge management. While staff demonstrate a strong willingness to learn, the absence of an overall, dedicated knowledge management strategy encompassing the Norwegian development co-operation system makes it difficult to determine how learning from individual projects and initiatives is fed formally back into the system to build knowledge and to influence overall objectives. This also significantly limits opportunities to use learning for strategic direction. A greater focus on and understanding of results, and a significantly strengthened approach to results-based management (Section 6.1), would be useful first steps.

Despite requirements for results reporting by its partners, evaluations found that staff do not systematically use these data for their own management and learning (Norad, 2018a). Moreover, staff are often uncertain about the quality of results data, which may also limit their use to inform decisions (Norad, 2018a).6 Given the increased use of multilateral channels for Norway’s ODA, particularly vertical or intermediary funds (Chapter 3), Norway will need to consider how its results and evaluation findings from the multilateral system are being fed back into its development co-operation system and contribute to greater learning. To do so, Norway will need to develop and implement a formal knowledge-management system.

The new Knowledge Bank is an innovative approach to leveraging the expertise and experience of multiple government agencies for the benefit of the large technical co-operation programmes that it encompasses (Chapter 5). It also has the potential to facilitate greater knowledge and information exchange in the thematic and sectoral areas that it covers (Box 5.1). Building on the Knowledge Bank, it will be important to clarify the formal processes for sharing learning across programmes and actors – including the Ministry of Foreign Affairs, Norad, Norec, Norfund and the Ministry of Climate and Environment – and between headquarters and embassies, particularly for those programmes not covered by the Knowledge Bank. In this regard, the experiences of other members in developing formal staff networks to share knowledge and learning may also be relevant.7


Government sources

Government of Norway (2017), “Common Responsibility for Common Future: The Sustainable Development Goals and Norway's Development Policy – Report to the Storting (white paper)”, Meld. St. 24, English summary, Government of Norway, Oslo.

Ministry of Foreign Affairs (2017a), “VO4 Guide to Assessment of Results and Risk management, including Cross-cutting Issues”, Norwegian Ministry of Foreign Affairs, Oslo.

Ministry of Foreign Affairs (2017b), Research Strategy for the Foreign Service and Norad, 2017-2024, Norwegian Ministry of Foreign Affairs, Oslo.

Norad (2018a), Evaluation of the Norwegian Aid Administration’s Practice of Results-Based Management, Evaluation Department Report 4/2018, Norwegian Agency for Development Co-operation, Oslo.

Norad (2018b), Evaluation Programme 2018-2020, Evaluation Department, Norwegian Agency for Development Co-operation, Oslo.

Norad (2017a), Guide to Norwegian Aid Management, Evaluation Department Handbook, Norwegian Agency for Development Co-operation, Oslo.

Norad (2017b), Guidelines for the evaluation process and for preparing reports for the Evaluation Department, Norwegian Agency for Development Co-operation, Oslo.

Norad (2017c), The Quality of Reviews and Decentralised Evaluations in Norwegian Development Co-operation, Norwegian Agency for Development Co-operation, Oslo.

Norad (2016), Knowledge for Development: Norad's strategy towards 2020, Norwegian Agency for Development Co-operation, Oslo.

Norad (2014), Can We Demonstrate the Difference that Norwegian Aid Makes? Evaluation of results measurement and how this can be improved, Evaluation Department Report 1/2014, Norwegian Agency for Development Co-operation, Oslo.

Office of the Prime Minister (2018), The Jeløya-platform, Office of the Prime Minister, Oslo.

Other sources

GPEDC (2016), Dashboard – Effective Co-operation (database) (accessed on 24 November 2018).

OECD (2016a), “Results-Based Decision Making in Development Co-operation: Providers’ use of results information for accountability, communication, direction and learning: Survey results”, OECD, Paris.

OECD (2016b), Evaluation Systems in Development Co-operation: 2016 Review, OECD Publishing, Paris.

OECD (2013), OECD Development Co-operation Peer Review: Norway 2013, OECD Publishing, Paris.

Secretariat of the Partnership in Statistics for Development in the 21st Century (PARIS21) (2018), Partner Report on Support to Statistics: PRESS 2018, PARIS21.

Zwart, R. (2017), "Strengthening the results chain: Synthesis of case studies of results-based management by providers", OECD Development Policy Papers, No. 7, OECD Publishing, Paris.


← 1. Significant responsibility for defining and reporting results rests with grant recipients. Ministry of Foreign Affairs and Norad staff do not prepare projects; instead, they receive applications from potential grant recipients, including the objectives of each project, and the indicators for monitoring performance and results (Norad, 2014).

← 2. Programmes for which detailed results frameworks do exist are the Norwegian International Climate and Forest Initiative (NICFI), Oil for Development, Fish for Development, Tax for Development and Norway’s global education initiative, the NORHED research programme.

← 3. Together with the World Bank, the United Nations Population Fund, the European Commission/Eurostat and Sweden, it contributed to over 80% of the total aid towards statistics capacity in fragile situations (Secretariat of the Partnership in Statistics for Development in the 21st Century, 2018).

← 4. The evaluations managed by the Evaluation Department should be distinguished from decentralised evaluations, which continue to face challenges in terms of ensuring methodological rigour and consistency.

← 5. The constitutional responsibility to manage the Norwegian aid budget is shared by the Ministry of Foreign Affairs and the Ministry for Climate and Environment. Issues related to Reducing Emissions from Deforestation and Forest Degradation are reported to the Ministry of Climate and Environment.

← 6. Further, an evaluation by Norad found that staff’s focus in mid- and end-of-cycle reviews on actionable conclusions and recommendations, rather than on methodological concerns, has led grant managers to take actions based on evidence of potentially poor quality (Norad, 2018a).

← 7. For example, in Switzerland and Spain. In Switzerland, local staff and implementing partners are also part of these networks, which include learning and face-to-face events every two years, and have proven useful in the context of a decentralised programme.
