6. The United Kingdom’s approach to results, evaluation and learning

DFID has a long tradition of monitoring and managing for results at project and corporate level. The new aid-spending departments have started to build similar capacities, backed by strong political buy-in.

DFID’s well-known approach to measuring results is evolving. In 2016, DFID moved away from its four-tier approach to results management at the corporate level1 and developed a set of 14 headline indicators that measure and report progress against its Single Departmental Plan. These headline indicators combine outcomes, outputs, inputs and quality standards, and can include financial targets. All of these indicators are linked to the 2015 Aid Strategy (DFID and Treasury, 2015[1]), referencing either the four strategic objectives directly or the commitment to measure value for money. All headline indicators are published on a dedicated webpage2, with results data dating back to 2012.

In 2019, the United Kingdom developed an internal overarching monitoring framework for the 2015 Aid Strategy that covers all departments responsible for spending ODA (Chapter 4). This framework responds to National Audit Office concerns that the government did not put enough emphasis on measuring the extent to which the overall aid strategy was being achieved (NAO, 2019[2]).3 However, partly due to varying experience with results management across departments, this first framework lacks consistency in the level of measurement (process, output or outcome). It is unclear whether the framework will be used for purposes other than accounting to parliament, or how it can capture the overall success of the aid strategy.

The results expected from projects and programmes are clear. Since DFID introduced the Smart Rules in 2014 (Chapter 4), each project has its own results framework that specifies the theory of change, baselines, indicators and targets, and indicates what contribution the project is likely to make to achieving the overall purpose (DFID, 2019[3]). These can be standard logical frameworks or similar alternatives, depending on the project. Some quantitative measures, mainly outputs, are aggregated to communicate DFID’s corporate achievements. In line with DFID’s transparency commitments, project and programme frameworks, reviews and results are reported through the International Aid Transparency Initiative (IATI), published on DFID’s Devtracker website4 and shared with partners.

Other aid-spending departments are also strengthening their approaches to project results management, even if these are still mainly focused on outputs. For instance, the Foreign and Commonwealth Office (FCO) has established a central Portfolio Management Office to strengthen oversight, build capability and improve results reporting and impact. The FCO Permanent Under-Secretary chairs a Portfolio Board that meets every quarter to review progress. Nevertheless, of the seven departments and cross-government funds that account for more than 60% of non-DFID ODA expenditure, only two referred to the effectiveness of their spending in their annual reports. Other than aggregating individual projects’ expected results, it is difficult to get a sense of how the overall United Kingdom (UK) effort contributes to the development of each partner country. Indeed, full country results frameworks are no longer mandatory for DFID and have been phased out in most countries; in the absence of public country strategies, no single document presents all UK activities and development objectives in partner countries (Chapter 5).

Finally, results are at the core of the United Kingdom’s partnerships with multilateral organisations. In 2016, DFID committed to “follow the outcomes” by further developing and scaling up the use of payment-by-results approaches when engaging with partners. Part of its core funding to multilateral partners is now tied to the achievement of pre-agreed results (Chapters 3, 5 and 7).

DFID has identified some perverse incentives in its previous approach to managing for results. The 2011-15 results framework enabled DFID to communicate its global reach and impact in selected areas in ways that resonated with domestic audiences. But it lacked a strong internal logic and limited flexibility in programming and delivery, prioritising short-term, pre-determined results and the fulfilment of “reach” indicators5 over long-term impact or changes that were critical to partner countries’ development (OECD, 2017[4]). DFID also found it challenging to strike a realistic balance between meeting corporate communication and performance requirements, and enabling adaptive and flexible approaches to achieving results.

To address these limitations, DFID has been refining its approach since 2016, developing different sets of instruments and indicators to respond to the different objectives of results measurement – communicating to the public, accounting to parliament, and learning and managing for both project and portfolio results – in order to inform policy decisions. This increased attention to strategic management is a positive step forward.

As this new approach is rolled out, clear vision and guidance from, as well as a feedback loop to, senior management, combined with active engagement between headquarters and country offices selected to pilot the new approach, would help DFID to build on and learn from diverse experiences.

At the sector and portfolio level, DFID is pursuing a stronger focus on outcomes, qualitative results and causal pathways to change, while aiming to increase its use of adaptive management.6 These efforts should strengthen DFID’s results orientation and are broadly in line with the Development Assistance Committee (DAC) Guiding Principles on Managing for Sustainable Development Results (OECD, 2019[5]).7 The Smart Rules (DFID, 2019[3]) have introduced more flexibility in programming, with staff encouraged to define expected outcomes clearly while staying flexible in relation to activities and outputs (OECD, 2017[4]) (Chapter 4). Annual reviews of projects are expected to focus on milestones, results and potential adjustments to the theory of change. However, practice has remained somewhat rigid, especially for projects funded by the Conflict, Stability and Security Fund (CSSF), which are still subject to more standard log frames. To address this and enable more flexibility, the CSSF is encouraging alternative monitoring and evaluation approaches better suited to fragile and conflict-affected states. In addition, even though DFID’s new approach to results has reduced the number of standard indicators, it has increased attention to performance measures: the Portfolio Quality Index, used in DFID’s internal management information dashboard, is still mainly focused on output rating scores extracted from annual reviews and end-of-project reports.

DFID is an evidence-based organisation, systematically collecting data at the programme and strategy level. Under its Inclusive Data Charter Action Plan, it is committed to collecting and using data disaggregated by gender, age, disability status and geography to inform its policies and programmes in order to leave no-one behind (DFID, 2019[6]). In Kenya, efforts to map the geographic distribution of programmes against where the poorest and most excluded people live have triggered critical reflection about who has benefitted from DFID’s interventions in recent years. Further efforts to clarify the governance of data and increase its inter-operability would also increase the use of data across the project cycle and by government departments.

DFID makes little use of partner countries’ own data, systems or results frameworks, relying mainly on data collected by implementing partners. In 2018, only 38.7% of the United Kingdom’s bilateral co-operation used country-owned results frameworks, a mere 22.3% used indicators drawn from these frameworks, and 24.6% used partner governments’ own data and statistics (Chapter 5) (OECD/UNDP, 2019[7]). The United Kingdom uses internal data for spend and high-level results, such as portfolio quality scores and headline indicators. Use of external data is largely restricted to the design and planning stage (Powell et al., 2017[8]). This is not consistent with DFID’s support for building national capacities in statistics and runs counter to the DAC guiding principle of ownership.

In addition, although the United Kingdom refers to the SDGs in its Single Departmental Plans and the Office for National Statistics was active on the international taskforce that agreed the SDG indicators, the United Kingdom does not carry these indicators through its various results frameworks (including the overarching aid strategy framework, or at DFID and programme levels). Together with the limited use of partner countries’ indicators, this increases the risk of parallel reporting requirements.

The United Kingdom has developed two complementary approaches to evaluation: spending departments are responsible for evaluating their programmes and projects according to their own policies, while the Independent Commission for Aid Impact (ICAI) conducts a small number of prioritised thematic reviews on strategic issues faced by the United Kingdom and reports directly to parliament. With a mandate to cover all of the United Kingdom’s ODA, and a growing number of departments managing ODA, ICAI’s role is increasingly important (Chapter 4). All reports directed to DFID have received a management response. Other departments also provide inputs when a cross-government response is required – this is good practice.

DFID has moved from the decentralised evaluation model set out in the 2013 Evaluation Policy (DFID, 2013[9]) and the 2014 strategy (DFID, 2014[10]) to a mixed decentralised and centralised evaluation model set out in a forthcoming strategy (see below). The decentralised model, along with the Smart Rules and the focus on value for money (Chapter 4), has helped to embed an evaluation mind-set in programmes and to increase the use of evaluations. For decentralised evaluations, individual policy and programme spending units decide which programmes and interventions to evaluate. The decision to evaluate is strategic, based on eight criteria.8 The Evaluation Unit in the Evidence Department leads overall efforts to implement the evaluation policy and strategy; it provides support – technical guidance, advice, professional development and training – to operational staff, disseminates and shares findings, and promotes learning from evaluations. While funding for programme evaluations mainly comes from programme budgets, the Evaluation Unit can also fund priority evaluations. Decentralising the evaluation function has meant increasing capacity to carry out evaluations across DFID, with more than 32 evaluation advisors employed in policy or programme spending units, 150 staff accredited in evaluation and 700 people receiving basic training.

DFID also supports other aid-spending departments to strengthen their evaluation functions – for example, the Joint Funds Unit in charge of cross-government programmes has internal evaluation capacity. DFID provides access to formal accreditation, includes staff from other departments in learning and training opportunities, and leads a cross-government ODA evaluation working group. All departments could do more to share their experience of designing appropriate yet strategic evaluation plans.

The decentralised model has created some knowledge gaps that DFID is trying to address. The department is now implementing a new evaluation strategy to be launched in 2020 that will better balance centralised and decentralised evaluations to fill these knowledge gaps and strengthen strategic decision making. Under the new strategy, the Evaluation Unit will have resources to commission rigorous evaluations at central and regional levels in order to increase learning from evaluations at a portfolio level, improve identification of evidence gaps and evidence synthesis when commissioning evaluations, and examine how sustainable the reported impacts are. In fragile contexts, the new strategy is expected to support more rigorous evaluations using innovative evaluation methods. Making funding for portfolio evaluations available to country offices will be critical to ensure uptake of evaluation findings.

The reform to evaluation is occurring hand-in-hand with reforms to results-based management and the introduction of adaptive management. As part of the new evaluation strategy, DFID is planning to create a real-time database that compiles data on results, programme implementation and spend to support the collection of evidence. It will also clarify how internal mechanisms such as quality assurance and evaluation will support adaptive programmes, a necessary step to give staff incentives and processes to enable successful adoption of an adaptive management approach. In support of these efforts, DFID funds and manages the Global Learning for Adaptive Management initiative jointly with the United States Agency for International Development.9 Other DAC members could learn from DFID’s work in this area.

DFID is committed to building the evaluation capacity of its partner countries and multilateral agencies. It does so by supporting multilateral agencies to produce high-quality evidence and by funding initiatives such as the World Bank Strategic Impact Evaluation Fund, the Development Impact Evaluation group and the International Initiative for Impact Evaluation, as well as supporting professional evaluation associations, networks and South-South partnerships. However, DFID has not responded to the 2014 DAC peer review recommendation to increase the use of joint evaluations (OECD, 2014[11]) and is making little effort to engage in joint evaluations with partner governments or other partners. This is not consistent with DFID’s own evaluation policy and undermines its broader efforts to build local evaluation capacities.

Following an ICAI review on “How DFID Learns” (ICAI, 2014[12]), DFID increased its use of research, leading to better programme design. Part of this effort involved improving the uptake of evaluation findings. Bringing the Evaluation Unit into the Evidence Department and collaborating with the Chief Scientific Adviser have helped to ensure that strategic evaluations respond to evidence needs. DFID has also adjusted the competencies of its professional evaluation cadre to make sure that evaluation advisors are able to share findings and interact with other advisors and programme managers.

DFID draws on multiple tools for institutional learning, such as its advisor cadres (Chapter 4),10 the aid management platform, learning champions, “What Works” programmes (Box 6.1), “Best Buys” papers,11 and the “Better Delivery” team that focuses on improving methods. DFID has also set up research hubs, a specialist Knowledge for Development helpdesk and benchmarking models to facilitate organisational learning and knowledge management across the department. This diversity of instruments could inspire other DAC members. Increasing the use of learning champions12 and making better use of the knowledge and perspectives of locally appointed staff would further enrich strategic thinking and analysis, as would more sharing of analysis with partners.

In fragile contexts, Third Party Monitoring contracts (Chapter 7) often include an operational research element to increase learning. Over the past 15 years, these contracts have allowed the United Kingdom to demonstrate its ability to deliver results and learn lessons in challenging contexts, which in turn has helped to raise the risk appetite of senior managers and ministers. Further innovation and adaptation of programme management approaches – such as current trials with technological monitoring using satellite data, big data and e-monitoring – will help the United Kingdom move towards a more risk-based approach.

While the share of ODA spent by other departments has increased, not all departments have structured processes for developing learning capabilities. As set out in a 2019 ICAI review (ICAI, 2019[13]), most new spending departments have made progress on institutional learning, especially those with bigger and more complex budgets, but learning is not always used to inform management decisions. Some departments also tend to outsource learning (through evaluations and research), limiting internal uptake.

As observed in Kenya (Annex C), to fully operationalise the Fusion Doctrine, other departments need access to the cross-sectoral expertise within DFID. The emerging cross-government learning networks can help achieve this. However, while cross-departmental groups and forums where learning is exchanged are proliferating, not all are fully operational with a clear architecture and set of expectations. Learning across departments is further constrained by databases that are not inter-operable – staff working in the FCO, for instance, cannot access information on DFID’s Aid Management Platform.

In addition to building the capacity of other departments to manage ODA (Chapter 4), DFID shares its skills, networks and tools, and has supported learning across government from its existing resources. In recognition of the value of this work, DFID was given a budget increase for 2020-2113. Predictable resources to support learning across government will strengthen the sustainability of this approach.


[6] DFID (2019), Inclusive Data Charter Action Plan, https://www.gov.uk/government/publications/leaving-no-one-behind-our-promise/inclusive-data-charter-action-plan (accessed on 12 February 2020).

[3] DFID (2019), Smart Rules - Better Programme Delivery (October 2019 update), Department for International Development, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/840802/Smart-Rules-External-Oct19.pdf.pdf.

[10] DFID (2014), DFID Evaluation Strategy 2014-2019, Department for International Development, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/380435/Evaluation-Strategy-June2014a.pdf.

[9] DFID (2013), International Development Evaluation Policy, Department for International Development, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/204119/DFID-Evaluation-Policy-2013.pdf.

[1] DFID and Treasury (2015), UK Aid: Tackling Global Challenges in the National Interest, Department for International Development and HM Treasury, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/478834/ODA_strategy_final_web_0905.pdf.

[15] Ford, L. (2019), British government takes global lead on violence against women and girls, https://www.theguardian.com/global-development/2019/nov/02/british-government-takes-global-lead-on-violence-against-women-and-girls (accessed on 30 January 2020).

[13] ICAI (2019), How UK Aid Learns: A Rapid Review, Independent Commission for Aid Impact, https://icai.independent.gov.uk/new-icai-review-how-uk-aid-learns/ (accessed on 10 January 2020).

[14] ICAI (2016), DFID’s Efforts to Eliminate Violence Against Women and Girls - A Learning Review, Independent Commission for Aid Impact, https://icai.independent.gov.uk/html-report/dfids-efforts-eliminate-violence-women-girls/ (accessed on 13 January 2020).

[12] ICAI (2014), How DFID Learns, Independent Commission for Aid Impact, https://icai.independent.gov.uk/report/dfid-learns/ (accessed on 10 January 2020).

[2] NAO (2019), The Effectiveness of Official Development Assistance Expenditure, National Audit Office, https://www.nao.org.uk/report/the-effectiveness-of-official-development-assistance-spending (accessed on 16 January 2020).

[5] OECD (2019), Managing for Sustainable Development Results - OECD DAC Guiding Principles, OECD, http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=DCD/DAC(2019)37/FINAL&docLanguage=En.

[4] OECD (2017), Provider Case Studies: United Kingdom Department for International Development, Results in Development Co-operation, OECD Publishing, https://www.oecd.org/dac/peer-reviews/results-case-study-UK.pdf (accessed on 8 January 2020).

[11] OECD (2014), OECD Development Co-operation Peer Reviews: United Kingdom 2014, OECD Development Co-operation Peer Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264226579-en.

[7] OECD/UNDP (2019), Making Development Co-operation More Effective: 2019 Progress Report, OECD Publishing, Paris, https://dx.doi.org/10.1787/26f2638f-en.

[8] Powell, J. et al. (2017), Decision-Making and Data Use Landscaping - Better Data, Better Decisions, Development Gateway, https://www.developmentgateway.org/sites/default/files/2019-04/Better_Data_Better_Decisions_Data_Landscape_Study.pdf (accessed on 9 January 2020).


← 1. The 2011-15 DFID results framework measured results at four levels: progress on key development outcomes, DFID results, operational effectiveness and organisational efficiency.

← 2. The DFID results website is www.gov.uk/guidance/dfid-results-estimates (accessed on 6 March 2020).

← 3. The four goals of the 2015 Aid Strategy are: 1) strengthening global peace, security and governance; 2) strengthening resilience and response to crises; 3) promoting global prosperity; and 4) tackling extreme poverty and helping the world’s most vulnerable.

← 4. See https://devtracker.dfid.gov.uk.

← 5. “Reach indicators” is a term used for indicators that count the number of beneficiaries reached by a service or intervention.

← 6. Adaptive programmes draw on systematic and deliberative learning from monitoring, evaluation and operational research to guide decision making.

← 7. The OECD/DAC Results Community has developed six guiding principles to help transform development agencies into more results-oriented, effective organisations. These principles are: support sustainable development goals and desired change; adapt to context; enhance country ownership, mutual accountability and transparency; maximise the use of results information for learning and decision making; foster a culture of results and learning; and develop a results system that is manageable and reliable.

← 8. DFID has developed an Evaluation Decision Tool to help country offices decide when an evaluation is relevant. The tool includes eight decision criteria and a set of related questions to guide thinking: 1) strategic importance to the spending unit; 2) strategic evaluation priority for DFID; 3) evidence gap as defined in the Annual Evaluation Plan; 4) scale up; 5) size/risk/innovation; 6) demand and utility; 7) feasibility; and 8) timeliness – see DFID’s 2014-18 Evaluation Strategy https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/380435/Evaluation-Strategy-June2014a.pdf.

← 9. The Global Learning for Adaptive Management initiative was launched in 2018 to enable evidence-based adaptive management through access to, use of, and learning from better and faster monitoring and evaluation evidence.

← 10. DFID’s 180 internal advisors receive dedicated training and participate in regular face-to-face meetings.

← 11. The What Works programmes and Best Buys papers are based on global evidence and internal research. Evidence from the What Works programme is publicly available.

← 12. Learning champions are senior staff who advocate for and support learning across the department.

← 13. Secretary of State Alok Sharma appeared before the International Development Committee on 17 October 2019 to discuss DFID priorities and noted that DFID had received an increase in its total operating costs of USD 25.5 million (GBP 20 million) for 2020-21, part of which will be used for supporting other government departments. The committee recording is available at http://www.parliamentlive.tv/Event/Index/a9279d7c-c8d3-40b7-b58f-b7f19f0cc7f1, minutes 17-40, accessed on 20 January 2020.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2020

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.