Chapter 6. Results, evaluation and learning

All of France’s official development assistance (ODA) is aligned with aggregate indicators, but even so, results-based management is not mainstreamed across ministries or their agencies. Moreover, the growing importance of the French Development Agency (AFD) will require it to be even more transparent and better at communicating the results it seeks and achieves. Evaluations of French aid are commissioned by its three main providers and are in line with DAC principles. But projects are not systematically screened for their evaluability or the quality of their support frameworks, which can make project quality assurance difficult. Nor does France invest as much as it could in strengthening the evaluation capacity of authorities in partner countries, an approach that would allow it to delegate more. A database and communities of practice make it easier to search for information on evaluation findings, but France has no formal mechanism for systematically disseminating evaluation results or the lessons learned.


Management for development results

Peer review indicator: A results-based management system is being applied

All of France’s ODA is aligned with aggregate indicators, but even so, the approach to results-based management is not uniform across ministries and their agencies. Updating the various performance indicators and revising the 2014 Orientation and Programming Law on Development and International Solidarity Policy (LOP-DSI) are opportunities to better align the various indicators with the Sustainable Development Goals (SDGs). Moreover, the growing power of AFD will require a more transparent approach and better communication of the outcomes sought and achieved.

France has introduced a list of common indicators for its ODA

Since 2013, France has reported its overall results using 31 indicators aggregated ex-post and listed in the annex to the 2014 LOP-DSI law, which constitutes the framework for French development policy. Of these indicators, 17 measure bilateral aid and 14 multilateral aid (JORF, 2014); they cover France’s thematic and cross-cutting priorities and are used in the twice-yearly reports to parliament.

All ministries and agencies use some of these indicators, all of which measure outputs rather than outcomes or impact – such as the number of agricultural businesses or holdings that receive AFD funding, or the reductions in greenhouse gas emissions brought about by AFD-funded projects. The multilateral indicators are broken down by sector and by multilateral agency, and are collected by the Ministry for Europe and Foreign Affairs (MEAE) and the General Directorate of the Treasury (DGT). The bilateral indicators are collected for the most part by AFD.

Whilst these indicators have the benefit of providing an overall picture of France’s activity, they are not yet in step with the SDGs. Nor are they aligned with the performance indicators defined for each budget programme and appended to the draft budget act (two examples: the proportion of AFD’s commitment authorisations with a gender objective, and the proportion with a climate-related co-benefit). Exercises to align the various indicators with the SDGs are currently under way; this would be a good time to review and harmonise them. Even within AFD – an agency that is at the forefront of aid financing and growing in importance globally – monitoring and results are seen as mechanisms of control and accountability rather than as tools for improving project implementation or learning. Using them in this way will mean strengthening the human and logistical resources allocated to monitoring and results management.

The focus of results is on projects, not countries

France aligns its goals with those of its partner countries. As far as possible, it adopts results frameworks that are based on their national or sectoral plans and that concentrate on France’s contribution rather than attributing results to French actions. This is a good thing. AFD is embarking on a large-scale exercise to streamline project indicators and devise a set of standard sector indicators, aligned with the SDGs, for projects. For example, in the urban planning sector, which originally had 200 or so indicators, AFD now has 7 key indicators from which project leaders can choose the most appropriate (2 of these 7 indicators1 are on the list of the aggregate indicators appended to the LOP-DSI). This good practice makes it easier to evaluate projects. But AFD could also compare and harmonise its indicators with those of other donors. Choosing common indicators would have the benefit of strengthening the coherence of projects within AFD.

On the other hand, France has devised very few country strategies. Even the strategies developed by AFD lack a performance framework to collect the data and results from the various projects and match these with the partner country’s desired results. This makes it hard to know how France’s interventions within a country overall are consistent with partner countries’ results frameworks. Likewise, it is impossible to have an overall picture of the outcomes to which France has contributed in each partner country. In reality, apart from the 17 aggregate indicators and information collected at project level, France has not identified the outcomes it seeks to achieve at country, programme or thematic level. That makes results-based management all the harder and means that France cannot assess the year-on-year changes in its development aid programme or measure the true impact of its financial support.

Results-based management is not mainstreamed across the administration, although there are steps in the right direction

AFD project leaders announce the indicators for each bilateral aid project at the start of each year. A single sector may have up to 200 indicators, collected manually by the agency’s head office in Paris and grouped under 10-20 indicators according to sector or thematic priority. In turn, these go to make up the 17 “meta-indicators” which must be reported to parliament. Thus the data collected are far more detailed than the 17 aggregate bilateral indicators, but AFD does not as yet systematically use these data to improve its management, programming or learning. AFD’s annual reports talk about the agency’s various co-benefit targets and report on 12 of the 17 aggregate indicators for bilateral aid. Nevertheless, there is still much to be done to show how far French co-operation furthers the development goals of its partners as well as its own. This need is even greater given the agency’s plans to increase the volume of its activities (Sections 3.1 and 4.3).

Evaluation system

Peer review indicator: The evaluation system is in line with the DAC evaluation principles

Evaluations are important for learning and now play a more strategic role. They are commissioned by the three main providers of French aid, and are in line with DAC principles. Nonetheless, projects are not systematically screened for their evaluability or the quality of their support frameworks, which can make project quality assurance difficult. France could conduct more joint evaluations and could invest more in strengthening the evaluation services of the authorities in partner countries, an approach that would allow it greater freedom to delegate project and programme evaluation and to take on more of an advisory role.

The evaluation service is spread over three institutions, but is well co-ordinated

The evaluation of French development assistance reflects its institutional architecture, which consists of three separate entities: the MEAE, the General Directorate of the Treasury and AFD. The 2014 LOP-DSI provided for the creation of an Observatory for Development Policy and International Solidarity. This independent body was supposed to have access to all the information of the evaluation services and comprises 11 members: 4 members of parliament and 7 representatives, one from each group in the National Council for Development and International Solidarity (CNDSI). The Observatory only formally met for the first time in April 2018. This does not appear to have been problematic, given that the three evaluation services work closely together and meet more often than the required four times a year to plan evaluations, monitor and consider recommendations, and conduct joint interministerial evaluations (DGT, 2017). The two ministries represent the state on AFD’s Evaluations Committee.

The evaluation systems for French co-operation all follow the evaluation principles of the Development Assistance Committee (DAC), namely: impartiality and independence, credibility and utility, donor and beneficiary participation, and co-ordination among donors (OECD, 2010). Even so, these last two principles could be further strengthened. Every evaluation, whether commissioned centrally or not, is done by outside experts selected on a competitive basis. The evaluation is guided by a Reference Group consisting of French government officials, representatives of the project oversight structure in the partner country, sectoral researchers and experts, operators and non-government organisations (NGOs). This was the case, for example, for the Irrigation Evaluation Reference Group and the French Muskoka Fund.2 The Reference Group advises and monitors the terms of reference, the implementation of the evaluation and the reports produced by the consultant.

Lastly, the Cour des Comptes analyses the execution of the national budget for each remit and programme, including those relating to ODA, and conducts its own investigations into thematic issues, such as official assistance to health in 2018. While the budget execution notes are made public, the thematic reports rarely are.3

Evaluations are more strategic, with an eye to future projects

Following the recommendation of the 2013 Peer Review (Annex A), France now conducts more strategic programming of evaluations, and they are better co-ordinated among the MEAE, AFD and the DGT. The MEAE conducts three to four new “strategic” evaluations per year, after consulting geographic and sectoral services. AFD programmes evaluations on the basis of four criteria: (1) knowledge deficit; (2) AFD’s strategy priorities; (3) evaluability; and (4) the added value brought by evaluation. AFD carries out 30-35 evaluations a year, 25 of which are (decentralised) project evaluations. Historical records for a number of decentralised evaluations are now available to the public.4 The DGT evaluates France’s contributions to banks and multilateral funds (such as the World Bank, regional development banks and sectoral funds) before these funds are replenished; it also evaluates the bilateral aid it finances under budget programme 110 for grants and programme 851 for loans.5

AFD is increasingly conducting mid-term evaluations in order to learn lessons that will guide the next stages of projects; this more strategic approach was noted in its work with Morocco. Similarly, an impact evaluation of drinking water infrastructure in Uvira, Democratic Republic of the Congo,6 allowed the project to change direction (AFD, 2018).

Analysis of a “cluster” of projects takes the form of meta-evaluations which yield more cross-cutting conclusions, on climate for example. This very positive development could usefully be taken further. AFD has also experimented with videoed evaluations which allow more contextualised conclusions to be drawn; but these are very expensive.

Project evaluability is not systematic, and the robustness of support frameworks is not guaranteed

AFD’s evaluation team is currently analysing the evaluability and the robustness of project support frameworks – this is to be commended. However, it lacks both the mandate and the resources to determine the evaluability of all the logical frameworks and arrangements for all the agency’s projects7, and there is no other means of ensuring that indicators and project objectives match. At the same time, the MEAE’s evaluation department is also working to improve the evaluability of the projects it funds (the Solidarity Fund for Innovative Projects, civil society, the francophonie and human development, and PISCCA). Project managers at AFD are not obliged to read earlier evaluations of similar projects before submitting a new project document. Consequently, many projects are hard to evaluate: no harmonised in-depth work exists on logical frameworks and indicators, even though the evaluation service holds twice-yearly training events on logical frameworks for project leaders and technical staff. In other DAC countries, this work is done by operational entities as part of project and programme monitoring.

The Cour des Comptes has also pointed to the lack of external evaluation of French ODA. It recommends that a greater number of projects be evaluated and reported on more often and in more detail – to AFD’s Board of Directors in particular – given that the agency’s volume of activity is set to increase considerably. In fact, AFD is in the process of reviewing its procedures to improve evaluability, manage expectations and provide quality assurance through its team of 15 evaluators. Hopefully, this will mean that project completion reports – almost mere “box ticking” at the moment – play a part in ex-post accountability, enabling project leaders to appraise projects at both the end stage and the initial stage of preparing loans or grants.

Proparco, the AFD subsidiary for the private sector, carries out ex-ante evaluations of the impact of aid financing on the number of jobs created, carbon dioxide emissions avoided, and taxes paid by the corporate sector. With the help of consultants, it conducts around four ex-post studies a year on dedicated lines of financing (for agriculture, for example) or on investment funds. These studies are accessible to shareholders, including the subsidiary’s supervisory bodies, but they are not made public.

France does not prioritise building evaluation capacity in partner countries and delegates few evaluations to their governments

AFD plans to delegate more evaluations to in-house staff or experts, or to peers, as KfW currently does. It also plans to start participatory evaluations. The agency does not delegate evaluation contracts to national authorities in partner countries, which participate only through the Reference Group. Nor does France support the evaluation functions of partner country governments. It does, however, support national statistical institutes, especially in French-speaking Africa. Decentralised, or project, evaluations are not tied to strategic evaluations but are programmed on demand. Consequently, there is no real strategic oversight that would allow project requests and planning from head office to be grouped according to strategic themes. The evaluation service is keen that a team based at head office should conduct one-third of all decentralised evaluations, to promote the sharing of experience gained in the field and improve the quality of project evaluation.

Institutional learning

Peer review indicator: Evaluations and appropriate knowledge management systems are used as management tools

France has no formal mechanism for systematically disseminating evaluation results and lessons learned; however, a database and communities of practice make it easier to search for information on evaluation findings. In addition, AFD and the Interministerial Committee on International Co-operation and Development (CICID) are studying ways of improving knowledge management.

There is no proper link between evaluations, capitalising on experience and knowledge management

France has no formal system for ensuring that recommendations are acted on, or that evaluation findings are used and the lessons arising from them learned. Whilst evaluations are targeted and tailored to the requirements of operational staff, the results are not systematically used when new phases or new projects are being prepared. Greater effort by embassies and AFD to capitalise on the knowledge arising from evaluations would strengthen the strategic importance of decentralised evaluations, which are rarely disseminated within or across the agency’s departments, to partners or to the general public.

It should be noted, however, that progress has been made on institutional learning. The communities of practice in AFD’s social network (“La Ruche”, or “The Hive”) now share information and lessons gained from evaluations, and AFD staff can do keyword searches of a database that contains some 450 evaluations. But it is not possible to ascertain how far these instruments are helpful in the preparation of new projects.

AFD is considering making information sharing more widespread, for example by providing summaries of decentralised evaluations or sectoral memos detailing evaluation findings. The MEAE and AFD are working closely here with F3E (Evaluate, Exchange, Explain), a network of NGOs and local authorities which organises a number of spaces for knowledge sharing. AFD assists the work of the F3E network as part of its support for NGO initiatives and the organisation of workshops.8 The DGT, together with F3E, moderates a group of the French Society of Evaluation specialising in development evaluation.

The conclusions of the CICID meeting in February 2018 (MEAE, 2018) state that evaluation results will be reported annually to the National Council for Development and International Solidarity (CNDSI). Until now, the three evaluation departments have published a summary of their evaluations to complement the biannual report to parliament on the French development strategy. Reporting directly to the CNDSI will allow for further assessment of the effectiveness of France’s commitments to development co-operation (MEAE, 2018).

The introduction of an internal communication system and of communities of practice within AFD’s social network (“La Ruche”) is a significant step towards a culture of knowledge management. AFD has also worked on the sharing and exchange of knowledge through measures to capitalise on experience. A more structured culture of knowledge capitalisation, plus mechanisms for sharing experience among the various actors making up the AFD Group, would enable the MEAE and the DGT to improve internal learning in the French co-operation system.


AFD (2018), “Évaluer les investissements dans les infrastructures d'eau potable” (in French), French Development Agency, Paris.

BGC (2018), Barefoot Guide 5: Mission Inclusion, The Barefoot Guide Connection.

DGT (2017), La politique d'évaluation des activités de développement de la direction générale du Trésor (in French), MEF, Paris.

JORF (2014), “Loi n° 2014-773 du 7 juillet 2014 d'orientation et de programmation relative à la politique de développement et de solidarité internationale” (in French), Legifrance, Official Journal of the French Republic (accessed 26 February 2018).

MEAE (2018), “Comité interministériel de la coopération internationale et du développement (CICID) 8 février 2018. Relevé de conclusions” (in French), MEAE, Paris.

OECD (2010), Quality Standards for Development Evaluation, DAC Guidelines and Reference Series, OECD Publishing, Paris.


← 1. (1) Number of passengers using public transport on routes funded; and (2) number of residents in disadvantaged districts whose living environment is improved or made safer.

← 2. The French Muskoka Fund (FFM) aims to reduce maternal, newborn and infant mortality by reinforcing healthcare systems in 10 French-speaking countries in Africa and Haiti.

← 3. The budget execution notes from 2014 to 2016 are available on the website of the Cour des Comptes. The 2016 note can be found here:

← 4. See

← 5. Such as the French Global Environment Facility and the debt reduction and development contract (C2D).

← 6. This evaluation showed that almost a quarter of all cholera cases reported in this city during the period 2009-14 were directly attributable to regular malfunctions of the drinking water treatment plant.

← 7. Although the “sustainable development opinion mechanism” is studying this issue, it is not able to monitor the follow-up to its recommendations.

← 8. Through its co-operation with F3E, which is a member of the Barefoot Guide Alliance, AFD is involved in the Alliance’s workshops – for example those on transformative evaluation (BGC, 2018).