Reader’s Guide

Part I of this Framework provides information for readers seeking high-level guidance on the principles of conducting reliable impact evaluation. Of particular importance is the Six Steps to Heaven tool, which identifies progressively more reliable levels of evaluation based on features of the treatment-control group match.

Part II is most relevant for readers interested in evaluation findings and their implications for SME and entrepreneurship policy. It sets out the findings and policy messages from meta-evaluations and 50 individual high-quality evaluations. Table 4.1 summarises the findings of each evaluation. Section 4.3 and Chapter 5 discuss the policy issues that emerge.

Part III is recommended for readers with an interest in exploring how to adjust the mix of SME and entrepreneurship policy to focus on the more effective parts of the policy portfolio. Chapter 7 explores the relative effectiveness of “Hard” and “Soft” support, the importance of targeting policy beneficiaries, and the importance of “Macro” interventions.

Part III also provides information for readers interested in how to improve SME and entrepreneurship policy evaluation. Chapter 8 highlights a number of areas for improvement in evaluation, such as better specifying policy objectives and increasing the scale of evaluation. Readers seeking more detailed insights on potential indicators, data sources and methodologies for evaluating specific SME and entrepreneurship programmes can examine the descriptions of the 50 high-quality evaluations set out in Annex B by type of policy intervention. Those interested in how to set up a programme to evaluate the impact of the government emergency support measures for SMEs and entrepreneurship introduced during the COVID-19 crisis, and in the role of evaluation for crisis responses more generally, should turn to Chapter 9.

The remainder of this Reader’s Guide sets out in further detail the main content included within the different parts of the Framework.

Part I covers the “what, why and how” of SME and entrepreneurship policy evaluation. It reaffirms the case made in OECD (2007[1]) that it is vital to conduct reliable evaluations of SME and entrepreneurship policies and programmes. It also addresses the state of current SME and entrepreneurship policy evaluation practice, arguing that there is insufficient reliable evaluation evidence in the field and setting out what can be done about it.

In more detail, Part I covers:

  • The meaning and role of evaluation in SME and entrepreneurship policy.

  • Weaknesses in current SME and entrepreneurship policy evaluation practice.

  • Lessons for evaluation.

Part II reviews methods and findings from an international selection of 50 impact evaluations drawn from a range of OECD countries and policy intervention areas. All meet high standards for methodological reliability (i.e. they are placed on Step V or Step VI of the Six Steps to Heaven framework). These are exemplars for evaluation methodologies, offering models for the data sources used and the analytical tools employed. Furthermore, policy makers can be confident that where conclusions are reached, they are based upon sound data and appropriate analytical techniques. The profile of each evaluation follows a standard template covering aspects of the programme assessed, the evaluation methodology used and the evaluation findings.

In more detail Part II covers:

  • Evidence from international meta-evaluations.

  • Methods of individual high-quality evaluations.

  • Evaluation findings.

  • Lessons for policy.

  • Lessons for evaluation.

Part III draws out the lessons on how to improve evaluation and policy. It explores the major finding, already highlighted in the earlier Parts of the Framework, that the results of reliable policy evaluations are mixed in terms of whether SME and entrepreneurship policy is judged effective and efficient across its range of objectives. While some evaluations estimate positive impacts, others find no impacts on key outcomes such as sales growth, employment growth or business survival, and others still find impacts on some targeted variables but not on others. The discussion considers why. For example, the probability of impact may be related to the contexts in which different programmes are delivered, the timing of the evaluation or the nature of the policy pursued.

A key distinction is made between “Hard” and “Soft” support programmes. Hard programmes involve an important element of financial support, whereas Soft programmes focus on aspects of training, advice and mentoring. A hypothesis is put forward that governments might be able to increase the overall impact of their SME and entrepreneurship policy portfolios by recognising that the current evidence base points to clearer impacts from “Hard” than from “Soft” programmes. This is just a hypothesis at the current time, but it underlines the need for evaluation programmes that can compare Hard and Soft policy intervention types and increase the evaluation evidence for Soft policies, for which current reliable evaluations raise doubts about effectiveness.

Part III also contains a section on the role of evaluation for government crisis response measures for SMEs and entrepreneurship in times of economic shock. It illustrates the issues through an exploration of recent government COVID-19 SME and entrepreneurship support interventions, showing how key issues in this Framework need to be given greater prominence, for example in terms of setting out clear objectives and expenditures and using counterfactual evaluation methods. It stresses the need to build evaluation arrangements into future government policy crisis response measures and proposes international co-operation in the impact evaluation of the COVID-19 support.
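To make the notion of a counterfactual evaluation method more concrete, the sketch below shows a difference-in-differences estimate, one widely used counterfactual technique. It is purely illustrative and not drawn from the Framework: the figures and the programme are invented, and real evaluations of the kind profiled in Annex B work with firm-level microdata and carefully matched control groups rather than simple group means.

```python
# Illustrative difference-in-differences (DiD) estimate, a common
# counterfactual evaluation method. All figures are hypothetical,
# invented purely for illustration.

# Mean outcome (e.g. firm sales, in thousand EUR) before and after
# a hypothetical support programme.
treated_before, treated_after = 100.0, 130.0   # supported firms
control_before, control_after = 100.0, 115.0   # comparable unsupported firms

# The control group's change proxies what would have happened to the
# treated firms without support (the counterfactual trend); DiD nets
# that trend out of the treated group's observed change.
did_estimate = (treated_after - treated_before) - (control_after - control_before)

print(did_estimate)  # estimated programme impact: 15.0
```

The reliability of such an estimate depends on how well the control group matches the treated firms, which is precisely what the Six Steps to Heaven tool grades.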

In more detail Part III covers:

  • Problems with the focus and design of SME and entrepreneurship policies.

  • Possible avenues for rebalancing SME and entrepreneurship policies to increase impact.

  • Improvements needed in SME and entrepreneurship policy evaluation practice.

  • Applying the lessons of this report to the evaluation of COVID-19 SME and entrepreneurship support and other policy responses to shocks.

  • Conclusions from the Framework overall, including 13 key recommendations for SME and entrepreneurship policy makers.

Annex A outlines the template used to prepare the profiles of the 50 evaluation cases presented in detail in the report, explaining the information sought and the rationale for it.

The template includes information relevant to judging the quality and reliability of each evaluation based on how far it is in line with key evaluation principles set out in this Framework, such as clearly specifying the objectives to be evaluated against, the impact measures used, whether both survivors and non-survivors are tracked and whether control groups are established at high levels of the Six Steps to Heaven tool. These measures of quality and reliability were used to select the 50 exemplar cases described in detail in Annex B.

Annex A also gives the details and rationale for other key information contained in the evaluation profiles, such as the programme area and the target populations.

Annex B provides systematic information for each of the 50 reliable evaluations identified and reviewed for this Framework through a completed template for each evaluation study. Table 1 outlines the types of information provided for each evaluation.

Table 2 classifies the individual reliable evaluations by main category of policy intervention – access to finance, business advice, internationalisation etc. Further information is given for each evaluation on our categorisation of whether the intervention is largely “Hard” or “Soft”, and on our assessment of the level of reliability of the evaluation as judged by our view of its level on the Six Steps to Heaven tool, the clarity of objective setting of the policy evaluated and our Evaluation Quality Score (explained in Annex A). This aims to help readers browse for information on particular types of policy interventions and assess the evaluation methodology used by scanning across our reliability indicators.

The full information on each evaluation can be consulted in Annex B by using the evaluation reference number given in Table 2.

Annex C provides a list of 25 other relevant evaluation studies that are not included in the report, with information on the country, topic, year of the study and source of evidence. These are high-quality evaluations that could have been selected for inclusion, but were excluded in order to maintain diversity in the examples provided. Further high-quality evaluations could also have been listed and are available in the literature.

Annex D provides a table with information on key evaluation methods used in many of the selected evaluations. It provides a high-level overview of the methodological approaches and references to further reading.

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2023